
Conversation

@KnVerey (Contributor) commented Mar 21, 2023:

/cc @justinsb
/sig cli

@k8s-ci-robot k8s-ci-robot requested a review from justinsb March 21, 2023 21:17
@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory labels Mar 21, 2023
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: KnVerey

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

  • Approvers can indicate their approval by writing /approve in a comment
  • Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Mar 21, 2023
@@ -1,75 +1,5 @@
<!--
**Note:** When your KEP is complete, all of these comment blocks should be removed.
@KnVerey (Contributor, Author) commented:

In accordance with this, I removed all the instructional comments for elements we've already completed.

- ConfigMap
- Secret
- custom resources, where the CRD registering them is labeled with `applyset.k8s.io/role/parent` (The value is currently ignored, but implementors should set an empty value to be forwards-compatible with future evolution of this convention.)
- custom resources, where the CRD registering them is labeled with `applyset.kubernetes.io/is-parent-type` (The value is currently ignored, but implementors should set it to "true" to be forwards-compatible with future evolution of this convention.)
@KnVerey (Contributor, Author) commented:

The docs PR currently instructs people to set it to "true". I think that is the logical default value given the name we ended up going with.
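
For illustration, a minimal sketch of a CRD opting its custom resource in as an ApplySet parent type. Only the `applyset.kubernetes.io/is-parent-type: "true"` label comes from the text above; the `widgets.example.com` CRD and everything else in it are hypothetical placeholders:

```yaml
# Hypothetical CRD; only the is-parent-type label is prescribed by the KEP text above.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
  labels:
    applyset.kubernetes.io/is-parent-type: "true"
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```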

The `applyset.kubernetes.io/id` label is what makes the object an ApplySet parent object. Its value MUST match the value of `applyset.kubernetes.io/part-of` on the member objects (see [ApplySet Member Objects](#labels)), and MUST use the format outlined in [ApplySet Identification](#applyset-identification).
Additionally, ApplySet parents MUST be labelled with:
Additionally, ApplySet parents MUST be annotated with:
@KnVerey (Contributor, Author) commented:

This might have been an uncaught mistake on my part, but in the real code we are currently checking an annotation. While we don't have any reason to query on this, it might be useful? Only to a limited degree, given that the value includes version info. I could go either way.
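
For reference, a hedged sketch of what a parent Secret could carry under the scheme discussed in this hunk. The id value, object names, and listed resources are placeholders; the real id format is defined in the ApplySet Identification section, which is not reproduced here:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-applyset        # hypothetical parent name
  namespace: myapp
  labels:
    # Marks this object as an ApplySet parent; member objects carry the matching
    # applyset.kubernetes.io/part-of label with the same value.
    applyset.kubernetes.io/id: <applyset-id>
  annotations:
    # Currently implemented as an annotation (the point discussed above).
    # Core-group resources shown without a group suffix; values are illustrative only.
    applyset.kubernetes.io/contains-group-resources: "configmaps,deployments.apps,services"
```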

If the contains-group-resources annotation is missing, kubectl will initially consider this an error. Based on feedback, we can consider either falling back on a (very slow) full-GR scan to populate the annotation (after confirming kubectl owns the parent), or pointing users to a separate command (similar in spirit to `fsck`) that will do so. If we do the latter, we will add warnings/suggestions to the main "apply" flow when we detect problems that might require a full-scan / discovery. We may extend this based on user-feedback from the alpha.
Based on performance feedback, we can also consider switching to the alternative `applyset.k8s.io/inventory` hint annotation. Even if we do not trust the GKNN list for deletion purposes (we cannot, as it is not the source of truth), it could be used to optimize certain specific cases, most notably the no-op scenario where the current set exactly matches the list.
Based on performance feedback, we can also consider switching to the alternative `applyset.kubernetes.io/inventory` hint annotation. Even if we do not trust the GKNN list for deletion purposes (we cannot, as it is not the source of truth), it could be used to optimize certain specific cases, most notably the no-op scenario where the current set exactly matches the list.
@KnVerey (Contributor, Author) commented:

Wdyt about adding perf testing and making a decision on whether to support this as a beta criterion?
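
If we do test the hint, a sketch of where it would sit on the parent; the value shown is purely a placeholder, since this excerpt does not define the encoding of the GKNN list:

```yaml
metadata:
  annotations:
    # Optimization hint only; not trusted as the source of truth for deletion.
    applyset.kubernetes.io/inventory: <GKNN list; exact encoding not specified in this excerpt>
```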

# Simple apply with prune, with cluster-scoped ApplySet
# The parent object will be the "myapp" Namespace itself.
kubectl apply -n myapp --prune --applyset=namespaces/myapp -f .
@KnVerey (Contributor, Author) commented:

We decided not to allow anything other than Secret/ConfigMap even during the last KEP update, and this seemingly got missed.

Server-side apply (SSA) will be used to create the Secret even if the main operation used client-side apply. The first request will disable conflict forcing, and output any errors encountered. If a conflict is encountered, a second request will be made with conflicts forced and an additional warning message. Taking over an existing Secret is allowed, as long as it does not have any conflicting fields (no special criteria vs subsequent operations). We may revisit this in the release in which the applyset verification and repair subcommand is introduced.
Since there is no obvious choice for a cluster-scoped built-in resource that could be similarly chosen as the default ApplySet kind, we will allow the kind to optionally be specified in the `--applyset` flag itself: `--applyset=mykind.v1.mygroup/name`. This is the same format used by `kubectl get`. When a GVR is specified in this manner, kubectl will look up the referenced object and attempt to use it as the parent (using SSA as described above for the Secret case). The referenced object MUST already exist on the cluster by the time the pruning phase begins (it may be created by the main apply operation), as it is not possible for kubectl to sanely construct arbitrary object types from scratch.
Since there is no obvious choice for a cluster-scoped built-in resource that could be similarly chosen as the default ApplySet kind, we will allow a GR to optionally be specified in the `--applyset` flag itself: `--applyset=[RESOURCE][.GROUP]/NAME`. This is one of the formats accepted by `kubectl get`. When a GR is specified in this manner, kubectl will look up the referenced object and attempt to use it as the parent (using SSA as described above for the Secret case). The referenced object MUST already exist on the cluster by the time the pruning phase begins (it may be created by the main apply operation), as it is not possible for kubectl to sanely construct arbitrary object types from scratch.
@KnVerey (Contributor, Author) commented:

Note that we never said that the required annotations must also pre-exist. The fact that our implementation currently expects that is a problem for the CR parent flow, and we should revisit that (and potentially be explicit about that somewhere around here).
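
To make the flow above concrete, a hedged sketch of a pre-existing custom resource used as the parent. The `widgets.example.com` resource and names are hypothetical; only the `--applyset=[RESOURCE][.GROUP]/NAME` flag format and the pre-existence requirement come from the hunk above:

```yaml
# The referenced parent must already exist on the cluster by the time the
# pruning phase begins (it may be created by the main apply operation).
apiVersion: example.com/v1
kind: Widget
metadata:
  name: my-widget             # hypothetical parent object
  namespace: myapp
# Usage sketch (RESOURCE.GROUP/NAME form, as accepted by kubectl get):
#   kubectl apply -n myapp --prune --applyset=widgets.example.com/my-widget -f .
```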

an error. During implementation of the alpha we will explore to what extent we can
optimize this overlap discovery, particularly in conjunction with
server-side-apply which does not require an object read before applying.
A richer apply tooling than kubectl does will likely establish watches
@KnVerey (Contributor, Author) commented:

Removing this, because this whole section is only about kubectl.

However, this is out of scope for kubectl and thus we will likely have to
optimize differently for kubectl. In the worst case, we will have to fetch
the objects before applying (with a set of label-filtered LIST requests),
we will explore to what extent that can be amortized over other kubectl
@KnVerey (Contributor, Author) commented:

Being more familiar with the requests made during CSA and SSA, I don't think there's any way to amortize it in SSA.
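
For context, the label-filtered LIST requests mentioned in this hunk would select on the member label; a minimal sketch, with the id value as a placeholder:

```yaml
# Every member object carries this label; computing the prune set is a set of
# LIST requests filtered on it.
metadata:
  labels:
    applyset.kubernetes.io/part-of: <applyset-id>   # must match the parent's applyset.kubernetes.io/id
# Illustrative selector:
#   kubectl get deployments,services -n myapp -l applyset.kubernetes.io/part-of=<applyset-id>
```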

a set of objects created with the existing pruning mechanism).

We will not support "adoption" of existing ApplySets initially, other than
by re-applying "over the top". Based on user feedback, we may require a flag
@KnVerey (Contributor, Author) commented:

Note the change from "flag" to "subcommand or flag on repair command" -- same reasoning as above about overloading apply with flags for edge cases.

In the alpha scope, we will explore suitable "migration" tooling for moving
from existing `--prune` objects. Note that migration is not trivial, in that
different users may expect different behaviors with regard to the GKs selected
or the treatment of objects having/lacking the `last-application-configuration`
@KnVerey (Contributor, Author) commented:

I'm not sure what we meant here. The migration command will be subject to the same gotchas in set selection as the old one, but I think all we can do is ask for the same inputs they used on their own command, and go with the results. We could add interactive confirmation maybe?

```yaml
applyset.k8s.io/contains-group-kinds: <kind>.<group>[,]` # OPTIONAL
applyset.kubernetes.io/contains-group-resources: <resource>.<group>[,]` # OPTIONAL
@KnVerey (Contributor, Author) commented:

This is what we actually did in the alpha, but we may want to go back to the original proposal: #3661 (comment)
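
For example, under the `<resource>.<group>[,]` format shown in the hunk above (and assuming core-group resources are written without a group suffix), a populated value might look like:

```yaml
# Illustrative resources only.
applyset.kubernetes.io/contains-group-resources: "configmaps,deployments.apps,secrets"
```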

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 14, 2023
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 13, 2023
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
