From d18a7f076eeb25de5f9c079448c0495e55127f6c Mon Sep 17 00:00:00 2001 From: Michael Taufen Date: Fri, 25 May 2018 16:30:11 -0700 Subject: [PATCH 1/3] update dynamic kubelet config docs for v1.11 --- .../administer-cluster/reconfigure-kubelet.md | 388 +++++++----------- 1 file changed, 155 insertions(+), 233 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index b14f641a529df..0af4a83dbe8f6 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -7,28 +7,27 @@ content_template: templates/task --- {{% capture overview %}} -{{< feature-state state="alpha" >}} -As of Kubernetes 1.8, the new -[Dynamic Kubelet Configuration](https://github.com/kubernetes/features/issues/281) -feature is available in alpha. This allows you to change the configuration of -Kubelets in a live Kubernetes cluster via first-class Kubernetes concepts. -Specifically, this feature allows you to configure individual Nodes' Kubelets -via ConfigMaps. +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + +The [Dynamic Kubelet Configuration](https://github.com/kubernetes/features/issues/281) +feature allows you to change the configuration of each Kubelet in a live Kubernetes +cluster by deploying a ConfigMap and configuring each Node to use it. **Warning:** All Kubelet configuration parameters may be changed dynamically, but not all parameters are safe to change dynamically. This feature is intended for system experts who have a strong understanding of how configuration changes -will affect behavior. No documentation currently exists which plainly lists -"safe to change" fields, but we plan to add it before this feature graduates -from alpha. +will affect behavior. In general, you should always carefully test config changes +on a small set of nodes before rolling them out to your entire cluster. +Additional per-config-field advice can be found in the inline `KubeletConfiguration` +[type documentation](https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go). {{% /capture %}} {{% capture prerequisites %}} -- A live Kubernetes cluster with both Master and Node at v1.8 or higher must be -running, with the `DynamicKubeletConfig` feature gate enabled and the Kubelet's -`--dynamic-config-dir` flag set to a writable directory on the Node. +- A live Kubernetes cluster with both Master and Node at v1.11 or higher must +be running and the Kubelet's `--dynamic-config-dir` flag must be set to a +writable directory on the Node. This flag must be set to enable Dynamic Kubelet Configuration. -- The kubectl command-line tool must be also v1.8 or higher, and must be +- The kubectl command-line tool must be v1.11 or higher, and must be configured to communicate with the cluster. {{% /capture %}} @@ -57,11 +56,10 @@ and is overridden by command-line flags. Unspecified values in the new configura will receive default values appropriate to the configuration version (e.g. `kubelet.config.k8s.io/v1beta1`), unless overridden by flags. -The status of the Node's Kubelet configuration is reported via the `KubeletConfigOK` -condition in the Node status. Once you have updated a Node to use the new -ConfigMap, you can observe this condition to confirm that the Node is using the -intended configuration. 
-A table describing the possible conditions can be found
-at the end of this article.
+The status of the Node's Kubelet configuration is reported via
+`Node.Status.Config`. Once you have updated a Node to use the new
+ConfigMap, you can observe this status to confirm that the Node is using the
+intended configuration.

This document describes editing Nodes using `kubectl edit`.
There are other ways to modify a Node's spec, including `kubectl patch`, for
@@ -70,16 +68,17 @@ example, which facilitate scripted workflows.

This document only describes a single Node consuming each ConfigMap. Keep in
mind that it is also valid for multiple Nodes to consume the same ConfigMap.

-### Node Authorizer Workarounds
+**Warning:** Note that while it is *possible* to change the configuration by
+updating the ConfigMap in-place, this will cause all Kubelets configured with
+that ConfigMap to update simultaneously. It is much safer to treat ConfigMaps
+as immutable by convention, aided by `kubectl`'s `--append-hash` option,
+and incrementally roll out updates to `Node.Spec.ConfigSource`.

-The Node Authorizer does not yet pay attention to which ConfigMaps are assigned
-to which Nodes. If you currently use the Node authorizer, your Kubelets will not
-be automatically granted permission to download their respective ConfigMaps.
+### Note Regarding the Node Authorizer

-The temporary workaround used in this document is to manually create the RBAC
-Roles and RoleBindings for each ConfigMap. The Node Authorizer will be extended
-before the Dynamic Kubelet Configuration feature graduates from alpha, so doing
-this in production should never be necessary.
+Old versions of this document required users to manually create RBAC rules
+for Nodes to access their assigned ConfigMaps. The Node Authorizer now
+automatically configures these rules, so this step is no longer necessary.

### Generating a file that contains the current configuration

@@ -90,12 +89,13 @@ and debug issues. The compromise, however, is that you must start with
knowledge of the existing configuration to ensure that you only change the
fields you intend to change.

-In the future, the Kubelet will be bootstrapped from a file on disk
+In the future, the Kubelet will be bootstrapped from just a file on disk
(see [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)),
-and you will simply edit a copy of this file (which, as a best practice, should
-live in version control) while creating the first Kubelet ConfigMap. Today,
-however, the Kubelet is still bootstrapped with command-line flags. Fortunately,
-there is a dirty trick you can use to generate a config file containing a Node's
+and you will simply edit a copy of this file (which, as a best practice, should
+live in version control) while creating the first Kubelet ConfigMap. Today,
+however, the Kubelet is bootstrapped with a combination of this file and command-line flags
+that can override the configuration in the file.
+Fortunately, there is a dirty trick you can use to generate a config file containing a Node's
current configuration. The trick involves accessing the Kubelet server's `configz`
endpoint via the kubectl proxy. This endpoint, in its current implementation, is
intended to be used only as a debugging aid, which is part of why this is a
@@ -152,39 +152,14 @@ metadata:
  uid: 946d785e-998a-11e7-a8dd-42010a800006
```

-Note that the configuration data must appear under the ConfigMap's
-`kubelet` key.
-
We create the ConfigMap in the `kube-system` namespace, which is appropriate
because this ConfigMap configures a Kubernetes system component - the Kubelet.
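+
+As an optional sanity check before moving on, you can confirm that the new
+ConfigMap actually exists in `kube-system`; the `my-node-config` prefix in
+this sketch simply assumes the example name used above:
+
+```
+$ kubectl -n kube-system get configmaps | grep my-node-config
+```
+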
The `--append-hash` option appends a short checksum of the ConfigMap contents to the name. This is convenient for an edit->push workflow, as it will -automatically, yet deterministically, generate new names for new ConfigMaps. - -We use the `-o yaml` output format so that the name, namespace, and uid are all -reported following creation. We will need these in the next step. We will refer -to the name as CONFIG_MAP_NAME and the uid as CONFIG_MAP_UID. - -### Authorize your Node to read the new ConfigMap - -Now that you've created a new ConfigMap, you need to authorize your node to -read it. First, create a Role for your new ConfigMap with the -following commands: - -``` -$ export CONFIG_MAP_NAME=name-from-previous-output -$ kubectl -n kube-system create role ${CONFIG_MAP_NAME}-reader --verb=get --resource=configmap --resource-name=${CONFIG_MAP_NAME} -``` - -Next, create a RoleBinding to associate your Node with the new Role: - -``` -$ kubectl -n kube-system create rolebinding ${CONFIG_MAP_NAME}-reader --role=${CONFIG_MAP_NAME}-reader --user=system:node:${NODE_NAME} -``` - -Once the Node Authorizer is updated to do this automatically, you will -be able to skip this step. +automatically, yet deterministically, generate new names for new ConfigMaps. +We will refer to the name that includes this generated hash as +`CONFIG_MAP_NAME` below. ### Set the Node to use the new configuration @@ -199,46 +174,79 @@ Once in your editor, add the following YAML under `spec`: ``` configSource: - configMapRef: + configMap: name: CONFIG_MAP_NAME namespace: kube-system - uid: CONFIG_MAP_UID + kubeletConfigKey: kubelet ``` -Be sure to specify all three of `name`, `namespace`, and `uid`. +Be sure to specify all three of `name`, `namespace`, and `kubeletConfigKey`. +The last parameter tells the Kubelet which key of the ConfigMap it can find +its config in. ### Observe that the Node begins using the new configuration -Retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and look for the -`KubeletConfigOK` condition in `status.conditions`. You should see the message -`Using current (UID: CONFIG_MAP_UID)` when the Kubelet starts using the new -configuration. +Retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and inspect +`Node.Status.Config`. You should see the config sources corresponding to the `active`, +`assigned`, and `lastKnownGood` configurations reported in the status. The `active` +configuration is the version the Kubelet is currently running with, the `assigned` +configuration is the latest version the Kubelet has resolved based on +`Node.Spec.ConfigSource`, and the `lastKnownGood` configuration is the version the +Kubelet will fall back to if an invalid config is assigned in `Node.Spec.ConfigSource`. + +You might not see `lastKnownGood` appear in the status if it is set to its default value, +the local config deployed with the node. The status will update `lastKnownGood` to +match a valid `assigned` config after the Kubelet becomes comfortable with the config. +The details of how the Kubelet determines a config should become the `lastKnownGood` are +not guaranteed by the API, though it may be useful, for debugging purposes, to know that +this is presently implemented as a 10-minute grace period. 
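+
+If you script your config rollouts, one way to wait for the Kubelet to finish
+switching over is to poll until the `active` source matches the `assigned`
+source. The loop below is only a sketch: it assumes `jq` is installed and that
+no error occurs, so in real use you would also add a timeout:
+
+```
+$ until [ "$(kubectl get no ${NODE_NAME} -o json | jq -c '.status.config.active')" = "$(kubectl get no ${NODE_NAME} -o json | jq -c '.status.config.assigned')" ]; do sleep 5; done
+```
+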
For convenience, you can use the following command (using `jq`) to filter down -to the `KubeletConfigOK` condition: - -``` -$ kubectl get no ${NODE_NAME} -o json | jq '.status.conditions|map(select(.type=="KubeletConfigOK"))' -[ - { - "lastHeartbeatTime": "2017-09-20T18:08:29Z", - "lastTransitionTime": "2017-09-20T18:08:17Z", - "message": "using current: /api/v1/namespaces/kube-system/configmaps/my-node-config-gkt4c2m4b2", - "reason": "passing all checks", - "status": "True", - "type": "KubeletConfigOK" +to the config status: + +``` +$ kubectl get no ${NODE_NAME} -o json | jq '.status.config' +{ + "active": { + "configMap": { + "kubeletConfigKey": "kubelet", + "name": "my-node-config-9mbkccg2cc", + "namespace": "kube-system", + "resourceVersion": "1326", + "uid": "705ab4f5-6393-11e8-b7cc-42010a800002" + } + }, + "assigned": { + "configMap": { + "kubeletConfigKey": "kubelet", + "name": "my-node-config-9mbkccg2cc", + "namespace": "kube-system", + "resourceVersion": "1326", + "uid": "705ab4f5-6393-11e8-b7cc-42010a800002" + } + }, + "lastKnownGood": { + "configMap": { + "kubeletConfigKey": "kubelet", + "name": "my-node-config-9mbkccg2cc", + "namespace": "kube-system", + "resourceVersion": "1326", + "uid": "705ab4f5-6393-11e8-b7cc-42010a800002" + } } -] +} + ``` -If something goes wrong, you may see one of several different error conditions, -detailed in the table of KubeletConfigOK conditions, below. When this happens, you -should check the Kubelet's log for more details. +If something goes wrong, the Kubelet will report any configuration related errors +in `Node.Status.Config.Error`. You may see one of several possible errors, which +are detailed in a table at the end of this article. If you see any of these errors, +you can search for the same error message in the Kubelet's log for additional details. ### Edit the configuration file again To change the configuration again, we simply repeat the above workflow. -Try editing the `kubelet` file, changing the previously changed parameter to a +Try editing the `kubelet_configz_${NODE_NAME}` file, changing the previously changed parameter to a new value. ### Push the newly edited configuration to the control plane @@ -247,209 +255,123 @@ Push the new configuration to the control plane in a new ConfigMap with the following command: ``` -$ kubectl create configmap my-node-config --namespace=kube-system --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml +$ kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml ``` This new ConfigMap will get a new name, as we have changed the contents. -We will refer to the new name as NEW_CONFIG_MAP_NAME and the new uid -as NEW_CONFIG_MAP_UID. - -### Authorize your Node to read the new ConfigMap - -Now that you've created a new ConfigMap, you need to authorize your node to -read it. First, create a Role for your new ConfigMap with the -following commands: - -``` -$ export NEW_CONFIG_MAP_NAME=name-from-previous-output -$ kubectl -n kube-system create role ${NEW_CONFIG_MAP_NAME}-reader --verb=get --resource=configmap --resource-name=${NEW_CONFIG_MAP_NAME} -``` - -Next, create a RoleBinding to associate your Node with the new Role: - -``` -$ kubectl -n kube-system create rolebinding ${NEW_CONFIG_MAP_NAME}-reader --role=${NEW_CONFIG_MAP_NAME}-reader --user=system:node:${NODE_NAME} -``` - -Once the Node Authorizer is updated to do this automatically, you will -be able to skip this step. 
+We will refer to the new name as `NEW_CONFIG_MAP_NAME`. ### Configure the Node to use the new configuration -Once more, edit the Node's `spec.configSource` with -`kubectl edit node ${NODE_NAME}`. Your new `spec.configSource` should look like -the following, with `name` and `uid` substituted as necessary: +Once more, edit `Node.Spec.ConfigSource` via `kubectl edit node ${NODE_NAME}`. +Your new `Node.Spec.ConfigSource` should look like the following, +with `${NEW_CONFIG_MAP_NAME}` substituted as necessary: ``` configSource: - configMapRef: + configMap: name: ${NEW_CONFIG_MAP_NAME} namespace: kube-system - uid: ${NEW_CONFIG_MAP_UID} + kubeletConfigKey: kubelet ``` ### Observe that the Kubelet is using the new configuration Once more, retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and -look for the `KubeletConfigOK` condition in `status.conditions`. You should see the message -`using current: /api/v1/namespaces/kube-system/configmaps/${NEW_CONFIG_MAP_NAME}` when the Kubelet starts using the -new configuration. - -### Deauthorize your Node from reading the old ConfigMap - -Once you know your Node is using the new configuration and are confident that -the new configuration has not caused any problems, it is a good idea to -deauthorize the node from reading the old ConfigMap. Run the following -commands to remove the RoleBinding and Role: - -``` -$ kubectl -n kube-system delete rolebinding ${CONFIG_MAP_NAME}-reader -$ kubectl -n kube-system delete role ${CONFIG_MAP_NAME}-reader -``` - -Note that this does not necessarily prevent the Node from reverting to the old -configuration, as it may locally cache the old ConfigMap for an indefinite -period of time. - -You may optionally also choose to remove the old ConfigMap: - -``` -$ kubectl -n kube-system delete configmap ${CONFIG_MAP_NAME} -``` - -Once the Node Authorizer is updated to do this automatically, you will -be able to skip this step. +look for a `Node.Status.Config` that reports the new configuration as `assigned` +and `active`, with no errors. ### Reset the Node to use its local default configuration Finally, if you wish to reset the Node to use the configuration it was provisioned with, simply edit the Node with `kubectl edit node ${NODE_NAME}` and -remove the `spec.configSource` subfield. +remove the `Node.Spec.ConfigSource` field. ### Observe that the Node is using its local default configuration -After removing this subfield, you should eventually observe that the KubeletConfigOK -condition's message reverts to `using current: local`. +After removing this subfield, you should eventually observe that `Node.Status.Config` +has become empty, as all config sources have been reset to `nil` (indicating the local +default config is `assigned`, `active`, and `lastKnownGood`), and no error is reported. -### Deauthorize your Node from reading the old ConfigMap +{{% /capture %}} -Once you know your Node is using the default configuration again, it is a good -idea to deauthorize the node from reading the old ConfigMap. Run the following -commands to remove the RoleBinding and Role: +{{% capture discussion %}} +## Kubectl Patch Example +As mentioned above, there are many ways to change a Node's configSource. 
+Here is an example command that uses `kubectl patch`: ``` -$ kubectl -n kube-system delete rolebinding ${NEW_CONFIG_MAP_NAME}-reader -$ kubectl -n kube-system delete role ${NEW_CONFIG_MAP_NAME}-reader +kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}" ``` -Note that this does not necessarily prevent the Node from reverting to the old -ConfigMap, as it may locally cache the old ConfigMap for an indefinite -period of time. - -You may optionally also choose to remove the old ConfigMap: - -``` -$ kubectl -n kube-system delete configmap ${NEW_CONFIG_MAP_NAME} -``` +## Understanding how the Kubelet checkpoints config -Once the Node Authorizer is updated to do this automatically, you will -be able to skip this step. +When a new config is assigned to the Node, the Kubelet downloads and unpacks the +config payload as a set of files on local disk. The Kubelet also records metadata +that locally tracks the assigned and last-known-good config sources, so that the +Kubelet knows which config to use across restarts, even if the API server becomes +unavailable. After checkpointing a config and the relevant metadata, the Kubelet +will exit if the assigned config has changed. When the Kubelet is restarted by the +babysitter process, it will read the new metadata, and use the new config. -{{% /capture %}} +The recorded metadata is fully resolved, meaning that it contains all necessary +information to choose a specific config version - typically a `UID` and `ResourceVersion`. +This is in contrast to `Node.Spec.ConfigSource`, where the intended config is declared +via the idempotent `namespace/name` that identifies the target ConfigMap; the Kubelet +tries to use the latest version of this ConfigMap. -{{% capture discussion %}} -## Kubectl Patch Example -As mentioned above, there are many ways to change a Node's configSource. -Here is an example command that uses `kubectl patch`: +It can sometimes be useful to inspect the Kubelet's config metadata and checkpoints +when debugging a Node. The structure of the Kubelet's checkpointing directory is as follows: ``` -kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMapRef\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"uid\":\"${CONFIG_MAP_UID}\"}}}}" +- --dynamic-config-dir (root for managing dynamic config) +| - meta + | - assigned (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the assigned config) + | - last-known-good (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the last-known-good config) +| - checkpoints + | - uid1 (dir for versions of object identified by uid1) + | - resourceVersion1 (dir for unpacked files from resourceVersion1 of object with uid1) + | - ... + | - ... ``` -## Understanding KubeletConfigOK Conditions +## Understanding Node.Status.Config.Error messages -The following table describes several of the `KubeletConfigOK` Node conditions you -might encounter in a cluster that has Dynamic Kubelet Config enabled. If you -observe a condition with `status=False`, you should check the Kubelet log for -more error details by searching for the message or reason text. +The following table describes the error messages you might encounter +when using Dynamic Kubelet Config. You can search for the same text +as the error message in the Kubelet log for additional details +on the error. - -
-<table>
-<tr>
-<th>Possible Messages</th>
-<th>Possible Reasons</th>
-<th>Status</th>
-</tr>
+<table>
+<tr>
+<th>Error Message</th>
+<th>Possible Causes</th>
+</tr>
+<tr>
+<td>failed to load config, see Kubelet log for details</td>
+<td>The Kubelet likely could not parse the downloaded config payload, or encountered a filesystem error attempting to load the payload from disk.</td>
+</tr>
-<tr>
-<td>using current: local</td>
-<td>when the config source is nil, the Kubelet uses its local config</td>
-<td>True</td>
-</tr>
+<tr>
+<td>failed to validate config, see Kubelet log for details</td>
+<td>The configuration in the payload, combined with any command-line flag overrides, and the sum of feature gates from flags, the config file, and the remote payload, was determined to be invalid by the Kubelet.</td>
+</tr>
-<tr>
-<td>using current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</td>
-<td>passing all checks</td>
-<td>True</td>
-</tr>
+<tr>
+<td>invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil</td>
+<td>Since Node.Spec.ConfigSource is validated by the API server to contain at least one non-nil subfield, this likely means that the Kubelet is older than the API server and does not recognize a newer source type.</td>
+</tr>
-<tr>
-<td>using last-known-good: local</td>
-<td><ul>
-<li>failed to load current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
-<li>failed to parse current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
-<li>failed to validate current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
-</ul></td>
-<td>False</td>
-</tr>
+<tr>
+<td>failed to sync: failed to download config, see Kubelet log for details</td>
+<td>The Kubelet could not download the config. It is possible that Node.Spec.ConfigSource could not be resolved to a concrete API object, or that network errors disrupted the download attempt. The Kubelet will retry the download when in this error state.</td>
+</tr>
-<tr>
-<td>using last-known-good: /api/v1/namespaces/${LAST_KNOWN_GOOD_CONFIG_MAP_NAMESPACE}/configmaps/${LAST_KNOWN_GOOD_CONFIG_MAP_NAME}</td>
-<td><ul>
-<li>failed to load current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
-<li>failed to parse current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
-<li>failed to validate current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME}</li>
-</ul></td>
-<td>False</td>
-</tr>
+<tr>
+<td>failed to sync: internal failure, see Kubelet log for details</td>
+<td>The Kubelet encountered some internal problem and failed to update its config as a result. Examples include filesystem errors and reading objects from the internal informer cache.</td>
+</tr>
-<tr>
-<td>
-<p>The reasons in the next column could potentially appear for any of the above messages.</p>
-<p>This condition indicates that the Kubelet is having trouble reconciling `spec.configSource`, and thus no change to the in-use configuration has occurred.</p>
-<p>The "failed to sync" reasons are specific to the failure that occurred, and the next column does not necessarily contain all possible failure reasons.</p>
-failed to sync, reason:
-</td>
-<td><ul>
-<li>failed to read Node from informer object cache</li>
-<li>failed to reset to local config</li>
-<li>invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil</li>
-<li>invalid ObjectReference, all of UID, Name, and Namespace must be specified</li>
-<li>invalid ConfigSource.ConfigMapRef.UID: ${UID} does not match ${API_PATH}.UID: ${UID_OF_CONFIG_MAP_AT_API_PATH}</li>
-<li>failed to determine whether object ${API_PATH} with UID ${UID} was already checkpointed</li>
-<li>failed to download ConfigMap with name ${NAME} from namespace ${NAMESPACE}</li>
-<li>failed to save config checkpoint for object ${API_PATH} with UID ${UID}</li>
-<li>failed to set current config checkpoint to local config</li>
-<li>failed to set current config checkpoint to object ${API_PATH} with UID ${UID}</li>
-</ul></td>
-<td>False</td>
-</tr>
+<tr>
+<td>internal failure, see Kubelet log for details</td>
+<td>The Kubelet encountered some internal problem while manipulating config, outside of the configuration sync loop.</td>
+</tr>
-</table>
+</table>
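+
+To pull out just the error string when troubleshooting, the same `jq` approach
+used earlier in this article also works on this field:
+
+```
+$ kubectl get no ${NODE_NAME} -o json | jq '.status.config.error'
+```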
-{{% /capture %}} - - +{{% /capture %}} From bf7e6659bb08b7b0d1ff9037a7af0836cea2e96b Mon Sep 17 00:00:00 2001 From: Misty Stanley-Jones Date: Thu, 7 Jun 2018 14:51:56 -0700 Subject: [PATCH 2/3] Substantial copyedit --- .../administer-cluster/reconfigure-kubelet.md | 258 ++++++++---------- 1 file changed, 119 insertions(+), 139 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index 0af4a83dbe8f6..56b727f163c90 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -9,26 +9,26 @@ content_template: templates/task {{% capture overview %}} {{< feature-state for_k8s_version="v1.11" state="beta" >}} -The [Dynamic Kubelet Configuration](https://github.com/kubernetes/features/issues/281) -feature allows you to change the configuration of each Kubelet in a live Kubernetes +[Dynamic Kubelet Configuration](https://github.com/kubernetes/features/issues/281) +allows you to change the configuration of each Kubelet in a live Kubernetes cluster by deploying a ConfigMap and configuring each Node to use it. -**Warning:** All Kubelet configuration parameters may be changed dynamically, -but not all parameters are safe to change dynamically. This feature is intended -for system experts who have a strong understanding of how configuration changes -will affect behavior. In general, you should always carefully test config changes -on a small set of nodes before rolling them out to your entire cluster. -Additional per-config-field advice can be found in the inline `KubeletConfiguration` +{{< warning >}} +**Warning:** All Kubelet configuration parameters can be changed dynamically, +but this is unsafe for some parameters. Before deciding to change a parameter +dynamically, you need a strong understanding of how that change will affect your +cluster's behavior. Always carefully test configuration changes on a small set +of nodes before rolling them out cluster-wide. Advice on configuring specific +fields is available in the inline `KubeletConfiguration` [type documentation](https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go). +{{< /warning >}} {{% /capture %}} {{% capture prerequisites %}} -- A live Kubernetes cluster with both Master and Node at v1.11 or higher must -be running and the Kubelet's `--dynamic-config-dir` flag must be set to a -writable directory on the Node. -This flag must be set to enable Dynamic Kubelet Configuration. -- The kubectl command-line tool must be v1.11 or higher, and must be -configured to communicate with the cluster. +- Kubernetes v1.11 or higher on both the Master and the Nodes +- kubectl v1.11 or higher, configured to communicate with the cluster +- The Kubelet's `--dynamic-config-dir` flag must be set to a writable + directory on the Node. {{% /capture %}} {{% capture steps %}} @@ -68,17 +68,19 @@ example, which facilitate scripted workflows. This document only describes a single Node consuming each ConfigMap. Keep in mind that it is also valid for multiple Nodes to consume the same ConfigMap. 
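+
+For example, pointing several Nodes at one ConfigMap can be scripted; in this
+sketch the node names are hypothetical, and `CONFIG_MAP_NAME` is the generated
+ConfigMap name described later in this document:
+
+```bash
+for NODE in node-1 node-2; do
+  kubectl patch node "${NODE}" -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}"
+done
+```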
-**Warning:** Note that while it is *possible* to change the configuration by -updating the ConfigMap in-place, this will cause all Kubelets configured with +{{< warning >}} +**Warning:** While it is *possible* to change the configuration by +updating the ConfigMap in-place, this causes all Kubelets configured with that ConfigMap to update simultaneously. It is much safer to treat ConfigMaps as immutable by convention, aided by `kubectl`'s `--append-hash` option, and incrementally roll out updates to `Node.Spec.ConfigSource`. +{{< /warning >}} -### Note Regarding the Node Authorizer +### Automatic RBAC rules for Node Authorizer -Old versions of this document required users to manually create RBAC rules -for Nodes to access their assigned ConfigMaps. The Node Authorizer now -automatically configures these rules, so this step is no longer necessary. +Previously, you were required to manually create RBAC rules +to allow Nodes to access their assigned ConfigMaps. The Node Authorizer now +automatically configures these rules. ### Generating a file that contains the current configuration @@ -89,55 +91,58 @@ and debug issues. The compromise, however, is that you must start with knowledge of the existing configuration to ensure that you only change the fields you intend to change. -In the future, the Kubelet will be bootstrapped from just a file on disk +Ideally, the Kubelet would be bootstrapped from a file on disk +and you could edit this file (which could also be version-controlled), +to create the first Kubelet ConfigMap (see [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)), -and you will simply edit a copy of this file (which, as a best practice, should -live in version control) while creating the first Kubelet ConfigMap. Today, -however, the Kubelet is bootstrapped with a combination of this file and command-line flags +Currently, the Kubelet is bootstrapped with **a combination of this file and command-line flags** that can override the configuration in the file. -Fortunately, there is a dirty trick you can use to generate a config file containing a Node's -current configuration. The trick involves accessing the Kubelet server's `configz` +As a workaround, you can use to generate a config file containing a Node's +current configuration by accessing the Kubelet server's `configz` endpoint via the kubectl proxy. This endpoint, in its current implementation, is -intended to be used only as a debugging aid, which is part of why this is a -dirty trick. The endpoint may be improved in the future, but until then -it should not be relied on for production scenarios. -This trick also requires the `jq` command to be installed on your machine, -for unpacking and editing the JSON response from the endpoint. - -Do the following to generate the file: - -1. Pick a Node to reconfigure. We will refer to this Node's name as NODE_NAME. -2. Start the kubectl proxy in the background with `kubectl proxy --port=8001 &` +intended to be used only as a debugging aid. Do not rely on the behavior of this +endpoint for production scenarios. +The `jq` command needs to be installed on your system, to unpack and edit the +JSON response from the endpoint. + +#### Generate the configuration file + +1. Pick a Node to reconfigure. In this example, this Node is named `NODE_NAME`. +2. Start the kubectl proxy in the background using the following command: + ```bash + kubectl proxy --port=8001 & + ``` 3. 
Run the following command to download and unpack the configuration from the -configz endpoint: + `configz` endpoint. The command is long, so be careful when copying and + pasting. -``` -$ export NODE_NAME=the-name-of-the-node-you-are-reconfiguring -$ curl -sSL http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} -``` + ```bash + NODE_NAME=the-name-of-the-node-you-are-reconfiguring; curl -sSL http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} + ``` -Note that we have to manually add the `kind` and `apiVersion` to the downloaded -object, as these are not reported by the configz endpoint. This is one of the -limitations of the endpoint. +{{< note >}} +You need to manually add the `kind` and `apiVersion` to the downloaded +object, because they are not reported by the `configz` endpoint. +{{< /note >}} -### Edit the configuration file +#### Edit the configuration file -Using your editor of choice, change one of the parameters in the -`kubelet_configz_${NODE_NAME}` file from the previous step. A QPS parameter, -`eventRecordQPS` for example, is a good candidate. +Using a text editor, change one of the parameters in the +file generated by the previous procedure. For example, you +might add the QPS parameter `eventRecordQPS`. -### Push the configuration file to the control plane +#### Push the configuration file to the control plane Push the edited configuration file to the control plane with the following command: -``` -$ kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml +```bash +kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml ``` -You should see a response similar to: +This is an example of a valid response: -``` +```none apiVersion: v1 data: kubelet: | @@ -152,27 +157,27 @@ metadata: uid: 946d785e-998a-11e7-a8dd-42010a800006 ``` -We create the ConfigMap in the `kube-system` namespace, which is appropriate -because this ConfigMap configures a Kubernetes system component - the Kubelet. +The ConfigMap is created in the `kube-system` namespace because this +ConfigMap configures a Kubelet, which is Kubernetes system component. The `--append-hash` option appends a short checksum of the ConfigMap contents -to the name. This is convenient for an edit->push workflow, as it will -automatically, yet deterministically, generate new names for new ConfigMaps. -We will refer to the name that includes this generated hash as -`CONFIG_MAP_NAME` below. +to the name. This is convenient for an edit-then-push workflow, because it +automatically, yet deterministically, generates new names for new ConfigMaps. +The name that includes this generated hash is referred to as `CONFIG_MAP_NAME` +in the following examples. 
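+
+If you prefer to capture the generated name in a shell variable rather than
+copying it out of the YAML output, one possible approach is to create the
+ConfigMap with `-o name` (which prints `configmap/<name>`) instead of `-o yaml`:
+
+```bash
+CONFIG_MAP_NAME=$(kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o name | cut -d / -f 2)
+echo ${CONFIG_MAP_NAME}
+```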

-### Set the Node to use the new configuration
+#### Set the Node to use the new configuration

Edit the Node's reference to point to the new ConfigMap with the
following command:

-```
+```bash
kubectl edit node ${NODE_NAME}
```

-Once in your editor, add the following YAML under `spec`:
+In your text editor, add the following YAML under `spec`:

-```
+```yaml
configSource:
    configMap:
        name: CONFIG_MAP_NAME
@@ -180,32 +185,38 @@ configSource:
    kubeletConfigKey: kubelet
```

-Be sure to specify all three of `name`, `namespace`, and `kubeletConfigKey`.
-The last parameter tells the Kubelet which key of the ConfigMap it can find
-its config in.
+You must specify all three of `name`, `namespace`, and `kubeletConfigKey`.
+The `kubeletConfigKey` parameter shows the Kubelet which key of the ConfigMap
+contains its config.

-### Observe that the Node begins using the new configuration
+#### Observe that the Node begins using the new configuration

-Retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and inspect
-`Node.Status.Config`. You should see the config sources corresponding to the `active`,
-`assigned`, and `lastKnownGood` configurations reported in the status. The `active`
-configuration is the version the Kubelet is currently running with, the `assigned`
-configuration is the latest version the Kubelet has resolved based on
-`Node.Spec.ConfigSource`, and the `lastKnownGood` configuration is the version the
-Kubelet will fall back to if an invalid config is assigned in `Node.Spec.ConfigSource`.
+Retrieve the Node using the `kubectl get node ${NODE_NAME} -o yaml` command and inspect
+`Node.Status.Config`. The config sources corresponding to the `active`,
+`assigned`, and `lastKnownGood` configurations are reported in the status.
+
+- The `active` configuration is the version the Kubelet is currently running with.
+- The `assigned` configuration is the latest version the Kubelet has resolved based on
+  `Node.Spec.ConfigSource`.
+- The `lastKnownGood` configuration is the version the
+  Kubelet will fall back to if an invalid config is assigned in `Node.Spec.ConfigSource`.
+
+The `lastKnownGood` configuration might not be present if it is set to its default value,
the local config deployed with the node. The status will update `lastKnownGood` to
match a valid `assigned` config after the Kubelet becomes comfortable with the config.
The details of how the Kubelet determines a config should become the `lastKnownGood` are
-not guaranteed by the API, though it may be useful, for debugging purposes, to know that
-this is presently implemented as a 10-minute grace period.
+not guaranteed by the API, but this is currently implemented as a 10-minute grace period.

-For convenience, you can use the following command (using `jq`) to filter down
+You can use the following command (using `jq`) to filter down
to the config status:

+```bash
+kubectl get no ${NODE_NAME} -o json | jq '.status.config'
```
-$ kubectl get no ${NODE_NAME} -o json | jq '.status.config'
+
+The following is an example response:
+
+```json
{
  "active": {
    "configMap": {
@@ -238,81 +249,51 @@ $ kubectl get no ${NODE_NAME} -o json | jq '.status.config'
```

-If something goes wrong, the Kubelet will report any configuration related errors
-in `Node.Status.Config.Error`. You may see one of several possible errors, which
-are detailed in a table at the end of this article.
If you see any of these errors, -you can search for the same error message in the Kubelet's log for additional details. - -### Edit the configuration file again - -To change the configuration again, we simply repeat the above workflow. -Try editing the `kubelet_configz_${NODE_NAME}` file, changing the previously changed parameter to a -new value. - -### Push the newly edited configuration to the control plane - -Push the new configuration to the control plane in a new ConfigMap with the -following command: - -``` -$ kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml -``` - -This new ConfigMap will get a new name, as we have changed the contents. -We will refer to the new name as `NEW_CONFIG_MAP_NAME`. - -### Configure the Node to use the new configuration - -Once more, edit `Node.Spec.ConfigSource` via `kubectl edit node ${NODE_NAME}`. -Your new `Node.Spec.ConfigSource` should look like the following, -with `${NEW_CONFIG_MAP_NAME}` substituted as necessary: - -``` -configSource: - configMap: - name: ${NEW_CONFIG_MAP_NAME} - namespace: kube-system - kubeletConfigKey: kubelet -``` +If an error occurs, the Kubelet reports it in the `Node.Status.Config.Error` +structure. Possible errors are listed in +[Understanding Node.Status.Config.Error messages](#understanding-node-status-config-error-messages). +If you see an error, you can search for it in the Kubelet's log for additional +details. -### Observe that the Kubelet is using the new configuration +#### Make more changes -Once more, retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and -look for a `Node.Status.Config` that reports the new configuration as `assigned` -and `active`, with no errors. +Follow the workflow above to make more changes and push them again. Each +time you change the ConfigMap's contents, it gets a new name. -### Reset the Node to use its local default configuration +#### Reset the Node to use its local default configuration -Finally, if you wish to reset the Node to use the configuration it was -provisioned with, simply edit the Node with `kubectl edit node ${NODE_NAME}` and -remove the `Node.Spec.ConfigSource` field. +To reset the Node to use the configuration it was provisioned with, edit the +Node using `kubectl edit node ${NODE_NAME}` and remove the +`Node.Spec.ConfigSource` field. -### Observe that the Node is using its local default configuration +#### Observe that the Node is using its local default configuration -After removing this subfield, you should eventually observe that `Node.Status.Config` -has become empty, as all config sources have been reset to `nil` (indicating the local +After removing this subfield, `Node.Status.Config` eventually becomes +empty, since all config sources have been reset to `nil`, which indicates that the local default config is `assigned`, `active`, and `lastKnownGood`), and no error is reported. {{% /capture %}} {{% capture discussion %}} ## Kubectl Patch Example -As mentioned above, there are many ways to change a Node's configSource. -Here is an example command that uses `kubectl patch`: -``` +You can change a Node's configSource using several different mechanisms. 
+This example uses `kubectl patch`: + +```bash kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}" ``` ## Understanding how the Kubelet checkpoints config When a new config is assigned to the Node, the Kubelet downloads and unpacks the -config payload as a set of files on local disk. The Kubelet also records metadata +config payload as a set of files on the local disk. The Kubelet also records metadata that locally tracks the assigned and last-known-good config sources, so that the Kubelet knows which config to use across restarts, even if the API server becomes unavailable. After checkpointing a config and the relevant metadata, the Kubelet -will exit if the assigned config has changed. When the Kubelet is restarted by the -babysitter process, it will read the new metadata, and use the new config. +exits if it detects that the assigned config has changed. When the Kubelet is +restarted by the OS-level service manager (such as `systemd`), it reads the new +metadata and uses the new config. The recorded metadata is fully resolved, meaning that it contains all necessary information to choose a specific config version - typically a `UID` and `ResourceVersion`. @@ -320,10 +301,10 @@ This is in contrast to `Node.Spec.ConfigSource`, where the intended config is de via the idempotent `namespace/name` that identifies the target ConfigMap; the Kubelet tries to use the latest version of this ConfigMap. -It can sometimes be useful to inspect the Kubelet's config metadata and checkpoints -when debugging a Node. The structure of the Kubelet's checkpointing directory is as follows: +When you are debugging problems on a node, you can inspect the Kubelet's config +metadata and checkpoints. The structure of the Kubelet's checkpointing directory is: -``` +```none - --dynamic-config-dir (root for managing dynamic config) | - meta | - assigned (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the assigned config) @@ -337,10 +318,9 @@ when debugging a Node. The structure of the Kubelet's checkpointing directory is ## Understanding Node.Status.Config.Error messages -The following table describes the error messages you might encounter -when using Dynamic Kubelet Config. You can search for the same text -as the error message in the Kubelet log for additional details -on the error. +The following table describes error messages that can occur +when using Dynamic Kubelet Config. You can search for the identical text +in the Kubelet log for additional details and context about the error.
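+
+How you search the Kubelet log depends on how the Node is set up. If the
+Kubelet runs under systemd (an assumption; adjust for your environment), a
+search on the Node itself might look like this:
+
+```bash
+journalctl -u kubelet | grep "failed to sync"
+```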
From 0b6ddd3ebbf972d5474fd72f9dc38ff22e8cb64a Mon Sep 17 00:00:00 2001 From: Misty Stanley-Jones Date: Tue, 12 Jun 2018 15:40:25 -0700 Subject: [PATCH 3/3] Address feedback --- .../administer-cluster/reconfigure-kubelet.md | 65 ++++++++++--------- 1 file changed, 36 insertions(+), 29 deletions(-) diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index 56b727f163c90..ad74354a69892 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -47,9 +47,9 @@ Kubelet's configuration. Each Kubelet watches a configuration reference on its respective Node object. When this reference changes, the Kubelet downloads the new configuration, updates a local reference to refer to the file, and exits. -For the feature to work correctly, you must be running a process manager -(like systemd) which will restart the Kubelet when it exits. When the Kubelet is -restarted, it will begin using the new configuration. +For the feature to work correctly, you must be running an OS-level service +manager (such as systemd), which will restart the Kubelet if it exits. When the +Kubelet is restarted, it will begin using the new configuration. The new configuration completely overrides configuration provided by `--config`, and is overridden by command-line flags. Unspecified values in the new configuration @@ -97,28 +97,32 @@ to create the first Kubelet ConfigMap (see [Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)), Currently, the Kubelet is bootstrapped with **a combination of this file and command-line flags** that can override the configuration in the file. -As a workaround, you can use to generate a config file containing a Node's -current configuration by accessing the Kubelet server's `configz` -endpoint via the kubectl proxy. This endpoint, in its current implementation, is -intended to be used only as a debugging aid. Do not rely on the behavior of this -endpoint for production scenarios. -The `jq` command needs to be installed on your system, to unpack and edit the -JSON response from the endpoint. +As a workaround, you can generate a config file containing a Node's current +configuration by accessing the Kubelet server's `configz` endpoint via the +kubectl proxy. This endpoint, in its current implementation, is intended to be +used only as a debugging aid. Do not rely on the behavior of this endpoint for +production scenarios. The examples below use the `jq` command to streamline +working with JSON. To follow the tasks as written, you need to have `jq` +installed, but you can adapt the tasks if you prefer to extract the +`kubeletconfig` subobject manually. #### Generate the configuration file -1. Pick a Node to reconfigure. In this example, this Node is named `NODE_NAME`. -2. Start the kubectl proxy in the background using the following command: - ```bash - kubectl proxy --port=8001 & - ``` -3. Run the following command to download and unpack the configuration from the - `configz` endpoint. The command is long, so be careful when copying and - pasting. - - ```bash - NODE_NAME=the-name-of-the-node-you-are-reconfiguring; curl -sSL http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} - ``` +1. Choose a Node to reconfigure. 
In this example, the name of this Node is + referred to as `NODE_NAME`. +2. Start the kubectl proxy in the background using the following command: + ```bash + kubectl proxy --port=8001 & + ``` +3. Run the following command to download and unpack the configuration from the + `configz` endpoint. The command is long, so be careful when copying and + pasting. **If you use zsh**, replace the `${NODE_NAME}` in the URL with the + actual name of the node, because zsh automatically escapes opening curly + braces, which causes the command to fail. + + ```bash + NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} + ``` {{< note >}} You need to manually add the `kind` and `apiVersion` to the downloaded @@ -129,7 +133,7 @@ object, because they are not reported by the `configz` endpoint. Using a text editor, change one of the parameters in the file generated by the previous procedure. For example, you -might add the QPS parameter `eventRecordQPS`. +might edit the QPS parameter `eventRecordQPS`. #### Push the configuration file to the control plane @@ -252,13 +256,15 @@ The following is an example response: If an error occurs, the Kubelet reports it in the `Node.Status.Config.Error` structure. Possible errors are listed in [Understanding Node.Status.Config.Error messages](#understanding-node-status-config-error-messages). -If you see an error, you can search for it in the Kubelet's log for additional -details. +You can search for the identical text in the Kubelet log for additional details +and context about the error. #### Make more changes -Follow the workflow above to make more changes and push them again. Each -time you change the ConfigMap's contents, it gets a new name. +Follow the workflow above to make more changes and push them again. Each time +you push a ConfigMap with new contents, the --append-hash kubectl option creates +the ConfigMap with a new name. The safest rollout strategy is to first create a +new ConfigMap, and then update the Node to use the new ConfigMap. #### Reset the Node to use its local default configuration @@ -269,8 +275,9 @@ Node using `kubectl edit node ${NODE_NAME}` and remove the #### Observe that the Node is using its local default configuration After removing this subfield, `Node.Status.Config` eventually becomes -empty, since all config sources have been reset to `nil`, which indicates that the local -default config is `assigned`, `active`, and `lastKnownGood`), and no error is reported. +empty, since all config sources have been reset to `nil`, which indicates that +the local default config is `assigned`, `active`, and `lastKnownGood`, and no +error is reported. {{% /capture %}}
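+
+As a scripted alternative to `kubectl edit` for this reset step, a JSON merge
+patch that sets `configSource` to `null` deletes the field; this sketch shows
+one way to do it:
+
+```bash
+kubectl patch node ${NODE_NAME} --type merge -p '{"spec":{"configSource":null}}'
+```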