diff --git a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md index b14f641a529df..ad74354a69892 100644 --- a/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md +++ b/content/en/docs/tasks/administer-cluster/reconfigure-kubelet.md @@ -7,29 +7,28 @@ content_template: templates/task --- {{% capture overview %}} -{{< feature-state state="alpha" >}} -As of Kubernetes 1.8, the new +{{< feature-state for_k8s_version="v1.11" state="beta" >}} + [Dynamic Kubelet Configuration](https://github.com/kubernetes/features/issues/281) -feature is available in alpha. This allows you to change the configuration of -Kubelets in a live Kubernetes cluster via first-class Kubernetes concepts. -Specifically, this feature allows you to configure individual Nodes' Kubelets -via ConfigMaps. - -**Warning:** All Kubelet configuration parameters may be changed dynamically, -but not all parameters are safe to change dynamically. This feature is intended -for system experts who have a strong understanding of how configuration changes -will affect behavior. No documentation currently exists which plainly lists -"safe to change" fields, but we plan to add it before this feature graduates -from alpha. +allows you to change the configuration of each Kubelet in a live Kubernetes +cluster by deploying a ConfigMap and configuring each Node to use it. + +{{< warning >}} +**Warning:** All Kubelet configuration parameters can be changed dynamically, +but this is unsafe for some parameters. Before deciding to change a parameter +dynamically, you need a strong understanding of how that change will affect your +cluster's behavior. Always carefully test configuration changes on a small set +of nodes before rolling them out cluster-wide. Advice on configuring specific +fields is available in the inline `KubeletConfiguration` +[type documentation](https://github.com/kubernetes/kubernetes/blob/release-1.11/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go). +{{< /warning >}} {{% /capture %}} {{% capture prerequisites %}} -- A live Kubernetes cluster with both Master and Node at v1.8 or higher must be -running, with the `DynamicKubeletConfig` feature gate enabled and the Kubelet's -`--dynamic-config-dir` flag set to a writable directory on the Node. -This flag must be set to enable Dynamic Kubelet Configuration. -- The kubectl command-line tool must be also v1.8 or higher, and must be -configured to communicate with the cluster. +- Kubernetes v1.11 or higher on both the Master and the Nodes +- kubectl v1.11 or higher, configured to communicate with the cluster +- The Kubelet's `--dynamic-config-dir` flag must be set to a writable + directory on the Node. {{% /capture %}} {{% capture steps %}} @@ -48,20 +47,19 @@ Kubelet's configuration. Each Kubelet watches a configuration reference on its respective Node object. When this reference changes, the Kubelet downloads the new configuration, updates a local reference to refer to the file, and exits. -For the feature to work correctly, you must be running a process manager -(like systemd) which will restart the Kubelet when it exits. When the Kubelet is -restarted, it will begin using the new configuration. +For the feature to work correctly, you must be running an OS-level service +manager (such as systemd), which will restart the Kubelet if it exits. When the +Kubelet is restarted, it will begin using the new configuration. 
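+
+As a quick sanity check, you can confirm that the service manager will restart
+the Kubelet on exit. This is a minimal sketch that assumes a systemd-managed
+node where the unit is named `kubelet`; the unit name can differ between
+distributions:
+
+```bash
+# Print the unit's restart policy; dynamic config relies on automatic restarts.
+systemctl show -p Restart kubelet
+# A value such as Restart=always means the Kubelet restarts after exiting.
+```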
The new configuration completely overrides configuration provided by `--config`,
and is overridden by command-line flags. Unspecified values in the new
configuration will receive default values appropriate to the configuration
version (e.g. `kubelet.config.k8s.io/v1beta1`), unless overridden by flags.

-The status of the Node's Kubelet configuration is reported via the `KubeletConfigOK`
-condition in the Node status. Once you have updated a Node to use the new
-ConfigMap, you can observe this condition to confirm that the Node is using the
-intended configuration. A table describing the possible conditions can be found
-at the end of this article.
+The status of the Node's Kubelet configuration is reported via
+`Node.Status.Config`. Once you have updated a Node to use the new
+ConfigMap, you can observe this status to confirm that the Node is using the
+intended configuration.

This document describes editing Nodes using `kubectl edit`.
There are other ways to modify a Node's spec, including `kubectl patch`, for
@@ -70,16 +68,19 @@ example, which facilitate scripted workflows.

This document only describes a single Node consuming each ConfigMap. Keep in
mind that it is also valid for multiple Nodes to consume the same ConfigMap.

-### Node Authorizer Workarounds
+{{< warning >}}
+**Warning:** While it is *possible* to change the configuration by
+updating the ConfigMap in-place, this causes all Kubelets configured with
+that ConfigMap to update simultaneously. It is much safer to treat ConfigMaps
+as immutable by convention, aided by `kubectl`'s `--append-hash` option,
+and incrementally roll out updates to `Node.Spec.ConfigSource`.
+{{< /warning >}}

-The Node Authorizer does not yet pay attention to which ConfigMaps are assigned
-to which Nodes. If you currently use the Node authorizer, your Kubelets will not
-be automatically granted permission to download their respective ConfigMaps.
+### Automatic RBAC rules for Node Authorizer

-The temporary workaround used in this document is to manually create the RBAC
-Roles and RoleBindings for each ConfigMap. The Node Authorizer will be extended
-before the Dynamic Kubelet Configuration feature graduates from alpha, so doing
-this in production should never be necessary.
+Previously, you were required to manually create RBAC rules
+to allow Nodes to access their assigned ConfigMaps. The Node Authorizer now
+automatically configures these rules.

### Generating a file that contains the current configuration

@@ -90,54 +91,62 @@ and debug issues. The compromise, however, is that you must start with
knowledge of the existing configuration to ensure that you only change the
fields you intend to change.

-In the future, the Kubelet will be bootstrapped from a file on disk
+Ideally, the Kubelet would be bootstrapped from a file on disk
+and you could edit this file (which could also be version-controlled)
+to create the first Kubelet ConfigMap (see
[Set Kubelet parameters via a config file](/docs/tasks/administer-cluster/kubelet-config-file)),
-and you will simply edit a copy of this file (which, as a best practice, should
-live in version control) while creating the first Kubelet ConfigMap. Today,
-however, the Kubelet is still bootstrapped with command-line flags. Fortunately,
-there is a dirty trick you can use to generate a config file containing a Node's
-current configuration. The trick involves accessing the Kubelet server's `configz`
-endpoint via the kubectl proxy.
This endpoint, in its current implementation, is -intended to be used only as a debugging aid, which is part of why this is a -dirty trick. The endpoint may be improved in the future, but until then -it should not be relied on for production scenarios. -This trick also requires the `jq` command to be installed on your machine, -for unpacking and editing the JSON response from the endpoint. - -Do the following to generate the file: - -1. Pick a Node to reconfigure. We will refer to this Node's name as NODE_NAME. -2. Start the kubectl proxy in the background with `kubectl proxy --port=8001 &` -3. Run the following command to download and unpack the configuration from the -configz endpoint: - -``` -$ export NODE_NAME=the-name-of-the-node-you-are-reconfiguring -$ curl -sSL http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} -``` - -Note that we have to manually add the `kind` and `apiVersion` to the downloaded -object, as these are not reported by the configz endpoint. This is one of the -limitations of the endpoint. - -### Edit the configuration file - -Using your editor of choice, change one of the parameters in the -`kubelet_configz_${NODE_NAME}` file from the previous step. A QPS parameter, -`eventRecordQPS` for example, is a good candidate. - -### Push the configuration file to the control plane +Currently, the Kubelet is bootstrapped with **a combination of this file and command-line flags** +that can override the configuration in the file. +As a workaround, you can generate a config file containing a Node's current +configuration by accessing the Kubelet server's `configz` endpoint via the +kubectl proxy. This endpoint, in its current implementation, is intended to be +used only as a debugging aid. Do not rely on the behavior of this endpoint for +production scenarios. The examples below use the `jq` command to streamline +working with JSON. To follow the tasks as written, you need to have `jq` +installed, but you can adapt the tasks if you prefer to extract the +`kubeletconfig` subobject manually. + +#### Generate the configuration file + +1. Choose a Node to reconfigure. In this example, the name of this Node is + referred to as `NODE_NAME`. +2. Start the kubectl proxy in the background using the following command: + ```bash + kubectl proxy --port=8001 & + ``` +3. Run the following command to download and unpack the configuration from the + `configz` endpoint. The command is long, so be careful when copying and + pasting. **If you use zsh**, replace the `${NODE_NAME}` in the URL with the + actual name of the node, because zsh automatically escapes opening curly + braces, which causes the command to fail. + + ```bash + NODE_NAME="the-name-of-the-node-you-are-reconfiguring"; curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"' > kubelet_configz_${NODE_NAME} + ``` + +{{< note >}} +You need to manually add the `kind` and `apiVersion` to the downloaded +object, because they are not reported by the `configz` endpoint. +{{< /note >}} + +#### Edit the configuration file + +Using a text editor, change one of the parameters in the +file generated by the previous procedure. For example, you +might edit the QPS parameter `eventRecordQPS`. 
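+
+For example, a minimal sketch of such an edit using `jq` (assuming the JSON
+file generated above and an illustrative value of 10; check the
+`KubeletConfiguration` type documentation before changing rate limits on a
+real cluster):
+
+```bash
+# Set eventRecordQPS in the generated JSON config file.
+jq '.eventRecordQPS=10' kubelet_configz_${NODE_NAME} > tmp_config && mv tmp_config kubelet_configz_${NODE_NAME}
+```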
+
+#### Push the configuration file to the control plane

Push the edited configuration file to the control plane with the
following command:

-```
-$ kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
+```bash
+kubectl -n kube-system create configmap my-node-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
```

-You should see a response similar to:
+This is an example of a valid response:

-```
+```none
apiVersion: v1
data:
  kubelet: |
@@ -152,304 +161,204 @@ metadata:
  uid: 946d785e-998a-11e7-a8dd-42010a800006
```

-Note that the configuration data must appear under the ConfigMap's
-`kubelet` key.
-
-We create the ConfigMap in the `kube-system` namespace, which is appropriate
-because this ConfigMap configures a Kubernetes system component - the Kubelet.
+The ConfigMap is created in the `kube-system` namespace because this
+ConfigMap configures a Kubelet, which is a Kubernetes system component.

The `--append-hash` option appends a short checksum of the ConfigMap contents
-to the name. This is convenient for an edit->push workflow, as it will
-automatically, yet deterministically, generate new names for new ConfigMaps.
-
-We use the `-o yaml` output format so that the name, namespace, and uid are all
-reported following creation. We will need these in the next step. We will refer
-to the name as CONFIG_MAP_NAME and the uid as CONFIG_MAP_UID.
-
-### Authorize your Node to read the new ConfigMap
-
-Now that you've created a new ConfigMap, you need to authorize your node to
-read it. First, create a Role for your new ConfigMap with the
-following commands:
-
-```
-$ export CONFIG_MAP_NAME=name-from-previous-output
-$ kubectl -n kube-system create role ${CONFIG_MAP_NAME}-reader --verb=get --resource=configmap --resource-name=${CONFIG_MAP_NAME}
-```
-
-Next, create a RoleBinding to associate your Node with the new Role:
-
-```
-$ kubectl -n kube-system create rolebinding ${CONFIG_MAP_NAME}-reader --role=${CONFIG_MAP_NAME}-reader --user=system:node:${NODE_NAME}
-```
-
-Once the Node Authorizer is updated to do this automatically, you will
-be able to skip this step.
+to the name. This is convenient for an edit-then-push workflow, because it
+automatically, yet deterministically, generates new names for new ConfigMaps.
+The name that includes this generated hash is referred to as `CONFIG_MAP_NAME`
+in the following examples.

-### Set the Node to use the new configuration
+#### Set the Node to use the new configuration

Edit the Node's reference to point to the new ConfigMap with the
following command:

-```
+```bash
kubectl edit node ${NODE_NAME}
```

-Once in your editor, add the following YAML under `spec`:
+In your text editor, add the following YAML under `spec`:

-```
+```yaml
configSource:
-    configMapRef:
+    configMap:
        name: CONFIG_MAP_NAME
        namespace: kube-system
-        uid: CONFIG_MAP_UID
-```
-
-Be sure to specify all three of `name`, `namespace`, and `uid`.
-
-### Observe that the Node begins using the new configuration
-
-Retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and look for the
-`KubeletConfigOK` condition in `status.conditions`. You should see the message
-`Using current (UID: CONFIG_MAP_UID)` when the Kubelet starts using the new
-configuration.
-
-For convenience, you can use the following command (using `jq`) to filter down
-to the `KubeletConfigOK` condition:
-
-```
-$ kubectl get no ${NODE_NAME} -o json | jq '.status.conditions|map(select(.type=="KubeletConfigOK"))'
-[
-  {
-    "lastHeartbeatTime": "2017-09-20T18:08:29Z",
-    "lastTransitionTime": "2017-09-20T18:08:17Z",
-    "message": "using current: /api/v1/namespaces/kube-system/configmaps/my-node-config-gkt4c2m4b2",
-    "reason": "passing all checks",
-    "status": "True",
-    "type": "KubeletConfigOK"
+        kubeletConfigKey: kubelet
+```
+
+You must specify all three of `name`, `namespace`, and `kubeletConfigKey`.
+The `kubeletConfigKey` parameter tells the Kubelet which key of the ConfigMap
+contains its config.
+
+#### Observe that the Node begins using the new configuration
+
+Retrieve the Node using the `kubectl get node ${NODE_NAME} -o yaml` command and inspect
+`Node.Status.Config`. The config sources corresponding to the `active`,
+`assigned`, and `lastKnownGood` configurations are reported in the status.
+
+- The `active` configuration is the version the Kubelet is currently running with.
+- The `assigned` configuration is the latest version the Kubelet has resolved based on
+  `Node.Spec.ConfigSource`.
+- The `lastKnownGood` configuration is the version the
+  Kubelet will fall back to if an invalid config is assigned in `Node.Spec.ConfigSource`.
+
+The `lastKnownGood` configuration might not be present if it is set to its default value,
+the local config deployed with the node. The status will update `lastKnownGood` to
+match a valid `assigned` config after the Kubelet becomes comfortable with the config.
+The details of how the Kubelet determines that a config should become the `lastKnownGood`
+are not guaranteed by the API; currently, this is implemented as a 10-minute grace period.
+
+You can use the following command (using `jq`) to filter down
+to the config status:
+
+```bash
+kubectl get no ${NODE_NAME} -o json | jq '.status.config'
+```
+
+The following is an example response:
+
+```json
+{
+  "active": {
+    "configMap": {
+      "kubeletConfigKey": "kubelet",
+      "name": "my-node-config-9mbkccg2cc",
+      "namespace": "kube-system",
+      "resourceVersion": "1326",
+      "uid": "705ab4f5-6393-11e8-b7cc-42010a800002"
+    }
+  },
+  "assigned": {
+    "configMap": {
+      "kubeletConfigKey": "kubelet",
+      "name": "my-node-config-9mbkccg2cc",
+      "namespace": "kube-system",
+      "resourceVersion": "1326",
+      "uid": "705ab4f5-6393-11e8-b7cc-42010a800002"
+    }
+  },
+  "lastKnownGood": {
+    "configMap": {
+      "kubeletConfigKey": "kubelet",
+      "name": "my-node-config-9mbkccg2cc",
+      "namespace": "kube-system",
+      "resourceVersion": "1326",
+      "uid": "705ab4f5-6393-11e8-b7cc-42010a800002"
+    }
+  }
-]
-```
-
-If something goes wrong, you may see one of several different error conditions,
-detailed in the table of KubeletConfigOK conditions, below. When this happens, you
-should check the Kubelet's log for more details.
-
-### Edit the configuration file again
-
-To change the configuration again, we simply repeat the above workflow.
-Try editing the `kubelet` file, changing the previously changed parameter to a
-new value.
-
-### Push the newly edited configuration to the control plane
-
-Push the new configuration to the control plane in a new ConfigMap with the
-following command:
-
-```
-$ kubectl create configmap my-node-config --namespace=kube-system --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
-```
-
-This new ConfigMap will get a new name, as we have changed the contents.
-We will refer to the new name as NEW_CONFIG_MAP_NAME and the new uid
-as NEW_CONFIG_MAP_UID.
-
-### Authorize your Node to read the new ConfigMap
-
-Now that you've created a new ConfigMap, you need to authorize your node to
-read it. First, create a Role for your new ConfigMap with the
-following commands:
-
-```
-$ export NEW_CONFIG_MAP_NAME=name-from-previous-output
-$ kubectl -n kube-system create role ${NEW_CONFIG_MAP_NAME}-reader --verb=get --resource=configmap --resource-name=${NEW_CONFIG_MAP_NAME}
-```
-
-Next, create a RoleBinding to associate your Node with the new Role:
-
-```
-$ kubectl -n kube-system create rolebinding ${NEW_CONFIG_MAP_NAME}-reader --role=${NEW_CONFIG_MAP_NAME}-reader --user=system:node:${NODE_NAME}
-```
-
-Once the Node Authorizer is updated to do this automatically, you will
-be able to skip this step.
-
-### Configure the Node to use the new configuration
-
-Once more, edit the Node's `spec.configSource` with
-`kubectl edit node ${NODE_NAME}`. Your new `spec.configSource` should look like
-the following, with `name` and `uid` substituted as necessary:
+}
```

-configSource:
-    configMapRef:
-        name: ${NEW_CONFIG_MAP_NAME}
-        namespace: kube-system
-        uid: ${NEW_CONFIG_MAP_UID}
-```
-### Observe that the Kubelet is using the new configuration
+If an error occurs, the Kubelet reports it in the `Node.Status.Config.Error`
+structure. Possible errors are listed in
+[Understanding Node.Status.Config.Error messages](#understanding-node-status-config-error-messages).
+You can search for the identical text in the Kubelet log for additional details
+and context about the error.

-Once more, retrieve the Node with `kubectl get node ${NODE_NAME} -o yaml`, and
-look for the `KubeletConfigOK` condition in `status.conditions`. You should see the message
-`using current: /api/v1/namespaces/kube-system/configmaps/${NEW_CONFIG_MAP_NAME}` when the Kubelet starts using the
-new configuration.
+#### Make more changes

-### Deauthorize your Node from reading the old ConfigMap
+Follow the workflow above to make more changes and push them again. Each time
+you push a ConfigMap with new contents, the `--append-hash` kubectl option creates
+the ConfigMap with a new name. The safest rollout strategy is to first create a
+new ConfigMap, and then update the Node to use the new ConfigMap.

-Once you know your Node is using the new configuration and are confident that
-the new configuration has not caused any problems, it is a good idea to
-deauthorize the node from reading the old ConfigMap. Run the following
-commands to remove the RoleBinding and Role:
-
-```
-$ kubectl -n kube-system delete rolebinding ${CONFIG_MAP_NAME}-reader
-$ kubectl -n kube-system delete role ${CONFIG_MAP_NAME}-reader
-```
-
-Note that this does not necessarily prevent the Node from reverting to the old
-configuration, as it may locally cache the old ConfigMap for an indefinite
-period of time.
-
-You may optionally also choose to remove the old ConfigMap:
-
-```
-$ kubectl -n kube-system delete configmap ${CONFIG_MAP_NAME}
-```
+#### Reset the Node to use its local default configuration

-Once the Node Authorizer is updated to do this automatically, you will
-be able to skip this step.
+To reset the Node to use the configuration it was provisioned with, edit the
+Node using `kubectl edit node ${NODE_NAME}` and remove the
+`Node.Spec.ConfigSource` field.
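+
+If you prefer a scripted alternative to `kubectl edit` for this reset, the
+same removal can be sketched with a JSON Patch. This is a minimal sketch,
+assuming `spec.configSource` is currently set on the Node (the `remove` op
+fails if the field is absent):
+
+```bash
+# Remove spec.configSource so the Kubelet falls back to its local config.
+kubectl patch node ${NODE_NAME} --type=json -p '[{"op":"remove","path":"/spec/configSource"}]'
+```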
-### Reset the Node to use its local default configuration +#### Observe that the Node is using its local default configuration -Finally, if you wish to reset the Node to use the configuration it was -provisioned with, simply edit the Node with `kubectl edit node ${NODE_NAME}` and -remove the `spec.configSource` subfield. +After removing this subfield, `Node.Status.Config` eventually becomes +empty, since all config sources have been reset to `nil`, which indicates that +the local default config is `assigned`, `active`, and `lastKnownGood`, and no +error is reported. -### Observe that the Node is using its local default configuration - -After removing this subfield, you should eventually observe that the KubeletConfigOK -condition's message reverts to `using current: local`. +{{% /capture %}} -### Deauthorize your Node from reading the old ConfigMap +{{% capture discussion %}} +## Kubectl Patch Example -Once you know your Node is using the default configuration again, it is a good -idea to deauthorize the node from reading the old ConfigMap. Run the following -commands to remove the RoleBinding and Role: +You can change a Node's configSource using several different mechanisms. +This example uses `kubectl patch`: -``` -$ kubectl -n kube-system delete rolebinding ${NEW_CONFIG_MAP_NAME}-reader -$ kubectl -n kube-system delete role ${NEW_CONFIG_MAP_NAME}-reader +```bash +kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMap\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"kubeletConfigKey\":\"kubelet\"}}}}" ``` -Note that this does not necessarily prevent the Node from reverting to the old -ConfigMap, as it may locally cache the old ConfigMap for an indefinite -period of time. +## Understanding how the Kubelet checkpoints config -You may optionally also choose to remove the old ConfigMap: +When a new config is assigned to the Node, the Kubelet downloads and unpacks the +config payload as a set of files on the local disk. The Kubelet also records metadata +that locally tracks the assigned and last-known-good config sources, so that the +Kubelet knows which config to use across restarts, even if the API server becomes +unavailable. After checkpointing a config and the relevant metadata, the Kubelet +exits if it detects that the assigned config has changed. When the Kubelet is +restarted by the OS-level service manager (such as `systemd`), it reads the new +metadata and uses the new config. -``` -$ kubectl -n kube-system delete configmap ${NEW_CONFIG_MAP_NAME} -``` - -Once the Node Authorizer is updated to do this automatically, you will -be able to skip this step. +The recorded metadata is fully resolved, meaning that it contains all necessary +information to choose a specific config version - typically a `UID` and `ResourceVersion`. +This is in contrast to `Node.Spec.ConfigSource`, where the intended config is declared +via the idempotent `namespace/name` that identifies the target ConfigMap; the Kubelet +tries to use the latest version of this ConfigMap. -{{% /capture %}} +When you are debugging problems on a node, you can inspect the Kubelet's config +metadata and checkpoints. The structure of the Kubelet's checkpointing directory is: -{{% capture discussion %}} -## Kubectl Patch Example -As mentioned above, there are many ways to change a Node's configSource. 
-Here is an example command that uses `kubectl patch`: - -``` -kubectl patch node ${NODE_NAME} -p "{\"spec\":{\"configSource\":{\"configMapRef\":{\"name\":\"${CONFIG_MAP_NAME}\",\"namespace\":\"kube-system\",\"uid\":\"${CONFIG_MAP_UID}\"}}}}" +```none +- --dynamic-config-dir (root for managing dynamic config) +| - meta + | - assigned (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the assigned config) + | - last-known-good (encoded kubeletconfig/v1beta1.SerializedNodeConfigSource object, indicating the last-known-good config) +| - checkpoints + | - uid1 (dir for versions of object identified by uid1) + | - resourceVersion1 (dir for unpacked files from resourceVersion1 of object with uid1) + | - ... + | - ... ``` -## Understanding KubeletConfigOK Conditions +## Understanding Node.Status.Config.Error messages -The following table describes several of the `KubeletConfigOK` Node conditions you -might encounter in a cluster that has Dynamic Kubelet Config enabled. If you -observe a condition with `status=False`, you should check the Kubelet log for -more error details by searching for the message or reason text. +The following table describes error messages that can occur +when using Dynamic Kubelet Config. You can search for the identical text +in the Kubelet log for additional details and context about the error.
-| Possible Messages | Possible Reasons | Status |
-|---|---|---|
-| using current: local | when the config source is nil, the Kubelet uses its local config | True |
-| using current: /api/v1/namespaces/${CURRENT_CONFIG_MAP_NAMESPACE}/configmaps/${CURRENT_CONFIG_MAP_NAME} | passing all checks | True |
-| using last-known-good: local | | False |
-| using last-known-good: /api/v1/namespaces/${LAST_KNOWN_GOOD_CONFIG_MAP_NAMESPACE}/configmaps/${LAST_KNOWN_GOOD_CONFIG_MAP_NAME} | | False |
-| failed to sync, reason: | | False |
-
-- The reasons in the next column could potentially appear for any of the above messages.
-- This condition indicates that the Kubelet is having trouble reconciling `spec.configSource`, and thus no change to the in-use configuration has occurred.
-- The "failed to sync" reasons are specific to the failure that occurred, and the next column does not necessarily contain all possible failure reasons.
+| Error Message | Possible Causes |
+|---|---|
+| failed to load config, see Kubelet log for details | The Kubelet likely could not parse the downloaded config payload, or encountered a filesystem error attempting to load the payload from disk. |
+| failed to validate config, see Kubelet log for details | The configuration in the payload, combined with any command-line flag overrides, and the sum of feature gates from flags, the config file, and the remote payload, was determined to be invalid by the Kubelet. |
+| invalid NodeConfigSource, exactly one subfield must be non-nil, but all were nil | Since Node.Spec.ConfigSource is validated by the API server to contain at least one non-nil subfield, this likely means that the Kubelet is older than the API server and does not recognize a newer source type. |
+| failed to sync: failed to download config, see Kubelet log for details | The Kubelet could not download the config. It is possible that Node.Spec.ConfigSource could not be resolved to a concrete API object, or that network errors disrupted the download attempt. The Kubelet will retry the download when in this error state. |
+| failed to sync: internal failure, see Kubelet log for details | The Kubelet encountered some internal problem and failed to update its config as a result. Examples include filesystem errors and reading objects from the internal informer cache. |
+| internal failure, see Kubelet log for details | The Kubelet encountered some internal problem while manipulating config, outside of the configuration sync loop. |
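+
+As a quick way to check a Node for a reported error, you can reuse the `jq`
+filter pattern shown earlier (a small convenience sketch; it prints `null`
+when the assigned config is healthy):
+
+```bash
+# Show only the config error, if any, from the Node's status.
+kubectl get no ${NODE_NAME} -o json | jq '.status.config.error'
+```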