---
title: Resize CPU and Memory Resources assigned to Containers
content_type: task
weight: 30
min-kubernetes-server-version: 1.27
---


<!-- overview -->

{{< feature-state state="alpha" for_k8s_version="v1.27" >}}

This page assumes that you are familiar with [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/)
for Kubernetes Pods.

This page shows how to resize CPU and memory resources assigned to containers
of a running pod without restarting the pod or its containers. A Kubernetes node
allocates resources for a pod based on its `requests`, and restricts the pod's
resource usage based on the `limits` specified in the pod's containers.

For in-place resize of pod resources (see the example after this list):
- A container's resource `requests` and `limits` are _mutable_ for CPU
  and memory resources.
- The `allocatedResources` field in `containerStatuses` of the Pod's status reflects
  the resources allocated to the pod's containers.
- The `resources` field in `containerStatuses` of the Pod's status reflects the
  actual resource `requests` and `limits` that are configured on the running
  containers as reported by the container runtime.
- The `resize` field in the Pod's status shows the status of the last requested
  pending resize. It can have the following values:
  - `Proposed`: This value indicates an acknowledgement of the requested resize
    and that the request was validated and recorded.
  - `InProgress`: This value indicates that the node has accepted the resize
    request and is in the process of applying it to the pod's containers.
  - `Deferred`: This value means that the requested resize cannot be granted at
    this time, and the node will keep retrying. The resize may be granted when
    other pods leave and free up node resources.
  - `Infeasible`: This value is a signal that the node cannot accommodate the
    requested resize. This can happen if the requested resize exceeds the maximum
    resources the node can ever allocate for a pod.

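Once a resize has been requested, you can watch these status fields directly.
The following commands are a minimal sketch using JSONPath queries; `<pod-name>`
and `<namespace>` are placeholders for your own Pod and namespace.

```shell
# Placeholders: substitute <pod-name> and <namespace> with your own values.
# Status of the most recent resize request (Proposed, InProgress, Deferred, or Infeasible):
kubectl get pod <pod-name> --namespace=<namespace> --output=jsonpath='{.status.resize}'

# Resources currently allocated to the first container in the Pod:
kubectl get pod <pod-name> --namespace=<namespace> --output=jsonpath='{.status.containerStatuses[0].allocatedResources}'
```
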

## {{% heading "prerequisites" %}}


{{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}


## Container Resize Policies

Resize policies allow for more fine-grained control over how a pod's containers
are resized for CPU and memory resources. For example, the container's
application may be able to handle CPU resources being resized without being restarted,
but resizing memory may require that the application, and hence the container, be restarted.

To enable this, the Container specification allows users to specify a `resizePolicy`.
The following restart policies can be specified for resizing CPU and memory:
* `NotRequired`: Resize the container's resources while it is running.
* `RestartContainer`: Restart the container and apply new resources upon restart.

If `resizePolicy[*].restartPolicy` is not specified, it defaults to `NotRequired`.

{{< note >}}
If the Pod's `restartPolicy` is `Never`, the resize restart policy must be
set to `NotRequired` for all Containers in the Pod.
{{< /note >}}

The following example shows a Pod whose Container's CPU can be resized without a restart,
but resizing its memory requires the container to be restarted.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-5
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr-5
    image: nginx
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
    resources:
      limits:
        memory: "200Mi"
        cpu: "700m"
      requests:
        memory: "200Mi"
        cpu: "700m"
```

{{< note >}}
In the above example, if the desired requests or limits for both CPU _and_ memory
have changed, the container will be restarted in order to resize its memory
(see the example after this note).
{{< /note >}}
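
For example, assuming you created the Pod shown above, a single patch that changes
both CPU and memory is applied as one resize, and because memory uses the
`RestartContainer` policy the container is restarted to apply it. This is only a
sketch; the new values `800m` and `300Mi` are arbitrary illustrations.

```shell
# Assumes the qos-demo-5 Pod from the example above exists in the qos-example namespace.
# Because memory uses the RestartContainer policy, this resize restarts the container.
kubectl -n qos-example patch pod qos-demo-5 --patch \
  '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"800m","memory":"300Mi"}, "limits":{"cpu":"800m","memory":"300Mi"}}}]}}'
```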

<!-- steps -->


## Create a pod with resource requests and limits

You can create a Guaranteed or Burstable [Quality of Service](/docs/tasks/configure-pod-container/quality-service-pod/)
class pod by specifying requests and/or limits for a pod's containers.

Consider the following manifest for a Pod that has one Container.

{{< codenew file="pods/qos/qos-pod-5.yaml" >}}

Create the pod in the `qos-example` namespace:

```shell
kubectl create namespace qos-example
kubectl create -f https://k8s.io/examples/pods/qos/qos-pod-5.yaml
```

This pod is classified in the Guaranteed QoS class, requesting 700m CPU and 200Mi
memory.

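You can confirm the QoS class directly. This is a sketch using a JSONPath query
against the Pod you just created; the output should be `Guaranteed`.

```shell
kubectl get pod qos-demo-5 --namespace=qos-example --output=jsonpath='{.status.qosClass}'
```
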
View detailed information about the pod:

```shell
kubectl get pod qos-demo-5 --output=yaml --namespace=qos-example
```

Also notice that the values of `resizePolicy[*].restartPolicy` defaulted to
`NotRequired`, indicating that CPU and memory can be resized while the container
is running.

```yaml
spec:
  containers:
    ...
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      limits:
        cpu: 700m
        memory: 200Mi
      requests:
        cpu: 700m
        memory: 200Mi
...
  containerStatuses:
...
    name: qos-demo-ctr-5
    ready: true
...
    allocatedResources:
      cpu: 700m
      memory: 200Mi
    resources:
      limits:
        cpu: 700m
        memory: 200Mi
      requests:
        cpu: 700m
        memory: 200Mi
    restartCount: 0
    started: true
...
  qosClass: Guaranteed
```


## Updating the pod's resources

Let's say the CPU requirements have increased, and 0.8 CPU is now desired. This
is typically determined, and may be programmatically applied, by an entity such as
[VerticalPodAutoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#readme) (VPA).

{{< note >}}
While you can change a Pod's requests and limits to express new desired
resources, you cannot change the QoS class in which the Pod was created
(see the example after this note).
{{< /note >}}
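
For example, because `qos-demo-5` was created in the Guaranteed class, its requests
must stay equal to its limits. A hypothetical patch like the one below, which lowers
only the CPU request while leaving the limit at `700m`, would make requests differ
from limits and thereby imply a QoS change, so it is expected to be rejected.

```shell
# Hypothetical patch: changes only the CPU request, leaving the limit unchanged.
# This would move the Pod out of the Guaranteed QoS class, so the resize is expected to be rejected.
kubectl -n qos-example patch pod qos-demo-5 --patch \
  '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"500m"}}}]}}'
```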

Now, patch the Pod's Container with CPU requests & limits both set to `800m`:

```shell
kubectl -n qos-example patch pod qos-demo-5 --patch '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"800m"}, "limits":{"cpu":"800m"}}}]}}'
```

Query the Pod's detailed information after the Pod has been patched.

```shell
kubectl get pod qos-demo-5 --output=yaml --namespace=qos-example
```

The Pod's spec below reflects the updated CPU requests and limits.

```yaml
spec:
  containers:
    ...
    resources:
      limits:
        cpu: 800m
        memory: 200Mi
      requests:
        cpu: 800m
        memory: 200Mi
...
  containerStatuses:
...
    allocatedResources:
      cpu: 800m
      memory: 200Mi
    resources:
      limits:
        cpu: 800m
        memory: 200Mi
      requests:
        cpu: 800m
        memory: 200Mi
    restartCount: 0
    started: true
```

Observe that the `allocatedResources` values have been updated to reflect the new
desired CPU requests. This indicates that the node was able to accommodate the
increased CPU resource needs.

In the Container's status, the updated CPU resource values show that the new CPU
resources have been applied. The Container's `restartCount` remains unchanged,
indicating that the container's CPU resources were resized without restarting the container.
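
You can verify both of these observations without reading the full YAML. This
sketch uses JSONPath queries against the same Pod:

```shell
# Resources reported by the container runtime for the resized container:
kubectl get pod qos-demo-5 --namespace=qos-example --output=jsonpath='{.status.containerStatuses[0].resources}'

# The restart count should still be 0, showing that the resize happened in place:
kubectl get pod qos-demo-5 --namespace=qos-example --output=jsonpath='{.status.containerStatuses[0].restartCount}'
```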


## Clean up

Delete your namespace:

```shell
kubectl delete namespace qos-example
```


## {{% heading "whatsnext" %}}


### For application developers

* [Assign Memory Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-memory-resource/)

* [Assign CPU Resources to Containers and Pods](/docs/tasks/configure-pod-container/assign-cpu-resource/)

### For cluster administrators

* [Configure Default Memory Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/)

* [Configure Default CPU Requests and Limits for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/)

* [Configure Minimum and Maximum Memory Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/memory-constraint-namespace/)

* [Configure Minimum and Maximum CPU Constraints for a Namespace](/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/)

* [Configure Memory and CPU Quotas for a Namespace](/docs/tasks/administer-cluster/manage-resources/quota-memory-cpu-namespace/)