This will ensure that the pod will be scheduled to a node that has the GPU type
you specified.
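
As a minimal, self-contained sketch of the pattern, assuming nodes have been
labeled with an `accelerator` key (for example `accelerator: nvidia-tesla-p100`)
as described above; the pod and label names here are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-type-demo # hypothetical name
spec:
  restartPolicy: OnFailure
  containers:
    - name: gpu-type-demo
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1 # one GPU, via the device plugin resource
  nodeSelector:
    accelerator: nvidia-tesla-p100 # pod lands only on nodes with this label
```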

## v1.6 and v1.7
To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate
`Accelerators` has to be set to true across the system:
`--feature-gates="Accelerators=true"`. It also requires using the Docker
Engine as the container runtime.
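
How the gate is passed depends on how the components are deployed. As one
illustration only: for a control plane run as static pods, the flag can be
appended to each component's command line (the path, image, and manifest
details below are hypothetical and will differ per setup; the kubelet itself
takes the same flag directly in its service configuration):

```yaml
# Hypothetical excerpt from a static pod manifest such as
# /etc/kubernetes/manifests/kube-apiserver.yaml; the scheduler,
# controller manager, and the kubelet's own command line need
# the same flag.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-apiserver
      image: k8s.gcr.io/kube-apiserver-amd64:v1.7.11
      command:
        - kube-apiserver
        - --feature-gates=Accelerators=true
        # ...the rest of the existing flags stay unchanged...
```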

Further, NVIDIA drivers have to be pre-installed on the Kubernetes nodes;
the kubelet will not detect NVIDIA GPUs otherwise.

When you start Kubernetes components after all the above conditions are true,
Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable
resource.
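
For illustration, on a node with two GPUs the new resource then shows up in
the node's capacity, e.g. in `kubectl get node <node-name> -o yaml` output
(a hypothetical fragment; the other quantities are placeholders):

```yaml
# Hypothetical fragment of a node's status after the kubelet
# has discovered two NVIDIA GPUs on the machine.
status:
  capacity:
    alpha.kubernetes.io/nvidia-gpu: "2"
    cpu: "8"
    memory: 32780272Ki
    pods: "110"
```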

You can consume these GPUs from your containers by requesting
`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`.
However, there are some limitations in how you specify the resource
requirements when using GPUs:
- GPUs are only supposed to be specified in the `limits` section, which means:
  * You can specify GPU `limits` without specifying `requests` because
    Kubernetes will use the limit as the request value by default.
  * You can specify GPU in both `limits` and `requests` but these two values
    must be equal (see the fragment just after this list).
  * You cannot specify GPU `requests` without specifying `limits`.
- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
- Each container can request one or more GPUs. It is not possible to request a
  fraction of a GPU.
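
For instance, the equal-values rule above translates to a container
`resources` stanza like the following (a fragment only; a complete pod
manifest follows below):

```yaml
resources:
  limits:
    alpha.kubernetes.io/nvidia-gpu: 2 # two whole GPUs...
  requests:
    alpha.kubernetes.io/nvidia-gpu: 2 # ...and requests, if set, must equal limits
```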

When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to
mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so,
etc.) into the container.

Here's an example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU
      volumeMounts:
        - name: "nvidia-libraries"
          mountPath: "/usr/local/nvidia/lib64"
  volumes:
    - name: "nvidia-libraries"
      hostPath:
        # the driver libraries live in a version-specific directory on the host
        path: "/usr/lib/nvidia-375"
```

The `Accelerators` feature gate and the `alpha.kubernetes.io/nvidia-gpu`
resource work on 1.8 and 1.9 as well. They will be deprecated in 1.10 and
removed in 1.11.

## Future
- Support for hardware accelerators in Kubernetes is still in alpha.
- Better APIs will be introduced to provision and consume accelerators in a scalable manner.
- Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.

{{% /capture %}}