Closed
Labels
- area/gpu: GPU related items
- good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
- kind/improvement: Categorizes issue or PR as related to improving upon a current feature.
- priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Description
What Happened?
I am trying to integrate GPU support with minikube. My host machine is Windows, I am running Docker Desktop with the WSL2 backend, and I am trying to provision minikube with GPU support.
I was able to use the GPU with Docker directly:
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
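Before starting minikube, it can help to confirm that Docker inside WSL2 is actually registered with the NVIDIA runtime. A minimal setup sketch, assuming the NVIDIA Container Toolkit (which provides the `nvidia-ctk` CLI) is already installed:

```shell
# Register the NVIDIA runtime with Docker (NVIDIA Container Toolkit CLI)
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker so the runtime change takes effect
sudo systemctl restart docker

# Sanity check: the container should print the same GPU table as the host
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```

If this `nvidia-smi` check fails inside the container, the problem is at the Docker/WSL2 layer rather than in minikube.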
- Minikube start command
minikube start --driver=docker --addons=ingress --addons=nvidia-gpu-device-plugin
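After `minikube start`, the node only becomes schedulable for GPU pods once the device plugin has registered the `nvidia.com/gpu` resource with the kubelet. A quick way to check is to query the node's allocatable resources; the small helper below is a sketch (the function name and the default node name `minikube` are assumptions), with the parsing logic separated out so it can be shown on sample values:

```shell
# has_gpu takes the allocatable GPU count (as reported by kubectl) and
# reports whether the node can schedule GPU pods. An empty value means
# the device plugin never registered the resource.
has_gpu() {
  local count="${1:-0}"
  [ "$count" -ge 1 ] 2>/dev/null && echo "yes" || echo "no"
}

# Typical usage against a live cluster (assumes the default node name "minikube"):
# has_gpu "$(kubectl get node minikube -o jsonpath='{.status.allocatable.nvidia\.com/gpu}')"

has_gpu 1   # prints "yes"
has_gpu ""  # prints "no" -- matches the "Insufficient nvidia.com/gpu" symptom below
```

Checking the device-plugin pod itself (`kubectl get pods -n kube-system`) is also worthwhile; if it is crash-looping, the node will keep advertising zero GPUs.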

- Pod failing with the following error
Warning FailedScheduling 45s default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
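This scheduler message means the pod requests the `nvidia.com/gpu` extended resource but no node advertises any. For reference, a minimal pod manifest that makes such a request looks like the following; the pod name and CUDA image tag here are illustrative, not taken from the issue:

```shell
# Write a minimal GPU-requesting pod manifest (names and image are illustrative)
cat <<'EOF' > gpu-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1   # the resource the scheduler reports as insufficient
EOF

# kubectl apply -f gpu-test-pod.yaml   # stays Pending until a node advertises the GPU
```

Any pod with such a limit will stay `Pending` with exactly this `FailedScheduling` event until the node's allocatable `nvidia.com/gpu` count is at least 1.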
Attach the log file
Operating System
Windows
Driver
Docker
Steps I followed
Metadata
Assignees

