
cri-docker.service - CRI Interface for Docker Application Container Engine Failed #16799

@amrit07nara

Description


What Happened?

sudo minikube start --vm-driver=none

  • minikube v1.30.1 on Ubuntu 20.04 (xen/amd64)
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Running on localhost (CPUs=2, Memory=3921MB, Disk=29587MB) ...
  • OS release is Ubuntu 20.04.6 LTS

X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker: exit status 1
stdout:

stderr:
Job for cri-docker.service failed because the control process exited with error code.
See "systemctl status cri-docker.service" and "journalctl -xe" for details.

╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

root@ip-172-31-83-61:/usr/local/bin# systemctl status cri-docker.service
● cri-docker.service - CRI Interface for Docker Application Container Engine
Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/cri-docker.service.d
└─10-cni.conf
Active: failed (Result: exit-code) since Fri 2023-06-30 17:22:09 UTC; 1min 25s ago
TriggeredBy: ● cri-docker.socket
Docs: https://docs.mirantis.com
Process: 5658 ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.k8s.io/pause:3.9 --network-pl>
Main PID: 5658 (code=exited, status=1/FAILURE)

Jun 30 17:22:09 ip-172-31-83-61 systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
Jun 30 17:22:09 ip-172-31-83-61 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
Jun 30 17:22:09 ip-172-31-83-61 systemd[1]: cri-docker.service: Start request repeated too quickly.
Jun 30 17:22:09 ip-172-31-83-61 systemd[1]: cri-docker.service: Failed with result 'exit-code'.
Jun 30 17:22:09 ip-172-31-83-61 systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
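The status output above only shows the systemd restart loop ("Start request repeated too quickly, restart counter is at 3"); the actual cri-dockerd error message is in the journal. A minimal diagnostic sketch (the helper name is hypothetical; adjust the unit name if yours differs):

```shell
# Hypothetical helper: pull the most recent cri-docker.service lines from the
# journal, since `systemctl status` hides the real error once the restart
# counter trips. Falls back gracefully on hosts without journalctl.
collect_cri_docker_logs() {
    if command -v journalctl >/dev/null 2>&1; then
        # -n 50 limits output to the latest failure; --no-pager for scripting.
        journalctl -u cri-docker.service --no-pager -n 50 2>/dev/null \
            || echo "journalctl query failed (no journal access?)"
    else
        echo "journalctl not available on this host"
    fi
}

collect_cri_docker_logs
```

Running the binary by hand (e.g. `/usr/local/bin/cri-dockerd --version`) can also surface bad-flag or missing-dependency errors directly, without systemd in the way.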

Attach the log file

  • ==> Audit <==

  • |---------|------------------|----------|------|---------|---------------------|----------|
    | Command | Args | Profile | User | Version | Start Time | End Time |
    |---------|------------------|----------|------|---------|---------------------|----------|
    | start | --vm-driver=none | minikube | root | v1.30.1 | 30 Jun 23 17:05 UTC | |
    | start | --vm-driver=none | minikube | root | v1.30.1 | 30 Jun 23 17:06 UTC | |
    | start | --vm-driver=none | minikube | root | v1.30.1 | 30 Jun 23 17:10 UTC | |
    | start | --vm-driver=none | minikube | root | v1.30.1 | 30 Jun 23 17:21 UTC | |
    | start | --vm-driver=none | minikube | root | v1.30.1 | 30 Jun 23 17:22 UTC | |
    | start | --vm-driver=none | minikube | root | v1.30.1 | 30 Jun 23 17:28 UTC | |
    |---------|------------------|----------|------|---------|---------------------|----------|

  • ==> Last Start <==

  • Log file created at: 2023/06/30 17:28:58
    Running on machine: ip-172-31-83-61
    Binary: Built with gc go1.20.2 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0630 17:28:58.945849 5684 out.go:296] Setting OutFile to fd 1 ...
    I0630 17:28:58.946026 5684 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0630 17:28:58.946031 5684 out.go:309] Setting ErrFile to fd 2...
    I0630 17:28:58.946037 5684 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0630 17:28:58.946168 5684 root.go:336] Updating PATH: /root/.minikube/bin
    W0630 17:28:58.946290 5684 root.go:312] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
    I0630 17:28:58.946485 5684 out.go:303] Setting JSON to false
    I0630 17:28:58.947314 5684 start.go:125] hostinfo: {"hostname":"ip-172-31-83-61","uptime":2072,"bootTime":1688144067,"procs":113,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1036-aws","kernelArch":"x86_64","virtualizationSystem":"xen","virtualizationRole":"guest","hostId":"ec2c6501-7c75-f30c-9b18-760705ca851c"}
    I0630 17:28:58.947370 5684 start.go:135] virtualization: xen guest
    I0630 17:28:58.950449 5684 out.go:177] * minikube v1.30.1 on Ubuntu 20.04 (xen/amd64)
    W0630 17:28:58.953431 5684 preload.go:295] Failed to list preload files: open /root/.minikube/cache/preloaded-tarball: no such file or directory
    I0630 17:28:58.953574 5684 notify.go:220] Checking for updates...
    I0630 17:28:58.954783 5684 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.26.3
    I0630 17:28:58.955164 5684 exec_runner.go:51] Run: systemctl --version
    I0630 17:28:58.957594 5684 driver.go:375] Setting default libvirt URI to qemu:///system
    I0630 17:28:58.960070 5684 out.go:177] * Using the none driver based on existing profile
    I0630 17:28:58.962733 5684 start.go:295] selected driver: none
    I0630 17:28:58.962758 5684 start.go:870] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.31.83.61 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: 
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
    I0630 17:28:58.962869 5684 start.go:881] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0630 17:28:58.962896 5684 start.go:1629] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
    I0630 17:28:58.963443 5684 cni.go:84] Creating CNI manager for ""
    I0630 17:28:58.963455 5684 cni.go:157] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
    I0630 17:28:58.963474 5684 start_flags.go:319] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.39@sha256:bf2d9f1e9d837d8deea073611d2605405b6be904647d97ebd9b12045ddfe1106 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.31.83.61 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: 
SocketVMnetPath: StaticIP:}
    I0630 17:28:58.966430 5684 out.go:177] * Starting control plane node minikube in cluster minikube
    I0630 17:28:58.969202 5684 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I0630 17:28:58.969493 5684 cache.go:193] Successfully downloaded all kic artifacts
    I0630 17:28:58.969543 5684 start.go:364] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
    I0630 17:28:58.969660 5684 start.go:368] acquired machines lock for "minikube" in 55.599µs
    I0630 17:28:58.969681 5684 start.go:96] Skipping create...Using existing machine configuration
    I0630 17:28:58.969688 5684 fix.go:55] fixHost starting: m01
    W0630 17:28:58.969950 5684 none.go:130] unable to get port: "minikube" does not appear in /root/.kube/config
    I0630 17:28:58.969961 5684 api_server.go:165] Checking apiserver status ...
    I0630 17:28:58.969990 5684 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.minikube.
    W0630 17:28:59.000619 5684 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: exit status 1
    stdout:

stderr:
I0630 17:28:59.000660 5684 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0630 17:28:59.013093 5684 fix.go:103] recreateIfNeeded on minikube: state=Stopped err=
W0630 17:28:59.013110 5684 fix.go:129] unexpected machine state, will restart:
I0630 17:28:59.015853 5684 out.go:177] * Restarting existing none bare metal machine for "minikube" ...
I0630 17:28:59.019192 5684 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0630 17:28:59.019347 5684 start.go:300] post-start starting for "minikube" (driver="none")
I0630 17:28:59.019393 5684 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0630 17:28:59.019433 5684 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0630 17:28:59.028130 5684 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0630 17:28:59.028152 5684 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0630 17:28:59.028165 5684 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0630 17:28:59.031169 5684 out.go:177] * OS release is Ubuntu 20.04.6 LTS
I0630 17:28:59.033263 5684 filesync.go:126] Scanning /root/.minikube/addons for local assets ...
I0630 17:28:59.033315 5684 filesync.go:126] Scanning /root/.minikube/files for local assets ...
I0630 17:28:59.033341 5684 start.go:303] post-start completed in 13.984017ms
I0630 17:28:59.033350 5684 fix.go:57] fixHost completed within 63.663382ms
I0630 17:28:59.033357 5684 start.go:83] releasing machines lock for "minikube", held for 63.687057ms
I0630 17:28:59.033749 5684 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/loopback.conf"
I0630 17:28:59.033871 5684 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
W0630 17:28:59.035826 5684 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/loopback.conf" not found
I0630 17:28:59.035866 5684 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name bridge -or -name podman ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0630 17:28:59.045636 5684 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0630 17:28:59.045652 5684 start.go:481] detecting cgroup driver to use...
I0630 17:28:59.045677 5684 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0630 17:28:59.045776 5684 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0630 17:28:59.064889 5684 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( )sandbox_image = .$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
W0630 17:28:59.076145 5684 start.go:448] cannot ensure containerd is configured properly and reloaded for docker - cluster might be unstable: update sandbox_image: sh -c "sudo sed -i -r 's|^( )sandbox_image = .$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml": exit status 2
stdout:

stderr:
sed: can't read /etc/containerd/config.toml: No such file or directory
I0630 17:28:59.076170 5684 start.go:481] detecting cgroup driver to use...
I0630 17:28:59.076197 5684 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0630 17:28:59.076395 5684 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0630 17:28:59.095352 5684 exec_runner.go:51] Run: which cri-dockerd
I0630 17:28:59.096298 5684 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0630 17:28:59.104219 5684 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
I0630 17:28:59.104230 5684 exec_runner.go:207] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0630 17:28:59.104310 5684 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (195 bytes)
I0630 17:28:59.104440 5684 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2443376820 /etc/systemd/system/cri-docker.service.d/10-cni.conf
I0630 17:28:59.112704 5684 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0630 17:28:59.342869 5684 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0630 17:28:59.568088 5684 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
I0630 17:28:59.568115 5684 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
I0630 17:28:59.568142 5684 exec_runner.go:207] rm: /etc/docker/daemon.json
I0630 17:28:59.568211 5684 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (144 bytes)
I0630 17:28:59.568359 5684 exec_runner.go:51] Run: sudo cp -a /tmp/minikube12454852 /etc/docker/daemon.json
I0630 17:28:59.580626 5684 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0630 17:28:59.900085 5684 exec_runner.go:51] Run: sudo systemctl restart docker
I0630 17:29:00.225149 5684 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0630 17:29:00.464294 5684 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
I0630 17:29:00.717327 5684 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0630 17:29:00.969918 5684 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0630 17:29:01.199678 5684 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
I0630 17:29:01.217638 5684 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0630 17:29:01.453513 5684 exec_runner.go:51] Run: sudo systemctl restart cri-docker
I0630 17:29:01.541473 5684 out.go:177]
W0630 17:29:01.544289 5684 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker: exit status 1
stdout:

stderr:
Job for cri-docker.service failed because the control process exited with error code.
See "systemctl status cri-docker.service" and "journalctl -xe" for details.

W0630 17:29:01.544319 5684 out.go:239] *
W0630 17:29:01.545514 5684 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0630 17:29:01.551933 5684 out.go:177]
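Per the `printf ... | sudo tee /etc/crictl.yaml` step in the log above, minikube points crictl at cri-dockerd's socket just before restarting the service. A minimal sketch of that config (written to a temp path here so it runs without root; the real target is /etc/crictl.yaml):

```shell
# Sketch of the crictl.yaml minikube writes for the docker runtime.
# Using mktemp instead of /etc/crictl.yaml so no root privileges are needed.
CRICTL_YAML="$(mktemp)"
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' > "$CRICTL_YAML"

# crictl reads this file to find the CRI socket; if cri-docker.service is down,
# queries against this endpoint fail the same way minikube's restart did.
cat "$CRICTL_YAML"
# prints: runtime-endpoint: unix:///var/run/cri-dockerd.sock
```

Once cri-docker.service is healthy, `crictl --config /etc/crictl.yaml info` is a quick way to confirm cri-dockerd is actually serving on that socket.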

Operating System

Ubuntu

Driver

None
