csi-nfs-controller pods on the same node give a CrashLoopBackOff #173

Description

@curx

What happened:

Deployed csi-driver-nfs on the cluster with kubectl apply -f ... using the static manifests.
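
The deploy step looked roughly like the following; the actual argument to -f was truncated in the report, so the path shown is only a hypothetical stand-in for the repo's static manifests:

$ # hypothetical manifest path; the real one was elided above
$ kubectl apply -f deploy/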

What you expected to happen:

All pods are running in a healthy state.

How to reproduce it:

Deploy the static manifests.

Anything else we need to know?:

Both controller pods are scheduled on the same node, even though this is a 5-node cluster (three control-plane nodes and two workers); see the pod listing and the anti-affinity sketch below.

$ kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
NAME                                  READY   STATUS             RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
csi-nfs-controller-84f58c6dcb-n6r6w   2/3     CrashLoopBackOff   201        14h   10.250.10.248   worker1   <none>           <none>
csi-nfs-controller-84f58c6dcb-szzsw   3/3     Running            19         14h   10.250.10.248   worker1   <none>           <none>
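
To narrow down which container is failing, the previous logs of the crashing pod can be pulled without knowing the container names:

$ kubectl -n kube-system describe pod csi-nfs-controller-84f58c6dcb-n6r6w
$ kubectl -n kube-system logs csi-nfs-controller-84f58c6dcb-n6r6w --all-containers --previous

A plausible cause is that the controller Deployment runs two replicas with no spreading constraint, so both can land on the same worker and collide there (for example on a host port if the controller pods use hostNetwork, or over leader election). Below is a minimal sketch of a pod anti-affinity rule for the controller Deployment that would force the replicas onto different nodes; it reuses the app=csi-nfs-controller label from the listing above, and the surrounding manifest structure is an assumption, not the driver's shipped manifest:

spec:                 # patch against the csi-nfs-controller Deployment (assumed structure)
  template:
    spec:
      affinity:
        podAntiAffinity:
          # hard requirement: never co-locate two controller pods on one node
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: csi-nfs-controller   # label taken from the pod listing above
              topologyKey: kubernetes.io/hostname

With this in place (or after deleting one of the pods so the scheduler places the replacement elsewhere), each replica should end up on a different worker.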

Environment:

  • CSI Driver version: v2.1.0
  • Kubernetes version (use kubectl version):
serverVersion:
  buildDate: "2021-01-13T13:20:00Z"
  compiler: gc
  gitCommit: faecb196815e248d3ecfb03c680a4507229c2a56
  gitTreeState: clean
  gitVersion: v1.20.2
  goVersion: go1.15.5
  major: "1"
  minor: "20"
  platform: linux/amd64
  • OS (e.g. from /etc/os-release): Ubuntu 18.04.5 LTS (Bionic Beaver)
