Conversation

@vadorovsky
Member

This PR is not fully functional yet; it's just showing progress.

This change makes lockc deployable on Kubernetes by simply doing

kubectl apply -f contrib/kubernetes/lockc.yaml

That way:

  • lockcd is deployed as a DaemonSet
  • lockc-runc-wrapper is installed on the host by the init container
  • a new component, lockc-k8s-agent, gets deployed as a DaemonSet; its
    purpose is to serve a small API via a UNIX socket which lets
    lockc-runc-wrapper know what kind of policy should be applied for which
    k8s namespace

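The agent/wrapper exchange described above can be sketched as follows. This is a minimal illustration only: the socket path, the plain-text request/response protocol, and the policy names are assumptions, not the actual lockc API.

```python
# Hypothetical sketch of the lockc-k8s-agent protocol: lockc-runc-wrapper
# asks for the policy of a Kubernetes namespace over a UNIX socket.
# The protocol and policy names here are assumptions, not lockc's real API.
import os
import socket
import tempfile
import threading
import time

# Illustrative namespace -> policy mapping the agent might maintain.
POLICIES = {"kube-system": "privileged", "default": "baseline"}


def serve_once(sock_path: str) -> None:
    """Answer a single policy query on a UNIX socket (the agent side)."""
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    conn, _ = server.accept()
    namespace = conn.recv(4096).decode().strip()
    # Unknown namespaces fall back to the most restrictive policy.
    conn.sendall(POLICIES.get(namespace, "restricted").encode())
    conn.close()
    server.close()


def query_policy(sock_path: str, namespace: str) -> str:
    """What lockc-runc-wrapper would do before spawning a container."""
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    for _ in range(100):  # wait briefly for the agent socket to appear
        try:
            client.connect(sock_path)
            break
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.01)
    client.sendall(namespace.encode())
    policy = client.recv(4096).decode()
    client.close()
    return policy


if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "agent.sock")
    threading.Thread(target=serve_once, args=(path,)).start()
    print(query_policy(path, "kube-system"))  # privileged
```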
Container images can be built with:

./scripts/container-build.sh

or built and pushed with:

LOCKC_PUSH=true ./scripts/container-build.sh

Signed-off-by: Michal Rostecki [email protected]

@vadorovsky
Member Author

The issue I have with this PR are pods failing with:

opensuse@lockc-control-plane-0:/usr/local/src/lockc/examples/kubernetes> kubectl describe pod nginx-default-5c4d987847-8n4mq
Name:           nginx-default-5c4d987847-8n4mq
Namespace:      default
Priority:       0
Node:           lockc-control-plane-0/10.16.0.199
Start Time:     Wed, 03 Nov 2021 12:52:46 +0000
Labels:         app=nginx-default
                pod-template-hash=5c4d987847
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/nginx-default-5c4d987847
Containers:
  nginx:
    Container ID:   
    Image:          nginx:1.14.2
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8xrhw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-8xrhw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               19s   default-scheduler  Successfully assigned default/nginx-default-5c4d987847-8n4mq to lockc-control-plane-0
  Warning  FailedCreatePodSandBox  18s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to retrieve OCI runtime container pid: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/36ee18ba4ce6a00fa5693ba52ca06094334d7ee36f918eab0d21b9d61c2cd651/init.pid: no such file or directory: unknown
  Warning  FailedCreatePodSandBox  6s    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: failed to retrieve OCI runtime container pid: open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1acc3f1e827b04fa99dd907f5180c5ca4f65344e43261abda0effa86b0a72e3c/init.pid: no such file or directory: unknown

I've never seen this error before, even when I was using lockc-runc-wrapper as the default runtime. It occurs only when using lockc-runc-wrapper as a secondary runtime via RuntimeClass.
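For reference, wiring up a secondary runtime via RuntimeClass looks roughly like this; the handler name `lockc` and the wrapper path are assumptions based on this PR, not verified against the actual manifests:

```toml
# containerd config sketch: register the wrapper as an additional runtime
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.lockc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.lockc.options]
    BinaryName = "/usr/local/bin/lockc-runc-wrapper"
```

```yaml
# RuntimeClass pointing at that handler; pods opt in with
# `runtimeClassName: lockc` in their spec
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: lockc
handler: lockc
```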

@vadorovsky
Member Author

OK, the same issue appears when I put lockc-runc-wrapper as the main runtime in the containerd config (in the ConfigMap). Weird.
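For context, making a runtime the default in containerd's config is roughly the following; the handler name `lockc` is an assumption, it just has to match a registered runtime entry:

```toml
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "lockc"
```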

hostPID: true
containers:
- name: lockcd
  image: docker.io/vadorovsky/lockcd:latest
Collaborator

Don't use the latest tag for Kubernetes deployments; it breaks RollingUpdate.

    mountPath: /sys/fs/bpf
initContainers:
- name: install-lockc-runc-wrapper
  image: docker.io/vadorovsky/lockc-runc-wrapper:latest
Collaborator

Don't use the latest tag for Kubernetes deployments; it breaks RollingUpdate.

  - name: containerd-config
    mountPath: /config
- name: restart-containerd
  image: busybox:latest
Collaborator

Don't use the latest tag for Kubernetes deployments; it breaks RollingUpdate.

spec:
  containers:
  - name: lockc-k8s-agent
    image: docker.io/vadorovsky/lockc-k8s-agent:latest
Collaborator

Don't use the latest tag for Kubernetes deployments; it breaks RollingUpdate.

- kubectl wait --for=condition=Available deployment --timeout=2m -n cert-manager --all
- helm install -n kube-system kubewarden-crds kubewarden/kubewarden-crds
- helm install --wait -n kube-system kubewarden-controller kubewarden/kubewarden-controller
- kubectl apply -f /usr/local/src/lockc/contrib/kubernetes/lockc.yaml
Collaborator

We would like to have lockc installable as a Helm chart as well.

- helm install --wait -n kube-system kubewarden-controller kubewarden/kubewarden-controller
- kubectl apply -f /usr/local/src/lockc/contrib/kubernetes/lockc.yaml
- kubectl wait --for=condition=Available daemonset --timeout=2m -n lockcd --all
- kubectl apply -f /usr/local/src/lockc/contrib/kubernetes/kubewarden.yaml
Collaborator

This should be added to the Kubewarden Helm chart.

Member Author

@vadorovsky vadorovsky Nov 3, 2021

Not really, it's our way of using Kubewarden and our specific policy. That file is about a policy which enforces usage of the lockc-runc-wrapper RuntimeClass. Maybe I should change its name to be less misleading.
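Such a policy would be declared through Kubewarden's CRD, roughly like the sketch below; the module URL, policy name, and settings are placeholders, not the actual policy shipped in this PR:

```yaml
# Sketch of a Kubewarden ClusterAdmissionPolicy enforcing a RuntimeClass
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: enforce-lockc-runtime-class
spec:
  module: registry://ghcr.io/example/enforce-runtime-class:v0.1.0
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE"]
  mutating: false
  settings:
    runtimeClassName: lockc-runc-wrapper
```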

@vadorovsky
Member Author

vadorovsky commented Nov 3, 2021

So the issue I mentioned actually occurs on main as well; I created bug #92.

@vadorovsky
Member Author

Moved the content of this PR to #93

@vadorovsky vadorovsky closed this Nov 15, 2021
