Upgrade Cilium to version 1.12.3 #15242
Conversation
Welcome @rastaman!
Hi @rastaman. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: rastaman
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Can one of the admins verify this patch?
/ok-to-test
kvm2 driver with docker runtime
Times for minikube start: 54.5s 54.0s 55.7s 55.6s 56.7s
Times for minikube (PR 15242) ingress: 28.3s 29.2s 28.7s 28.8s 28.2s
docker driver with docker runtime
These are the flake rates of all failed tests.
Too many tests failed - See test logs for more details. To see the flake rates of all tests by environment, click here.
I will look at the test results.
@rastaman thank you for this PR! Looking forward to more contributions from you.
Context
The Cilium version deployed by minikube is quite old and doesn't support arm64 setups. This PR updates Cilium to version 1.12.3.
Implementation
Update the Kubernetes template in minikube/pkg/cni/cilium.go so that it deploys the Cilium 1.12.3 images; a sketch of the relevant image references is shown below.
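For illustration only, here is a minimal sketch of the kind of change involved, assuming the manifest is embedded in cilium.go as a Go raw-string template (the package and variable names below are placeholders, not necessarily the real ones); the image references and digests are the ones reported by `cilium status` in the test run further down:

```go
package cni

// Sketch only: the real file embeds the complete Cilium manifest.
// Package and variable names are illustrative; the image references and
// digests match what `cilium status` reports in the Tests section.
var ciliumTmpl = `
# ... DaemonSet/Deployment boilerplate elided ...
        image: "quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826"
# ...
        image: "quay.io/cilium/operator-generic:v1.12.3@sha256:816ec1da586139b595eeb31932c61a7c13b07fb4a0255341c0e0f18608e84eff"
# ...
`
```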
Tests
This has been tested by running the current minikube code from a Mac M1, using the podman driver inside a QEMU VM (Lima VM) running Fedora Core 36 (Fedora-Cloud-Base-36-1.5.aarch64):

```
$ ./out/minikube -p miniforge start --rootless=false --container-runtime=containerd --cpus=4 --cni=cilium --disk-size=20000mb --dns-domain=cluster.local --driver=podman --memory=7916m --nodes=1
😄  [miniforge] minikube v1.27.1 on Darwin 12.6 (arm64)
    ▪ MINIKUBE_ROOTLESS=false
✨  Using the podman driver (experimental) based on user configuration
📌  Using Podman driver with root privileges
👍  Starting control plane node miniforge in cluster miniforge
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase...:  347.52 MiB / 347.52 MiB  100.00% 9.10 Mi
E1029 11:34:47.234886   77006 cache.go:203] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=4, Memory=7916MB) ...
📦  Preparing Kubernetes v1.25.3 on containerd 1.6.9 ...
E1029 11:35:16.036962   77006 start.go:130] Unable to get host IP: RoutableHostIPFromInside is currently only implemented for linux
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring Cilium (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "miniforge" cluster and "default" namespace by default
```

```
$ kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
cilium-mbzrq                        1/1     Running   0          46s
cilium-operator-6b885c4575-6f954    1/1     Running   0          46s
coredns-565d847f94-6f6vm            1/1     Running   0          46s
etcd-miniforge                      1/1     Running   0          58s
kube-apiserver-miniforge            1/1     Running   0          59s
kube-controller-manager-miniforge   1/1     Running   0          58s
kube-proxy-72czb                    1/1     Running   0          46s
kube-scheduler-miniforge            1/1     Running   0          59s
storage-provisioner                 1/1     Running   0          57s
```

```
$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     0/1 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.3@sha256:30de50c4dc0a1e1077e9e7917a54d5cab253058b3f779822aec00f5c817ca826: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.3@sha256:816ec1da586139b595eeb31932c61a7c13b07fb4a0255341c0e0f18608e84eff: 1
```

```
$ kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium status
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.25 (v1.25.3) [linux/arm64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Probe   [eth0 192.168.49.2]
Host firewall:           Disabled
CNI Chaining:            none
Cilium:                  Ok   1.12.3 (v1.12.3-1c466d2)
NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 2/254 allocated from 10.244.0.0/24,
ClusterMesh:             0/0 clusters ready, 0 global-services
BandwidthManager:        Disabled
Host Routing:            BPF
Masquerading:            BPF   [eth0]   10.244.0.0/24 [IPv4: Enabled, IPv6: Disabled]
Controller Status:       18/18 healthy
Proxy Status:            OK, ip 10.244.0.214, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 36/4095 (0.88%), Flows/s: 0.20   Metrics: Disabled
Encryption:              Disabled
Cluster health:          1/1 reachable   (2022-10-29T09:37:02Z)
```
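Not part of the run recorded above, but as an optional follow-up, the same cilium CLI used for `cilium status` can exercise the upgraded datapath end to end; a sketch:

```
# Optional: deploy Cilium's built-in connectivity checks (cilium CLI as used above).
$ cilium connectivity test

# Workload pods should then show up as endpoints managed by the 1.12.3 agent.
$ kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium endpoint list
```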