# Local Kubernetes Deployment with Minikube
This project demonstrates the process of building a local Kubernetes cluster using Minikube, deploying a sample Nginx application, and managing its lifecycle. The entire task was performed within a GitHub Codespaces environment.
A key part of this exercise involved diagnosing and solving a series of complex networking challenges specific to this cloud-based development environment.
# Objective 🎯
The core objective was to deploy, manage, and scale a web application in a local Kubernetes cluster to understand the fundamentals of Kubernetes deployments and services.
# Tools Used 🛠️
- GitHub Codespaces: The cloud-based development environment.
- Minikube: To create and manage a local Kubernetes cluster.
- kubectl: The command-line tool for interacting with the Kubernetes API.
- Docker: The container runtime used by Minikube.
# Process & Implementation Steps ⚙️
The deployment followed these key steps; illustrative command and manifest sketches for each step follow the list:
- Cluster Creation: Minikube and kubectl were installed in the Codespace, and a new Kubernetes cluster was launched using the minikube start command.
- Application Deployment: A deployment.yaml file was created to define the desired state for our application. It was configured to run two replicas of the nginx:1.14.2 container image.
- Exposing the Application: A service.yaml file was created to expose the Nginx deployment to network traffic. It was configured as a NodePort service, making the application accessible from outside the cluster.
- Verification and Scaling: The kubectl get pods command was used to verify that the two Nginx pods were running successfully. The application was then scaled up to 4 replicas using the kubectl scale deployment nginx-deployment --replicas=4 command.
- Log Inspection: The kubectl logs command was used to view the application's access logs.
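The commands below sketch the cluster creation step. The exact installation procedure varies by environment, and the --driver=docker flag assumes Docker is available in the Codespace, as it was here:

```bash
# Start a single-node cluster using the Docker driver available in the Codespace
minikube start --driver=docker

# Confirm kubectl can reach the new cluster
kubectl cluster-info
kubectl get nodes
```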
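A minimal sketch of what the deployment.yaml could look like, based on the details above (the nginx-deployment name, two replicas of nginx:1.14.2, and the app=nginx label used later for pod selection); the manifest actually used may differ:

```bash
# Create deployment.yaml: two replicas of nginx:1.14.2 labelled app=nginx
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
```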
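Similarly, a sketch of the NodePort service manifest. The name nginx-service is an assumed placeholder, since the write-up does not record the actual service name:

```bash
# Create service.yaml: expose the app=nginx pods on a NodePort
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # assumed name; substitute the one actually used
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```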
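Applying the manifests, then verifying and scaling the deployment as described above:

```bash
# Apply both manifests
kubectl apply -f deployment.yaml -f service.yaml

# Verify the two initial pods are Running
kubectl get pods

# Scale up to four replicas and confirm
kubectl scale deployment nginx-deployment --replicas=4
kubectl get pods
```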
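Finally, access logs can be pulled from a single pod or from every pod matching the app=nginx label:

```bash
# Logs from one pod (substitute an actual pod name from `kubectl get pods`)
kubectl logs <pod-name>

# Or tail recent log lines from all Nginx pods via the label selector
kubectl logs -l app=nginx --tail=20
```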
# Challenges & Troubleshooting Journey 🕵️‍♂️
Accessing the application within the Codespaces environment proved to be a significant challenge, leading to a deep-dive troubleshooting process.
Problem 1: Connection Timeout
- Issue: Initial attempts to access the service using minikube service resulted in an ERR_CONNECTION_TIMED_OUT error.
- Reasoning: The command returns a URL based on Minikube's internal node IP (192.168.x.x), which is only reachable inside the Codespace and not from a browser outside its virtual network.
- Solution: The correct NodePort was manually forwarded using the Codespaces "Ports" tab.
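For reference, the NodePort that has to be forwarded can be read from the Service; this sketch assumes the nginx-service name used in the manifest sketch above:

```bash
# Show the service and the node port Kubernetes assigned to it
kubectl get service nginx-service
kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'
```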
Problem 2: Persistent 502 Bad Gateway Error
- Issue: Even with the correct port forwarded, the browser returned a persistent HTTP 502 Bad Gateway error. This indicated that the Codespaces proxy could not get a valid response from the application.
- Troubleshooting Steps Taken:
  - Verified Service Endpoints: Confirmed that the Service was correctly linked to the Pods (kubectl describe service).
  - Attempted minikube tunnel: Created a network route to expose the service's IP, but this did not resolve the issue.
  - Restarted Pods: Forced a recreation of all pods (kubectl delete pods -l app=nginx) to resolve any potential zombie processes.
  - Full Cluster Reset: Completely deleted (minikube delete) and rebuilt the cluster from scratch.
- Definitive Solution: After exhausting all standard methods, the issue was identified as a fundamental networking incompatibility between Minikube's service routing and the GitHub Codespaces environment. The solution was to bypass the service network entirely and establish a direct connection to a pod with kubectl port-forward <pod-name> 8080:80. This provided a stable, reliable connection and immediately resolved the 502 error.
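The commands behind these troubleshooting steps and the final fix are sketched below, again assuming the nginx-service placeholder name; the nginx-deployment name and app=nginx label come from the steps above:

```bash
# Diagnostics that were attempted first
kubectl describe service nginx-service     # confirm the Endpoints list the pod IPs
minikube tunnel                             # route service traffic to the host (runs in the foreground; did not help here)
kubectl delete pods -l app=nginx            # force the Deployment to recreate every pod
minikube delete && minikube start           # full cluster rebuild from scratch

# Definitive fix: bypass the Service and forward a local port straight to one pod
POD=$(kubectl get pods -l app=nginx -o name | head -n 1)
kubectl port-forward "$POD" 8080:80

# In a second terminal (or after forwarding port 8080 in the Codespaces "Ports" tab):
curl -I http://localhost:8080               # expect an HTTP/1.1 200 OK from Nginx
```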
# Deliverables: Screenshots 📸
Here are the key moments from the task:
- Initial Deployment Verified

This screenshot shows the output of kubectl get pods right after the initial deployment, confirming that the two Nginx pods were created and in a Running state.
- Deployment Scaled to 4 Replicas

This screenshot shows the output of kubectl get pods after executing the kubectl scale command, verifying that the deployment was successfully scaled to four running pods.
- Successful Log Verification

This final screenshot shows the Nginx access logs retrieved using kubectl logs. These logs were generated by refreshing the browser after a stable connection was finally established using kubectl port-forward.
# Outcome & Key Learnings 🧠
This task was a successful demonstration of Kubernetes fundamentals. More importantly, it was a valuable real-world exercise in advanced troubleshooting, highlighting the necessity of understanding different network exposure strategies (NodePort vs. port-forward) when working in complex, cloud-based environments.