A complete Kubernetes homelab setup using kind (Kubernetes in Docker) for learning and experimentation. Includes Traefik ingress controller, persistent storage, and ArgoCD for GitOps workflows.
- Configurable kind cluster - Single-node by default, multi-node with WORKERS=N
- Traefik Ingress Controller - Modern, actively maintained ingress with dashboard
- Persistent Storage - Dynamic volume provisioning with local-path-provisioner
- ArgoCD - GitOps-based deployment and application lifecycle management
- Makefile Automation - One command to create/destroy the entire stack
- Example Applications - Ready-to-deploy sample apps to test the setup
- Docker Desktop (or Docker Engine) running on your machine
- kubectl - Kubernetes command-line tool
- kind - Kubernetes in Docker
```sh
# Install tools automatically
./scripts/install-tools.sh

# Or install manually with Homebrew
brew install kubectl kind
```

```sh
# Create single-node cluster (default)
make create

# Create multi-node cluster with 2 workers
make create WORKERS=2
```

This single command will:
- Generate kind cluster configuration (1 control-plane + N workers)
- Create a kind cluster with ingress support
- Install local-path storage provisioner
- Install Traefik ingress controller
- Install and configure ArgoCD
- Display access information
Expected time: ~2 minutes (single-node), ~3-4 minutes (multi-node)
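The generated `cluster/kind-config.yaml` is gitignored, but based on the steps above (ingress port mappings on the control-plane, optional workers) it plausibly looks like this sketch — the exact contents come from `scripts/generate-kind-config.sh`:

```yaml
# Hypothetical sketch of a generated config for WORKERS=2.
# The real file is produced by scripts/generate-kind-config.sh.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80    # host port 80 -> Traefik ingress
        hostPort: 80
      - containerPort: 443   # host port 443 -> Traefik ingress
        hostPort: 443
  - role: worker
  - role: worker
```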
After creation, you can access:
- Traefik Dashboard: http://traefik.localhost
- ArgoCD UI: http://argocd.localhost
  - Username: `admin`
  - Password: run `make argocd-password` to retrieve
```sh
# Deploy sample nginx application
make example-app

# Access at: http://example.localhost
```

```sh
# View cluster and component status
make status

# View access information
make info
```

```sh
# Delete entire cluster
make delete

# Or delete and recreate (fresh start)
make recreate
```

```sh
make help                    # Show all available commands
make create                  # Create single-node cluster (1 control-plane)
make create WORKERS=2        # Create multi-node cluster (1 control-plane + 2 workers)
make delete                  # Delete entire cluster
make recreate                # Delete and recreate cluster
make recreate WORKERS=3      # Recreate with 3 worker nodes
make status                  # Show cluster and component status
make info                    # Display access URLs and credentials
make example-app             # Deploy example application
make argocd-password         # Retrieve ArgoCD admin password
make install-storage         # Install storage provisioner only
make install-ingress         # Install ingress controller only
make install-argocd          # Install ArgoCD only
```

```
home-kind/
├── Makefile                           # Primary automation interface
├── README.md                          # This file
├── cluster/
│   ├── kind-config.yaml               # Generated cluster config (gitignored)
│   ├── kind-config.yaml.template      # Static template for reference
│   ├── traefik/
│   │   ├── values.yaml                # Traefik Helm values
│   │   └── install.yaml               # Traefik ingress manifests
│   ├── local-path-provisioner/
│   │   └── install.yaml               # Storage provisioner manifests
│   └── argocd/
│       ├── install.yaml               # ArgoCD installation
│       ├── ingress.yaml               # ArgoCD UI ingress
│       ├── server-insecure-patch.yaml # ArgoCD insecure mode patch
│       └── apps/                      # ArgoCD Application CRs
├── apps/
│   └── example-app/                   # Sample application
│       ├── deployment.yaml
│       ├── service.yaml
│       └── ingress.yaml
├── scripts/
│   ├── generate-kind-config.sh        # Generates cluster config based on WORKERS
│   ├── install-tools.sh               # Tool installation helper
│   └── wait-for-ready.sh              # Component readiness checker
└── docs/
    ├── ARCHITECTURE.md                # Detailed architecture documentation
    └── TROUBLESHOOTING.md             # Common issues and solutions
```
- kind runs each cluster node as a Docker container
- Ports 80 and 443 are mapped from host to container
- Traefik ingress controller receives traffic on these ports
- `.localhost` domains resolve to `127.0.0.1` automatically
- Ingress routes traffic based on hostname to services
- Traefik dashboard provides visibility into routing rules
Example: http://example.localhost → Traefik → Service → Pod
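An Ingress that plugs into this flow might look like the following sketch (hostnames and service names are illustrative; the shipped version lives in `apps/example-app/ingress.yaml`):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: traefik
  rules:
    - host: example.localhost    # Traefik matches requests on this hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example    # illustrative service name
                port:
                  number: 80
```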
- Local-path-provisioner creates PersistentVolumes on-demand
- Storage uses directories on the kind node (inside container)
- Data persists across pod restarts
- Data is lost when cluster is deleted (kind node is ephemeral)
- Runs in dedicated `argocd` namespace
- UI accessible via ingress at http://argocd.localhost
- Watches git repositories for application definitions
- Automatically syncs desired state to cluster
- Provides visibility into deployment status
Open http://argocd.localhost in your browser.
```sh
# Get admin password
make argocd-password

# Or manually
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```

Username: `admin`
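The manual command works because a Secret stores its `.data` values base64-encoded; the pipeline simply decodes one field. A minimal illustration of that decode step, using a made-up value (not a real ArgoCD password):

```shell
# Secret .data values are base64-encoded.
# "cGFzc3dvcmQxMjM=" is a made-up example value, not a real password.
encoded="cGFzc3dvcmQxMjM="
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints: password123
```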
Create an Application manifest in `cluster/argocd/apps/`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo
    targetRevision: main
    path: kubernetes/
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Apply it:

```sh
kubectl apply -f cluster/argocd/apps/my-app.yaml
```

ArgoCD will automatically deploy your application!
```sh
# Create a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Create a pod that uses it
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        - name: storage
          mountPath: /data
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: test-pvc
EOF

# Verify
kubectl get pvc test-pvc
kubectl get pod test-pod
```

```sh
# Deploy example app
make example-app

# Test with curl
curl http://example.localhost

# Or open in browser
open http://example.localhost
```

Example: WordPress + MySQL
```sh
kubectl create namespace wordpress

# MySQL with persistent storage
kubectl apply -n wordpress -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:8.0
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
            - name: MYSQL_DATABASE
              value: wordpress
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
EOF
```
```sh
# WordPress
kubectl apply -n wordpress -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:6.2-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              value: changeme
          ports:
            - containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
spec:
  ingressClassName: traefik
  rules:
    - host: wordpress.localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wordpress
                port:
                  number: 80
EOF
```
Access at: http://wordpress.localhost

```sh
# Check Docker is running
docker info

# Check for port conflicts
sudo lsof -i :80
sudo lsof -i :443

# View detailed docs
cat docs/TROUBLESHOOTING.md
```

```sh
# Verify Traefik is running
kubectl get pods -n traefik

# Check ingress rules
kubectl get ingress --all-namespaces

# Check Traefik dashboard
open http://traefik.localhost

# Test connectivity
curl -v http://example.localhost
```

```sh
# Get fresh password
make argocd-password

# Check ArgoCD is running
kubectl get pods -n argocd

# Restart ArgoCD server
kubectl rollout restart -n argocd deployment/argocd-server
```

For detailed troubleshooting, see docs/TROUBLESHOOTING.md.
For detailed architecture documentation, see docs/ARCHITECTURE.md.
Key highlights:
- Configurable nodes: 1 control-plane + 0-5 workers (default: single-node)
- Control-plane runs in Docker with port mappings (80/443)
- Ingress uses host port mappings on control-plane
- Storage provisioner creates hostPath volumes on any node
- ArgoCD for GitOps-based deployments
- All components in dedicated namespaces
```sh
# Add Prometheus Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Open: http://localhost:3000 (admin/prom-operator)
```

```sh
# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

# Create self-signed ClusterIssuer
# (for production, use Let's Encrypt)
```

Worker nodes are configured at cluster creation time using the WORKERS variable:

```sh
# Delete existing cluster
make delete

# Recreate with desired number of workers (max 5)
make create WORKERS=3
```

When to use multiple nodes:
- Testing pod scheduling and node affinity
- Learning about node selectors and taints/tolerations
- Testing workload distribution across nodes
- Simulating production-like environments
Resource considerations:
- Each worker node requires ~500MB RAM
- Docker Desktop should have 4GB+ for 0-1 workers, 6GB+ for 2+ workers
- Maximum 5 worker nodes supported
Note: Worker nodes cannot be added to an existing cluster - you must recreate the cluster with the desired WORKERS count.
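The constraints above imply the config generator simply emits one worker entry per requested node, capped at 5. A minimal sketch of that logic (hypothetical — the real implementation lives in `scripts/generate-kind-config.sh`):

```shell
# generate_kind_config N — print a kind cluster config with N workers (capped at 5).
# Hypothetical sketch; the actual script is scripts/generate-kind-config.sh.
generate_kind_config() {
  workers=$1
  if [ "$workers" -gt 5 ]; then workers=5; fi
  printf 'kind: Cluster\napiVersion: kind.x-k8s.io/v1alpha4\nnodes:\n'
  printf '  - role: control-plane\n'
  i=0
  while [ "$i" -lt "$workers" ]; do
    printf '  - role: worker\n'
    i=$((i + 1))
  done
}

generate_kind_config 2   # config for 1 control-plane + 2 workers
```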
- This is for learning/development only - not production-ready
- All data is lost when cluster is deleted
- No TLS/HTTPS configured (all HTTP)
- Default passwords should be changed
- macOS Docker Desktop should have 4GB+ memory allocated
This is a personal homelab setup, but suggestions are welcome! Feel free to:
- Fork and customize for your needs
- Open issues for bugs or improvements
- Share your extensions and additions
MIT License - Feel free to use and modify as needed.
- kind - Kubernetes in Docker
- Traefik - Modern ingress controller with excellent kind support
- Local Path Provisioner
- ArgoCD
Happy Learning! 🚀
For questions or issues, see docs/TROUBLESHOOTING.md or open an issue.