Kind Homelab Cluster

A complete Kubernetes homelab setup using kind (Kubernetes in Docker) for learning and experimentation. Includes Traefik ingress controller, persistent storage, and ArgoCD for GitOps workflows.

Features

  • Configurable kind cluster - Single-node by default, multi-node with WORKERS=N
  • Traefik Ingress Controller - Modern, actively maintained ingress with dashboard
  • Persistent Storage - Dynamic volume provisioning with local-path-provisioner
  • ArgoCD - GitOps-based deployment and application lifecycle management
  • Makefile Automation - One command to create/destroy the entire stack
  • Example Applications - Ready-to-deploy sample apps to test the setup

Quick Start

Prerequisites

  • Docker Desktop (or Docker Engine) running on your machine
  • kubectl - Kubernetes command-line tool
  • kind - Kubernetes in Docker

Install Prerequisites (macOS)

# Install tools automatically
./scripts/install-tools.sh

# Or install manually with Homebrew
brew install kubectl kind

Create Cluster

# Create single-node cluster (default)
make create

# Create multi-node cluster with 2 workers
make create WORKERS=2

Either command will:

  1. Generate kind cluster configuration (1 control-plane + N workers)
  2. Create a kind cluster with ingress support
  3. Install local-path storage provisioner
  4. Install Traefik ingress controller
  5. Install and configure ArgoCD
  6. Display access information

Expected time: ~2 minutes (single-node), ~3-4 minutes (multi-node)
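For reference, an ingress-ready kind config with host port mappings typically looks like the following sketch (the actual generated cluster/kind-config.yaml may differ):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```

The extraPortMappings entries are what let http://\*.localhost traffic on the host reach the ingress controller inside the kind node.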

Access Services

After creation, you can access:

  • Traefik dashboard - http://traefik.localhost
  • ArgoCD UI - http://argocd.localhost
  • Example app (after make example-app) - http://example.localhost

Deploy Example Application

# Deploy sample nginx application
make example-app

# Access at: http://example.localhost

Check Status

# View cluster and component status
make status

# View access information
make info

Destroy Cluster

# Delete entire cluster
make delete

# Or delete and recreate (fresh start)
make recreate

Available Commands

make help                   # Show all available commands
make create                 # Create single-node cluster (1 control-plane)
make create WORKERS=2       # Create multi-node cluster (1 control-plane + 2 workers)
make delete                 # Delete entire cluster
make recreate               # Delete and recreate cluster
make recreate WORKERS=3     # Recreate with 3 worker nodes
make status                 # Show cluster and component status
make info                   # Display access URLs and credentials
make example-app            # Deploy example application
make argocd-password        # Retrieve ArgoCD admin password
make install-storage        # Install storage provisioner only
make install-ingress        # Install ingress controller only
make install-argocd         # Install ArgoCD only

Project Structure

home-kind/
├── Makefile                          # Primary automation interface
├── README.md                         # This file
├── cluster/
│   ├── kind-config.yaml             # Generated cluster config (gitignored)
│   ├── kind-config.yaml.template    # Static template for reference
│   ├── traefik/
│   │   ├── values.yaml              # Traefik Helm values
│   │   └── install.yaml             # Traefik ingress manifests
│   ├── local-path-provisioner/
│   │   └── install.yaml             # Storage provisioner manifests
│   └── argocd/
│       ├── install.yaml             # ArgoCD installation
│       ├── ingress.yaml             # ArgoCD UI ingress
│       ├── server-insecure-patch.yaml # ArgoCD insecure mode patch
│       └── apps/                    # ArgoCD Application CRs
├── apps/
│   └── example-app/                 # Sample application
│       ├── deployment.yaml
│       ├── service.yaml
│       └── ingress.yaml
├── scripts/
│   ├── generate-kind-config.sh      # Generates cluster config based on WORKERS
│   ├── install-tools.sh             # Tool installation helper
│   └── wait-for-ready.sh            # Component readiness checker
└── docs/
    ├── ARCHITECTURE.md              # Detailed architecture documentation
    └── TROUBLESHOOTING.md           # Common issues and solutions

How It Works

Networking

  • Kind creates a Docker container running Kubernetes
  • Ports 80 and 443 are mapped from host to container
  • Traefik ingress controller receives traffic on these ports
  • .localhost domains resolve to 127.0.0.1 automatically
  • Ingress routes traffic based on hostname to services
  • Traefik dashboard provides visibility into routing rules

Example: http://example.localhost → Traefik → Service → Pod
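The hostname-based routing in that flow is declared with a standard Ingress resource. A minimal sketch (the service name example-app is an assumption; the real manifest lives in apps/example-app/ingress.yaml):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  ingressClassName: traefik
  rules:
  - host: example.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 80
```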

Storage

  • Local-path-provisioner creates PersistentVolumes on-demand
  • Storage uses directories on the kind node (inside container)
  • Data persists across pod restarts
  • Data is lost when cluster is deleted (kind node is ephemeral)

ArgoCD

  • Runs in dedicated argocd namespace
  • UI accessible via ingress at http://argocd.localhost
  • Watches git repositories for application definitions
  • Automatically syncs desired state to cluster
  • Provides visibility into deployment status

Using ArgoCD

1. Access the UI

Open http://argocd.localhost in your browser.

2. Login

# Get admin password
make argocd-password

# Or manually
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

Username: admin
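The secret value is base64-encoded, which is why the manual command pipes the jsonpath output through base64 -d. The decode step, demonstrated standalone on a stand-in value (hunter2 is just a placeholder, not the real password):

```shell
# Encode a stand-in value the way Kubernetes stores Secret data,
# then decode it the same way the command above does.
encoded=$(printf 'hunter2' | base64)
printf '%s' "$encoded" | base64 -d   # prints: hunter2
```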

3. Deploy an Application

Create an Application manifest in cluster/argocd/apps/:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo
    targetRevision: main
    path: kubernetes/
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

Apply it:

kubectl apply -f cluster/argocd/apps/my-app.yaml

ArgoCD will automatically deploy your application!

Learning Resources

  • Kubernetes Basics
  • Kind
  • Ingress
  • Storage
  • ArgoCD

Example Workflows

Test Persistent Storage

# Create a PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Create a pod that uses it
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: nginx
    volumeMounts:
    - name: storage
      mountPath: /data
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: test-pvc
EOF

# Verify
kubectl get pvc test-pvc
kubectl get pod test-pod

Test Ingress

# Deploy example app
make example-app

# Test with curl
curl http://example.localhost

# Or open in browser
open http://example.localhost

Deploy Multi-Container Application

# Example: WordPress + MySQL
kubectl create namespace wordpress

# MySQL with persistent storage
kubectl apply -n wordpress -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:8.0
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme
        - name: MYSQL_DATABASE
          value: wordpress
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
EOF

# WordPress
kubectl apply -n wordpress -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:6.2-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          value: changeme
        ports:
        - containerPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
spec:
  ingressClassName: traefik
  rules:
  - host: wordpress.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress
            port:
              number: 80
EOF

# Access at: http://wordpress.localhost

Troubleshooting

Cluster Won't Start

# Check Docker is running
docker info

# Check for port conflicts
sudo lsof -i :80
sudo lsof -i :443

# View detailed docs
cat docs/TROUBLESHOOTING.md

Can't Access Services

# Verify Traefik is running
kubectl get pods -n traefik

# Check ingress rules
kubectl get ingress --all-namespaces

# Check Traefik dashboard
open http://traefik.localhost

# Test connectivity
curl -v http://example.localhost

ArgoCD Won't Login

# Get fresh password
make argocd-password

# Check ArgoCD is running
kubectl get pods -n argocd

# Restart ArgoCD server
kubectl rollout restart -n argocd deployment/argocd-server

For detailed troubleshooting, see docs/TROUBLESHOOTING.md.

Architecture

For detailed architecture documentation, see docs/ARCHITECTURE.md.

Key highlights:

  • Configurable nodes: 1 control-plane + 0-5 workers (default: single-node)
  • Control-plane runs in Docker with port mappings (80/443)
  • Ingress uses host port mappings on control-plane
  • Storage provisioner creates hostPath volumes on any node
  • ArgoCD for GitOps-based deployments
  • All components in dedicated namespaces

Extending the Cluster

Add Monitoring (Prometheus + Grafana)

# Add Prometheus Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install kube-prometheus-stack
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Access Grafana
kubectl port-forward -n monitoring svc/prometheus-grafana 3000:80
# Open: http://localhost:3000 (admin/prom-operator)

Add Cert-Manager

# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

# Create self-signed ClusterIssuer
# (for production, use Let's Encrypt)

Add Worker Nodes

Worker nodes are configured at cluster creation time using the WORKERS variable:

# Delete existing cluster
make delete

# Recreate with desired number of workers (max 5)
make create WORKERS=3

When to use multiple nodes:

  • Testing pod scheduling and node affinity
  • Learning about node selectors and taints/tolerations
  • Testing workload distribution across nodes
  • Simulating production-like environments

Resource considerations:

  • Each worker node requires ~500MB RAM
  • Docker Desktop should have 4GB+ for 0-1 workers, 6GB+ for 2+ workers
  • Maximum 5 worker nodes supported

Note: Worker nodes cannot be added to an existing kind cluster; you must recreate the cluster with the desired WORKERS count.
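The per-worker logic in scripts/generate-kind-config.sh presumably boils down to emitting one node entry per requested worker. A hypothetical sketch (the real script may differ):

```shell
# Emit a kind cluster config to stdout: one control-plane node,
# plus one worker entry per requested worker (default 0).
WORKERS="${WORKERS:-0}"
echo "kind: Cluster"
echo "apiVersion: kind.x-k8s.io/v1alpha4"
echo "nodes:"
echo "- role: control-plane"
for _ in $(seq 1 "$WORKERS"); do
  echo "- role: worker"
done
```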

Important Notes

  • This is for learning/development only - not production-ready
  • All data is lost when cluster is deleted
  • No TLS/HTTPS configured (all HTTP)
  • Default passwords should be changed
  • macOS Docker Desktop should have 4GB+ memory allocated

Contributing

This is a personal homelab setup, but suggestions are welcome! Feel free to:

  • Fork and customize for your needs
  • Open issues for bugs or improvements
  • Share your extensions and additions

License

MIT License - Feel free to use and modify as needed.

Happy Learning! 🚀

For questions or issues, see docs/TROUBLESHOOTING.md or open an issue.
