MicroK8s is a powerful yet lightweight Kubernetes distribution perfect for:
- Developers testing Kubernetes locally
- CI/CD environments
- Homelabs
- Edge and IoT systems
- Single-node or small clusters
A small server like the one prepared in Part 1 is ideal for running MicroK8s with multiple workloads, monitoring tools, and CI/CD pipelines.
What You'll Learn
In this guide, you will:
- Install MicroK8s
- Configure user permissions
- Enable essential add-ons
- Verify the cluster is running
- Deploy a test application
- Ensure services autostart on boot
Why MicroK8s?
MicroK8s is:
- Lightweight: minimal resource footprint
- Easy to install: one-command setup
- Zero external dependencies: everything bundled
- Production-ready: enterprise-grade
- Beginner- and pro-friendly: great for learning and for production
It includes core Kubernetes components and plug-and-play add-ons like DNS, Ingress, Helm, and the Dashboard.
Install MicroK8s
Update your system
sudo apt update && sudo apt upgrade -y
Install MicroK8s via Snap
sudo snap install microk8s --classic
This will take a few minutes. MicroK8s installs the latest stable Kubernetes release.
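If you would rather pin a specific Kubernetes minor version than track the latest stable release, Snap channels support that. A small optional sketch; the 1.31/stable channel name is only illustrative, so check what is actually published first:

```bash
# List the channels MicroK8s publishes (e.g. 1.31/stable, 1.32/stable)
snap info microk8s

# Install from a pinned channel instead of the default latest/stable track
sudo snap install microk8s --classic --channel=1.31/stable
```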
Verify installation
sudo microk8s status --wait-ready
Expected output:
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
Kubernetes is up and running!
Add Your User to the MicroK8s Group
MicroK8s creates its own Unix group to manage access. Add your user to avoid using sudo every time:
sudo usermod -a -G microk8s $USER
Apply group changes immediately without logging out:
newgrp microk8s
Now you can run microk8s commands without sudo!
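As a quick sanity check, confirm the new group is active in your session and that an unprivileged command works:

```bash
# The microk8s group should appear for your current session
groups | grep microk8s

# This should now succeed without sudo
microk8s kubectl get nodes
```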
Pro Tip: Create a kubectl Alias
Typing microk8s kubectl every time is tedious. Let's create an alias so you can just type kubectl.
Add the alias to your bash configuration:
echo 'alias kubectl="microk8s kubectl"' >> ~/.bashrc
Apply the changes to your current session:
source ~/.bashrc
Test it:
kubectl version --client
From this point forward, all commands in this guide will use kubectl for brevity, but remember it is running the MicroK8s version.
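If you would rather have a real kubectl command than a shell alias (handy for scripts and non-interactive shells), Snap can register one for you. This does the same job as the alias above, so pick one approach:

```bash
# Register a system-wide `kubectl` command that points at microk8s.kubectl
sudo snap alias microk8s.kubectl kubectl

# Verify it resolves
which kubectl
kubectl version --client
```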
Enable Essential Kubernetes Add-ons
Recommended add-ons
Enable DNS for internal service discovery:
microk8s enable dns
Enable storage for persistent volumes (databases, file storage); on recent MicroK8s releases this add-on is named hostpath-storage, with storage kept as a deprecated alias:
microk8s enable hostpath-storage
Enable Ingress for external access (domain routing, SSL):
microk8s enable ingress
Optional but useful add-ons
Enable Helm 3 package manager:
microk8s enable helm3
Enable Kubernetes Dashboard (UI):
microk8s enable dashboard
Enable Metrics Server (for monitoring CPU/RAM usage):
microk8s enable metrics-server
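If you would rather script the whole list than type each enable command, a short bash loop works; the add-on names below simply mirror the choices above:

```bash
# Enable the recommended and optional add-ons one after another
for addon in dns hostpath-storage ingress helm3 dashboard metrics-server; do
  microk8s enable "$addon"
done
```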
Check status
microk8s status
You should see all enabled add-ons listed as active.
Test Your Kubernetes Cluster
Check node status
kubectl get nodes
Expected output:
NAME STATUS ROLES AGE VERSION
hostname Ready <none> 5m v1.31.3
STATUS should be Ready.
Check system pods
kubectl get pods --all-namespaces
All components should show Running or Completed status:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-xxxxx 1/1 Running 0 5m
kube-system coredns-xxxxx 1/1 Running 0 4m
ingress nginx-ingress-microk8s-controller-xxxxx 1/1 Running 0 3m
If everything shows Running, your cluster is healthy!
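If some pods are still in ContainerCreating or Pending, you can let kubectl block until things settle instead of re-running get by hand. A quick sketch (long-running pods only; a pod that finishes as Completed never reports Ready):

```bash
# Wait up to two minutes for the node and the kube-system pods to report Ready
kubectl wait --for=condition=Ready node --all --timeout=120s
kubectl wait --for=condition=Ready pod --all -n kube-system --timeout=120s
```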
Deploy a Test NGINX App
Let's deploy a simple NGINX workload to verify everything works end-to-end:
Create deployment
kubectl create deployment nginx --image=nginx
Expose the deployment
kubectl expose deployment nginx --port=80 --type=NodePort
Get service details
kubectl get svc nginx
You'll see output like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.152.183.45 <none> 80:31234/TCP 10s
Note the NodePort (e.g., 31234).
Access the application
Open your browser and visit:
http://your-server-ip:31234
If you see the NGINX welcome page, your Kubernetes cluster is working end to end!
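For reference, the same test workload can also be written declaratively and applied in one shot, which is the form you'll typically use for real workloads. A minimal sketch that mirrors the two imperative commands above (the nginx names and the app label match what kubectl create/expose would have generated); if you already created the deployment imperatively, treat it as a reference rather than something to apply on top:

```bash
# Declarative equivalent of `kubectl create deployment` + `kubectl expose`
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```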
Ensuring MicroK8s Auto-Starts on Boot
Because MicroK8s is installed via Snap, there is no plain microk8s systemd service to enable; startup is managed through Snap's own service tooling. Snap enables the daemons at boot by default, but let's verify and make sure everything is active.
Step 1: Check Service Status
sudo snap services microk8s
You should see:
Service                          Startup  Current
microk8s.daemon-containerd       enabled  active
microk8s.daemon-kubelite         enabled  active
microk8s.daemon-cluster-agent    enabled  active
...
(On current MicroK8s releases the Kubernetes control-plane components run inside the single kubelite daemon; older releases listed separate services such as microk8s.daemon-apiserver.)
If all services show Startup: enabled, MicroK8s is set to autostart.
Step 2: Enable Autostart (If Needed)
If services show disabled, simply run:
sudo snap start --enable microk8s
This enables the entire application service and its internal daemons.
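Conversely, if you ever want the cluster to stay down until you start it by hand (for example on a machine you only use occasionally), the same Snap tooling works in reverse:

```bash
# Stop MicroK8s now and keep it from starting at boot
sudo snap stop --disable microk8s

# Later: start it again and re-enable autostart
sudo snap start --enable microk8s
```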
Step 3: Reboot & Validate
Reboot the server to test autostart:
sudo reboot
After logging back in, run:
microk8s status --wait-ready
If you see microk8s is running, autostart is confirmed!
Verify your NGINX app survived the reboot
kubectl get pods
Your nginx pod should be running.
Resource Usage Check
Check overall memory usage on the server (out of 8GB):
free -h
Check disk usage (out of 120GB):
df -h
MicroK8s typically uses:
- ~500MB-1GB RAM at idle
- ~2-3GB storage for base installation
This leaves plenty of resources for your workloads!
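If you enabled the metrics-server add-on earlier, you can also check usage from Kubernetes' point of view; readings can take a minute or two to appear after the add-on starts:

```bash
# CPU and memory per node, as reported by metrics-server
kubectl top nodes

# Per-pod usage across all namespaces
kubectl top pods --all-namespaces
```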
Troubleshooting Tools
Check cluster health (generates a diagnostics report)
sudo microk8s inspect
Restart MicroK8s
sudo microk8s stop
sudo microk8s start
Check specific logs
# Check Kubernetes service logs (on current MicroK8s releases the kubelet, API server, scheduler, and controller manager all run inside the single kubelite daemon)
journalctl -u snap.microk8s.daemon-kubelite -f
# Check container runtime (containerd) logs
journalctl -u snap.microk8s.daemon-containerd -f
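When a pod is misbehaving, two more kubectl commands usually point at the cause faster than raw daemon logs; the pod name below is only a placeholder:

```bash
# Recent cluster events (image pull errors, scheduling failures, failing probes)
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp

# Detailed status and event history for one pod; substitute a real name from `kubectl get pods -A`
kubectl describe pod nginx-xxxxx
```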
Summary Checklist
| Step | Purpose | Status |
|---|---|---|
| Install MicroK8s | Kubernetes runtime | Done |
| Add user to group | Non-root access | Done |
| Set up kubectl alias | Simplify commands | Done |
| Enable add-ons | DNS, storage, ingress | Done |
| Test with kubectl | Validate cluster | Done |
| Deploy NGINX | Confirm workload scheduling | Done |
| Enable autostart | Ensure services start on boot | Done |
Clean Up Test Deployment (Optional)
Once you've verified everything works, you can remove the test NGINX deployment:
kubectl delete deployment nginx
kubectl delete service nginx
Final Thoughts
Congratulations! Your 4-core, 8GB RAM, 120GB Ubuntu server is now running a fully functional MicroK8s cluster!
You have a robust foundation ready for production workloads.
Previous Guide
Back to Part 1: Prerequisites & Basic Ubuntu Setup
What's Next?
Ready to deploy production applications?
Continue to Part 3: Production Namespace, App Deployment & Configuration
In Part 3, you'll learn how to:
- Create production namespaces for workload isolation
- Deploy PostgreSQL and n8n with proper configurations
- Set up Persistent Volume Claims for databases
- Secure sensitive data using Kubernetes Secrets