I Built a 24-Container Homelab and You Can Too
I’m running 24 Docker containers on a refurbished mini PC that fits in a shoebox. SIEM monitoring, DNS filtering, mesh VPN, media streaming, and AI tools. The whole thing purrs along at 60% RAM usage and barely breaks a sweat.
You don’t need a server rack to run enterprise-grade services at home.
Starting Small Beats Starting Perfect
I bought one tiny refurbished business computer. Quad-core processor, 16GB RAM, built-in SSD. The kind of machine that sits forgotten on corporate desks until IT refreshes to newer models.
Perfect for a homelab.
Here’s what I didn’t do: I didn’t buy a giant tower with 128GB RAM and dream about “future expansion.” I bought what I needed to run five containers and learned as I went.
That machine now runs 24 services. Still has room to spare.
Security First, Everything Else Later
Most people start homelabs with Plex or Jellyfin. Media streaming is sexy. Security monitoring is not.
I did it backwards. The first containers I deployed were Wazuh for SIEM and Pi-hole for DNS filtering. I wanted to know what was hitting my network before I started serving family photos to the internet.
Wazuh collects logs from everything. Failed SSH attempts, unusual network traffic, container anomalies. It feeds data to OpenSearch for analysis and alerting. When someone port-scans my public IP, I know about it in minutes.
Pi-hole blocks ads and tracking at the DNS level. Every device on my network gets cleaner web browsing without installing anything. Added bonus: I can see which smart home devices are calling home to China at 3 AM.
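A Pi-hole deployment is a few lines of Docker Compose. This is a minimal sketch, not my exact config — the timezone, password, and admin port are placeholders, and environment variable names can differ between Pi-hole releases:

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8053:80/tcp"             # admin UI on a non-standard host port
    environment:
      TZ: "America/New_York"      # placeholder timezone
      WEBPASSWORD: "change-me"    # placeholder admin password
    volumes:
      - ./pihole/etc-pihole:/etc/pihole
      - ./pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

Point your router's DHCP-issued DNS server at the homelab's IP and every device on the network gets filtered automatically.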
Security infrastructure in place, then the fun stuff.
The Stack That Actually Runs
Here’s what’s currently spinning:
Infrastructure: Traefik reverse proxy, Authelia SSO, Tailscale mesh VPN, Netdata monitoring, Uptime Kuma status page
Security: Wazuh SIEM, Pi-hole DNS, Fail2ban intrusion prevention
Media: Jellyfin streaming, Navidrome music server, PhotoPrism photo management
Productivity: Vaultwarden password manager, Paperless document management, Stirling-PDF tools
Development: Gitea git server, Docker Registry, Portainer container management
AI/ML: Ollama local LLM, Open WebUI chat interface
Every service gets evaluated against my 7 Pillars framework: Security, Performance, Reliability, Usability, Maintainability, Documentation, and Community. If it doesn’t score well across most categories, it doesn’t make the cut.
The Magic of Mesh Networking
Tailscale changed everything about remote access. Instead of exposing services to the public internet or wrestling with VPN server configs, I just install Tailscale on each device.
My laptop connects to the homelab from coffee shops. My phone streams music from Navidrome while I’m traveling. My work machine can grab files from the document server without port forwarding or dynamic DNS nonsense.
Each device gets a private IP that follows it everywhere. The mesh network handles NAT traversal, encryption, and routing automatically. It’s like having one giant private network that spans the internet.
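Tailscale itself can run as just another container. A sketch under my usual assumptions — the hostname is whatever you want the node called in your tailnet, and the auth key placeholder comes from the Tailscale admin console:

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: homelab                 # node name in the tailnet (placeholder)
    environment:
      TS_AUTHKEY: "tskey-auth-change-me"  # placeholder; generate in the admin console
      TS_STATE_DIR: /var/lib/tailscale
    volumes:
      - ./tailscale/state:/var/lib/tailscale  # persist node identity across restarts
    cap_add:
      - NET_ADMIN                     # required to manage the tunnel interface
    devices:
      - /dev/net/tun:/dev/net/tun
    restart: unless-stopped
```

Persisting the state directory matters: without it, the node re-registers as a new machine on every container rebuild.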
Growing Storage Without Growing Problems
The mini PC came with a small SSD. Fine for the OS and containers, terrible for media files and document storage.
I didn’t gut the machine to install more drives. I plugged in external storage when I needed it. USB 3.0 is plenty fast for media streaming and document access. When that filled up, I added another drive.
Docker volumes map to wherever the storage lives. The containers don’t care if their data sits on internal SSD, external USB, or network storage. I can move things around without rebuilding anything.
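In practice that's just bind mounts in the Compose file. A sketch with placeholder paths — config stays on the internal SSD, media lives on whatever external drive happens to be mounted:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - ./jellyfin/config:/config      # small, fast, lives on the internal SSD
      - /mnt/usb-media:/media:ro       # placeholder path to external USB storage
    restart: unless-stopped
```

Moving the library to a bigger drive means updating one line and restarting the container; the service never knows the difference.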
Resource Reality Check
Current resource usage averages 60% RAM and 11% CPU. That’s with everything running simultaneously.
The heaviest consumers are Wazuh’s OpenSearch database and Jellyfin when transcoding video. Most containers idle at nearly zero CPU and use maybe 50-200MB RAM each.
Modern mini PCs are surprisingly capable. This one runs cooler and quieter than my laptop. Sits on a shelf next to my router, draws less power than a light bulb.
What I Learned the Hard Way
Docker networking will confuse you at first. Bridge networks, overlay networks, host networking, macvlan. Start simple with bridge networks and default settings. Add complexity only when you need it.
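"Simple bridge networks" can still be deliberate: putting related services on a named network gives you container-to-container DNS without exposing anything extra. A minimal sketch with stand-in services:

```yaml
networks:
  backend:
    driver: bridge    # the default driver; named here so services can share it

services:
  app:
    image: nginx:alpine       # stand-in frontend
    networks: [backend]
  db:
    image: postgres:16        # reachable from app at hostname "db"
    environment:
      POSTGRES_PASSWORD: "change-me"   # placeholder
    networks: [backend]
```

Only graduate to macvlan or host networking when a service genuinely needs its own LAN IP or raw host sockets.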
Back up configurations, not just data. I can rebuild containers quickly, but recreating all the configs and connections takes hours. I back up my entire Docker Compose directory weekly.
Resource limits save headaches. Set memory limits on containers that might grow unbounded. Better to have a service restart than watch it consume all available RAM and crash everything else.
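In Compose that's one line per service. A sketch — the 2g cap is an illustrative number, not a recommendation; size it to the service's real working set:

```yaml
services:
  photoprism:
    image: photoprism/photoprism:latest
    mem_limit: 2g            # hard cap; past this the container is OOM-killed
    restart: unless-stopped  # ...and Docker brings it straight back up
```

The pairing is the point: a memory limit without a restart policy trades a slow leak for a dead service.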
SSL certificates through Let’s Encrypt and Traefik make everything feel professional. Proper HTTPS with automatic renewal beats self-signed certificates every time.
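The Traefik side is mostly labels. A hedged sketch of the pattern rather than my exact config — the email, domain, and resolver name ("le") are placeholders, and this uses the TLS-ALPN challenge, which needs port 443 reachable from the internet:

```yaml
services:
  traefik:
    image: traefik:v3.1
    command:
      - "--providers.docker=true"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.le.acme.email=admin@example.com"      # placeholder
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik read labels
      - ./letsencrypt:/letsencrypt                     # persist issued certs

  jellyfin:
    image: jellyfin/jellyfin:latest
    labels:
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"  # placeholder domain
      - "traefik.http.routers.jellyfin.tls.certresolver=le"
```

Add two labels to any new container and it comes up with valid HTTPS; renewals happen without you thinking about them.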
Building the Next Phase
I’m planning a K3s cluster on Raspberry Pis. Not because the current setup can’t handle more load, but because I want to learn Kubernetes without the complexity of a full K8s deployment.
The mini PC will become the control plane. The Pi cluster will run workloads. High availability for services that matter, redundancy for data that can’t be lost.
Your homelab doesn’t need to be perfect from day one. Start with what you have, add what you need, and learn as you build. That refurbished mini PC collecting dust in your closet might be more capable than you think.