
The Complete Self-Hosting Guide for 2026

I started self-hosting in March 2024 because Google killed a product I relied on. Again. It was Google Podcasts that time — the third Google service I'd depended on that just vanished. I decided I didn't want a corporation's product roadmap to dictate what tools I could use. So I started running my own. As of this writing, I have 23 containers across two machines, and I still add something new every couple of months.

🛠️ Before You Start

💻 Hardware: Mini PC (2+ cores, 8GB+ RAM) or used server; SSD recommended
📦 Software: Ubuntu Server 24.04, Proxmox VE 8.x, or Debian 12
⏱️ Estimated time: 1-3 hours

Who this is for: You're comfortable with basic Linux commands but haven't set up your own server infrastructure. Maybe you've spun up a VPS once or twice. You want to self-host applications but don't know where to start — or you tried once, got overwhelmed by the decisions, and gave up. This guide walks through the entire process, from hardware to deployment to long-term maintenance.

What I'd tell past-me:

  • Start with Docker from day one. Don't waste weeks doing bare-metal installs and fighting dependency conflicts — containerize everything.
  • Set up backups before you add a single service. Not after. I lost three months of Nextcloud data learning this.
  • Your residential connection's upload speed will be a bottleneck. Budget $5-10/month for a cheap VPS if you need anything accessible outside your network.

Why Self-Host in 2026?

I'll be honest: self-hosting isn't for everyone. It's more work than just paying for SaaS. Things break at inconvenient times. You'll spend weekend afternoons debugging instead of relaxing.

So why do it?

Self-hosting sounds romantic until your Nextcloud instance goes down at 2 AM and you realize you're the only sysadmin. I've had that exact scenario — power outage killed my UPS, corrupted a database. Took me an afternoon to recover from backups. The lesson: test your backups regularly. Not "I should do that someday" — actually restore from a backup and confirm it works.

For me, it started with privacy concerns — I didn't love the idea of my photos living on Google's servers. But it became about something else: understanding how things work. Every service you host teaches you something about networking, security, databases, or system administration.

What you actually get out of it:

  • Cost savings at scale — My homelab runs services that would cost $200+/month as subscriptions
  • No vendor lock-in — Your data stays portable
  • Customization — Modify anything, integrate everything
  • Learning — Hands-on experience that's hard to get any other way
  • Fun — Seriously. Once it clicks, it's genuinely enjoyable

That said, here's my honest take: start with one or two services. Don't try to replace everything at once. I made that mistake and burned out within a month. Now I add maybe one new service every few months, when I actually need it.

The month everything broke:

November 2024. A WD Blue drive I'd been using for Docker volumes started throwing SMART warnings on a Tuesday. By Thursday it was read-only. While I was migrating data to a replacement drive, my Let's Encrypt certs expired because the renewal cronjob was on the dying volume. So now half my services are throwing SSL errors while I'm doing an emergency data migration. Then — because the universe has a sense of humor — Docker Engine auto-updated overnight and broke compatibility with two of my Compose files. Three separate failures in one week. I got everything back, but it took the entire weekend. That week is why I now run SMART monitoring with alerts, keep cert renewal on the boot drive, and pin Docker versions.

Hardware: What You Actually Need

The internet will tell you to buy a rack-mounted server with 128GB RAM. Don't. At least not yet.

Starting Out: The $100 Homelab

My first setup was a used Dell Optiplex 3040 I bought for $80 on eBay. i5-6500, 8GB RAM, 256GB SSD. It ran Pi-hole, Nextcloud, and a few Docker containers for two years without issue.

What to look for in used business PCs:

  • Dell Optiplex, HP ProDesk, Lenovo ThinkCentre — Reliable, well-documented, parts are cheap
  • Intel Core i5 or better — 6th gen (Skylake) or newer for reasonable power efficiency
  • 8GB RAM minimum — 16GB is comfortable, 32GB is future-proofing
  • SSD required — Don't buy anything with just a spinning drive
  • Small form factor — Easier to place, quieter fans

For about $100-150, you can get something that handles 10+ Docker containers without breaking a sweat.
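
When a used box arrives, it's worth a quick sanity check that the listing was honest. A few commands I'd run on any secondhand machine (the last one assumes smartmontools is installed):

# CPU model and core count
lscpu | grep -E 'Model name|^CPU\(s\)'

# Total RAM
free -h

# Drives: ROTA=1 means spinning disk, 0 means SSD
lsblk -d -o NAME,SIZE,ROTA,MODEL

# Drive health (sudo apt install smartmontools first)
sudo smartctl -H /dev/sda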

The Raspberry Pi Question

Can you use a Raspberry Pi? Yes, but with caveats.

Pi 4 (8GB) works great for: Pi-hole, Home Assistant, lightweight file sharing, WireGuard VPN.

Pi struggles with: Anything involving transcoding (Plex/Jellyfin), databases under load, running more than 5-6 services.

I used a Pi 4 as my primary server for about 8 months. It worked, but I was constantly bumping into RAM limits. The microSD card also died twice — always use an SSD over USB for anything important.

When to Go Bigger

You need more hardware when:

  • You want to run virtual machines (not just containers)
  • Media transcoding is a priority
  • You're hosting for more than your household
  • You want redundancy (multiple drives, failover)

At that point, look at used enterprise gear (Dell PowerEdge, HP ProLiant) or build something custom. But that's a different guide — for now, a simple mini PC gets you surprisingly far.

Operating System: Just Pick One

People spend way too much time on this decision. Here's my take:

Ubuntu Server 24.04 LTS if you want the most documentation and community support. Most tutorials are written for Ubuntu.

Debian 12 if you want something slightly more stable/minimal. What I personally use.

Proxmox VE if you want virtualization. It's basically Debian with a nice web UI for managing VMs and containers.

Avoid: Rolling-release or fast-moving distros (Arch, Fedora) for a server. Stability matters more than having the newest packages.

Base Installation Checklist

After installing your OS, here's what I do on every new server:

# Update everything
sudo apt update && sudo apt upgrade -y

# Install essential tools
sudo apt install -y curl wget git htop ncdu tmux ufw fail2ban

# Set up firewall (allow SSH, deny everything else by default)
sudo ufw allow OpenSSH
sudo ufw enable

# Enable fail2ban to prevent brute-force attacks
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Set timezone
sudo timedatectl set-timezone Your/Timezone

# Create a non-root user if you don't have one
# sudo adduser yourname
# sudo usermod -aG sudo yourname

That's it. You have a secure-ish base to work from.
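
Two quick checks to confirm the firewall and fail2ban actually came up:

# Should show "Status: active" with OpenSSH allowed
sudo ufw status verbose

# Should show fail2ban active and running
sudo systemctl status fail2ban --no-pager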

Docker: The Foundation of Modern Self-Hosting

Almost everything you'll self-host runs in Docker. If you don't know Docker yet, invest time learning it — it's not optional anymore.

Why Docker?

Before Docker, installing software meant:

  1. Installing dependencies (and hoping they don't conflict with other stuff)
  2. Configuring paths, users, permissions
  3. Managing upgrades (and praying nothing breaks)
  4. Cleaning up when you remove something

With Docker, you run a single command and the application works. It's isolated from everything else. Upgrades are pulling a new image. Removal is deleting a container. Night and day difference.

Installing Docker

# Remove old versions if any
sudo apt remove docker docker-engine docker.io containerd runc

# Install Docker using the official script
curl -fsSL https://get.docker.com | sudo sh

# Add yourself to the docker group (so you don't need sudo)
sudo usermod -aG docker $USER

# Log out and back in for group change to take effect
# Then verify:
docker run hello-world
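
To see the payoff immediately, here's a real service in one command: Uptime Kuma, the monitoring dashboard covered later in this guide. This follows its published Docker instructions:

# Web UI on port 3001, data in a named volume,
# restarts automatically with the Docker daemon
docker run -d \
  --name uptime-kuma \
  --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1

# Then open http://your-server-ip:3001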

Docker Compose: How Professionals Do It

Running containers with docker run works but gets messy. Docker Compose lets you define your entire stack in a YAML file that's easy to version control and replicate.

# docker-compose.yml example
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    restart: unless-stopped

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres_data:

Then docker compose up -d starts everything. docker compose down stops it. docker compose pull && docker compose up -d upgrades.

I keep all my compose files in a Git repository. If my server dies, I can spin everything back up in minutes on new hardware.
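
One note on that ${DB_PASSWORD} variable: Compose reads it from a .env file sitting next to the compose file, which is how secrets stay out of the Git repo:

# .env (same directory as docker-compose.yml)
DB_PASSWORD=use-something-long-and-random

# Make sure Git never sees it
echo ".env" >> .gitignore

# Verify the substitution worked
docker compose config | grep POSTGRES_PASSWORD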

Networking: Making Services Accessible

This is where most people get stuck.

Local Network Only

If you only need to access services from inside your home, you're done. Just use http://your-server-ip:port.

Make the IP static so it doesn't change:

# /etc/netplan/50-cloud-init.yaml (Ubuntu)
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.1.100/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]

Then sudo netplan apply.
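
Confirm the change stuck before you log out:

# Should show 192.168.1.100/24 on eth0
ip -4 addr show eth0

# Default route should point at your router
ip route | grep default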

Remote Access: The VPN Approach

Start with a VPN rather than exposing services directly. Tailscale makes this dead simple:

# Install Tailscale
curl -fsSL https://tailscale.com/install.sh | sh

# Connect to your tailnet
sudo tailscale up

# Done. Your server now has a stable IP accessible from any device on your Tailscale network.

With Tailscale running on your phone and laptop, you can access your homelab from anywhere as if you were home. No port forwarding, no dynamic DNS, no certificates to manage.
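
Once connected, grab the server's Tailscale address and test from another device on the tailnet:

# Show this machine's Tailscale IPv4 address
tailscale ip -4

# List every device on your tailnet and its status
tailscale status

# From your laptop or phone (also on Tailscale):
# ssh user@100.x.y.z  or  http://100.x.y.z:port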

Remote Access: The Public Domain Approach

If you want services accessible via a real domain (like nextcloud.yourdomain.com), you need:

  1. A domain name (~$10-15/year)
  2. DNS pointing to your home IP (or a Cloudflare Tunnel)
  3. A reverse proxy handling HTTPS
  4. Port forwarding on your router (unless using Cloudflare Tunnel)

This is more complex and has real security implications. Get comfortable with local + VPN access first.

Reverse Proxy: One Domain, Many Services

A reverse proxy lets you run multiple services on one server and access them via subdomains:

  • nextcloud.example.com → Nextcloud
  • jellyfin.example.com → Jellyfin
  • git.example.com → Gitea

I use Nginx Proxy Manager because it has a web UI for managing everything. Traefik is more powerful but has a steeper learning curve.

# docker-compose.yml for Nginx Proxy Manager
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "81:81" # Admin panel
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    restart: unless-stopped

After starting, go to http://your-server:81, log in with the default credentials (admin@example.com / changeme), and add your proxy hosts through the UI. It handles Let's Encrypt certificates automatically.
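
Once a proxy host is configured, a quick check from any machine confirms DNS and TLS are working. Here nextcloud.example.com is a stand-in for your own subdomain:

# Should resolve to your public IP (or Cloudflare's, if tunneling)
dig +short nextcloud.example.com

# Should return a 200 or a redirect, with no certificate errors
curl -sI https://nextcloud.example.com | head -n 3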

Services Worth Self-Hosting

I'm not going to give you a list of 50 services. Here's what I actually use daily after two years of experimenting:

Tier 1: Use Every Day

Nextcloud — File sync, calendar, contacts. Replaced Google Drive/Calendar for me. The mobile apps are good enough.

Vaultwarden — Bitwarden-compatible password manager. Self-hosting your passwords sounds scary, but kept off the public internet it's arguably more secure than trusting a third party with them.

Pi-hole — Network-wide ad blocking. Once you use it, browsing on networks without it feels broken.

Tailscale — Not really self-hosted (their coordination servers are cloud), but the data path is peer-to-peer. Key for remote access.

Tier 2: Use Frequently

Jellyfin — Media streaming. Free Plex alternative. Transcoding works if your hardware supports it.

Home Assistant — Smart home control. Only if you have smart devices. The learning curve is brutal but once it clicks, nothing else comes close.

Paperless-ngx — Document management. I scan all paper documents and it OCRs/organizes them. Easily the most useful service I run after Nextcloud.

Tier 3: Nice to Have

Uptime Kuma — Monitoring dashboard. Shows if your services are up. Sends alerts when they're not.

Gitea — Self-hosted Git. Useful if you want private repos without GitHub/GitLab. I keep trying to make Gitea stick but I always end up back on GitHub for the CI/CD integration and the fact that every tool on earth has a GitHub integration.

Immich — Google Photos replacement. Still young but the development pace is insane — every release adds something I was about to request. Best photo backup option right now.

Backups: The Part Everyone Skips

I learned this the hard way when a drive failed and I lost three months of Nextcloud data. Don't be me.

The 3-2-1 Rule

  • 3 copies of your data
  • 2 different storage media
  • 1 offsite backup

For a homelab, this might look like:

  1. Primary data on server SSD
  2. Daily backup to external USB drive
  3. Weekly encrypted backup to Backblaze B2 or similar ($5/month for reasonable storage)

What to Back Up

Docker makes this straightforward. You need to back up:

  1. Your docker-compose.yml files (I keep these in Git)
  2. The volumes where containers store data
  3. Any custom configuration files

I use restic for backups. It's fast, encrypted, and handles deduplication:

# Initialize the backup repository (restic encrypts everything;
# it reads the password from RESTIC_PASSWORD or prompts for one)
restic init --repo /mnt/backup

# Back up Docker volumes
restic backup /var/lib/docker/volumes --repo /mnt/backup

# Automate with cron (crontab -e). For unattended runs, set
# RESTIC_PASSWORD_FILE to a root-only file holding the password.
0 3 * * * restic backup /var/lib/docker/volumes --repo /mnt/backup --quiet
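
Restic also speaks to Backblaze B2 natively, which covers the offsite copy, and restoring is the only real test of a backup. A sketch, assuming a B2 bucket named homelab-backups (substitute your own):

# Offsite repository on Backblaze B2 (credentials from the B2 dashboard)
export B2_ACCOUNT_ID="your-key-id"
export B2_ACCOUNT_KEY="your-application-key"
restic -r b2:homelab-backups:server1 init
restic -r b2:homelab-backups:server1 backup /var/lib/docker/volumes

# Periodically verify the local repo and do a trial restore
restic -r /mnt/backup check
restic -r /mnt/backup restore latest --target /tmp/restore-test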

Security: Don't Get Hacked

Here's the security baseline:

SSH Hardening

# /etc/ssh/sshd_config changes:
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

Use SSH keys only. No password authentication. No root login.
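
The order matters here: get key login working before you disable passwords, or you'll lock yourself out. Roughly:

# On your laptop: generate a key if you don't have one
ssh-keygen -t ed25519

# Copy it to the server (password auth still works at this point)
ssh-copy-id yourname@192.168.1.100

# Confirm key login works in a NEW terminal, then apply the
# sshd_config changes above and restart SSH
sudo systemctl restart ssh   # service is named "sshd" on some distros

# Keep your current session open until you've verified key login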

Keep Things Updated

# Enable automatic security updates (Ubuntu)
sudo apt install unattended-upgrades
sudo dpkg-reconfigure unattended-upgrades

For Docker images, I check for updates weekly and pull new versions for anything with security patches.
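
A sketch of that weekly pass, assuming each service lives in its own directory under ~/stacks with its own compose file (my layout, not a standard):

#!/usr/bin/env bash
# Pull new images and recreate containers for every stack
for dir in ~/stacks/*/; do
  echo "Updating $dir"
  (cd "$dir" && docker compose pull && docker compose up -d)
done

# Reclaim disk from superseded images
docker image prune -f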

Principle of Least Privilege

Don't expose more than you need. If something only needs local access, keep it local. Use VPN for remote access instead of opening ports. I made the mistake of port-forwarding Gitea early on — the SSH brute-force attempts started within hours.

Monitoring

At minimum, set up fail2ban to block brute-force attempts. Check logs occasionally. I run Uptime Kuma to alert me if services go down — sometimes that's the first sign of a problem.
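
Checking what fail2ban is actually catching takes one command:

# Banned IPs and failure counts for the SSH jail
# (enabled by default on Debian/Ubuntu)
sudo fail2ban-client status sshd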

Long-Term Maintenance

Self-hosting isn't set-and-forget. Expect to spend a few hours per month on maintenance.

Regular Tasks

  • Weekly: Check that backups are running, review any alerts
  • Monthly: Update Docker images, review disk space, check logs for anomalies
  • Quarterly: Full system update, test backup restoration, review what's running (remove unused stuff)
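
For the monthly disk-space check, three commands cover it:

# Overall filesystem usage
df -h

# What Docker itself is consuming (images, volumes, build cache)
docker system df

# Interactive drill-down into large directories
sudo ncdu /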

When Things Break

They will. My debugging process:

  1. docker logs container_name — What's the container saying?
  2. docker compose down && docker compose up -d — Restart often fixes things
  3. Check if anything changed (updates, config edits, disk full)
  4. Search the application's GitHub issues
  5. Ask in application-specific Discord/forums

Keep notes. When you solve a problem, write down what it was and how you fixed it. Future you will thank present you.

Where Things Stand Now

Current state of my setup as of January 2026: two machines (a Lenovo ThinkCentre M720q and a Dell Optiplex 3060 Micro), 23 Docker containers total, running Nextcloud, Vaultwarden, Pi-hole, Jellyfin, Paperless-ngx, Home Assistant, Immich, Uptime Kuma, Gitea, and a handful of smaller things. Monthly cost is about $18 — $5 for Backblaze B2 backups, $8 for a Hetzner VPS that runs Tailscale as an exit node, and the electricity for two mini PCs which I measured at around $5/month.

Uptime Kuma says I'm at 99.4% over the last 90 days. The 0.6% was a planned power outage in my building and one time I rebooted the wrong machine while SSH'd in.

I still use SaaS for email (self-hosting email is misery I don't recommend) and for GitHub (the CI/CD and integrations are too good). Everything else runs on hardware I own, on data I control.

Total time investment per month is maybe 3-4 hours. Some months it's zero. Some months a Docker update breaks something and it's an evening. It averages out.

If you made it through this guide, you have everything you need to start. Pick one service — Pi-hole is a good first one — and go from there.
