Why Bare Metal Docker (No Kubernetes)
The container industry has a complexity problem. Somewhere along the way, "I want to deploy a Docker container" became "I need a Kubernetes cluster with Helm charts, Istio service mesh, and a GitOps pipeline." Most teams don't need any of that.
Docker on bare metal means: your containers talk directly to the Linux kernel. No hypervisor. No container orchestration platform taking 2-4 GB of RAM just to exist. No YAML files longer than your actual application code.
| Factor | Docker + Bare Metal | Kubernetes (EKS/GKE) |
|---|---|---|
| Setup time | 10 minutes | 2-8 hours |
| RAM overhead | ~50 MB | 2-4 GB |
| Learning curve | Dockerfile + Compose | Pods, Services, Ingress, PVCs... |
| Debugging | docker logs | kubectl + Lens + dashboards |
| Cost (8 vCPU/16 GB) | $21/mo | $150-300/mo (managed) |
| Max services | 20-50 on one node | Unlimited (multi-node) |
Kubernetes makes sense at 50+ services across multiple nodes. Below that, Docker Compose handles everything.
Install Docker on a Fresh Server
Starting from a fresh Ubuntu 24.04 bare metal server:
# SSH into your server
ssh root@your-server-ip
# Remove any legacy Docker packages
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do
apt-get remove -y $pkg 2>/dev/null
done
# Add Docker's official GPG key and repository
apt-get update
apt-get install -y ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
-o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine + Compose plugin
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io \
docker-buildx-plugin docker-compose-plugin
# Verify
docker --version
docker compose version

Production Docker Configuration
The default Docker config is fine for development but needs tuning for production. Create a daemon config:
# /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Hard": 65536, "Soft": 65536 }
  },
  "live-restore": true,
  "userland-proxy": false,
  "storage-driver": "overlay2",
  "metrics-addr": "127.0.0.1:9323"
}

Key settings explained:
- live-restore: Containers keep running when the Docker daemon restarts — essential for zero-downtime upgrades
- userland-proxy: false: Uses iptables instead of a userland proxy, reducing latency on port-forwarded connections
- log limits: Prevents container logs from filling your disk (the #1 cause of "my server ran out of space")
- metrics-addr: Exposes Prometheus metrics on localhost for monitoring
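Before restarting the daemon, it's worth validating the JSON: dockerd refuses to start on a malformed daemon.json, which would take every container down with it. A minimal sketch (assumes python3 is installed; the demo validates a temp copy, but on a real server you'd point it at /etc/docker/daemon.json):

```shell
#!/bin/sh
# Validate a daemon.json before restarting Docker.
# Demo writes a sample config to a temp file; swap in the real path in production.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "log-driver": "json-file",
  "live-restore": true
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  result="valid"
else
  result="invalid"
fi
echo "daemon.json check: $result"
```

Run this before every `systemctl restart docker`; a ten-second check beats debugging a daemon that won't come back up.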
# Apply the config
systemctl restart docker

Multi-Container App with Docker Compose
Here's a real-world production stack: a Node.js API, PostgreSQL database, Redis cache, and Nginx reverse proxy — all on one server.
# /opt/myapp/docker-compose.yml
services:
  app:
    image: your-registry/myapp:latest
    restart: unless-stopped
    environment:
      NODE_ENV: production
      DATABASE_URL: postgres://app:secretpassword@db:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 4G
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - app

volumes:
  pgdata:
    driver: local
  redisdata:
    driver: local

Networking: How Containers Talk to Each Other
Docker Compose creates a default bridge network for all services in the file. Containers reference each other by service name — db, redis, app. No IP addresses to manage.
For more complex setups, define custom networks to isolate traffic:
# Add to docker-compose.yml
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true # No external internet access

services:
  app:
    networks: [frontend, backend]
  db:
    networks: [backend] # Only reachable by app, not from outside
  redis:
    networks: [backend]
  nginx:
    networks: [frontend]

The internal: true flag on the backend network means PostgreSQL and Redis have zero exposure to the internet — they can only be reached by services on the same network.
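The Compose stack above mounts ./nginx.conf into the nginx container but never shows the file itself. Here's a minimal sketch, assuming the app listens on port 3000 (as its health check implies); TLS setup is omitted for brevity:

```nginx
# /opt/myapp/nginx.conf — minimal reverse-proxy sketch
events {}

http {
  server {
    listen 80;

    location / {
      # "app" resolves via Docker's embedded DNS on the shared network
      proxy_pass http://app:3000;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```

Because nginx reaches the app by service name over the Docker network, the app container never needs to publish a port on the host.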
Volumes: Persistent Data That Survives Restarts
Docker volumes are the correct way to persist data. Never use bind mounts for databases in production.
# List volumes
docker volume ls
# Inspect a volume (shows mount point on disk)
docker volume inspect myapp_pgdata
# Backup a volume
docker run --rm \
-v myapp_pgdata:/data \
-v /backups:/backup \
alpine tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz -C /data .
# Restore a volume
docker run --rm \
-v myapp_pgdata:/data \
-v /backups:/backup \
alpine tar xzf /backup/pgdata-20260405.tar.gz -C /data

Monitoring with Portainer
Portainer gives you a web UI for managing Docker containers, images, volumes, and networks. Deploy it in one command:
# Create Portainer data volume
docker volume create portainer_data
# Deploy Portainer
docker run -d \
--name portainer \
--restart unless-stopped \
-p 9443:9443 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
# Access at https://your-server-ip:9443
# Set your admin password on first login

Portainer shows you real-time CPU/memory/network per container, lets you view logs, exec into running containers, and manage Compose stacks — all from a browser.
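Prometheus itself needs a scrape config telling it what to collect. A minimal ./prometheus.yml sketch for the monitoring stack below (job names are illustrative; targets use Compose service-name DNS, and node-exporter/cadvisor listen on their default ports):

```yaml
# /opt/monitoring/prometheus.yml — minimal sketch
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: containers
    static_configs:
      - targets: ['cadvisor:8080']
  # The Docker daemon metrics endpoint (127.0.0.1:9323) is bound to the
  # host loopback, so scraping it from inside a container requires
  # extra_hosts with host-gateway or host networking on the Prometheus
  # service.
```

Mount this file next to the Compose file and Prometheus picks it up on start.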
For metrics and alerting, add Prometheus and Grafana:
# /opt/monitoring/docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - promdata:/prometheus
    ports:
      - "127.0.0.1:9090:9090"
  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: your-secure-password
    volumes:
      - grafanadata:/var/lib/grafana
  node-exporter:
    image: prom/node-exporter:latest
    restart: unless-stopped
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro

volumes:
  promdata:
  grafanadata:

Auto-Restart and Resilience
Docker's restart policies handle most failure scenarios automatically:
- unless-stopped: Restarts on crash, but respects a manual docker stop
- always: Restarts unconditionally; even a container stopped with docker stop comes back when the daemon restarts or the host reboots
- on-failure:5: Retries up to 5 times, then gives up (good for batch jobs)
Combine with health checks for smarter restarts:
# Docker Compose health check with restart
services:
  app:
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s

Docker marks the container as unhealthy after 3 consecutive failed checks. The start_period gives your app 40 seconds to boot before failed checks count against it — important for apps that need time to warm up. Note that standalone Docker does not restart a container merely for being unhealthy; restart policies fire only when the main process exits, so pair health checks with a small watchdog if you want automatic recovery.
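Restart policies fire on process exit, not on health status, so a small cron-driven watchdog is a common companion to health checks. A sketch (the demo feeds sample docker ps-style lines so the filtering logic runs without a Docker host; container names are illustrative):

```shell
#!/bin/sh
# Poor-man's autoheal: emit a restart command for every container whose
# status line contains "unhealthy". In production you would pipe real data:
#   docker ps --format '{{.Names}} {{.Status}}' | restart_unhealthy | sh
restart_unhealthy() {
  # reads "name status..." lines on stdin
  while read -r name status; do
    case "$status" in
      *unhealthy*) echo "docker restart $name" ;;
    esac
  done
}

# Demo with sample input: app is unhealthy, db is healthy
cmds=$(printf 'app Up 2 minutes (unhealthy)\ndb Up 2 minutes (healthy)\n' | restart_unhealthy)
echo "$cmds"
```

On a real host the one-liner `docker ps -q --filter health=unhealthy | xargs -r docker restart` does the same job, since the health filter is built into the Docker CLI.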
Automated Backups
Set up a cron job that backs up all volumes and the Compose config nightly:
#!/bin/bash
# /opt/scripts/backup.sh
set -euo pipefail
BACKUP_DIR="/backups/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
# Backup all Docker volumes
for volume in $(docker volume ls -q); do
echo "Backing up volume: $volume"
docker run --rm \
-v "$volume":/data \
-v "$BACKUP_DIR":/backup \
alpine tar czf "/backup/${volume}.tar.gz" -C /data .
done
# Backup Compose files
cp /opt/myapp/docker-compose.yml "$BACKUP_DIR/"
cp /opt/myapp/.env "$BACKUP_DIR/" 2>/dev/null || true
# Delete backups older than 7 days
find /backups -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +
echo "Backup completed: $BACKUP_DIR"

# Make executable and schedule
chmod +x /opt/scripts/backup.sh
# Run nightly at 3 AM
crontab -e
# Add: 0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1

Zero-Downtime Deploys
Rolling updates without Kubernetes — just Docker Compose and a health check:
#!/bin/bash
# /opt/scripts/deploy.sh
set -euo pipefail
cd /opt/myapp
# Pull the new image
docker compose pull app
# Start new container alongside old one
docker compose up -d --no-deps --scale app=2 app
# Wait for health check
echo "Waiting for new container..."
sleep 15
# Check health
if docker compose exec app curl -sf http://localhost:3000/health; then
# Scale back to 1 (removes old container)
docker compose up -d --no-deps --scale app=1 app
echo "Deploy successful"
else
# Rollback
docker compose up -d --no-deps --scale app=1 app
echo "Deploy FAILED — rolled back"
exit 1
fi
# Clean up old images
docker image prune -f

When You Actually Need Kubernetes
Docker Compose on bare metal handles more than most people think. But there are genuine reasons to move to K8s:
- You're running 50+ microservices across 10+ nodes
- You need automatic horizontal scaling based on CPU/memory metrics
- You require rolling updates across a multi-node cluster
- Your compliance requirements mandate orchestration-level audit logging
For everything else — and that includes most startups, side projects, and even mid-size SaaS products — Docker Compose on a single bare metal server is simpler, faster, and 90% cheaper.
Deploy Docker on Bare Metal
Get a Docker-ready bare metal server in under 60 seconds. Ubuntu 24.04, full root access, 20 TB bandwidth included.
$ npx rawhq deploy