
Docker Compose in Production: A Practical Guide

Docker Compose is not just for local development. With the right configuration, it runs production workloads reliably on bare metal — no Kubernetes required. Here is a battle-tested approach to production Compose deployments.

Why Compose in Production

Kubernetes is overkill for most applications. If you have fewer than 20 services and do not need auto-scaling across nodes, Docker Compose gives you container orchestration without the operational overhead of K8s. On a RAW bare metal server, Compose runs at native speed with no hypervisor tax.

Production docker-compose.yml

Here is a production-ready template for a typical web application with a database and cache:

services:
  app:
    image: myapp:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://app:${POSTGRES_PASSWORD:-secret}@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
        reservations:
          cpus: "0.5"
          memory: 256M
    healthcheck:
      # curl must exist in the app image; busybox-based images can use wget -qO- instead
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:-secret}   # set the real value in .env, not here
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"

  cache:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
    logging:
      driver: json-file
      options:
        max-size: "5m"
        max-file: "3"

volumes:
  pgdata:

Restart Policies

Use unless-stopped for production services. It restarts containers after crashes and server reboots, but respects manual docker compose stop commands.

  • unless-stopped — Best for production. Survives reboots, respects manual stops
  • always — Restarts even after manual stops. Useful for critical infrastructure
  • on-failure — Only restarts on non-zero exit codes. Good for batch jobs
  • no — Never restart. Development only
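As a sketch of the on-failure case, a one-off migration service can pair it with a retry cap (the service name and command here are illustrative; recent Compose versions accept the on-failure:N form):

```yaml
  migrate:
    image: myapp:latest
    command: npm run migrate
    restart: on-failure:3   # retry up to three times on a non-zero exit, then stop
    depends_on:
      db:
        condition: service_healthy
```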

Health Checks

Health checks tell Docker when a container is actually ready, not just running. Without them, dependent services start before databases finish initializing. Always define health checks for databases and APIs.

The start_period gives slow-starting services (like Java apps) time to initialize before Docker counts failed checks.
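As an illustrative sketch for a JVM service (the endpoint follows the Spring Boot Actuator convention; tune the numbers to your app):

```yaml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 90s   # failures during this window do not count toward retries
```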

Resource Limits

Without limits, a memory leak in one container crashes the entire server. The deploy.resources block sets hard ceilings. On a 4-core, 8 GB RAW server, a reasonable split:

  • App: 2 CPUs, 1 GB RAM
  • Database: 1 CPU, 512 MB RAM (PostgreSQL manages its own cache)
  • Cache: 0.5 CPU, 256 MB RAM
  • Reserve: 0.5 CPU, ~6 GB for OS, monitoring, and headroom

Logging

The default json-file log driver never rotates logs, so they grow until they fill the disk. Always set max-size and max-file. For centralized logging, ship to a log aggregator:

# View logs
docker compose logs -f app --tail 100

# Or ship to syslog
logging:
  driver: syslog
  options:
    syslog-address: "udp://localhost:514"
    tag: "myapp"

Backup Strategy

Volumes persist data, but you need off-server backups. A simple cron-based approach:

# Backup PostgreSQL daily at 2 AM (cron does not inherit your working directory, so cd to the compose project first)
0 2 * * * cd /path/to/app && docker compose exec -T db pg_dump -U app myapp | gzip > /backups/db-$(date +\%Y\%m\%d).sql.gz

# Prune backups older than 30 days
0 3 * * * find /backups -name "db-*.sql.gz" -mtime +30 -delete

# Sync to offsite storage
0 4 * * * rsync -az /backups/ backup-server:/backups/myapp/
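Before trusting cron with -delete, the prune rule can be rehearsed against a scratch directory. A sketch assuming GNU find and touch (all filenames are illustrative):

```shell
# Rehearse the 30-day prune rule somewhere harmless first
dir=$(mktemp -d)
cd "$dir"

# Simulate a 40-day-old backup and a fresh one
touch -d "40 days ago" db-old.sql.gz
touch -d "1 day ago" db-new.sql.gz

# Same predicate as the cron entry, but -print first to review the hit list
find . -name "db-*.sql.gz" -mtime +30 -print

# When the list looks right, swap -print for -delete
find . -name "db-*.sql.gz" -mtime +30 -delete
ls   # only the fresh backup should remain
```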

Zero-Downtime Deploys

Pull the new image and recreate only the changed services:

# Pull latest image
docker compose pull app

# Recreate only the app service (db and cache stay running)
docker compose up -d --no-deps app

# Verify health
docker compose ps
docker compose logs -f app --tail 20

For true zero-downtime, run two app replicas behind Nginx and drain connections before stopping the old container.
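A sketch of the Nginx side of that pattern, assuming two app replicas published on host ports 3001 and 3002 (all names and ports are illustrative):

```nginx
# /etc/nginx/conf.d/myapp.conf
upstream myapp {
    server 127.0.0.1:3001 max_fails=3 fail_timeout=10s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp;
        # retry the other replica if one is mid-restart
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

Recreate one replica at a time; Nginx routes traffic around whichever container is restarting.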

Monitoring

Add a monitoring stack to your Compose file or run it separately:

# Check container stats
docker stats --no-stream

# Watch for unhealthy containers ("unhealthy" contains the substring "healthy",
# so grep for it directly rather than inverting the match)
docker compose ps | grep unhealthy

For production monitoring, add Prometheus with cAdvisor to collect container metrics, and Grafana for dashboards. See our monitoring guide for the full setup.
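A minimal cAdvisor service to start from, sketched here with the read-only mounts the upstream README documents (pin whatever version you actually verify):

```yaml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.49.1
    restart: unless-stopped
    ports:
      - "8081:8080"   # UI and /metrics endpoint for Prometheus to scrape
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
```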

Security Checklist

  • Never put secrets in docker-compose.yml — use .env files or Docker secrets
  • Pin image versions (e.g., postgres:16.2-alpine, not postgres:latest)
  • Run containers as non-root when possible
  • Limit network exposure — only expose ports that need external access
  • Keep Docker updated for security patches
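The .env approach in the first bullet is a one-file change. A sketch assuming the template above (the password value is a placeholder):

```shell
# Create the env file next to docker-compose.yml, readable by its owner only
cat > .env <<'EOF'
POSTGRES_PASSWORD=change-me
EOF
chmod 600 .env

# Make sure it never lands in version control
echo ".env" >> .gitignore
```

Compose reads .env from the project directory automatically and substitutes ${POSTGRES_PASSWORD} wherever the compose file references it.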

Get Started

Deploy a RAW bare metal server, install Docker, and run your production stack with Compose. Full dedicated resources, no shared tenancy, starting at $6/mo.