Why Self-Host PostgreSQL?
Managed database services like RDS, Supabase, and PlanetScale are convenient. But the pricing is brutal once you look at what you're actually getting:
| Provider | Specs | Price/mo |
|---|---|---|
| AWS RDS (db.r6g.large) | 2 vCPU, 16 GB, 100 GB gp3 | $185 |
| Supabase Pro | 2 vCPU, 4 GB, 8 GB disk | $25 |
| Neon Pro | 4 compute units, 50 GB | $69 |
| DigitalOcean Managed DB | 2 vCPU, 4 GB, 38 GB | $60 |
| RAW bare metal + self-host | 4 vCPU, 8 GB, 80 GB NVMe | $11 |
Self-hosted PostgreSQL on a RAW raw-4 ARM server. The same machine can double as both your app server and your database server.
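As a quick sanity check on those numbers, the savings against the RDS row work out like this (shell arithmetic, prices taken from the table above):

```shell
# Monthly price comparison, figures from the table above
rds=185   # AWS RDS db.r6g.large, $/mo
raw=11    # RAW bare metal, $/mo

# Integer percentage saved by self-hosting
echo "savings vs RDS: $(( (rds - raw) * 100 / rds ))%"
# → savings vs RDS: 94%
```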
Beyond cost, self-hosting gives you:
- Full control: Install any extension (PostGIS, pgvector, TimescaleDB) without waiting for your provider to support it
- No connection limits: RDS and Supabase cap connections. Self-hosted, you configure the limit
- Better I/O: Direct NVMe access beats virtualized block storage, and the difference compounds on write-heavy workloads
- Data sovereignty: Your data stays on your server, in your datacenter, under your control
Install PostgreSQL 16
Starting from a fresh Ubuntu 24.04 server:
# Add PostgreSQL's official repository (always up-to-date)
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
# Install PostgreSQL 16
sudo apt install -y postgresql-16 postgresql-client-16
# Verify
psql --version
# psql (PostgreSQL) 16.x
# Check service status
systemctl status postgresql

Initial Configuration
Create your application database and user:
# Switch to postgres user
sudo -u postgres psql
-- Create application user
CREATE USER myapp WITH PASSWORD 'your-secure-password-here';
-- Create database
CREATE DATABASE myappdb OWNER myapp;
-- Grant permissions
GRANT ALL PRIVILEGES ON DATABASE myappdb TO myapp;
-- Connect to the database and grant schema permissions
\c myappdb
GRANT ALL ON SCHEMA public TO myapp;
-- Exit
\q

Performance Tuning
PostgreSQL ships with conservative defaults sized for a machine with roughly 512 MB of RAM and a spinning hard drive. On a bare metal server with NVMe storage and multiple gigabytes of RAM, those defaults leave most of your hardware's potential on the table.
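The ratios behind the settings below are community rules of thumb, not official PostgreSQL formulas; as a sketch, you can derive them from total RAM:

```shell
# Rule-of-thumb tuning values derived from total RAM
# (heuristics, not official PostgreSQL formulas)
ram_gb=8                                                   # total server RAM
echo "shared_buffers       = $(( ram_gb / 4 ))GB"          # ~25% of RAM
echo "effective_cache_size = $(( ram_gb * 3 / 4 ))GB"      # ~75% of RAM
echo "maintenance_work_mem = $(( ram_gb * 1024 / 16 ))MB"  # ~1/16 of RAM
```

For an 8 GB server this prints 2GB, 6GB, and 512MB, matching the values used in the config below.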
Edit /etc/postgresql/16/main/postgresql.conf. These settings are tuned for an 8 GB RAM server with NVMe storage:
# /etc/postgresql/16/main/postgresql.conf
# ── Memory ──
shared_buffers = 2GB # 25% of total RAM
effective_cache_size = 6GB # 75% of total RAM
work_mem = 64MB # Per sort/hash operation (one query can use several)
maintenance_work_mem = 512MB # For VACUUM, CREATE INDEX
wal_buffers = 64MB # Write-ahead log buffer
# ── Write Performance ──
checkpoint_completion_target = 0.9 # Spread checkpoint writes
wal_compression = zstd # Compress WAL (zstd supported since PG15)
min_wal_size = 1GB
max_wal_size = 4GB
# ── Query Planner ──
random_page_cost = 1.1 # NVMe is nearly sequential speed
effective_io_concurrency = 200 # NVMe handles high concurrency
seq_page_cost = 1.0
# ── Connections ──
max_connections = 200 # Adjust based on your app
listen_addresses = 'localhost' # Only local connections (use pgBouncer for remote)
# ── Parallel Queries ──
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 8
# ── Logging ──
log_min_duration_statement = 500 # Log queries slower than 500ms
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_autovacuum_min_duration = 0

# Apply changes
sudo systemctl restart postgresql
# Verify settings loaded
sudo -u postgres psql -c "SHOW shared_buffers;"
# Should show: 2GB

pgBouncer: Connection Pooling
PostgreSQL creates a new process for every connection. At 100+ concurrent connections, this eats RAM and CPU. pgBouncer sits between your app and PostgreSQL, pooling connections so 500 application connections share 20 database connections.
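The memory savings are easy to estimate. Assuming each backend process costs on the order of 10 MB (a common ballpark; actual usage varies with work_mem and workload):

```shell
# Rough RAM cost of connections, assuming ~10 MB per backend process
per_backend_mb=10
echo "500 direct connections: $(( 500 * per_backend_mb )) MB"
echo "20 pooled connections:  $(( 20 * per_backend_mb )) MB"
# → 5000 MB direct vs 200 MB pooled
```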
# Install pgBouncer
sudo apt install -y pgbouncer

# /etc/pgbouncer/pgbouncer.ini
[databases]
myappdb = host=127.0.0.1 port=5432 dbname=myappdb
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
# Pool settings
pool_mode = transaction # Best for web apps
default_pool_size = 20 # Max connections to PostgreSQL per database
max_client_conn = 500 # Max connections from your app
min_pool_size = 5 # Keep 5 connections warm
# Timeouts
server_idle_timeout = 600 # Close idle server connections after 10min
client_idle_timeout = 0 # Don't close idle client connections
query_timeout = 30 # Kill queries running longer than 30s
# Logging
log_connections = 1
log_disconnections = 1
stats_period = 60

# Create the auth file
echo '"myapp" "your-secure-password-here"' | sudo tee /etc/pgbouncer/userlist.txt
# On Debian/Ubuntu the pgbouncer service runs as the postgres user
sudo chown postgres:postgres /etc/pgbouncer/userlist.txt
sudo chmod 600 /etc/pgbouncer/userlist.txt
# Start pgBouncer
sudo systemctl enable pgbouncer
sudo systemctl start pgbouncer
# Test connection through pgBouncer
psql -h 127.0.0.1 -p 6432 -U myapp myappdb

Update your application's DATABASE_URL to connect through pgBouncer on port 6432 instead of PostgreSQL directly on port 5432:
# Before (direct to PostgreSQL)
DATABASE_URL=postgres://myapp:password@localhost:5432/myappdb
# After (through pgBouncer)
DATABASE_URL=postgres://myapp:password@localhost:6432/myappdb

Automated Backups
Three backup strategies, ordered by importance:
1. Logical Backups with pg_dump (Daily)
#!/bin/bash
# /opt/scripts/pg-backup.sh
set -euo pipefail
BACKUP_DIR="/backups/postgresql"
RETENTION_DAYS=14
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
# Dump with compression (run as the postgres user so peer auth works from cron)
sudo -u postgres pg_dump -Fc myappdb > "$BACKUP_DIR/myappdb_${TIMESTAMP}.dump"
# Delete old backups
find "$BACKUP_DIR" -name "*.dump" -mtime +${RETENTION_DAYS} -delete
echo "Backup completed: myappdb_${TIMESTAMP}.dump ($(du -h "$BACKUP_DIR/myappdb_${TIMESTAMP}.dump" | cut -f1))"

# Make executable and schedule
chmod +x /opt/scripts/pg-backup.sh
# Run daily at 2 AM
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/scripts/pg-backup.sh >> /var/log/pg-backup.log 2>&1") | crontab -

2. Continuous Archiving with WAL-G (Point-in-Time Recovery)
# Install WAL-G (the asset below is x86-64; on an ARM server download
# the aarch64 build instead)
wget https://github.com/wal-g/wal-g/releases/latest/download/wal-g-pg-ubuntu-20.04-amd64 \
  -O /usr/local/bin/wal-g
chmod +x /usr/local/bin/wal-g
# Configure WAL archiving in postgresql.conf
# archive_mode = on
# archive_command = 'wal-g wal-push %p'
# archive_timeout = 60
# Take a base backup (WAL-G reads its storage config from the environment,
# e.g. WALG_S3_PREFIX plus credentials)
sudo -u postgres wal-g backup-push /var/lib/postgresql/16/main
# Restore to a specific point in time: fetch the base backup first
sudo -u postgres wal-g backup-fetch /var/lib/postgresql/16/main LATEST
# Then set in postgresql.conf:
#   restore_command = 'wal-g wal-fetch %f %p'
#   recovery_target_time = '2026-04-05 14:30:00'
# Create an empty recovery.signal file in the data directory and start
# PostgreSQL; it replays WAL up to the target time

3. Server Snapshots (Weekly)
Use RAW's one-click snapshot feature as a belt-and-suspenders backup. Captures the entire server state — OS, config, and data.
Monitoring
Monitor PostgreSQL with pg_stat_statements and a lightweight dashboard:
# Add to postgresql.conf first (the extension only collects data when preloaded)
# shared_preload_libraries = 'pg_stat_statements'
# Restart PostgreSQL to load it
sudo systemctl restart postgresql
# Enable pg_stat_statements
sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"

# Top 10 slowest queries
sudo -u postgres psql -c "
SELECT
round(mean_exec_time::numeric, 2) AS avg_ms,
calls,
round(total_exec_time::numeric, 2) AS total_ms,
left(query, 80) AS query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
"

For real-time monitoring, install postgres_exporter with Prometheus and Grafana:
# Install postgres_exporter via Docker (host networking lets the container
# reach PostgreSQL listening on localhost; the exporter serves on port 9187)
docker run -d \
  --name postgres-exporter \
  --restart unless-stopped \
  --network host \
  -e DATA_SOURCE_NAME="postgresql://postgres:password@localhost:5432/postgres?sslmode=disable" \
  prometheuscommunity/postgres-exporter
# Key metrics to watch:
# - pg_stat_activity_count (active connections)
# - pg_stat_database_tup_fetched (rows read/sec)
# - pg_stat_database_tup_inserted (rows written/sec)
# - pg_stat_bgwriter_buffers_checkpoint (checkpoint frequency)
# - pg_replication_lag (if using replication)

Replication: Read Replicas
For read-heavy workloads, set up streaming replication to a second server:
On the Primary Server
# Create replication user
sudo -u postgres psql -c "
CREATE ROLE replication WITH REPLICATION LOGIN PASSWORD 'repl-password';
"
# Edit pg_hba.conf — allow replication connections
# host replication replication replica-ip/32 scram-sha-256
# Edit postgresql.conf
# wal_level = replica
# max_wal_senders = 3
# wal_keep_size = 1GB
sudo systemctl restart postgresql

On the Replica Server
# Stop PostgreSQL
sudo systemctl stop postgresql
# Clear existing data
sudo rm -rf /var/lib/postgresql/16/main/*
# Clone from primary
sudo -u postgres pg_basebackup \
-h primary-server-ip \
-U replication \
-D /var/lib/postgresql/16/main \
-Fp -Xs -P -R
# The -R flag creates standby.signal and configures recovery
# Start the replica
sudo systemctl start postgresql
# Verify replication on primary
sudo -u postgres psql -c "SELECT * FROM pg_stat_replication;"

Security Hardening
Production PostgreSQL security checklist:
Network Isolation
# postgresql.conf — only listen on localhost
listen_addresses = 'localhost'
# If you need remote access, use SSH tunneling:
ssh -L 5432:localhost:5432 root@your-server-ip
# Then connect locally
psql -h localhost -p 5432 -U myapp myappdb

Authentication
# /etc/postgresql/16/main/pg_hba.conf
# TYPE DATABASE USER ADDRESS METHOD
# Local connections
local all postgres peer
local all myapp scram-sha-256
# Remote connections (only if needed, prefer SSH tunnel)
host myappdb myapp 10.0.0.0/8 scram-sha-256
# Reject everything else (rules are matched top-down, first match wins,
# so this line must come last)
host all all 0.0.0.0/0 reject

Firewall
# Block external PostgreSQL access
sudo ufw deny 5432
sudo ufw deny 6432
# Only allow SSH, HTTP, HTTPS
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Encryption at Rest
# Enable LUKS encryption on the data partition (do this during initial setup)
# For existing servers, use pgcrypto for column-level encryption:
sudo -u postgres psql -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;"
-- Encrypt sensitive columns
UPDATE users SET
ssn = pgp_sym_encrypt(ssn_plain, 'encryption-key')
WHERE ssn_plain IS NOT NULL;

Essential Extensions
# Install common extensions
sudo apt install -y postgresql-16-pgvector # Vector search for AI
sudo apt install -y postgresql-16-postgis # Geospatial queries
# Enable in your database
sudo -u postgres psql -d myappdb -c "
CREATE EXTENSION IF NOT EXISTS vector;        -- Vector similarity search (the pgvector package's extension is named vector)
CREATE EXTENSION IF NOT EXISTS pg_trgm;       -- Fuzzy text search
CREATE EXTENSION IF NOT EXISTS btree_gin;     -- GIN index improvements
CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"; -- UUID generation (hyphenated name needs quoting)
"

Production Checklist
- Tuned postgresql.conf for your server's RAM and storage type
- pgBouncer running on port 6432, app connecting through it
- Daily pg_dump backups with 14-day retention
- WAL archiving enabled for point-in-time recovery
- pg_stat_statements enabled for slow query identification
- Firewall blocking external access to ports 5432/6432
- listen_addresses set to localhost only
- Monitoring with postgres_exporter or similar
- Server snapshots scheduled weekly via RAW dashboard
- SCRAM-SHA-256 authentication (not md5) for remote connections
Self-Host PostgreSQL on RAW
Get a bare metal server with NVMe storage in 13 seconds. Install PostgreSQL, tune it for your hardware, and save 90% vs managed database services.
$ npx rawhq deploy

See Pricing →