PostgreSQL HA

High Availability PostgreSQL cluster with Patroni and etcd. Provides automatic failover and replication for production-grade database deployments.

Deployments: 22
Publisher: canyugs
Created: 2025-11-13
Tags: Database, PostgreSQL, High Availability, Patroni

PostgreSQL High Availability Cluster

This template deploys a highly available PostgreSQL cluster using:

  • Patroni: PostgreSQL HA solution with automatic failover
  • Spilo: Docker image combining PostgreSQL and Patroni
  • etcd: Distributed configuration and service discovery

Architecture

  • 3x etcd cluster nodes for distributed consensus
  • 3x PostgreSQL nodes with Patroni for automatic failover
  • Built-in replication and health monitoring

Connection Information

Use any of the Patroni nodes to connect to the cluster:

  • Host: Use the hostname of patroni1, patroni2, or patroni3
  • Port: 5432
  • Username: postgres (superuser) or admin
  • Password: Check the environment variables in Zeabur dashboard

Patroni will automatically route connections to the master node.
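
One way to make sure a client always lands on the current primary is libpq's multi-host connection string, which psql and most libpq-based drivers understand (the hostnames below assume this template's defaults):

# Try each host until one accepts read-write connections, i.e. the primary
psql "host=patroni1,patroni2,patroni3 port=5432 user=admin dbname=postgres target_session_attrs=read-write"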

User Accounts

Account     Username   Privileges               Use Case
Superuser   postgres   Full system privileges   System administration, backup/restore
Admin       admin      CREATEDB, CREATEROLE     Application connections, daily development

Security: Use admin or create dedicated users for applications. Avoid using superuser directly.
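
For example, a dedicated application role and database can be created with the admin account; the names myapp and myapp_db below are placeholders:

# Create an application role and its own database (run from any node)
psql -U admin -d postgres -c "CREATE ROLE myapp WITH LOGIN PASSWORD 'change_me';"
psql -U admin -d postgres -c "CREATE DATABASE myapp_db OWNER myapp;"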

Features

  • Automatic Failover: If the master fails, Patroni automatically promotes a replica
  • Synchronous Replication: Data consistency across nodes
  • Health Monitoring: REST API on port 8008 for each node
  • Rolling Updates: Update nodes without downtime
  • Horizontal Scaling: Easily add or remove nodes
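
Each node's Patroni REST API can be queried directly; the endpoints below are standard Patroni and return HTTP 200 when the condition holds (hostnames assume this template's defaults):

# 200 if PostgreSQL is up on this node
curl http://patroni1:8008/health

# 200 only on the current primary
curl http://patroni1:8008/primary

# 200 only on replicas
curl http://patroni1:8008/replica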

Cluster Sizing

This 3-node configuration provides standard production HA and tolerates one node failure: etcd needs a majority of its members online to keep a leader, so a 3-node cluster survives one failure and a 5-node cluster survives two.

Nodes   Fault Tolerance   Use Case
3       1 failure         ✅ Production (standard)
5       2 failures        ✅ Production (high availability)

Scaling the Cluster

Adding Nodes (Scale to 5 Nodes)

To scale from 3 to 5 nodes for higher availability:

  1. Add etcd nodes etcd4 and etcd5 (see Related Templates below; a manual sketch follows this list):

  2. Add Patroni nodes patroni4 and patroni5:

  3. Update the ETCD3_HOSTS variable on the existing Patroni services to include the new etcd nodes:

    etcd1:2379,etcd2:2379,etcd3:2379,etcd4:2379,etcd5:2379
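
If you are wiring the new etcd members up by hand rather than through the related templates, the standard etcd runtime-reconfiguration flow looks roughly like this; the variable names follow upstream etcd and may differ from how this template configures its etcd services:

# Register the new member from any existing etcd node (default peer port 2380 assumed)
etcdctl member add etcd4 --peer-urls=http://etcd4:2380

# Start etcd4 joined to the existing cluster (repeat the pattern for etcd5)
ETCD_NAME=etcd4
ETCD_INITIAL_CLUSTER_STATE=existing
ETCD_INITIAL_CLUSTER=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380,etcd4=http://etcd4:2380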
    

Removing Nodes

To scale down the cluster:

  1. Remove Patroni replica nodes (never remove the master)
  2. Remove etcd nodes after updating the cluster configuration (see the sketch below)
  3. Always maintain odd number of nodes (3, 5, etc.)
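
For step 2, the standard way to take an etcd member out of the cluster is with etcdctl, run against a node that stays in the cluster:

# Look up the member ID of the node you are removing, then remove it
etcdctl member list
etcdctl member remove <MEMBER_ID>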

⚠️ Important: Never reduce below 3 nodes in production to maintain HA.

📖 Full Guide: See the complete README for step-by-step removal instructions.

Management

Run inside any Patroni container:

# Cluster status
patronictl list pg-ha

# Show cluster configuration
patronictl show-config pg-ha
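
Other day-to-day operations patronictl supports (the pg-ha cluster name matches the commands above; member names are assumed to match the service names):

# Hand the primary role to a replica before maintenance
patronictl switchover pg-ha

# Restart a single member, e.g. during a rolling update
patronictl restart pg-ha patroni1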

Troubleshooting

Common operations:

# Check etcd cluster health
curl http://etcd1:2379/health

# List etcd members
curl -X POST http://etcd1:2379/v3/cluster/member/list

# Check Patroni cluster status (run inside patroni container)
patronictl list pg-ha

# Check PostgreSQL replication (run on the current primary)
psql -U postgres -c "SELECT * FROM pg_stat_replication;"

📖 Full Troubleshooting Guide: See the complete README.

Changing Passwords

To change passwords after deployment:

  1. Change in PostgreSQL: ALTER USER postgres PASSWORD 'new_password';
  2. Update environment variables in ALL Patroni services
  3. Rolling restart all Patroni services
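
A minimal sketch of steps 1 and 3 (step 2 happens in the Zeabur dashboard; member names are assumed to match the service names):

# Step 1: change the password on the current primary
psql -U postgres -c "ALTER USER postgres PASSWORD 'new_password';"

# Step 3: restart Patroni members one at a time
patronictl restart pg-ha patroni1
patronictl restart pg-ha patroni2
patronictl restart pg-ha patroni3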

📖 Full Guide: See Password Change Guide

Related Templates

etcd Expansion:

  • etcd4: Add 4th etcd node
  • etcd5: Add 5th etcd node

Patroni Expansion:

Documentation