iDatam

Escaping AWS RDS: How to Set Up a High-Availability PostgreSQL Cluster (Patroni) on Bare Metal

Learn how to escape AWS RDS costs by deploying a High-Availability PostgreSQL cluster on bare metal using Patroni, etcd, and HAProxy. Step-by-step 2026 guide.

PostgreSQL Patroni High Availability on Bare Metal

For years, startups have accepted the exorbitant costs of managed cloud databases like AWS RDS or Google Cloud SQL as a necessary evil. The promise was simple: pay a massive premium, and they handle the backups, failovers, and replication. But as your data grows, the RDS bill—fueled by IOPS charges and egress fees—quickly becomes the largest line item in your infrastructure spend.

The 2026 solution to this financial drain is Cloud Repatriation. You can build an enterprise-grade, High-Availability (HA) PostgreSQL cluster on your own raw hardware using Patroni. Originally developed by Zalando, Patroni is an open-source template for PostgreSQL HA that handles automatic failover and cluster management reliably.

While most guides assume you are deploying Patroni inside Kubernetes, this tutorial covers the raw, Bare Metal Deployment. By pairing this setup with an iDatam NVMe Dedicated Server, you get 3x the IOPS of a standard AWS RDS instance for a fraction of the monthly cost, all with zero egress fees.

The Architecture Overview

To achieve true High Availability without a split-brain scenario, you need a minimum of three servers.

  • Node 1 (10.0.0.11): Database Node (Primary candidate) + etcd + Patroni

  • Node 2 (10.0.0.12): Database Node (Replica candidate) + etcd + Patroni

  • Node 3 (10.0.0.13): Database Node (Replica candidate) + etcd + Patroni + HAProxy (for routing)

Note: In production, HAProxy should ideally run on separate application servers or load balancers, but for this guide, we will place it on Node 3.
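
If you prefer hostnames over raw IPs in the configuration files that follow, you can add entries to /etc/hosts on each server. The hostnames below are illustrative; the rest of this guide keeps using the raw IPs:

```plaintext
10.0.0.11 node1
10.0.0.12 node2
10.0.0.13 node3
```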

Step 1: Install Prerequisites and PostgreSQL

Execute this step on all three nodes.

First, update your Ubuntu 24.04/22.04 servers and install the required packages. We will install PostgreSQL, etcd (for cluster consensus), and Python tooling (for Patroni). Note that Ubuntu 22.04 ships PostgreSQL 14 while 24.04 ships PostgreSQL 16; remember which major version you get, because the data and binary paths in Step 3 depend on it.

bash

sudo apt update && sudo apt upgrade -y
sudo apt install postgresql postgresql-contrib etcd python3-pip python3-dev libpq-dev haproxy -y
                            

Next, stop and disable the default PostgreSQL service; Patroni will handle starting and managing the PostgreSQL processes. Also remove the data directory that the Ubuntu package pre-initialized, so Patroni can bootstrap a fresh cluster (adjust the version in the path to match your installation):

bash

sudo systemctl stop postgresql
sudo systemctl disable postgresql
sudo rm -rf /var/lib/postgresql/14/main
                            

Install Patroni and its etcd client dependency via pip. Quote the package spec so your shell does not expand the square brackets:

bash

sudo pip3 install 'patroni[etcd3]' --break-system-packages
                            

Step 2: Configure the etcd Cluster

Execute this step on all three nodes, modifying the IP addresses accordingly. Patroni relies on etcd to store the cluster state and determine which node is the "leader."

Edit the etcd configuration file:

bash

sudo nano /etc/default/etcd
                            

For Node 1 (10.0.0.11), the file should look like this (Change the ETCD_NAME and IP addresses for Node 2 and Node 3 respectively):

plaintext

ETCD_NAME="node1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379,http://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_INITIAL_CLUSTER="node1=http://10.0.0.11:2380,node2=http://10.0.0.12:2380,node3=http://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="patroni-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
                            
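
Hand-editing three nearly identical files invites typos. As an optional convenience, you could render each node's file from a single template; a minimal Python sketch (the node map and `render` helper are illustrative, not part of the deployment):

```python
# Render the /etc/default/etcd contents for each node from one template.
# Node names and IPs match the architecture in this guide.
NODES = {"node1": "10.0.0.11", "node2": "10.0.0.12", "node3": "10.0.0.13"}

# The initial-cluster string is identical on every node.
CLUSTER = ",".join(f"{n}=http://{ip}:2380" for n, ip in NODES.items())

TEMPLATE = """ETCD_NAME="{name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://{ip}:2380"
ETCD_LISTEN_CLIENT_URLS="http://{ip}:2379,http://localhost:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://{ip}:2380"
ETCD_INITIAL_CLUSTER="{cluster}"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="patroni-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://{ip}:2379"
"""

def render(name: str) -> str:
    """Fill in the per-node name and IP for one /etc/default/etcd file."""
    return TEMPLATE.format(name=name, ip=NODES[name], cluster=CLUSTER)

for name in NODES:
    print(f"# --- {name} ---")
    print(render(name))
```

Copy each rendered block into /etc/default/etcd on the matching node.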

Restart and enable etcd on all nodes:

bash

sudo systemctl restart etcd
sudo systemctl enable etcd
                            

Verify the cluster health from any node. Recent etcdctl builds speak the v3 API by default, so the old cluster-health subcommand is no longer available; query all three endpoints instead. Each should report as healthy.

bash

etcdctl --endpoints=http://10.0.0.11:2379,http://10.0.0.12:2379,http://10.0.0.13:2379 endpoint health
                            

Step 3: Configure Patroni

Execute this step on all three nodes, modifying the node name and IPs. Create the Patroni configuration file. This file tells Patroni how to connect to etcd, how to configure PostgreSQL, and how to authenticate replication.

bash

sudo mkdir -p /etc/patroni
sudo nano /etc/patroni/patroni.yml
                            

Add the following configuration (Example for Node 1):

yaml

scope: my_pg_cluster
namespace: /db/
name: node1 # Change to node2 or node3 on the other servers

restapi:
  listen: 10.0.0.11:8008 # Change IP
  connect_address: 10.0.0.11:8008 # Change IP

etcd3:
  hosts: 10.0.0.11:2379,10.0.0.12:2379,10.0.0.13:2379 # List all etcd endpoints so Patroni tolerates a single etcd outage

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
  initdb:
    - auth-host: md5
    - auth-local: trust
    - encoding: UTF8
    - data-checksums

  users:
    admin:
      password: SuperSecretAdminPassword
      options:
        - createrole
        - createdb

postgresql:
  listen: 10.0.0.11:5432 # Change IP
  connect_address: 10.0.0.11:5432 # Change IP
  data_dir: /var/lib/postgresql/14/main # Verify your PG version path
  bin_dir: /usr/lib/postgresql/14/bin # Verify your PG version path
  pgpass: /tmp/pgpass
  authentication:
    superuser:
      username: postgres
      password: SuperSecretPostgresPassword
    replication:
      username: replicator
      password: SuperSecretReplicationPassword
  pg_hba: # Without these entries, replicas cannot authenticate for streaming replication; tighten the ranges for your network
    - host replication replicator 10.0.0.0/24 md5
    - host all all 0.0.0.0/0 md5
  parameters:
    unix_socket_directories: '.'
                            

Restrict the configuration file to the postgres user; it contains plaintext passwords:

bash

sudo chown postgres:postgres /etc/patroni/patroni.yml
sudo chmod 600 /etc/patroni/patroni.yml
                            

Step 4: Create the Patroni systemd Service

Execute this step on all three nodes.

To ensure Patroni starts automatically and manages PostgreSQL, create a systemd service file:

bash

sudo nano /etc/systemd/system/patroni.service
                            

Add the following:

ini

[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target etcd.service

[Service]
Type=simple
User=postgres
Group=postgres
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml
KillMode=process
TimeoutSec=30
Restart=no

[Install]
WantedBy=multi-user.target
                            

Reload systemd, start Patroni, and enable it on boot:

bash

sudo systemctl daemon-reload
sudo systemctl start patroni
sudo systemctl enable patroni
                            

Check the cluster status. You should see one node listed as the Leader and the other two as Replica:

bash

patronictl -c /etc/patroni/patroni.yml list
                            
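
The output is a small status table; it will look roughly like the following (timeline, state, and lag values will differ on your cluster):

```plaintext
+ Cluster: my_pg_cluster --+---------+---------+----+-----------+
| Member | Host      | Role    | State   | TL | Lag in MB |
+--------+-----------+---------+---------+----+-----------+
| node1  | 10.0.0.11 | Leader  | running |  1 |           |
| node2  | 10.0.0.12 | Replica | running |  1 |         0 |
| node3  | 10.0.0.13 | Replica | running |  1 |         0 |
+--------+-----------+---------+---------+----+-----------+
```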

Step 5: Configure HAProxy for Routing

Your applications should not connect directly to a specific node's IP, because if that node fails, the application breaks. Instead, your applications will connect to HAProxy, which will dynamically route traffic only to the current Patroni "Leader."

Execute this on Node 3 (or your designated load balancer node).

bash

sudo nano /etc/haproxy/haproxy.cfg
                            

Append the following configuration to the bottom of the file:

plaintext

listen postgresql_cluster
    bind *:5000
    mode tcp
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.0.0.11:5432 maxconn 100 check port 8008
    server node2 10.0.0.12:5432 maxconn 100 check port 8008
    server node3 10.0.0.13:5432 maxconn 100 check port 8008
                            

How this works: HAProxy checks port 8008 (Patroni's REST API). The Leader node will return HTTP 200, while replicas return HTTP 503. HAProxy then routes your database queries (port 5000) only to the node returning 200.
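
The routing rule above can be sketched in a few lines of Python (a toy model for intuition only, not part of the deployment): route to exactly the one backend whose health check returned 200.

```python
# Toy model of HAProxy's check-based routing: each backend's Patroni REST
# API (port 8008) returns 200 only on the current leader, so traffic is
# sent to whichever node passes the check.
def pick_primary(health: dict) -> "str | None":
    """Return the node whose health check returned HTTP 200, if exactly one did."""
    leaders = [node for node, status in health.items() if status == 200]
    return leaders[0] if len(leaders) == 1 else None

# Before failover: node1 leads, replicas answer 503.
print(pick_primary({"node1": 200, "node2": 503, "node3": 503}))  # node1
# After failover: node1 is down (no response, modeled as 0), node2 promoted.
print(pick_primary({"node1": 0, "node2": 200, "node3": 503}))    # node2
```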

Restart HAProxy:

bash

sudo systemctl restart haproxy
                            

Step 6: Test the Automatic Failover

Now for the fun part. Your applications are connected to HAProxy on port 5000. Let's simulate a catastrophic hardware failure on the primary node.

Log into the node that patronictl list identified as the Leader (e.g., Node 1) and simulate the failure by stopping the Patroni service (or, for a harsher test, reboot the server):

bash

sudo systemctl stop patroni
                            

Immediately log into Node 2 or Node 3 and run:

bash

patronictl -c /etc/patroni/patroni.yml list
                            

Within 10 to 30 seconds, you will see that Patroni has detected the failure via etcd consensus and promoted a replica to be the new Leader. HAProxy's health checks then notice the role change and reroute all database traffic to the new primary.

Your application experiences a brief hiccup, and then continues normally—with zero human intervention.
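
On the application side, that brief hiccup is easiest to absorb with a small retry loop around connection attempts. A minimal sketch, where `connect_fn` is a stand-in for your driver's connect call (e.g. one pointed at HAProxy on port 5000):

```python
import time

def connect_with_retry(connect_fn, attempts=5, base_delay=1.0):
    """Call connect_fn until it succeeds, with exponential backoff.

    With these defaults the total wait is 1 + 2 + 4 + 8 = 15 seconds,
    which roughly covers the 10-30 second failover window.
    """
    for attempt in range(attempts):
        try:
            return connect_fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** attempt)
```

During a failover, the first attempt or two fails while HAProxy still points at the dead primary; once the health checks flip, the next attempt lands on the new leader.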

Conclusion: True Enterprise Database Performance

You have successfully deployed a resilient, self-healing PostgreSQL cluster. You are no longer locked into the expensive ecosystem of managed cloud databases.

To maximize the performance of this Patroni cluster, especially for I/O-heavy workloads like large e-commerce sites or SaaS backends, raw disk speed is everything. Deploying this architecture on an iDatam Storage Dedicated Server puts your database replication on PCIe Gen 5 NVMe drives and an unmetered internal 100Gbps network fabric, delivering data-center performance without the cloud tax.
