
Repository

Backup storage repository for PostgreSQL

You can configure WHERE to store backups with the pgbackrest_repo parameter. You can define multiple repos there, and Pigsty will pick one according to the value of pgbackrest_method.

Default Repo

By default, Pigsty ships with two backup repo definitions: the local and minio backup repos.

  • local: The default; uses the local /pg/backup directory (a symlink pointing to pg_fs_backup: /data/backups)
  • minio: Uses the SNSD 1-node MinIO cluster (supported by Pigsty, but not enabled by default)
pgbackrest_method: local          # choose the backup repo method, `local` or `minio` or any other user defined repo
pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository
  local:                          # default pgbackrest repo with local posix fs
    path: /pg/backup              # local backup directory, `/pg/backup` by default
    retention_full_type: count    # retention full backups by count
    retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
  minio:                          # optional minio repo for pgbackrest
    type: s3                      # minio is s3-compatible, so s3 is used
    s3_endpoint: sss.pigsty       # minio endpoint domain name, `sss.pigsty` by default
    s3_region: us-east-1          # minio region, us-east-1 by default, useless for minio
    s3_bucket: pgsql              # minio bucket name, `pgsql` by default
    s3_key: pgbackrest            # minio user access key for pgbackrest
    s3_key_secret: S3User.Backup  # minio user secret key for pgbackrest
    s3_uri_style: path            # use path style uri for minio rather than host style
    path: /pgbackrest             # minio backup path, default is `/pgbackrest`
    storage_port: 9000            # minio port, 9000 by default
    storage_ca_file: /etc/pki/ca.crt  # minio ca file path, `/etc/pki/ca.crt` by default
    block: y                      # Enable block incremental backup
    bundle: y                     # bundle small files into a single file
    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    retention_full_type: time     # retention full backup by time on minio repo
    retention_full: 14            # keep full backup for the last 14 days

Repo Retention

If you take backups every day without deleting them, the backup repo will keep growing and eventually exhaust your disk space. You'll need to define a retention policy to keep only a limited number of backups.

The default retention policies are defined in the pgbackrest_repo parameter; change them on demand.

  • local: keep the last 2 full backups (at most 3 during a backup run)
  • minio: keep all full backups from the last 14 days

Space Planning

Object storage provides virtually unlimited capacity, so you don't need to worry about disk space. You can further optimize space usage with a hybrid full & diff backup policy.
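
For example, a hybrid policy on the minio repo could keep weekly full backups plus daily differentials. The sketch below assumes that a retention_diff key is passed through to pgBackRest's repo-retention-diff option in the same way as the other retention keys on this page; verify the mapping against your Pigsty version before relying on it.

pgbackrest_repo:
  minio:                          # ... keep the other minio settings from above
    retention_full_type: time     # expire full backups by time
    retention_full: 14            # keep full backups for the last 14 days
    retention_diff: 6             # assumed repo-retention-diff: keep the last 6 differential backups

Scheduling the corresponding full and differential backups is shown in the Manual Backup section below.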

For a local disk backup repo, Pigsty recommends a retention policy that keeps the last 2 full backups: the two most recent full backups stay on disk, and a third copy may exist while a new backup is running.

This gives you a guaranteed recovery window of at least the last 24 hours. Check the backup policy for details.


Repo Alternative

You can also use other services as the backup repo; check the pgBackRest documentation for details.
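
For instance, pgBackRest also supports Azure Blob Storage, Google Cloud Storage, and SFTP repos. The sketch below defines a hypothetical GCS repo; the gcs_* keys assume the same key-to-option mapping (repo-gcs-bucket, repo-gcs-key, repo-gcs-key-type) as the s3_* keys above, so treat it as a starting point rather than a verified configuration.

pgbackrest_repo:
  gcs:                            # hypothetical Google Cloud Storage repo
    type: gcs                     # use the pgbackrest gcs repo type
    gcs_bucket: <your_bucket_name>
    gcs_key: /path/to/service-account-key.json
    gcs_key_type: service         # authenticate with a GCP service account key file
    path: /pgbackrest
    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    cipher_pass: pgBackRest       # change this password in production
    retention_full_type: time     # retention full backup by time
    retention_full: 14            # keep full backup for the last 14 days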


Repo Versioning

You can even specify a repo target time to read the object storage repo as it appeared at an earlier point in time.

You can enable MinIO versioning by adding the versioning flag to the minio_buckets definition:

minio_buckets:
  - { name: pgsql ,versioning: true }
  - { name: meta  ,versioning: true }
  - { name: data }
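
After enabling versioning, you can verify it with the MinIO client (installed as mcli in Pigsty). The sss alias name below is an assumption based on the sss.pigsty endpoint used elsewhere on this page; substitute your own configured alias.

mcli version info sss/pgsql       # show whether versioning is enabled on the pgsql bucket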

Repo Locking

Some object storage services (S3, MinIO, etc.) support object locking, which can prevent backups from being deleted, even by the DBAs themselves.

You can enable the MinIO locking feature by adding the lock flag to the minio_buckets definition:

minio_buckets:
  - { name: pgsql , lock: true }
  - { name: meta ,versioning: true  }
  - { name: data }

Use Object Storage

Object storage services provide virtually unlimited storage capacity and give your system remote disaster tolerance. If you don't have one available, Pigsty has built-in MinIO support.

MinIO

You can enable the minio backup repo by uncommenting the following settings. Beware that pgBackRest only works with HTTPS and domain names here, so you have to access MinIO through a domain name and an HTTPS endpoint.

all:
  vars:
    pgbackrest_method: minio      # use minio as the default backup repo
  children:                       # define a one-node minio SNSD cluster
    minio: { hosts: { 10.10.10.10: { minio_seq: 1 }} ,vars: { minio_cluster: minio }}
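
A typical sequence for switching to the minio repo is sketched below: deploy the MinIO cluster first, then re-run the pgbackrest subtask on the PostgreSQL cluster so the new repo and stanza are initialized. The -l limits are standard Ansible group limits and assume the group names used in this example.

./minio.yml -l minio              # deploy the one-node MinIO cluster (if not already running)
./pgsql.yml -t pg_backup          # re-run the pgbackrest subtask to initialize the minio repo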

S3

If you only have one node, a meaningful backup policy is to use a cloud vendor's object storage service such as AWS S3, Aliyun OSS, or Google Cloud Storage. To achieve this, you can define a new repo:

pgbackrest_method: s3             # use the 'pgbackrest_repo.s3' as backup repo
pgbackrest_repo:                  # pgbackrest repo: https://pgbackrest.org/configuration.html#section-repository

  s3:                             # aliyun oss (s3 compatible) object storage service
    type: s3                      # oss is s3-compatible
    s3_endpoint: oss-cn-beijing-internal.aliyuncs.com
    s3_region: oss-cn-beijing
    s3_bucket: <your_bucket_name>
    s3_key: <your_access_key>
    s3_key_secret: <your_secret_key>
    s3_uri_style: host
    path: /pgbackrest
    bundle: y                     # bundle small files into a single file
    bundle_limit: 20MiB           # Limit for file bundles, 20MiB for object storage
    bundle_size: 128MiB           # Target size for file bundles, 128MiB for object storage
    cipher_type: aes-256-cbc      # enable AES encryption for remote backup repo
    cipher_pass: pgBackRest       # AES encryption password, default is 'pgBackRest'
    retention_full_type: time     # retention full backup by time on s3 repo
    retention_full: 14            # keep full backup for last 14 days

  local:                          # default pgbackrest repo with local posix fs
    path: /pg/backup              # local backup directory, `/pg/backup` by default
    retention_full_type: count    # retention full backups by count
    retention_full: 2             # keep 2, at most 3 full backups when using local fs repo
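
Once the repo is defined and the stanza is created, you can verify that WAL archiving and the backup repo work end to end with pgBackRest's check command. Run it as the database superuser (postgres) on the primary; pg-meta is the example stanza name used on this page.

pgbackrest --stanza=pg-meta check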

Manage Backups

Enable Backup

If your database cluster was created with pgbackrest_enable set to true, backups will be enabled automatically.

If it was created with pgbackrest_enable set to false, you can enable the pgbackrest component later with:

./pgsql.yml -t pg_backup    # run the pgbackrest subtask
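
Once the component is enabled and the stanza exists, you may want to take an initial full backup right away with the wrapper script described in the Manual Backup section below (run as the postgres user):

/pg/bin/pg-backup full            # take the first full backup of the cluster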

Remove Backup

Pigsty will remove the pgbackrest backup stanza when removing the primary instance (pg_role = primary).

./pgsql-rm.yml
./pgsql-rm.yml -e pg_rm_backup=false   # leave backup intact
./pgsql-rm.yml -t pg_backup            # only remove backup

Use the pg_backup subtask to remove only the backup, and set pg_rm_backup=false to keep the backups intact.

If your backup repo is locked (e.g., via the S3 / MinIO lock option), this operation will fail.

Backup Removal

Removing backups may lead to permanent data loss. This is a dangerous operation; proceed with extreme caution.

List Backup

This command lists all backups in the pgbackrest repository (shared by all clusters):

pgbackrest info
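
Since the repo is shared by all clusters, you can narrow the output to a single stanza or inspect one backup set in detail. Both flags are standard pgbackrest info options; pg-meta is the example stanza name used on this page, and <label> is a placeholder for a backup label from the listing.

pgbackrest info --stanza=pg-meta                 # list backups of the pg-meta cluster only
pgbackrest info --stanza=pg-meta --set=<label>   # show detailed information for one backup set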

Manual Backup

Pigsty has a built-in script /pg/bin/pg-backup which wraps the pgbackrest backup command.

pg-backup        # take an incremental backup
pg-backup full   # take a full backup
pg-backup incr   # take an incremental backup
pg-backup diff   # take a differential backup
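
To implement the hybrid full & diff policy mentioned above, you can schedule these commands in the dbsu's crontab, for example through Pigsty's node_crontab parameter. The schedule below is a sketch; adjust it to your recovery objectives.

node_crontab:                     # full backup on Monday 1 AM, differential backups on other days
  - '00 01 * * 1 postgres /pg/bin/pg-backup full'
  - '00 01 * * 2,3,4,5,6,7 postgres /pg/bin/pg-backup diff'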

Base Backup

Pigsty has an alternative backup script, /pg/bin/pg-basebackup, which does not rely on pgbackrest and gives you a physical copy of the database cluster. The default backup directory is /pg/backup.

NAME
  pg-basebackup  -- make base backup from PostgreSQL instance

SYNOPSIS
  pg-basebackup -sdfeukr
  pg-basebackup --src postgres:/// --dst . --file backup.tar.lz4

DESCRIPTION
-s, --src, --url     Backup source URL, optional, "postgres:///" by default, if password is required, it should be given in url, ENV or .pgpass
-d, --dst, --dir     Where to put backup files, "/pg/backup" by default
-f, --file           Overwrite default backup filename, "backup_${tag}_${date}.tar.lz4"
-r, --remove         Remove .lz4 files with mtime older than n minutes, default is 1200 (20 hours)
-t, --tag            Backup file tag, if not set, target cluster_name or local ip address will be used. Also used as part of DEFAULT filename
-k, --key            Encryption key when --encrypt is specified, default key is ${tag}
-u, --upload         Upload backup files to cloud storage, (need your own implementation)
-e, --encryption     Encrypt with RC4 using OpenSSL, if no key is specified, tag is used as key
-h, --help           Print this message
postgres@pg-meta-1:~$ pg-basebackup
[2025-07-13 06:16:05][INFO] ================================================================
[2025-07-13 06:16:05][INFO] [INIT] pg-basebackup begin, checking parameters
[2025-07-13 06:16:05][DEBUG] [INIT] #====== BINARY
[2025-07-13 06:16:05][DEBUG] [INIT] pg_basebackup     :   /usr/pgsql/bin/pg_basebackup
[2025-07-13 06:16:05][DEBUG] [INIT] openssl           :   /usr/bin/openssl
[2025-07-13 06:16:05][DEBUG] [INIT] #====== PARAMETER
[2025-07-13 06:16:05][DEBUG] [INIT] filename  (-f)    :   backup_pg-meta_20250713.tar.lz4
[2025-07-13 06:16:05][DEBUG] [INIT] src       (-s)    :   postgres:///
[2025-07-13 06:16:05][DEBUG] [INIT] dst       (-d)    :   /pg/backup
[2025-07-13 06:16:05][DEBUG] [INIT] tag       (-t)    :   pg-meta
[2025-07-13 06:16:05][DEBUG] [INIT] key       (-k)    :   pg-meta
[2025-07-13 06:16:05][DEBUG] [INIT] encrypt   (-e)    :   false
[2025-07-13 06:16:05][DEBUG] [INIT] upload    (-u)    :   false
[2025-07-13 06:16:05][DEBUG] [INIT] remove    (-r)    :   -mmin +1200
[2025-07-13 06:16:05][INFO] [LOCK] acquire lock @ /tmp/backup.lock
[2025-07-13 06:16:05][INFO] [LOCK] lock acquired success on /tmp/backup.lock, pid=107417
[2025-07-13 06:16:05][INFO] [BKUP] backup begin, from postgres:/// to /pg/backup/backup_pg-meta_20250713.tar.lz4
[2025-07-13 06:16:05][INFO] [BKUP] backup in normal mode
pg_basebackup: initiating base backup, waiting for checkpoint to complete

pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 0/7000028 on timeline 1
pg_basebackup: write-ahead log end point: 0/7000FD8
pg_basebackup: syncing data to disk ...
pg_basebackup: base backup completed
[2025-07-13 06:16:06][INFO] [BKUP] backup complete!
[2025-07-13 06:16:06][INFO] [RMBK] remove local obsolete backup: 1200
[2025-07-13 06:16:06][INFO] [BKUP] find obsolete backups: find /pg/backup/ -maxdepth 1 -type f -mmin +1200 -name 'backup*.lz4'
[2025-07-13 06:16:06][WARN] [BKUP] remove obsolete backups:
[2025-07-13 06:16:06][INFO] [RMBK] remove old backup complete
[2025-07-13 06:16:06][INFO] [LOCK] release lock @ /tmp/backup.lock
[2025-07-13 06:16:06][INFO] [DONE] backup procedure complete!
[2025-07-13 06:16:06][INFO] ================================================================

Backups are compressed with lz4. You can decompress and extract the tarball with the following commands:

mkdir -p /tmp/data   # extract backup to this directory
cat /pg/backup/backup_pg-meta_20250713.tar.lz4 | unlz4 -d -c | tar -xC /tmp/data

Logical Backup

You can also use the pg_dump command to perform a logical backup.

Logical backups cannot be used for PITR (Point-In-Time Recovery), but they are useful for migrating data between different major versions or implementing flexible data export logic.
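
For example, here is a minimal sketch of dumping one database in the custom format and restoring it into another one. The meta and meta2 database names are placeholders based on Pigsty's default sandbox; run these as a user with sufficient privileges.

pg_dump -Fc -f /tmp/meta.dump meta              # dump the 'meta' database in custom format
pg_restore -d meta2 --no-owner /tmp/meta.dump   # restore it into the 'meta2' database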

Bootstrap from Repo

Now let's say you have an existing cluster pg-meta and want to FORK it as pg-meta2.

You'll need to create the new pg-meta2 cluster as a fork, then run PITR on it, as sketched below.
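
A minimal sketch of the inventory for such a fork is shown below; the IP address is a placeholder, and the actual restore procedure is covered by the PITR documentation.

pg-meta2:                         # new cluster forked from pg-meta via PITR
  hosts: { 10.10.10.11: { pg_seq: 1, pg_role: primary } }
  vars: { pg_cluster: pg-meta2 }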