Set up scheduled PostgreSQL backup to AWS S3 using wal-e


In the previous article, I wrote a guide on how to set up PostgreSQL master-slave database replication. It is good to have a hot standby database server: you can use your slave servers as read replicas to reduce the load on the master, and if your primary (master) server fails, you still have another server ready to become the new master.

Why should I also have a cold database backup? Imagine someone accidentally deletes a table (say, DROP TABLE users;). The drop is replicated across all hot standby databases, and your data is gone. To mitigate this, we can set up a backup (i.e. a snapshot) of the database, so that in such an incident you can restore your data. You can back up weekly, daily, or hourly, depending on your RPO (Recovery Point Objective).

There are a few PostgreSQL backup tools. One of them is wal-e. wal-e can perform continuous archiving of PostgreSQL WAL files as well as base backups. The archives are compressed using lzop: the output is slightly larger than gzip's, but compression uses noticeably less CPU.
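
If you want a feel for this trade-off on your own data, here is a rough, illustrative comparison. It assumes lzop and gzip are installed and that /var/tmp/sample is any reasonably large file on your server; this is not a rigorous benchmark:

# compare compression time and output size on a sample file
time lzop -c /var/tmp/sample > /var/tmp/sample.lzo
time gzip -c /var/tmp/sample > /var/tmp/sample.gz
ls -lh /var/tmp/sample /var/tmp/sample.lzo /var/tmp/sample.gz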


UPDATE (2016/06/02): As per bluetreble’s comment below, backup retention is now handled by AWS S3 lifecycle management, and the IAM user no longer has the privilege to delete objects in the S3 bucket. The first base backup is also now created after archiving is configured, to make sure it can actually be restored.


Backup on AWS S3

For storing backup files, wal-e supports AWS S3, Azure Blob Storage, and OpenStack Swift. Currently, Google Cloud Storage is not supported, but there is a pull request to add it.

I personally prefer AWS S3. AWS S3 is a secure, durable, and highly scalable cloud storage service managed by Amazon. All objects in AWS S3 are replicated across multiple Availability Zones (AZs) in a region. At the time of writing, AWS promises 99.999999999% (eleven 9s) durability for your files.

Tips for using AWS S3 for backups:

  • Choosing a region. Prefer to back up to a different region from your database’s. If there is a failure in the primary database’s region, you still have cold backups in another region.
  • Choosing a storage class. AWS S3 has 4 storage classes: Standard, Standard – Infrequent Access (IA), Reduced Redundancy Storage (RRS), and Glacier. I suggest using either Standard or Standard – IA. Both have 99.999999999% durability. Standard has 99.99% availability, while Standard – IA has only 99.9% at a lower cost. I do not recommend RRS, since it has only 99.99% durability, and Glacier may take hours to restore your data.

Please note that if you put your data in a different region, there will be additional AWS data transfer charges.

Step 1: Create AWS S3 bucket and set up its lifecycle

First, we need to create a new S3 bucket to store your database backups.

  1. Open AWS S3 Management Console.
  2. Click Create Bucket.
  3. Fill in the bucket name and the region where you want to keep the backups. In this guide, I assume the S3 bucket name is pgbackup.
  4. Click Create.
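
If you prefer the AWS CLI over the console, the same bucket can be created from the command line. This is a minimal sketch, assuming the AWS CLI is installed and configured with sufficient permissions; the eu-west-1 region here is just an example:

# create the backup bucket in the chosen region
aws s3api create-bucket \
    --bucket pgbackup \
    --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1
# note: for us-east-1, omit --create-bucket-configuration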

Then, we will enable versioning and set up the lifecycle of the bucket:

  1. Open AWS S3 Management Console.
  2. Right click on your bucket name, and click Properties.
  3. In the Versioning menu, click the Enable Versioning button and click OK in the confirmation dialog.
  4. In the Lifecycle menu,
    1. Click Add rule and a new dialog will appear.
    2. In Step 1, choose Whole Bucket and click Configure Rule.
    3. In Step 2, check Expire and set its value to 8 days. You can adjust the value based on your requirements. In my case, I want to keep my backups for 7 days and I will create a base backup daily, so I use 7 + 1 = 8 days as the value.
    4. Still in Step 2, check Permanently Delete and set its value to 1 day. This keeps your backup for one extra day after it is marked as expired.
    5. Click Review. You can give this rule an intuitive name like `Database retention rule`.
    6. Click Create and Activate Rule.
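
The versioning and lifecycle setup can also be scripted. Here is a sketch under the same AWS CLI assumption as above; the values mirror the console steps (expire after 8 days, permanently delete noncurrent versions 1 day after expiry):

# enable versioning on the bucket
aws s3api put-bucket-versioning \
    --bucket pgbackup \
    --versioning-configuration Status=Enabled

# apply the retention rule to the whole bucket
aws s3api put-bucket-lifecycle-configuration \
    --bucket pgbackup \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "Database retention rule",
        "Status": "Enabled",
        "Prefix": "",
        "Expiration": {"Days": 8},
        "NoncurrentVersionExpiration": {"NoncurrentDays": 1}
      }]
    }'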

Step 2: Create AWS IAM user

Create a new IAM user and take note of the generated access key and secret key. After that, attach an inline policy to your new user with the following policy document:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::pgbackup"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::pgbackup/*"]
    }
  ]
}

Don’t forget to adjust the bucket name. The above policy allows the principal (user) to list the bucket and to read and write objects in it, but, per the update above, not to delete them, so leaked credentials cannot destroy existing backups.
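
If you script your infrastructure, the same user and policy can be created from the AWS CLI. Here is a sketch, assuming the policy above is saved as pgbackup-policy.json; the user name pgbackup-user is just an example:

# create the user and an access key pair for it
aws iam create-user --user-name pgbackup-user
aws iam create-access-key --user-name pgbackup-user   # note the returned keys

# attach the inline policy shown above
aws iam put-user-policy \
    --user-name pgbackup-user \
    --policy-name pgbackup-s3-access \
    --policy-document file://pgbackup-policy.json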

If you host your database server on AWS EC2 and an IAM role is attached to the instance, you can skip creating an IAM user and just add a new inline policy to the role. Using an IAM role is the recommended approach, but at the time of writing you can only attach one when launching a new EC2 instance.

Step 3: Install and configure wal-e

You can install wal-e on any server that you want to back up. Run the following commands to install it:

sudo apt-get -y update
sudo apt-get -y install daemontools python-dev lzop pv
curl -O https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
export LC_ALL=C
sudo pip install -U six
sudo pip install -U requests
sudo pip install wal-e
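
A quick sanity check that the installation succeeded:

# confirm the wal-e binary is installed and responds
which wal-e
/usr/local/bin/wal-e --help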

Configure the environment variables for wal-e:

umask u=rwx,g=rx,o=
sudo mkdir -p /etc/wal-e.d/env
echo '<AWS_REGION>' | sudo tee /etc/wal-e.d/env/AWS_REGION
echo '<AWS_ACCESS_KEY_ID>' | sudo tee /etc/wal-e.d/env/AWS_ACCESS_KEY_ID
echo '<AWS_SECRET_ACCESS_KEY>' | sudo tee /etc/wal-e.d/env/AWS_SECRET_ACCESS_KEY
echo 's3://pgbackup/<NODE_NAME>' | sudo tee /etc/wal-e.d/env/WALE_S3_PREFIX 
sudo chown -R root:postgres /etc/wal-e.d

Change <AWS_REGION> to your S3 bucket’s region (e.g. us-east-1). Set <AWS_ACCESS_KEY_ID> and <AWS_SECRET_ACCESS_KEY> to the key pair you got after creating the new IAM user. The <NODE_NAME> path component is optional; use it when you store backups from multiple nodes in a single bucket.

If you use an IAM role, you do not need to set <AWS_SECRET_ACCESS_KEY> and <AWS_ACCESS_KEY_ID>; wal-e can use your instance’s IAM role. Just append --aws-instance-profile to all wal-e commands below.
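
Before wiring wal-e into PostgreSQL, it is worth confirming that envdir exposes the variables and that the postgres user can read them. A small check (the grep pattern is only illustrative):

# list the variable names wal-e will see, without printing secret values
sudo -u postgres /usr/bin/envdir /etc/wal-e.d/env env | cut -d= -f1 | grep -E 'AWS|WALE'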

Step 4: Configure PostgreSQL

Besides creating base backups, wal-e can also archive PostgreSQL WAL. We need to configure PostgreSQL to trigger wal-e whenever a new WAL segment is ready. Edit your PostgreSQL configuration using your favorite text editor:

POSTGRES_VERSION=9.5
sudo vim /etc/postgresql/${POSTGRES_VERSION}/main/postgresql.conf

Add/update the following configuration:

# wal_level determines how much information is written to the WAL.
# For backups, this value must be 'hot_standby' or 'archive'.
# 'archive' adds the logging required for WAL archiving, and
# 'hot_standby' further adds the information required to run read-only queries on a standby server.
wal_level = 'hot_standby'

# Enable archiving
archive_mode = on

# Command that will be triggered once a WAL segment is ready
archive_command = 'envdir /etc/wal-e.d/env /usr/local/bin/wal-e wal-push %p'

# The archive_timeout setting forces a switch to a new WAL segment
# if the current segment is not completed within this period.
archive_timeout = 60 # seconds

Restart PostgreSQL server:

sudo service postgresql restart
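
After the restart, you can confirm that WAL segments are actually being shipped. Here is a minimal check using pg_stat_archiver (available since PostgreSQL 9.4; pg_switch_xlog() was renamed to pg_switch_wal() in PostgreSQL 10):

# force a WAL segment switch so there is something to archive
sudo -u postgres psql -c "SELECT pg_switch_xlog();"

# archived_count should increase and failed_count should stay at 0
sudo -u postgres psql -c "SELECT archived_count, last_archived_wal, failed_count FROM pg_stat_archiver;"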


Now, you can test your wal-e configuration by creating a base backup:

POSTGRES_VERSION=9.5
sudo -u postgres /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-push /var/lib/postgresql/${POSTGRES_VERSION}/main


The output might look like:

wal_e.main INFO MSG: starting WAL-E
 DETAIL: The subcommand is "backup-push".
 STRUCTURED: time=2016-05-24T14:34:51.764264-00 pid=26204
wal_e.operator.backup INFO MSG: start upload postgres version metadata
 DETAIL: Uploading to s3://pgbackup/node1/basebackups_005/base_000000010000000000000004_00000040/extended_version.txt.
 STRUCTURED: time=2016-05-24T14:34:52.490033-00 pid=26204
wal_e.operator.backup INFO MSG: postgres version metadata upload complete
 STRUCTURED: time=2016-05-24T14:34:52.580803-00 pid=26204
wal_e.worker.upload INFO MSG: beginning volume compression
 DETAIL: Building volume 0.
 STRUCTURED: time=2016-05-24T14:34:52.664456-00 pid=26204
wal_e.worker.upload INFO MSG: begin uploading a base backup volume
 DETAIL: Uploading to "s3://pgbackup/node1/basebackups_005/base_000000010000000000000004_00000040/tar_partitions/part_00000000.tar.lzo".
 STRUCTURED: time=2016-05-24T14:34:53.282628-00 pid=26204
wal_e.worker.upload INFO MSG: finish uploading a base backup volume
 DETAIL: Uploading to "s3://pgbackup/node1/basebackups_005/base_000000010000000000000004_00000040/tar_partitions/part_00000000.tar.lzo" complete at 7570.78KiB/s. 
 STRUCTURED: time=2016-05-24T14:34:53.787137-00 pid=26204
NOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup
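
You can also verify that the base backup landed in S3 using wal-e’s backup-list subcommand; the backup you just pushed should appear in the listing:

sudo -u postgres /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-list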


Step 5: Create scheduled base backup cron job

Create a new cron job that creates the base backup for you:

POSTGRES_VERSION=9.5
(sudo -u postgres crontab -l 2>/dev/null; echo "0 2 * * * /usr/bin/envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-push /var/lib/postgresql/${POSTGRES_VERSION}/main") | sudo -u postgres crontab -

The above command appends a new cron job that runs daily at 2 AM. Note that ${POSTGRES_VERSION} is expanded by your shell when the entry is created, so the crontab itself contains the literal path.
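
You can confirm that the entry was added:

sudo -u postgres crontab -l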

In the first step, we already set up the S3 lifecycle rule, so we do not have to worry about deleting old backup files: AWS deletes them automatically.

What’s next?

Do you need encryption for your data? wal-e can encrypt your data on the client side (on the database server) using gpg. You can follow the documentation.
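
In outline, encryption is switched on the same way as the other settings: wal-e reads the key ID from a WALE_GPG_KEY_ID environment variable. This is a minimal sketch, assuming you have already generated a GPG key on the database server (replace <GPG_KEY_ID> with your key’s ID; key generation and trust setup are covered in the wal-e documentation):

# tell wal-e which GPG key to encrypt backups with
echo '<GPG_KEY_ID>' | sudo tee /etc/wal-e.d/env/WALE_GPG_KEY_ID
sudo chown root:postgres /etc/wal-e.d/env/WALE_GPG_KEY_ID

Keep in mind that restoring an encrypted backup requires the corresponding private key, so back up the key itself somewhere safe as well.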

In the next article, I will show you how to restore the wal-e backup.

10 thoughts on “Set up scheduled PostgreSQL backup to AWS S3 using wal-e”

  1. Taking a base backup before you have configured WAL archiving is very dangerous: it can easily result in a backup that’s useless, and it certainly means you don’t have any ongoing recovery capabilities (until a new base backup is taken).

    I’m also not a fan of allowing the normal WAL-E AWS user to delete objects. Granting that ability means a misconfiguration can nuke all your backups. It’s also risky because those credentials are sitting on the hard drive. It’s safer to use credentials that can’t delete objects and ensure that versioning is configured for the bucket. That way even if someone malicious gains access to those credentials they can’t remove or over-write your backed up data.


    1. Hi bluetreble,

      Thanks for letting me know that we need to set up WAL archiving first. I’ve just updated this post according to your suggestion.

      And it’s a good point about making the credentials more secure. I also updated the post, so backup rotation is now done by the S3 lifecycle instead of cron job deletion.

      Again, thanks for your thought.


  2. Hi Edward,

    Very nice writeup, thank you. I have a quick question for you. Can I use wal-e to back up the base backups + WAL segments to S3 while also having the master provide native Postgres replication to a hot standby server as it normally does? I have that working right now in a CentOS 7 VM; however, instead of my recovery.conf on the standby pointing to the master, should I point it at wal-e and fetch segments to restore from there? I’m just a little confused, as this article mentions it (see “The flow for setting up a new Postgres machine slave, which will be based on a base backup”): https://gist.github.com/ruckus/2293434 So my question is, can I use my setup as is, or do I need to change the recovery file on the standby to replay WALs from S3 using wal-e? Also, if there’s ever a failure on the master and I need to use PITR on the standby, I would promote it, but could I use the wal-e base backup manually (blow away the data directory and replace it with that base backup, or is that base backup customized by wal-e so I can’t?) and replay the segments by using fetch? Or does the timeline change, so the segment timeline IDs will be different? Yes, I’m confused!

    In sum, I just want to set up a master/hot standby pair with PITR on either the master or the standby depending on the situation (like the master fails and I need to do PITR on the promoted standby), and I’m not sure, when using wal-e, whether I only run wal-e on the master and use native Postgres replication to the hot standby, or whether the hot standby should use wal-e to fetch from the AWS S3 bucket?

    Thanks!


      You can use both (master-slave replication and wal-e backups) at the same time.

      There are no changes to the master-slave replication setup when you have wal-e. The slave connects to the master (or to another slave, in cascading replication).

      If the master is down, you can just switch over a slave to become the new master. There is no need to pull the base backup from S3.

      wal-e is basically for creating cold backups. In case something messes up your database (e.g. an accidentally dropped table), you can pull a base backup to recover your data.


        How do I get reliable DR with wal-e? We have a PostgreSQL database that runs only on an RDS instance, and we are not replicating. We are looking for the best way to set up DR. Can you please advise on the architecture and how wal-e can be configured to serve our purpose?

