I decided it was probably time that I started backing up my
PostgreSQL databases and ended up writing a quick script to do just
that. I did have a look at pg_dumpall, but it appears
to only write to stdout, meaning I couldn’t have a file per database
like I wanted.
#!/bin/bash
set -u
set -e

dumpdir=/var/backups/postgresql
date=$(date +"%Y-%m-%d")

mkdir -p $dumpdir
find $dumpdir -name "*.sql" -mtime +10 -exec rm {} \;

for database in $(su - postgres -c "psql -l -t" | cut -d'|' -f 1 | grep -v template0); do
    su - postgres -c "pg_dump $database" > $dumpdir/$database-$date.sql
done

su - postgres -c "pg_dumpall -g" > $dumpdir/global-$date.sql
Just stick that in /etc/cron.daily/postgresql-backups and
you should end up with 10 days' worth of backups. I'm not sure whether
PostgreSQL can have spaces in database names; I'm assuming it can't.
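If a database name did contain a space, the for-loop's word splitting would break it into two words. One way around that (a sketch, assuming the same psql invocation as above) is to read the listing one line at a time instead:

```shell
#!/bin/bash
dumpdir=/var/backups/postgresql
date=$(date +"%Y-%m-%d")

# Read one database name per line rather than word-splitting, so a
# name containing a space survives intact. sed trims the column
# padding that "psql -l -t" adds around each name.
su - postgres -c "psql -l -t" | cut -d'|' -f 1 \
    | sed 's/^[[:space:]]*//;s/[[:space:]]*$//' \
    | grep -v '^template0$' | grep -v '^$' \
    | while read -r database; do
        su - postgres -c "pg_dump \"$database\"" > "$dumpdir/$database-$date.sql"
    done
```

The trade-off is that variables set inside a piped while-loop live in a subshell, which doesn't matter here since the loop only writes files.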
The only other issue to deal with is not storing the backups on the
same machine as the database server, let alone on the same physical
hardware or, even worse, the same partition.
Update: Corrected typo and added dumping global
objects to the script.
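For completeness, restoring one of these plain-SQL dumps is just a matter of feeding the file back to psql. A sketch, assuming the paths above and a placeholder database called mydb:

```shell
#!/bin/bash
# Pick the newest dump for a database. Because the filenames embed an
# ISO date (YYYY-MM-DD), lexical sort order is also chronological.
latest=$(ls /var/backups/postgresql/mydb-*.sql | sort | tail -n 1)

# Recreate the database and replay the dump into it
# ("mydb" is a placeholder name).
su - postgres -c "createdb mydb"
su - postgres -c "psql -d mydb" < "$latest"
```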
Sorry to anyone who got spammed by my change to PyBlosxom
earlier today. I changed the base URL, which changed the links of the
entries. I should sort out having proper GUIDs.
Small correction… Shouldn’t it be mkdir -p $dumpdir instead of $datadir?
Backing up with pg_dumpall -g is good too.