Ever wanted to find out how much disk space each table was taking in a
database? Here’s how:

database=# SELECT
   tablename,
   pg_size_pretty(pg_relation_size(tablename)) AS table_size,
   pg_size_pretty(pg_total_relation_size(tablename)) AS total_table_size
FROM
   pg_tables
WHERE
   schemaname = 'public';
 tablename  | table_size | total_table_size
------------+------------+------------------
 deferrals  | 205 MB     | 486 MB
 errors     | 58 MB      | 137 MB
 deliveries | 2646 MB    | 10096 MB
 queue      | 7464 kB    | 22 MB
 unknown    | 797 MB     | 2644 MB
 messages   | 1933 MB    | 6100 MB
 rejects    | 25 GB      | 75 GB
(7 rows)

Table size is the size of the current table data. Total table size also
includes indexes and data that is too large to fit in the main table
store (things like large BLOB fields). You can find more information in
the PostgreSQL manual.
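
If you want the biggest tables listed first, you can order on the raw
byte count rather than the pretty-printed value. A minimal variation on
the query above (same assumptions about the schema name and function
arguments):

SELECT
   tablename,
   pg_size_pretty(pg_total_relation_size(tablename)) AS total_table_size
FROM
   pg_tables
WHERE
   schemaname = 'public'
ORDER BY
   pg_total_relation_size(tablename) DESC;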

Edit: changed to use pg_size_pretty(), which I thought existed, but couldn’t find in the docs. Brett Parker reminded me it did exist after all and I wasn’t just imagining it.

A while ago I published an article on PostgreSQL user administration.
Typically, things have changed since I wrote it, so I thought I’d detail
a couple of the differences here.

The major difference is that you now have roles rather than users, and
you use the CREATE ROLE command to create them instead of
CREATE USER, although the latter command still works. The
command line options for the createuser command have changed as
a result too. Previously, superuser and the ability to create new users
were the same thing; now you can give a role permission to create new
roles without giving it superuser powers. The options are now -s for
superuser and -S for not superuser, -d to allow the role to create
databases and -D to disallow database creation, and -r to allow the new
role to create other roles and -R to prevent it. For a standard user
you probably want something like:

createuser -S -D -R -P user

The -P makes createuser ask you for a password for
the new role.
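
If you prefer to do it in SQL, the rough equivalent of that createuser
invocation is something like this (the role name and password here are
just placeholders):

CREATE ROLE "user" LOGIN PASSWORD 'secret'
   NOSUPERUSER NOCREATEDB NOCREATEROLE;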

You can find out more information about the new role system in
PostgreSQL in the user management and CREATE ROLE reference sections of
the manual.

I decided it was probably time that I started backing up my
PostgreSQL databases and ended up writing a quick script to do just
that. I did have a look at pg_dumpall, but it appears
to only write to stdout, meaning I couldn’t have a file per database
like I wanted.

#!/bin/bash

set -u
set -e

dumpdir=/var/backups/postgresql

date=$(date +"%Y-%m-%d")
mkdir -p $dumpdir
find $dumpdir -name "*.sql" -mtime +10 -exec rm {} \;
for database in $(su - postgres -c "psql -l -t" | cut -d'|' -f 1 | grep -v template0);
do
        su - postgres -c "pg_dump $database" > $dumpdir/$database-$date.sql;
done

su - postgres -c "pg_dumpall -g" > $dumpdir/global-$date.sql

Just stick that in /etc/cron.daily/postgresql-backups and
you should end up with 10 days’ worth of backups. I’m not sure if
PostgreSQL can have spaces in database names. I’m assuming it can’t.
The only other issue to deal with is not storing the backups on the same
machine as the database server, let alone on the same physical hardware
or, even worse, the same partition.
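
For completeness, restoring one of these dumps is roughly the reverse
operation; the database name and file name below are just examples, and
the database has to be recreated first because the dumps are
per-database:

su - postgres -c "createdb mydb"
su - postgres -c "psql mydb" < /var/backups/postgresql/mydb-2008-01-01.sql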

Update: Corrected typo and added dumping global
objects to the script.

Sorry to anyone who got spammed by my change to pyblosxom
earlier today. I changed the base URL, which changed the link of the
entries. I should sort out having proper GUIDs.

Just a quick one. Have you ever created a table using the number of
seconds since 1970 and realised, after populating it with data, that you
really need it in a TIMESTAMP type? If so, you can quickly convert it
using this SQL:

ALTER TABLE entries
   ALTER COLUMN created TYPE TIMESTAMP WITH TIME ZONE
      USING TIMESTAMP WITH TIME ZONE 'epoch' + created * interval '1 second';

With thanks to the PostgreSQL manual for saving me hours working out
how to do this.
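
You can sanity-check the expression before running the ALTER by
converting a known value; the number below is just an illustration
(1199145600 seconds is midnight on 1st January 2008, UTC):

SELECT TIMESTAMP WITH TIME ZONE 'epoch' + 1199145600 * interval '1 second';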

Over the weekend, I managed to revive my main desktop machine, which
has spent the last 15 months turned off under a desk because it was
showing some odd behaviour and I didn’t have time to fix it. I’ve
upgraded it to the latest sid and given it to my girlfriend to use
instead of the Windows machine she had been using. She appears to have
fallen in love with tuxpaint. 🙂

In the process of setting it up I discovered printconf, which
automatically sets up parallel and USB printers under CUPS. I plugged in
my printer, went to print in Firefox and there was the printer. These
things just get easier and easier. Gone are the days when you spent
hours writing a printcap entry for your printer. One thing I would like
is D-Bus support in CUPS so that I know when a print job has finished.

Just finished writing an article on PostgreSQL user administration. Go
read it.

Today, I discovered SQL::Translator,
which seems to have some very interesting use cases. Basically, it is a
Perl module that translates a database schema from one of a number of
formats into another format. Parsers include:

  • Live querying of DB2, MySQL, DBI-PostgreSQL, SQLite and Sybase databases
  • Access
  • Excel
  • SQL for DB2, MySQL, Oracle, PostgreSQL, SQLite and Sybase
  • Storable
  • XML
  • YAML

Output formats include:

  • Class::DBI
  • SQL for MySQL, Oracle, PostgreSQL, SQLServer, SQLite and Sybase
  • Storable, XML and YAML
  • POD, Diagram, GraphViz and HTML

Several things spring to mind with this:

  1. Defining your Schema in XML and using SQL::Translator to convert it
    into SQL for several databases and a set of classes for Class::DBI,
    which would make your application immediately target any of the
    supported databases.
  2. Documenting an existing database for which you’ve lost the
    documentation by pointing it at a running database instance and
    outputting an HTML page and, thanks to the Diagram output module, a
    visual representation of the structure.
  3. Converting one database from one product to another. Point it at a
    MySQL database and generate SQL for PostgreSQL (there’s a rough
    example after this list). If you generated some Class::DBI stuff you
    could probably write a quick script to copy the data too.
  4. Using the sqlt-diff script to compare your current SQL to what is
    running on the database and generate a SQL script to upgrade the
    database structure using ALTER TABLE etc. Presumably you’d need
    to convert any data yourself, but it is still a time saver for large
    databases.
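
As a rough sketch of point 3, the sqlt command-line tool that ships with
SQL::Translator should do the conversion in one step. I haven’t
double-checked the exact flags, so treat this as an illustration rather
than a recipe:

sqlt -f MySQL -t PostgreSQL mysql-schema.sql > postgresql-schema.sql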

I’m sure other people can think of some interesting uses for this.
Having looked at the Class::DBI output, I think it could do with some
improvements. I can’t see a way to set the class names, although I
haven’t spent that much time looking, and it insists on putting all the
classes in one file. Also, the XML and YAML formats it generates are
rather verbose, and I haven’t looked at how much I could cut them down
to use as the source definition. I suspect I can make them a lot shorter
and rely on sensible defaults.

My initial reason for wanting to use SQL::Translator is that
Class::DBI::Pg has a large start-up time and isn’t really suitable for
CGI use if you have a complex database. This might be mitigated by using
mod_perl, but in the meantime I was hoping I could speed up start-up by
telling Class::DBI my column names rather than having it query the
database. SQL::Translator should let me avoid duplicating the database
structure while still supporting multiple backend databases. If I get
this working, I’ll write up a short HOWTO.