I managed to add two more sections to my article on writing
robust shell scripts,
covering using trap and making changes more atomic. I had some useful
feedback, including the point that with a few small tweaks it would
apply to more than just bash. Following on from that I’ve added an
article about changing your bash
prompt
and how mine has been built up over the years into something useful to
me. Hopefully it’ll give other people some ideas.

Dear perl programmers,

When using open(), please don’t use:

open FILE, "$file" or die "couldn't open file\n";

It really helps if you tell us what file you’re trying to open and
what went wrong. The correct error message is:

open FILE, "$file" or die "could not open $file: $!\n";

Thank you.
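
As an aside, here is a minimal sketch of what $! gives you, using the
three-argument open and a lexical filehandle rather than the bareword
style above, with a made-up path:

#!/usr/bin/perl
use strict;
use warnings;

my $file = '/etc/no-such-file';   # hypothetical path for illustration
open my $fh, '<', $file
    or die "could not open $file: $!\n";
# dies with, e.g.: could not open /etc/no-such-file: No such file or directory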

So you read the documentation for IO::File and see:

open( FILENAME [,MODE [,PERMS]] )
open( FILENAME, IOLAYERS )

so you write:

my $rules = new IO::File('debian/rules','w', 0755);

and wonder why it hasn’t changed the permissions from 0666. Stracing
confirms it is opened with mode 0666:

open("./debian/rules", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4

Reading the documentation a bit further, you discover:

If IO::File::open receives a Perl mode string (“>”, “+<”, etc.) or an
ANSI C fopen() mode string (“w”, “r+”, etc.), it uses the basic Perl
open operator (but protects any special characters).

If IO::File::open is given a numeric mode, it passes that mode and the
optional permissions value to the Perl sysopen operator. The permissions
default to 0666.

If IO::File::open is given a mode that includes the : character, it
passes all the three arguments to the three-argument open operator.

For convenience, IO::File exports the O_XXX constants from the Fcntl
module, if this module is available.

and the correct way to write this is:

my $rules = new IO::File('debian/rules', O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0755);

Thank you very much perl for ignoring the permission parameter when
you feel like it.
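
If you want to convince yourself which permissions you actually ended
up with, here is a quick sketch (assuming the file is being created
fresh, and remembering that the permissions passed to sysopen are still
filtered through your umask):

use strict;
use warnings;
use Fcntl;      # O_XXX constants; IO::File re-exports them too
use IO::File;

my $rules = IO::File->new('debian/rules',
    O_WRONLY | O_CREAT | O_TRUNC, 0755)
    or die "could not open debian/rules: $!\n";

# Show the permission bits the file really has; note that the perms
# argument only applies when the file is created, not to an existing file.
printf "mode: %04o\n", (stat 'debian/rules')[2] & 07777;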

Update: Steinar,
yes, sorry, I did have 0755 rather than "0755"
originally, but changed it just to check that it didn’t make a
difference and then copied the wrong version. I’ve changed the post to
have the right thing.

% strace -eopen perl -MIO::File \
   -e 'my $rules = new IO::File("foo","w", 0755);' 2>&1 | grep foo
open("./foo", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
% strace -eopen perl -MIO::File \
   -e 'my $rules = new IO::File("foo","w", "0755");' 2>&1 | grep foo
open("./foo", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3

Incidentally, 0755 and "0755" are different:

% perl -e 'printf("%d %d\n", 0755, "0755");'
493 755

Which of these two fragments is more readable?

$self->{catalina_base} = $ENV{'CATALINA_BASE'};
if (!defined $self->{catalina_base}) {
    $self->{catalina_base} = $self->getTomcatHome() ;
}
if (!defined $self->{catalina_base}) {
    CCM::Util::error ("CATALINA_BASE unset and TOMCAT_HOME undefined", 3);
}

or

$self->{catalina_base} = $ENV{'CATALINA_BASE'} || $self->getTomcatHome()
   || CCM::Util::error ("CATALINA_BASE unset and TOMCAT_HOME undefined", 3);

Update: or

$self->{catalina_base} = (
   $ENV{'CATALINA_BASE'}
   or $self->getTomcatHome()
   or CCM::Util::error ("CATALINA_BASE unset and TOMCAT_HOME undefined", 3)
);
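
One caveat that applies equally to both short forms, illustrated with a
made-up fallback value: || (and or) treat an empty or "0" value of
CATALINA_BASE as unset, whereas the explicit defined checks do not.

$ENV{'CATALINA_BASE'} = '';                        # set, but empty
my $a = $ENV{'CATALINA_BASE'} || '/opt/tomcat';    # '/opt/tomcat': falls through
my $b = defined $ENV{'CATALINA_BASE'}
      ? $ENV{'CATALINA_BASE'}
      : '/opt/tomcat';                             # '': keeps the empty value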

MJ Ray, I think you’re distorting what is going on. Google are,
rightly, protective of their search results and work actively against
people who try to manipulate them. This is what BMW.de had done, by
giving a spam page to Google and using javascript to redirect users to
the real page. As an aside, this would have broken for text browsers or
anyone without javascript; it’s not the pinnacle of accessibility, is
it? Google were protecting their index, not pursuing a vendetta against
people they don’t like, as you were suggesting. Stop being so paranoid
and stop distorting the story, or you’re no better than the devil
you’re trying to paint Google as.

Today, I discovered SQL::Translator,
which seems to have some very interesting use cases. Basically, it is a
perl module that takes a database schema in one of a number of formats
and turns it into another format. Parsers include:

  • Live querying of DB2, MySQL, DBI-PostgreSQL, SQLite and Sybase databases
  • Access
  • Excel
  • SQL for DB2, MySQL, Oracle, PostgreSQL, SQLite and Sybase
  • Storable
  • XML
  • YAML

Output formats include:

  • Class::DBI
  • SQL for MySQL, Oracle, PostgreSQL, SQLServer, SQLite and Sybase
  • Storable, XML and YAML
  • POD, Diagram, GraphViz and HTML

Several things spring to mind with this:

  1. Defining your schema in XML and using SQL::Translator to convert it
    into SQL for several databases and a set of classes for Class::DBI,
    which would make your application immediately target any of the
    supported databases.
  2. Documenting an existing database for which you’ve lost the
    documentation, by pointing it at a running database instance and
    outputting an HTML page and, thanks to the Diagram output module, a
    visual representation of the structure.
  3. Converting a database from one product to another. Point it at a
    MySQL database and generate SQL for PostgreSQL (see the sketch after
    this list). If you generated some Class::DBI stuff you could probably
    quickly write a script to copy the data too.
  4. Using the sqlt-diff script to compare your current SQL to what is
    running on the database and generate a SQL script to upgrade the
    database structure using ALTER TABLE etc. Presumably you’d need
    to convert any data yourself, but it’s still a time saver for large
    databases.
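
For the third case, a rough sketch of the sort of thing I mean; the
file name is made up and I haven’t tried this against a real schema
yet:

#!/usr/bin/perl
use strict;
use warnings;
use SQL::Translator;

# Translate a dumped MySQL schema into PostgreSQL DDL.
my $translator = SQL::Translator->new;
my $output = $translator->translate(
    from     => 'MySQL',
    to       => 'PostgreSQL',
    filename => 'schema-mysql.sql',   # hypothetical dump of the source database
) or die $translator->error;

print $output;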

I’m sure other people could think of some interesting uses for this.
Having looked at the Class::DBI stuff, I think it could do with some
improvements. I can’t see a way to set the class names (although I
haven’t spent that much time looking), and it insists on having all the
classes in one file. Also, the XML and YAML formats generated are
rather verbose, and I haven’t looked to see how much I could cut them
down to use as the source definition. I suspect I can make them a lot
shorter and rely on sensible defaults.

My initial reason for wanting to use SQL::Translator is that
Class::DBI::Pg has a long start-up time and isn’t really suitable for
CGI use if you have a complex database. This might be mitigated by using
mod_perl, but in the meantime I was hoping I could speed up start-up by
telling Class::DBI my column names, rather than having it query the
database. SQL::Translator should save me from duplicating the database
structure, whilst allowing me to support multiple backend databases. If
I get this working, I’ll write up a short HOWTO.
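
The rough idea, sketched with a made-up table and column names, is to
declare the columns up front instead of letting Class::DBI::Pg query
the catalogue for them every time the script starts; the column list is
exactly the sort of thing SQL::Translator could generate from the
schema:

package MyApp::Project;
use strict;
use warnings;
use base 'Class::DBI';

# Hypothetical connection details.
__PACKAGE__->set_db('Main', 'dbi:Pg:dbname=myapp', 'user', 'password');

# Declaring the table and columns here avoids the start-up query that
# Class::DBI::Pg's set_up_table would otherwise make.
__PACKAGE__->table('project');
__PACKAGE__->columns(Primary => 'id');
__PACKAGE__->columns(Others  => qw/name description created/);

1;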

Yesterday I posted about some bad
shell
code I had found, along with an improved version. Part of the
reason for posting it was that I was hoping someone could point out any
errors in the version I posted. Fortunately Neil Moore emailed me some
improvements.

  1. If the script outputs more than one line, the newlines will be removed
    when the $(…) expansion is split into words. The solution there is to
    surround it in double quotes.
  2. The next problem Neil pointed out was that $@ should be
    surrounded by quotes in pretty much every case, otherwise parameters
    containing spaces will get split into separate parameters.
  3. The final problem is that if the script includes a return statement, it
    will stop the innermost function or sourced script, but not during
    eval. The solution is to enclose it in a function:

    
    dummy() { eval "$(perl "$CONF_DIR/project.pl" "$@")"; }
    dummy "$@"

Since making the post, I discovered that Solaris’ /bin/sh
doesn’t like $(...), so it’s probably better to use backticks
instead if you want to be portable. As I know the output from the script
I’m not worried about return statements, so I’ve ended up with:


eval "`perl "$CONF_DIR/project.pl" "$@"`"

I’m always amazed at the number of bad shell scripts I keep coming
across. Take this snippet for example:

TEMP1=`mktemp /tmp/tempenv.XXXXXX` || exit 1
perl $CONF_DIR/project.pl $@ > $TEMP1
if [ $? != 0 ]; then
  return
fi

. $TEMP1
rm -f $TEMP1

There are several things wrong with this. First, it uses a temporary
file. Secondly, it uses more processes than are required, and thirdly,
it doesn’t clean up after itself properly. If perl failed, the temp file
would still be created, but never deleted. The last problem can be
solved with some suitable uses of trap:

TEMP1=`mktemp /tmp/tempenv.XXXXXX` || exit 1
trap "rm -f $TEMP" INT TERM EXIT
perl $CONF_DIR/project.pl $@ > $TEMP1
if [ $? != 0 ]; then
  return
fi

. $TEMP1
rm -f $TEMP1
trap - INT TERM EXIT

Of course this can all be replaced with a single line:

eval $(perl $CONF_DIR/project.pl $@)