Perl doesn’t enforce any access restrictions on modules’ namespaces. This
would usually be considered a bad thing, but sometimes it allows us to work
around problems in modules without changing their code. Here’s a perfect
example:

I’ve been writing a script to talk to an XML-RPC endpoint using
Frontier::Client, but for
one of the requests the script throws the following error:

wanted a data type, got `ex:i8'

Turning on debugging showed the response type was indeed ex:i8, which
isn’t one of the types that Frontier::Client supports.

<?xml version="1.0" encoding="UTF-8"?>
<methodResponse xmlns:ex="http://ws.apache.org/xmlrpc/namespaces/extensions">
  <params>
    <param>
      <value>
        <ex:i8>161</ex:i8>
      </value>
    </param>
  </params>
</methodResponse>

Searching through the code shows that Frontier::Client is a wrapper around
Frontier::RPC2, and that the error comes from the following
section:

   } elsif ($scalars{$tag}) {
       $expat->{'rpc_text'} = "";
       push @{ $expat->{'rpc_state'} }, 'cdata';
   } else {
       Frontier::RPC2::die($expat, "wanted a data type, got `$tag'\n");
   }

So we can see that it looks the tag up in a hash called
%scalars to check whether the type is a scalar type, and otherwise throws
the error we saw. Looking at the top of the module, we find the hash:

%scalars = (
    'base64' => 1,
    'boolean' => 1,
    'dateTime.iso8601' => 1,
    'double' => 1,
    'int' => 1,
    'i4' => 1,
    'string' => 1,
);

So, if we could add ex:i8 to this hash, we could fix the
problem. We could patch the module itself, but that would require every user
of the script to patch their copy of the module too. The alternative is to
inject an entry into that hash across the module boundary, which we can
do by simply referring to the hash by its complete name, including the
package name. We can use:

$Frontier::RPC2::scalars{'ex:i8'} = 1;

Now when we run the script, everything works. It’s not pretty, and it
depends on the internals of Frontier::RPC2 not changing, but it lets us get
on with our script.
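
In context, the whole workaround is one line placed before any calls are
made; a minimal sketch (the URL and method name below are placeholders):

use Frontier::Client;

# Frontier::Client loads Frontier::RPC2, so its %scalars hash already
# exists; add the ex:i8 extension type to the list of accepted scalars.
$Frontier::RPC2::scalars{'ex:i8'} = 1;

my $server = Frontier::Client->new( url => 'http://example.org/RPC2' );
my $result = $server->call('some.method');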

Dear perl programmers,

When using open(), please don’t use:

open FILE, "$file" or die "couldn't open file\n";

It really helps if you tell us what file you’re trying to open and
what went wrong. The correct error message is:

open FILE, "$file" or die "could not open $file: $!\n";
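
Better still, use a lexical filehandle and the three-argument form of open
(a sketch, assuming $file holds the path):

open my $fh, '<', $file
    or die "could not open $file: $!\n";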

Thank you.

So you read the documentation for IO::File and see:

open( FILENAME [,MODE [,PERMS]] )
open( FILENAME, IOLAYERS )

so you write:

my $rules = new IO::File('debian/rules','w', 0755);

and wonder why the permissions haven’t changed from the default 0666.
Stracing confirms it is opened with 0666:

open("./debian/rules", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4

Reading a bit further in the documentation, you discover:

If IO::File::open receives a Perl mode string (“>”, “+<”, etc.) or an
ANSI C fopen() mode string (“w”, “r+”, etc.), it uses the basic Perl
open operator (but protects any special characters).

If IO::File::open is given a numeric mode, it passes that mode and the
optional permissions value to the Perl sysopen operator. The permissions
default to 0666.

If IO::File::open is given a mode that includes the : character, it
passes all the three arguments to the three-argument open operator.

For convenience, IO::File exports the O_XXX constants from the Fcntl
module, if this module is available.

and the correct way to write this is:

my $rules = new IO::File('debian/rules',O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0755);

Thank you very much, perl, for ignoring the permissions parameter when
you feel like it.
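
Spelled out with error checking, a sketch of the working form (the
requested 0755 is still subject to the process umask):

use IO::File;   # re-exports the O_XXX constants from Fcntl when available

# A numeric mode makes IO::File use sysopen, so the permissions
# argument is honoured (minus whatever the umask takes away).
my $rules = IO::File->new('debian/rules',
                          O_WRONLY | O_CREAT | O_TRUNC,
                          0755)
    or die "could not open debian/rules: $!\n";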

Update: Steinar,
yes sorry, I did have 0755 rather than "0755"
originally, but changed it just to check that didn’t make a difference
and copied the wrong version. I’ve changed the post to have the right
thing.

% strace -eopen perl -MIO::File \
   -e 'my $rules = new IO::File("foo","w", 0755);' 2>&1 | grep foo
open("./foo", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3
% strace -eopen perl -MIO::File \
   -e 'my $rules = new IO::File("foo","w", "0755");' 2>&1 | grep foo
open("./foo", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3

Incidentally, 0755 and "0755" are different: the bare 0755 is an octal
literal (493 in decimal), while the string "0755" is treated as the decimal
number 755 in numeric context:

 perl -e 'printf("%d %d\n", 0755, "0755");'
493 755
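
If you do have the value as a string, oct() recovers the intended octal
number; for example:

 perl -e 'printf("%d\n", oct("0755"));'
493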

Which of these two fragments is more readable?

$self->{catalina_base} = $ENV{'CATALINA_BASE'};
if (!defined $self->{catalina_base}) {
    $self->{catalina_base} = $self->getTomcatHome() ;
}
if (!defined $self->{catalina_base}) {
    CCM::Util::error ("CATALINA_BASE unset and TOMCAT_HOME undefined", 3);
}

or

$self->{catalina_base} = $ENV{'CATALINA_BASE'} || $self->getTomcatHome()
   || CCM::Util::error ("CATALINA_BASE unset and TOMCAT_HOME undefined", 3);

Update: or, using the low-precedence or (the parentheses are needed
because or binds more loosely than =):

$self->{catalina_base} = (
   $ENV{'CATALINA_BASE'}
   or $self->getTomcatHome()
   or CCM::Util::error ("CATALINA_BASE unset and TOMCAT_HOME undefined", 3)
);

Today, I discovered SQL::Translator,
which seems to have some very interesting use cases. Basically, it is a
perl module for translating a database schema from any of a number of
formats into another format. Parsers include:

  • Live querying of DB2, MySQL, DBI-PostgreSQL, SQLite and Sybase databases
  • Access
  • Excel
  • SQL for DB2, MySQL, Oracle, PostgreSQL, SQLite and Sybase
  • Storable
  • XML
  • YAML

Output formats include:

  • Class::DBI
  • SQL for MySQL, Oracle, PostgreSQL, SQLServer, SQLite and Sybase
  • Storable, XML and YAML
  • POD, Diagram, GraphViz and HTML

Several things spring to mind with this:

  1. Defining your schema in XML and using SQL::Translator to convert it
    into SQL for several databases and a set of classes for Class::DBI,
    which would let your application immediately target any of the
    supported databases.
  2. Documenting an existing database for which you’ve lost the
    documentation, by pointing it at a running database instance and
    outputting an HTML page and, thanks to the Diagram output module, a
    visual representation of the structure.
  3. Converting a database from one product to another: point it at a MySQL
    database and generate SQL for PostgreSQL (see the sketch after this
    list). If you generated some Class::DBI classes you could probably
    write a quick script to copy the data too.
  4. Using the sqlt-diff script, comparing your current SQL to what is
    running on the database and generating a SQL script to upgrade the
    database structure using ALTER TABLE etc. Presumably you’d need
    to convert any data yourself, but it is still a time saver for large
    databases.
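
A rough sketch of point 3, assuming a schema dump in schema-mysql.sql (the
file name is made up):

use SQL::Translator;

# Parse a MySQL schema dump and emit the equivalent PostgreSQL DDL.
my $translator = SQL::Translator->new(
    parser   => 'MySQL',
    producer => 'PostgreSQL',
);

my $pg_sql = $translator->translate(filename => 'schema-mysql.sql')
    or die $translator->error;

print $pg_sql;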

I’m sure other people can think of some interesting uses for this.
Having looked at the Class::DBI output, I think it could do with some
improvements: I can’t see a way to set the class names (although I
haven’t spent much time looking), and it insists on putting all the
classes in one file. Also, the XML and YAML formats it
generates are rather verbose, and I haven’t looked at how much I
could cut them down to use as the source definition. I suspect that I
can make them a lot shorter and rely on sensible defaults.

My initial reason for wanting to use SQL::Translator is that
Class::DBI::Pg has a large start-up time and isn’t really suitable for
CGI use if you have a complex database. This might be mitigated by using
mod_perl, but in the meantime I was hoping I could speed up start-up by
telling Class::DBI my column names, rather than having it query the
database. SQL::Translator should save me from duplicating the
database structure, whilst allowing me to support multiple backend
databases. If I get this working, I’ll write up a short HOWTO.
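
As a rough idea of what that could look like (the table and column names,
database name and credentials here are invented), declaring the columns up
front means Class::DBI never has to ask the database for them:

package MyApp::Track;
use base 'Class::DBI';

__PACKAGE__->connection('dbi:Pg:dbname=myapp', 'user', 'password');
__PACKAGE__->table('track');
# The first column in the All group is treated as the primary key;
# listing the columns here avoids the per-class column query at start-up.
__PACKAGE__->columns(All => qw/trackid title artist/);

1;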