Has anyone managed to get a standard version of mod_fastcgi to work
correctly with FastCGIExternalServer? There seems to be a complete lack
of documentation on how to get this to work. I have managed to get it
working by removing some code that appears to completely break
AddHandler. However, people on the FastCGI list told me I was wrong to
make it work that way. So, if anyone has managed to get it to work,
please show me some working config.
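
For reference, this is the shape of configuration I have been trying,
as far as the documentation suggests it should work; the path, host and
port here are purely illustrative:

# map requests for this (possibly non-existent) file to an
# external FastCGI daemon
FastCgiExternalServer /var/www/myapp/dispatch.fcgi -host 127.0.0.1:8001
# and ask Apache to treat .fcgi requests as FastCGI
AddHandler fastcgi-script .fcgi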

In the past, several of my Puppet modules have been tightly coupled. A
perfect example is Apache and Munin: when I install Apache, I want
Munin graphs set up. As a result, my apache class has the following
snippet in it:

munin::plugin { "apache_accesses": }
munin::plugin { "apache_processes": }
munin::plugin { "apache_volume": }

This should make sure that these three plugins are installed and that
munin-node is restarted to pick them up. The define was implemented like
this:

define munin::plugin (
      $enable = true,
      $plugin_name = false,
      ) {

   include munin::node

   file { "/etc/munin/plugins/$name":
      ensure => $enable ? {
         true => $plugin_name ? {
            false => "/usr/share/munin/plugins/$name",
            default => "/usr/share/munin/plugins/$plugin_name"
         },
         default => absent
      },
      links => manage,
      require => Package["munin-node"],
      notify => Service["munin-node"],
   }
}

(Note: this is a slight simplification of the define.) As you can
see, the define includes munin::node, as it needs the definition of the
munin-node service and package. As a result, installing Apache drags
munin-node onto your server too. It would be much nicer if the
apache class only installed the munin plugins if you also install munin
on the server.

It turns out that this is possible, using virtual resources. Virtual
resources allow you to declare resources in one
place, but not make them happen unless you realise them. Using this, we
can make the file resource in munin::plugin virtual and realise it
in our munin::node class. Our new munin::plugin looks like:

define munin::plugin (
      $enable = true,
      $plugin_name = false,
      ) {

   # removed "include munin::node"

   # Added @ in front of the resource to declare it as virtual
   @file { "/etc/munin/plugins/$name":
      ensure => $enable ? {
         true => $plugin_name ? {
            false => "/usr/share/munin/plugins/$name",
            default => "/usr/share/munin/plugins/$plugin_name"
         },
         default => absent
      },
      links => manage,
      require => Package["munin-node"],
      notify => Service["munin-node"],
      tag => "munin-plugin",
   }
}

We add the following line to our munin::node class:

File<| tag == "munin-plugin" |>

The odd syntax in the munin::node class realises all the
virtual resources that match the filter; in this case, any that are
tagged munin-plugin. We’ve had to define this tag ourselves, as
the auto-generated tags don’t seem to work. You’ll also notice that
we’ve removed the munin::node include from the
munin::plugin define, which means that we no longer install
munin-node just by using the plugin define. I’ve used a similar
technique for logcheck, so additional rules are not installed unless
I’ve installed logcheck. I’m sure there are several other places where I
can use this to reduce tight coupling between classes.
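
For completeness, here is a minimal sketch of what the munin::node
class might look like with that collector in place (the package and
service names are assumptions based on Debian):

class munin::node {
   package { "munin-node": ensure => installed }

   service { "munin-node":
      ensure  => running,
      require => Package["munin-node"],
   }

   # realise any plugin files declared virtually by munin::plugin
   File<| tag == "munin-plugin" |>
}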

This entry was originally posted in slightly different form to Server Fault.

There are several ways to run Tomcat applications. You can either run
Tomcat directly on port 80, or you can put a webserver in front of
Tomcat and proxy connections to it. I would highly recommend using
Apache as a front end. The main reason for this suggestion is that
Apache is more flexible than Tomcat. Apache has many modules that would
otherwise require you to code support yourself in Tomcat. For example,
while Tomcat can do gzip compression, it’s a single switch: enabled or
disabled. Sadly, Internet Explorer 6 mishandles compressed CSS and
JavaScript, and Tomcat gives you no way to exempt just those files for
just that browser. This is easy to do in Apache, but impossible in
Tomcat. Things like caching are also easier to do in Apache.
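
As a sketch of the sort of flexibility I mean, the standard
mod_deflate recipe from the Apache documentation looks something like
this:

# compress everything by default
SetOutputFilter DEFLATE
# Netscape 4.x only copes with compressed HTML
BrowserMatch ^Mozilla/4 gzip-only-text/html
# Netscape 4.06-4.08 cope with no compression at all
BrowserMatch ^Mozilla/4\.0[678] no-gzip
# IE masquerades as Mozilla/4 but copes fine, so undo the above;
# leave this line out and IE6 only ever receives compressed HTML
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html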

Having decided to use Apache to front Tomcat, you need to decide how
to connect them. There are several choices: mod_proxy (more accurately,
mod_proxy_http in Apache 2.2, but I’ll refer to it as mod_proxy),
mod_jk and mod_jk2. mod_jk2 is no longer under active development and
should not be used. This leaves us with mod_proxy or mod_jk.

Both methods forward requests from Apache to Tomcat. mod_proxy uses
the HTTP that we all know and love, while mod_jk uses AJP, a binary
protocol. The main advantages of mod_jk are:

  • AJP is a binary protocol, so it is slightly quicker for both ends to deal
    with and carries slightly less overhead than HTTP, but the difference is
    minimal.
  • AJP includes information such as the original host name, the remote host
    and the SSL connection. This means that ServletRequest.isSecure() works as
    expected, that you know who is connecting to you, and that you can do some
    sort of virtual hosting in your code.

A slight disadvantage is that AJP is based on fixed-size chunks and
can break with long headers, particularly request URLs with a long
list of parameters, but you should rarely be in the position of having
8K of URL parameters. (That would suggest you were doing it wrong. 🙂 )

It used to be the case that mod_jk provided basic load balancing
between two Tomcats, which mod_proxy couldn’t do, but with the new
mod_proxy_balancer in Apache 2.2, this is no longer a reason to prefer
one over the other.
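
For example, a minimal mod_proxy_balancer setup spreading requests
across two Tomcat instances might look like this (host names are
illustrative):

<Proxy balancer://tomcat-cluster>
    BalancerMember http://tomcat1.example.com:8080
    BalancerMember http://tomcat2.example.com:8080
</Proxy>
ProxyPass / balancer://tomcat-cluster/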

The position is slightly complicated by the existence of
mod_proxy_ajp. Of the two, mod_jk is the more mature, but
mod_proxy_ajp works in the same framework as the other mod_proxy
modules. I have not yet used mod_proxy_ajp, but would consider doing
so in the future, as mod_proxy_ajp is part of Apache, while mod_jk
involves additional configuration outside of Apache.
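
To illustrate that last point, mod_jk needs its own workers.properties
file in addition to the Apache directives; a minimal sketch, with an
illustrative file location and worker name:

# /etc/apache2/workers.properties
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009

and then, in the Apache configuration itself:

JkWorkersFile /etc/apache2/workers.properties
JkMount /myapp/* tomcat1

With mod_proxy_ajp, the equivalent would be a single ProxyPass line,
along the lines of ProxyPass /myapp ajp://localhost:8009/myapp.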

Given a choice, I would prefer an AJP-based connector, mostly due to
my second stated advantage rather than the performance aspect. Of
course, if your application vendor doesn’t support anything other than
mod_proxy_http, that does tie your hands somewhat.

You could use an alternative webserver like lighttpd, which does have
an AJP module. Sadly, my preferred lightweight HTTP server, nginx,
does not support AJP and is unlikely ever to do so, due to the design
of its proxying system.

If you’ve not heard of Puppet, it is a
configuration management tool. You write descriptions of how you want
your systems to look and it checks the current setup and works out what
it needs to do to move your system so it matches your description.
The idea is to write how it should look, not how to change the system.
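
For example, rather than writing the commands to install and start an
ssh server, you describe the state you want. A minimal sketch, using
Debian’s package and service names:

package { "openssh-server": ensure => installed }

service { "ssh":
   ensure  => running,
   require => Package["openssh-server"],
}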

Puppet uses a client (puppetd) that talks to the central server
(puppetmaster) over HTTPS. The default puppetmaster HTTP server is
WEBrick, which is a lightweight Ruby HTTP server. While it’s simple
and allows puppetmaster to work straight out of the box, its pure Ruby
structure and Ruby’s green thread architecture mean it doesn’t scale
beyond a simple Puppet setup. After a while, every medium to large
Puppet installation needs to move to the other HTTP server that Puppet
supports: Mongrel. Mongrel is a faster HTTP library, but supports far
fewer features. In particular, it doesn’t support SSL, which matters
with Puppet, as Puppet relies heavily on client certificate
verification for authentication. As a result, we need to put another
webserver in front to handle the SSL aspect. A nice side effect of
having to proxy to puppetmaster is that we can run several
puppetmaster processes, working around Ruby’s green threads problem.
In this blog post, I’m going to describe setting up nginx and Mongrel.

The first thing to do is to install the mongrel and
nginx packages.

apt-get install mongrel nginx

We need to run nginx on port 8140 and proxy to our mongrel servers on
different ports, so let’s move puppetmaster off 8140 and configure it
to use mongrel while we’re at it. Edit /etc/default/puppetmaster and
set the following variables:

SERVERTYPE=mongrel
PUPPETMASTERS=4
PORT=18140
DAEMON_OPTS="--ssl_client_header=HTTP_X_SSL_SUBJECT"

This tells the init.d script to use the mongrel server type and to
run four of them. The init.d script is clever enough to start the
right number of processes and to give each one its own port, starting
at 18140 for the first process, up to 18143 for the last one. The
DAEMON_OPTS option tells Puppetmaster how we’re going to pass the SSL
certificate information from nginx, so it can grant or refuse
permission.
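
Before going any further, it is worth restarting puppetmaster and
checking that the mongrel processes are listening where we expect:

/etc/init.d/puppetmaster restart
netstat -ltn | grep 1814

You should see the four mongrel processes listening on ports 18140 to
18143.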

Now to set up nginx. Put the following in
/etc/nginx/conf.d/puppetmaster.conf:

ssl                     on;
ssl_certificate /var/lib/puppet/ssl/certs/puppetmaster.example.com.pem;
ssl_certificate_key /var/lib/puppet/ssl/private_keys/puppetmaster.example.com.pem;
ssl_client_certificate  /var/lib/puppet/ssl/certs/ca.pem;
ssl_ciphers             SSLv2:-LOW:-EXPORT:RC4+RSA;
ssl_session_cache       shared:SSL:8m;
ssl_session_timeout     5m;

upstream puppet-production {
   server 127.0.0.1:18140;
   server 127.0.0.1:18141;
   server 127.0.0.1:18142;
   server 127.0.0.1:18143;
}

In this file we tell nginx where to find the server certificates for
your puppetmaster, so your clients can authenticate your server. We
also tell nginx the CA certificate to authenticate clients with, and
set up some SSL details required for Puppet. Finally, we create a
group of remote servers for our pack of mongrel puppetmasters, so we
can refer to them later. If you configured more or fewer mongrel
processes earlier, don’t forget to add or remove them here. You also
need to replace puppetmaster.example.com with your FQDN. If, at a
later stage, you find you need even more performance, you can easily
move some of your puppetmaster processes to a separate box and update
the upstream list to point at the remote server.
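
For example, with two extra processes on a second box, the upstream
block might become (the second host name is illustrative):

upstream puppet-production {
   server 127.0.0.1:18140;
   server 127.0.0.1:18141;
   server puppet2.example.com:18140;
   server puppet2.example.com:18141;
}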

Finally, we need to set up a couple of HTTP servers. Create
/etc/nginx/sites-enabled/puppetmaster with the following
contents:

server {
    listen                  8140;
    ssl_verify_client       on;
    root                    /var/empty;
    access_log              /var/log/nginx/access-8140.log;

    # Variables available from the SSL connection:
    #   $ssl_cipher          the cipher used for the established SSL connection
    #   $ssl_client_serial   the serial number of the client certificate
    #   $ssl_client_s_dn     the subject DN of the client certificate
    #   $ssl_client_i_dn     the issuer DN of the client certificate
    #   $ssl_protocol        the protocol of the established SSL connection

    location / {
        proxy_pass          http://puppet-production;
        proxy_redirect      off;
        proxy_set_header    Host             $host;
        proxy_set_header    X-Real-IP        $remote_addr;
        proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header    X-Client-Verify  SUCCESS;
        proxy_set_header    X-SSL-Subject    $ssl_client_s_dn;
        proxy_set_header    X-SSL-Issuer     $ssl_client_i_dn;
        proxy_read_timeout  65;
    }
}

server {
    listen                  8141;
    ssl_verify_client       off;
    root                    /var/empty;
    access_log              /var/log/nginx/access-8141.log;

    location / {
        proxy_pass          http://puppet-production;
        proxy_redirect      off;
        proxy_set_header    Host             $host;
        proxy_set_header    X-Real-IP        $remote_addr;
        proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header    X-Client-Verify  FAILURE;
        proxy_set_header    X-SSL-Subject    $ssl_client_s_dn;
        proxy_set_header    X-SSL-Issuer     $ssl_client_i_dn;
        proxy_read_timeout  65;
    }
}

This creates two servers, on ports 8140 and 8141, which both proxy
all requests to our group of mongrel servers, adding suitable headers
to pass on the SSL information. The only difference between them is
the X-Client-Verify header. This highlights the one problem with using
nginx with Puppet: because the success or failure of client
verification is not available as a variable before nginx 0.8.7, we
can’t have a single port handling both the usual client connections
and the initial unauthenticated connection where the client requests a
certificate to be signed. As a result, with this setup, you are
required to run puppet with --ca_port 8141 until the certificate has
been signed with puppetca.
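
The first run on a new client therefore looks something like this (the
server name is illustrative):

puppetd --test --server puppetmaster.example.com --ca_port 8141 --waitforcert 60

Once the certificate has been signed, the --ca_port option can be
dropped and the client will use port 8140 as normal.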

Fortunately, with nginx 0.8.7 and later, you can use the simpler
setup shown below, which replaces both files shown above with a single
server. Unfortunately, 0.8.7 is not yet available in any version of
Ubuntu, not even Karmic.

server {
  listen 8140;

  ssl                     on;
  ssl_session_timeout     5m;
  ssl_certificate         /var/lib/puppet/ssl/certs/puppetmaster.pem;
  ssl_certificate_key     /var/lib/puppet/ssl/private_keys/puppetmaster.pem;
  ssl_client_certificate  /var/lib/puppet/ssl/ca/ca_crt.pem;

  # the same cipher list as in the configuration above
  ssl_ciphers             SSLv2:-LOW:-EXPORT:RC4+RSA;

  # allow both authenticated clients and clients without certificates;
  # the latter are needed for initial certificate signing requests
  ssl_verify_client       optional;

  # obey the Puppet CRL
  ssl_crl                 /var/lib/puppet/ssl/ca/ca_crl.pem;

  root                    /var/tmp;

  location / {
    proxy_pass          http://puppet-production;
    proxy_redirect      off;
    proxy_set_header    Host             $host;
    proxy_set_header    X-Real-IP        $remote_addr;
    proxy_set_header    X-Forwarded-For  $proxy_add_x_forwarded_for;
    proxy_set_header    X-Client-Verify  $ssl_client_verify;
    proxy_set_header    X-SSL-Subject    $ssl_client_s_dn;
    proxy_set_header    X-SSL-Issuer     $ssl_client_i_dn;
    proxy_read_timeout  65;
  }
}

If you are running another webserver on the same machine, you may
want to delete /etc/nginx/sites-enabled/default, which attempts to
create a server listening on port 80 and will conflict with your
existing HTTP server.
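
For example:

rm /etc/nginx/sites-enabled/default
/etc/init.d/nginx restart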

If you follow these instructions, you should find yourself with a
better performing puppetmaster and significantly fewer “connection
reset by peer” and other related error messages.

I’ve just set up syntax highlighting for Puppet manifest files, and
thought I’d share the simple steps. The first thing to do is download
the syntax file from
http://www.reductivelabs.com/downloads/puppet/puppet.vim and save it
to ~/.vim/syntax/puppet.vim. Now, when the filetype is set to
“puppet”, vim will use this syntax file.
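
On the command line, something like this should do it:

mkdir -p ~/.vim/syntax
wget -O ~/.vim/syntax/puppet.vim \
    http://www.reductivelabs.com/downloads/puppet/puppet.vim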

That’s useful, but it would be even nicer if we could make vim know
that files ending in .pp are puppet files. It turns out this is very
easy to do: you need to create a file that detects the correct
filetype when you open a file. Put the following line in
~/.vim/ftdetect/puppet.vim:

au BufRead,BufNewFile *.pp   setfiletype puppet

Now, when you load a file ending in .pp, you should get nice syntax
highlighting. You can also make vim use special settings for the
puppet filetype by creating a vim script file in one of
~/.vim/ftplugin/puppet.vim, ~/.vim/ftplugin/puppet_*.vim and/or
~/.vim/ftplugin/puppet/*.vim. Vim has a lot of flexible hooks for
filetype-specific configuration; hopefully it should be fairly easy to
adapt these examples for other file formats.
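
As an illustration, such a ftplugin file might contain settings along
these lines (the exact values are purely a matter of taste):

" ~/.vim/ftplugin/puppet.vim
" indent manifests with four spaces
setlocal shiftwidth=4
setlocal softtabstop=4
setlocal expandtab
" tell comment-aware plugins that puppet comments start with #
setlocal commentstring=#\ %s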