Bad Password Policies

After the whole Heartbleed fiasco, I’ve decided to continue my march towards improving my online security. I’d already begun using LastPass to store my passwords and to generate a random password for each site, but I hadn’t finished the job: some sites still shared a password, and others had weaker passwords than I’d like. So I spent some time today improving my password position. Here are some of the bad examples of password policy I discovered along the way.

First up we have Live.com: a maximum of 16 characters from the Microsoft auth service. It does seem to accept any character, though.

[Screenshot: Live.com password form, 2014-04-15 21:36:57]

This excellent example is from creditexpert.co.uk, one of the credit agencies here in the UK. Not only do they restrict passwords to 20 characters, they limit the permitted special characters to @, ., _ or |. So much for teaching people how to protect themselves online.

[Screenshot: creditexpert.co.uk password rules, 2014-04-15 17:38:28]

Here’s Tesco.com after attempting to change my password to “QvHn#9#kDD%cdPAQ4&b&ACb4x%48#b”. If you can figure out how this violates their rules, I’d love to know. And before you ask, I tried without numbers and that still failed, so it can’t be the “three and only three” thing. The only other idea is that they meant “i.e.” rather than “e.g.”, but I didn’t test that.

[Screenshot: Tesco.com password change error, 2014-04-15 16:20:17]

Edit: Here is a response from Tesco on Twitter:

[Screenshot: Tesco’s response on Twitter, 2014-04-16 07:47:58]

Here’s a poor choice from ft.com, refusing to accept non-alphanumeric characters. On the plus side, they did allow the full 30 characters of the password.

[Screenshot: ft.com password form, 2014-04-15 15:22:08]

The finest example of a poor security policy is a company who will remain nameless due to their utter lack of security. Not only did they not use HTTPS, they accepted a 30 character password and silently truncated it to 20 characters. I know this because when I logged out and tried to log in again, my password was rejected; when I then used the “forgot my password” option, they emailed me the password in plain text, and it was just the first 20 characters of what I had set.

I have also been setting up two-factor authentication where possible. Most sites use the Google Authenticator application on your mobile to give you a six-digit code to type in, in addition to your password. I highly recommend you set it up too. There’s a useful list of sites that implement 2FA, with links to their documentation, at http://twofactorauth.org/.
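As an aside, those six-digit codes are TOTP (RFC 6238): an HMAC over the current 30-second time step, keyed with a secret shared between the site and your phone. Here is a minimal Python sketch of the mechanism, purely for illustration; use a vetted library for anything real:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, digits=6, period=30):
    # The shared secret is usually handed over as base32 (it's what
    # Google Authenticator enrolment QR codes contain).
    key = base64.b32decode(secret_base32, casefold=True)
    # The moving factor is the number of 30-second periods since the epoch.
    counter = struct.pack('>Q', int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp('JBSWY3DPEHPK3PXP'))  # a well-known example secret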

I realise that my choice of LastPass requires me to trust them, but I think the advantages outweigh the disadvantages of having many sites share the same password or use low strength ones. I know various people cleverer than me have looked into their system and failed to find any obvious flaws.

Remember people, when you implement a password policy, allow the following:

  • Any length of password. You don’t have to worry about length in your database, because once you hash the password it will be a fixed length, whatever the user typed (see the sketch after this list). You are hashing your passwords, aren’t you?
  • Any character. The more characters that can appear in a password, the harder it is to brute force, as you increase the number of combinations an attacker needs to try.
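To illustrate the first point, here’s a minimal sketch in Python (standard library only) of why the original password’s length never needs to reach your schema: the salted hash you store is a fixed size however long the input. For a real system, prefer a dedicated password hashing scheme such as bcrypt or scrypt; PBKDF2 appears here simply because it ships with Python.

import hashlib
import os

def hash_password(password):
    # A random 16-byte salt, stored alongside the derived key.
    salt = os.urandom(16)
    # PBKDF2-HMAC-SHA256 always derives a 32-byte key, whether the
    # password is 5 characters or 500.
    key = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
    return salt + key

# Both calls store exactly 48 bytes:
print(len(hash_password('short')))
print(len(hash_password('QvHn#9#kDD%cdPAQ4&b&ACb4x%48#b' * 10)))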

If you are going to place restrictions, please make sure the documentation matches the implementation, provide a matching client-side check so the user gets quick feedback, and say explicitly what is wrong with the password, rather than referring back to the incorrect documentation.

There are also many JavaScript password strength meters available to show users how secure their entered passwords are. They are probably a better way of providing feedback about security than arbitrary policies that actually harm it. As someone said to me on Twitter, it’s not like “password is too strong” was ever a bad thing.
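For a flavour of what such a meter does, here is a deliberately naive sketch (in Python for brevity, though real meters run as JavaScript in the browser; zxcvbn is a good production example, and the function name here is purely illustrative):

import math
import string

def naive_entropy_bits(password):
    # Crude model: assume an attacker must brute-force every character
    # class the password draws from. This overestimates strength for
    # dictionary words, which is why real meters are much smarter.
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(naive_entropy_bits('password1'))                       # low score
print(naive_entropy_bits('QvHn#9#kDD%cdPAQ4&b&ACb4x%48#b'))  # high score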

A New Chapter

It’s been a while since I posted anything to my personal site, but I figured I should update it with the news that I’m leaving Brighton (and the UK) after nearly nine years living by the seaside. I’ll be sad to leave this city, which is the greatest place to live in the country, but I have the opportunity to go and explore the world and I’d be crazy to let it pass me by.

So what am I doing? In ten short days, I plan to sell all my possessions, bar those I need to live and work day to day, and move to Spain for three months. I’m renting a flat in Madrid, where I’ll continue to work for my software development business and set about improving both my Spanish and my fitness.

If you want to follow my adventures, or read about the reasons for my change, then check out the Experimental Nomad website.

Multiple Crimes
mysql> select "a" = "A";
+-----------+
| "a" = "A" |
+-----------+
|         1 |
+-----------+
1 row in set (0.00 sec)

WTF? (via Nuxeo)
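For the record, the culprit is MySQL’s default collation, latin1_swedish_ci; the _ci suffix means case-insensitive, so “a” and “A” compare equal. If you want a case-sensitive comparison, one option is to cast to BINARY:

mysql> select BINARY "a" = "A";
+------------------+
| BINARY "a" = "A" |
+------------------+
|                0 |
+------------------+
1 row in set (0.00 sec)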

Letter to my MP regarding the Digital Economy Bill

I have just sent the following email to my MP, David Lepper, outlining my concerns about the Digital Economy Bill. I urge you to write to your MP with a similar letter.

Open Rights Group’s guide to writing to your MP

From: David Pashley <david@davidpashley.com>
To: David Lepper
Cc:
Bcc:
Subject: Digital Economy Bill
Reply-To:

Dear Mr Lepper,

I'm writing to you to express my concern at the Digital Economy
Bill which is currently working its way through the House of
Commons. I believe that the bill as it stands will have a negative
effect on the digital economy that the UK, and in particular
Brighton, have worked so hard to foster.

Sections 4-17 deal with disconnecting people reported as infringing
copyright. As they stand, these sections raise the possibility
that my internet connection could be disconnected as a result of the
actions of my flatmate. My freelance web development business is
inherently linked to my access of the Internet. I currently allow my
landlady to share my internet access with her holiday flat above me.
I will have to stop this arrangement for fear of a tourist's actions
jeopardising my business.

These sections will also result in many pubs and cafes, much
favoured by Brighton's freelancers, removing their free wifi. I
have often used my local pub's wifi when I needed a change of
scenery. I know a great many freelancers use Cafe Delice in the
North Laine as a place to meet other freelancers and discuss
projects while drinking coffee and working.

Section 18 deals with ISPs being required to prevent access to sites
hosting copyrighted material. The ISPs can insist on a court
injunction forcing them to prevent access. Unfortunately, a great
many ISPs will not want to deal with the costs of any court
proceedings and will just block the site in question. A similar law
in the United States, the Digital Millennium Copyright Act (DMCA),
has been abused time and time again by spurious copyright claims to
silence critics or embarrassments. A recent case is Microsoft
shutting down the entire Cryptome.org website because it was
embarrassed by a document the site hosted. There are many more
examples of abuse at http://www.chillingeffects.org/

A concern is that there is no requirement for the accuser to prove
infringement has occurred, nor is there a valid defence for a user
who has done everything possible to prevent infringement.

There are several ways to reduce copyright infringement of music
and movies without introducing new legislation: the promotion of
legal services like iTunes and Spotify, and easier access to legal
media, such as Digital Rights Management free music. Many of the
record labels and movie studios are failing to promote competing
legal services which many people would use if they were aware of
them. A fairer alternative to disconnection is a fine through the
courts.

You can find further information on the effects of the Digital
Economy Bill at http://www.openrightsgroup.org/ and
http://news.bbc.co.uk/1/hi/technology/8544935.stm

The bill has currently passed the House of Lords and its first
reading in the Commons. There is a danger that without MPs demanding
to scrutinise this bill, this damaging piece of legislation will be
rushed through Parliament before the general election.

I ask you to demand your right to debate this bill and to amend it
to remove sections 4-18. I would also appreciate a response to
this email. If you would like to discuss the issues I've raised
further, I can be contacted on 01273 xxxxxx or 07966 xxx xxx or via
email at this address.

Thank you for your time.

--
David Pashley
david@davidpashley.com
Mod_fastcgi and external PHP

Has anyone managed to get a standard version of mod_fastcgi to work correctly with FastCGIExternalServer? There seems to be a complete lack of documentation on how to get this to work. I have managed to get it working by removing some code which appears to completely break AddHandler. However, people on the FastCGI list told me I was wrong for making it work. So, if anyone has managed to get it to work, please show me some working config.

Reducing Coupling between modules

In the past, several of my Puppet modules have
been tightly coupled. A perfect example is Apache and Munin. When I
install Apache, I want munin graphs set up. As a result my apache class
has the following snippet in it:

munin::plugin { "apache_accesses": }
munin::plugin { "apache_processes": }
munin::plugin { "apache_volume": }

This should make sure that these three plugins are installed and that
munin-node is restarted to pick them up. The define was implemented like
this:

define munin::plugin (
      $enable = true,
      $plugin_name = false,
      ) {

   include munin::node

   file { "/etc/munin/plugins/$name":
      ensure => $enable ? {
         true => $plugin_name ? {
            false => "/usr/share/munin/plugins/$name",
            default => "/usr/share/munin/plugins/$plugin_name"
         },
         default => absent
      },
      links => manage,
      require => Package["munin-node"],
      notify => Service["munin-node"],
   }
}

(Note: this is a slight simplification of the define.) As you can see, the define includes munin::node, as it needs the definition of the munin-node service and package. As a result of this, installing Apache drags munin-node onto your server too. It would be much nicer if the apache class only installed the munin plugins if you also install munin on the server.

It turns out that this is possible, using virtual resources. Virtual resources allow you to define resources in one place, but not make them happen unless you realise them. Using this, we can make the file resource in munin::plugin virtual and realise it in our munin::node class. Our new munin::plugin looks like:

define munin::plugin (
      $enable = true,
      $plugin_name = false,
      ) {

   # removed "include munin::node"

   # Added @ in front of the resource to declare it as virtual
   @file { "/etc/munin/plugins/$name":
      ensure => $enable ? {
         true => $plugin_name ? {
            false => "/usr/share/munin/plugins/$name",
            default => "/usr/share/munin/plugins/$plugin_name"
         },
         default => absent
      },
      links => manage,
      require => Package["munin-node"],
      notify => Service["munin-node"],
      tag => munin-plugin,
   }
}

We add the following line to our munin::node class:

File<| tag == munin-plugin |>

The odd syntax in the munin::node class realises all the virtual resources that match the filter; in this case, any that are tagged munin-plugin. We’ve had to define this tag ourselves, as the auto-generated tags don’t seem to work. You’ll also notice that we’ve removed the munin::node include from the munin::plugin define, which means that we no longer install munin-node just by using the plugin define. I’ve used a similar technique for logcheck, so additional rules are not installed unless I’ve installed logcheck. I’m sure there are several other places where I can use this to reduce such tight coupling between classes.
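For context, a cut-down munin::node class with the collector in place might look something like this (a sketch; the real class will also manage munin-node’s configuration):

class munin::node {
   package { "munin-node":
      ensure => installed,
   }

   service { "munin-node":
      ensure  => running,
      require => Package["munin-node"],
   }

   # Realise every virtual plugin file declared via munin::plugin
   File<| tag == munin-plugin |>
}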

Maven and Grails 1.2 snapshot

Because I couldn’t find the information anywhere else: if you want to use Maven with a Grails 1.2 snapshot, use:

mvn org.apache.maven.plugins:maven-archetype-plugin:2.0-alpha-4:generate \
    -DarchetypeGroupId=org.grails \
    -DarchetypeArtifactId=grails-maven-archetype \
    -DarchetypeVersion=1.2-SNAPSHOT \
    -DgroupId=uk.org.catnip \
    -DartifactId=armstrong \
    -DarchetypeRepository=http://snapshots.maven.codehaus.org/maven2
Conversations regarding printers

I just had the following conversation with my linux desktop:

Me: “Hi, I’d like to use my new printer please.”

Computer: “Do you mean this HP Laserjet CP1515n on the network?”

Me: “Erm, yes I do.”

Computer: “Good. You’ve got a test page printing as we speak.
Anything else I can help you with?”

Sadly I don’t have any alternative modern operating systems to
compare it to, but having done similar things with linux over the last
12 years, I’m impressed with how far we’ve come. Thank you to everyone
who made this possible.

Tarballs explained

This entry was originally posted in slightly different form to Server Fault

If you’re coming from a Windows world, you’re used to using tools like zip or rar, which compress collections of files. In the typical Unix tradition of doing one thing and doing it well, you tend to have two different utilities: a compression tool and an archive format. People then use these two tools together to give the same functionality that zip or rar provide.

There are numerous different compression formats; the common ones used on Linux these days are gzip (sometimes known as zlib) and the newer bzip2, which achieves higher compression. Unfortunately, bzip2 uses more CPU and memory to deliver those higher rates of compression. You can use these tools to compress any file; by convention, files compressed with them carry the extensions .gz and .bz2 respectively. You use gzip and bzip2 to compress, and gunzip and bunzip2 to decompress, these formats.
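For example, with a hypothetical notes.txt:

# gzip notes.txt        # produces notes.txt.gz and removes the original
# gunzip notes.txt.gz   # restores notes.txt
# bzip2 notes.txt       # produces notes.txt.bz2 instead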

There are also several different types of archive formats available, including
cpio, ar and tar, but people tend to only use tar. These allow you to take a
number of files and pack them into a single file. They can also include path
and permission information. You can create and unpack a tar file using the tar
command. You might hear these operations referred to as “tarring” and
“untarring”. (The name of the command comes from a shortening of Tape ARchive.
Tar was an improvement on the ar format in that you could use it to span
multiple physical tapes for backups).

# tar -cf archive.tar list of files to include

This will create (-c) an archive into a file (-f) called archive.tar. (.tar is the conventional extension for tar archives.) You should now have a single file that contains five files (“list”, “of”, “files”, “to” and “include”). If you give tar a directory, it will recurse into that directory and store everything inside it.

# tar -xf archive.tar
# tar -xf archive.tar list of files

This will extract (-x) the previously created archive.tar. You can extract just the files you want from the archive by listing them at the end of the command line. In our example, the second line would extract “list”, “of” and “files”, but not “to” and “include”. You can also use

# tar -tf archive.tar

to get a list of the contents before you extract them.

So now you can combine these two tools to replicate the functionality of zip:

# tar -cf archive.tar directory
# gzip archive.tar

You’ll now have an archive.tar.gz file. You can extract it using:

# gunzip archive.tar.gz
# tar -xf archive.tar

We can use pipes to save us having an intermediate archive.tar:

# tar -cf - directory | gzip > archive.tar.gz
# gunzip < archive.tar.gz | tar -xf -

You can use - with the -f option to specify stdin or stdout (tar knows which one based on context).

We can do slightly better because, in an apparent breaking of the “one job well” idea, tar can compress its output and decompress its input itself, using the -z argument (I say apparent because it still uses the gzip and gunzip tools behind the scenes).

# tar -czf archive.tar.gz directory
# tar -xzf archive.tar.gz

To use bzip2 instead of gzip, use bzip2, bunzip2 and -j instead of gzip, gunzip and -z respectively (tar -cjf archive.tar.bz2 directory). Some versions of tar can detect a bzip2 archive when you use -z and do the right thing, but it is probably worth getting into the habit of being explicit.


mod_proxy or mod_jk

This entry was originally posted in slightly different form to Server Fault

There are several ways to run Tomcat applications. You can either run Tomcat directly on port 80, or you can put a webserver in front of Tomcat and proxy connections to it. I would highly recommend using Apache as a front end, mainly because Apache is more flexible than Tomcat. Apache has many modules that would require you to code the equivalent support yourself in Tomcat. For example, while Tomcat can do gzip compression, it’s a single switch: enabled or disabled. Sadly, you cannot turn compression off for just the CSS and JavaScript sent to Internet Explorer 6, which handles compressed content badly. That is easy to do in Apache, but impossible in Tomcat. Things like caching are also easier to do in Apache.

Having decided to use Apache to front Tomcat, you need to decide how to connect them. There are several choices: mod_proxy (more accurately, mod_proxy_http in Apache 2.2, but I’ll refer to it as mod_proxy), mod_jk and mod_jk2. Mod_jk2 is not under active development and should not be used. This leaves us with mod_proxy or mod_jk.

Both methods forward requests from Apache to Tomcat. mod_proxy uses the HTTP that we all know and love, while mod_jk uses AJP, a binary protocol. The main advantages of mod_jk are:

  • AJP is a binary protocol, so it is slightly quicker for both ends to deal with and carries slightly less overhead than HTTP, but the difference is minimal.
  • AJP includes information such as the original host name, the remote host and the SSL connection. This means that ServletRequest.isSecure() works as expected, that you know who is connecting to you, and that you can do some sort of virtual hosting in your code.

A slight disadvantage is that AJP is based on fixed-size chunks, and can break with long headers, particularly request URLs with long lists of parameters, but you should rarely be in the position of having 8K of URL parameters. (It would suggest you were doing it wrong. :) )

It used to be the case that mod_jk provided basic load balancing
between two tomcats, which mod_proxy couldn’t do, but with the new
mod_proxy_balancer in Apache 2.2, this is no longer a reason to choose between them.

The position is slightly complicated by the existence of mod_proxy_ajp. Between them, mod_jk is the more mature of the two, but mod_proxy_ajp works in the same framework as the other mod_proxy modules. I have not yet used mod_proxy_ajp, but would consider doing so in the future, as mod_proxy_ajp is part of Apache while mod_jk involves additional configuration outside of Apache.

Given a choice, I would prefer an AJP-based connector, mostly for the second advantage above rather than the performance aspect. Of course, if your application vendor doesn’t support anything other than mod_proxy_http, that does tie your hands somewhat.
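For reference, here is roughly what each option looks like in Apache configuration. This is only a sketch, assuming Tomcat on localhost with its default HTTP and AJP connector ports (8080 and 8009) and an application mounted at /app:

# mod_proxy_http: forward over plain HTTP to Tomcat's HTTP connector
ProxyPass        /app http://localhost:8080/app
ProxyPassReverse /app http://localhost:8080/app

# mod_proxy_ajp: the same mod_proxy framework, but speaking AJP
ProxyPass /app ajp://localhost:8009/app

# mod_jk: a JkMount in Apache plus a separate workers.properties file
JkWorkersFile /etc/apache2/workers.properties
JkMount /app/* worker1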

You could use an alternative webserver like lighttpd, which does have an AJP module. Sadly, my preferred lightweight HTTP server, nginx, does not support AJP and is unlikely ever to do so, due to the design of its proxying system.

