pfSense: SLAAC+DHCPv6 prefix delegation

pfSense is pretty awesome, but there’s one flaw in its interface configuration which can be quite annoying: you cannot use more than one IP configuration scheme per interface.
This is not much of an issue in most average use cases – and IP aliasing has a whole different set of configuration options. The latter does require you to create aliases for local IPs if there are multiple on an interface you want to cover with firewall rules, though.

A rather special use case – one which is an issue when you use the German provider NetCologne, for example – is that you might want to

  1. Use SLAAC to get an interface IPv6 address
  2. Use DHCPv6 to request a /48 prefix delegation via the autoconfigured IPv6 address

To my knowledge, the latter is sadly impossible to replicate with the GUI.

Which is why you have to do a bit of fiddling to get it to run.

Zeroth, ensure you have a valid SLAAC configuration on your WAN interface.

First, you’ll need to create a configuration for dhcp6c, which only requests your prefix delegation. Use the following as /etc/ipv6-prefix.conf, taking care to replace the interface name, if required:

interface pppoe0 {
    send ia-pd 1;
};

id-assoc pd 1 {
    prefix-interface vr1 {
        sla-id 0;
        sla-len 16;
    };
};

The above snippet will request a /48 prefix (64 network bits – sla-len (16) = 48).

Second, you will have to integrate it into the system startup. You can do this either by using the shellcmd package or by adding a script to /usr/local/etc/rc.d/ with the following content:

/usr/local/sbin/dhcp6c -c /etc/ipv6-prefix.conf pppoe0
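
A complete boot script only needs a shebang on top – a minimal sketch, with the filename being my assumption (remember to make it executable):

#!/bin/sh
# hypothetical /usr/local/etc/rc.d/dhcp6c-pd.sh – request the delegation at boot
/usr/local/sbin/dhcp6c -c /etc/ipv6-prefix.conf pppoe0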

Manually start it (or reboot, if you’re into that way of starting programs … you shouldn’t) and voila, you’ll have a prefix. You can then use the prefix you get in your DHCPv6 Server/RA config; you’ll need to manually enter it.

Disabling SSL < TLS in Dovecot and Postfix

Since many of you probably haven’t done this yet, here are the relevant snippets.

Postfix (main.cf):

smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
smtpd_tls_protocols = !SSLv2, !SSLv3
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3
smtp_tls_protocols = !SSLv2, !SSLv3
smtpd_tls_exclude_ciphers = aNULL, DES, 3DES, MD5, DES+MD5, RC4, eNULL, LOW, EXP, PSK, SRP, DSS

Dovecot (e.g. conf.d/10-ssl.conf):

ssl_protocols = !SSLv2 !SSLv3
ssl_cipher_list = ALL:!LOW:!SSLv2:!SSLv3:!EXP:!aNULL
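
To verify from the outside that SSLv3 is really gone, both of these handshakes should now fail (the hostname is a placeholder, and your local openssl build must still ship the -ssl3 flag):

$ openssl s_client -connect mail.example.com:993 -ssl3
$ openssl s_client -connect mail.example.com:25 -starttls smtp -ssl3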

[Bug watch] Erlang and SHA256 SSL certificates

A quick bug watch post to help with Googleability.

If you’re using Erlang (e.g. with CouchDB) in a still-in-use but older version, like the Erlang R14 shipped with Ubuntu 12.04, you might get the following error:

[10:35:50.859 UTC] [<0.8389.46>] [error] gen_server <0.8389.46> terminated with reason: {{{badmatch,{error,{asn1,{'Type not compatible with table constraint',{{component,'Type'},{value,{5,<<>>}},{unique_name_and_value,id,{1,2,840,113549,1,1,11}}}}}}},[{public_key,pkix_decode_cert,2},{ssl_certificate,trusted_cert_and_path,3},{ssl_handshake,certify,7},{ssl_connection,certify,2},{ssl_connection,next_state,3},{gen_fsm,handle_msg,7},{proc_lib,init_p_do_apply,3}]},{gen_fsm,sync_send_all_state_event,[<0.8390.46>,start,infinity]}}
[10:35:50.860 UTC] [<0.8389.46>] [error] CRASH REPORT Process <0.8389.46> with 0 neighbours exited with reason: {{{badmatch,{error,{asn1,{'Type not compatible with table constraint',{{component,'Type'},{value,{5,<<>>}},{unique_name_and_value,id,{1,2,840,113549,1,1,11}}}}}}},[{public_key,pkix_decode_cert,2},{ssl_certificate,trusted_cert_and_path,3},{ssl_handshake,certify,7},{ssl_connection,certify,2},{ssl_connection,next_state,3},{gen_fsm,handle_msg,7},{proc_lib,init_p_do_apply,3}]},{gen_fsm,sync_send_all_state_event,[<0.8390.46>,start,infinity]}} in gen_server:terminate/6

I got this one when trying CouchDB replication via SSL.

The issue? Older Erlang versions do not support SHA256 signed SSL certificates.

It fails with the above rather useless message, which only gives vague hints at what is happening (a type mismatch in the certificate-decoding function).
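
If you want to confirm that a SHA-256-signed certificate is indeed what you’re feeding it (the path is a placeholder), a quick check:

$ openssl x509 -in /path/to/cert.pem -noout -text | grep 'Signature Algorithm'
        Signature Algorithm: sha256WithRSAEncryption

That algorithm corresponds to the OID 1.2.840.113549.1.1.11 buried in the error above.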


Your ticket system needs more monitoring

Often enough, ticket systems have one rather common problem: they do not track actual business processes.

Take my case, for example.

On 2014-07-01, I used an online form to order some internets from NetCologne for my new abode in Cologne. I got an email confirmation a few minutes later, informing me that I’d probably get my line connected on 2014-07-24 and that I’d receive further instructions via snail mail.

The late connection date is due to the line being operated by Deutsche Telekom, one of whose techs has to be requested to do the actual quality measurements and hook the line up to the local ISP. (Yes, really, but that’s a rather different affair.)

I sent out the SEPA mandate a bit late because I kind of forgot it was supposed to be sent out.

Fast forward to 2014-07-21: I haven’t received any kind of information yet. I call the hotline. They tell me that “despite what we say, 4-6 weeks can happen”, without any further comment on my situation.

Fast forward to 2014-08-11. I call again to ask what the hell is happening; somebody tells me they can see there’s a problem with my account, that they’d be happy to help, and that they’d do everything necessary.

2014-08-21. I again call, rather chilled and relaxed from a short vacation, and ask if they’d kindly supply me with some kind of information. “There’s been a technical problem”, they say. “You should be receiving info soon!”

2014-08-28. Call to the hotline. Get forwarded to L2. “Yeah, I found your ticket here. Says so that the Telekom tech checked the line on 2014-07-24 (!) and said it was okay. The system got stuck in a weird state. You’ll just have to go to a store and get your device.”

Cue me being slightly flabbergasted. It was after closing time for the stores, so I couldn’t go right away. I then remembered that I had forgotten to ask for my credentials – I had never received anything from NetCologne other than the initial confirmation of my order.

Call to the hotline again. “Yes, I can see here that due to a system error, your order confirmation never got sent out. I asked some people here to speed up the whole process, you should have Internet really soon now!”.

Next day, I visit the store. Pick up a router. Unpack it, notice that the splitter for the DSL line was not in the box, try to connect the box, connection doesn’t work. Borrow a splitter from a friend, connect it again, still nothing.

2014-09-01. I call the hotline to kindly inquire as to why my modem isn’t getting a DSL sync.

“Well, I see here that your line is supposed to be hooked up on 2014-09-22.” “Uhm, you told me last week it was hooked up and I just needed to get my box?” “I’m just the tech guy, and it says here your line will be hooked up on 2014-09-22.” Redial, ask for accounts, inquire as to wtf happened. “Yes, we had to order a new connection because of reasons, and the Telekom technician delay is the usual three weeks.” Cue mildly irritated questioning from my side. “I’m sorry, I can only escalate this to conflict management to see if we can speed up the process.” Do so, hang up.

So, what went wrong? One sample scenario:

  1. NetCologne receives a ticket, autoresponder acknowledges reception.
  2. Employee works on ticket, maybe notices the lack of a mandate, and pushes it back for follow-up.
  3. Employee goes on vacation; someone else possibly processes the scanned mandate, but the ticket system has the ticket locked or similar, preventing an “information received” update.
  4. External callback from the Telekom, line is okay. Noted somewhere in the ticket system.
  5. Employee returns to work; the ticket does not appear actionable.
  6. User starts enquiring particularly loudly, someone looks up the ticket and notices the confirmation wasn’t even sent.
  7. Slight crapping of pants.
  8. Escalate ticket to techies to check connection.
  9. Techies think $something is wrong with the line, issue external call to Telekom.
  10. Customer pissed.

That’s just one sample scenario, mind, but something along those lines will most likely have happened.

Why did this even happen? Because there’s no externally controlled process management.

The following steps should happen instead:

  1. A new customer gets accepted; this spawns a checklist containing items like “order confirmation”, “line check”, “connection delegation”, “provisioning configuration”, etc.
  2. Checklist items should be viewable by anyone and acked with a reference to the tickets that ack them.
  3. The checklist should escalate when items exceed their usual timeframes or nobody is working on them.

It shouldn’t be particularly hard to implement a system like this, probably even with some OTRS or RT magic for the low-end variant. For companies running detached, ticket-based operations, it’s an absolute must that some sort of control exists to ensure processes actually get handled.

Otherwise you have “customers” you fail to convert into money for over two months, who are increasingly pissed for no better reason than “sorry, we forgot about you” – even though they’ve regularly been saying hi.

Amazon and Hachette

Amazon and Hachette are currently in a bit of a turf war over what they expect from each other with regard to ebooks and pricing. The TL;DR version is “Amazon thinks Hachette are grubby moneypinchers (read: wants more money), Hachette thinks Amazon are thieving scumbags (read: wants more money)”.

The thing is that Amazon has now escalated this to a shooting war. They officially declared that Amazon is now:

  1. Not selling ebooks from Hachette
  2. Not stocking supplies for Hachette physical books
  3. Not allowing preorders for Hachette books

This leads, in turn, to the following customer effects:

  1. Most Kindle users won’t be reading Hachette ebooks (due to a medium-height walled garden)
  2. All Hachette books will be ordered on demand from the publisher, making Amazon’s usual one-to-two-day deliveries a pipe dream

Amazon actually encourages people to use their competitors to buy Hachette books. Many people think that Amazon is shooting themselves in the foot with this tactic, as they’re just excluding themselves from the potential revenue.

What people aren’t actually considering is that the humongous spread of Kindles makes getting a Hachette ebook nigh-on impractical for your average customer. They have a lot of books in their Kindle library; the advantages of the “instant buy, instant availability” system are a given for Kindle users. Of course they could try to get a DRM-free ebook (which, depending on your regional market, is a hassle), but Hachette itself only offers its ebooks with Adobe DRM, so that’s a no-go for Kindles.

So, if you really want that book offered by Hachette, and own a Kindle, you either have to

  1. Buy a hardcopy version
  2. Get another expensive ereader

(Well, or maybe get a mobile app, but that’s a huge YMMV point).

So, imagine you just want to read a book right now. The Amazon storefront doesn’t even show the books as available. Will you order a paperback? No. Will you, on mobile, hassle yourself with getting a copy that might work on your computer? No. You’ll just buy another book.

And that’s where the power of this boycott lies.

Authenticating sudo with the SSH agent

I recently stumbled upon the rather intriguing idea of using your SSH agent to do… sudo authentication!

Sounds weird, right? But somebody implemented it: pam_ssh_agent_auth. I haven’t audited the code, but it mostly does what it’s supposed to and doesn’t appear to be malicious.

What it is, though, is a PAM module that gives you an ‘auth’ module for PAM.[1] As we know, the ‘auth’ facility does the whole business of validating that a user is who they claim to be by asking for credentials. Usually, we see e.g. sudo asking the user for their password.

The problem with that: remembering all those sudo passwords for remote hosts you’re administering – because, after all, you aren’t logging in as root directly, and you don’t use the same password at the other end all the time, right? Well, except if you’re using LDAP, anyway. But even then, you’d still have to enter the password (but it is the same, and you’re probably feeling fancy with ansible anyway.)

Enter pam_ssh_agent_auth – just include it in your PAM configuration and have sudo keep SSH_AUTH_SOCK. If you now connect with your SSH agent forwarded, PAM will check the public key you specify against your forwarded SSH agent and, if that check succeeds, proceed along the PAM chain with you happily authenticated. Entering a password? Only when you unlock the SSH agent.

Now that the concept has been explained, let’s think about consequences.

Security considerations

Is this method inherently insecure?

Well, not per se; if you think using an SSH agent is okay, then using it to replace a password is, in principle, okay too.

Can this authentication be exploited?

There are two possible scenarios I can imagine:

  1. Someone manages to take over the SSH agent.
  2. Someone modifies the specified authorized_keys file.

I personally do not assume that taking over the SSH agent is a significant risk; you’re probably the admin setting this up, so you trust the server and the machine you’re connecting from. The only parties on the remote side that could abuse the auth socket are root, your own user, and someone with an 0day – and being afraid of the last won’t get you anywhere. Thus we can safely disregard that.

The only real problem I see is that somebody manages to overwrite the authorized_keys file. pam_ssh_agent_auth allows you to specify where the authorized key files are kept – you can allow them to be in any place you’d like, and there are shorthand macros for the user’s home, the system’s hostname and the user name itself. A setup I personally like is using $HOME/.ssh/authorized_keys, because nothing has to change in place.


Anyone who can somehow modify or add to your authorized_keys file can take over your account and its sudo privileges!

Sample attack scenario:

  1. You’re an idiot and ~/.ssh/authorized_keys is world-writable.
  2. Someone else on the system appends their own key to your authorized_keys.
  3. They are connected with their own SSH agent and just do a sudo -l -u $you.
  4. This will now work because PAM asks the attacker’s SSH agent to unlock their key.

Is this an issue? Only if your users are idiots. Or 0day, but see above.

The easy way to work around this is to simply use a file controlled only by root, i.e. create something like /etc/security/sudoers/%u.key for each user. Or just a single globally defined one that you pipe new keys into, whatever floats your boat.
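
With the module’s macros, that root-controlled variant would look something like this in /etc/pam.d/sudo – a sketch that reuses the path from above; adapt it to wherever you actually deploy keys:

auth sufficient pam_ssh_agent_auth.so file=/etc/security/sudoers/%u.key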

But beyond exercising basic care, this isn’t a particularly viable attack scenario in my case either.

If anyone comes up with a good one, please let me know.

How to implement it

Simple! Just run this Puppet manifest if you’re running Debian/Ubuntu and trust me. You probably shouldn’t, but please look at the manifest anyway and improve my Puppetfu by giving clever comments about how I should approach this ‘style’ of sharing configuration.

Essentially, you need to do the following steps:

  1. Install pam_ssh_agent_auth – just use my Debian/Ubuntu repos (deb $your_release main) or go to the official site.
  2. Add SSH_AUTH_SOCK to the env_keep defaults in /etc/sudoers.
  3. Add auth sufficient pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys to /etc/pam.d/sudo, ideally before the common-auth include (see the sketch below).
  4. That’s it. Open a new connection; sudo -k; sudo -l should work without you having to enter a password.[2]

Simple as that.
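
Put together, steps 2 and 3 amount to something like this – a sketch, assuming default paths:

# /etc/sudoers (edit with visudo):
Defaults    env_keep += "SSH_AUTH_SOCK"

# /etc/pam.d/sudo, above the common-auth include:
auth    sufficient    pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys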

  1. If you really don’t know what PAM is about, read this article to get a bit of an overview.
  2. If not – that’s what you kept that other shell around for, the one you didn’t close or reuse just now!

Allowing your users to manage their DNS zone

You’ve been in this situation before: you’re playing host for a couple of friends (or outright customers) to whom you’re giving virtual machines on that blade server you’re likely renting from a hosting provider. You’ve got everything mostly set up right, and even wrangled libvirt so that your users can remotely restart and VNC into their own machines (article on this is pending).

But then there’s the issue of allowing people to update the DNS. If you give them access to a zone file, that sort of works – but you either have to give them access to the machine running the DNS server, or rig up some rather fuzzy and failure-prone system to transfer the zone files to where they’re actually useful. Neither case is ideal.

So here’s how to do it right – by using TSIG keys and nsupdate. I assume you’re clever enough to replace obvious placeholder variables. If you aren’t, you shouldn’t be fiddling with this anyway.

The goal will be that users can rather simply use nsupdate on their end without ever having to hassle the DNS admin to enter a host into the zone file for them.

Generating TSIG keys

This is a simple process; you need dnssec-keygen, which ships with bind9utils, for example; you can install that without having to install BIND itself, for what it’s worth. Then, you run:

# dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST $username

For each user $username you want to give a key to. Simple as that. Sadly, be careful not to use anything other than HMAC-MD5, since that’s what TSIG wants to see here.

You’ll end up with two files, namely K${username}+157+${somenumber}.{key,private}; the .key file contains the public key, the .private file the private key.

Server configuration

Simply define or modify the following sections in your named (BIND) configuration:

  1. Define the key:
    key "$username." {
      algorithm hmac-md5;
      secret "$(public key - contents of the .key file)";
    };
  2. Allow the key to update the zone:
    zone "your.domain.tld" {
      allow-update { key "$username."; };
    };
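
After adding both, reload named so the changes take effect; with a standard rndc setup that’s simply:

# rndc reload
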
If you’re running PowerDNS instead: TSIG support is officially experimental in PDNS; I’m only copypasting the instructions here and haven’t checked them for correctness. All input examples manipulate the SQL backend.

  1. Set experimental-rfc2136=yes. If you do not change allow-2136-from, any IP can push dynamic updates (as with the BIND setup).
  2. Push the TSIG key into your configuration:
    > insert into tsigkeys (name, algorithm, secret) \
      values ('$username', 'hmac-md5', '$(public key)');
  3. Allow updates by the key to the zone:
    > select id from domains where name='your.domain.tld';
    > insert into domainmetadata (domain_id, kind, content) \ 
      values (X, 'TSIG-ALLOW-2136', '$username');
  4. Optionally, limit updates to a specific IP, X as above:
    > insert into domainmetadata (domain_id, kind, content) \
      values (X, 'ALLOW-2136-FROM', 'a.b.c.d/32');
You’re probably getting ready to berate me anyway, elitist schmuck. Do it yourself.

Client usage

Ensure that you supply the private key file to your user. (They don’t need the public key.)

Using nsupdate on a client is a rather simple (if not entirely trivial) affair. This is an example session:

nsupdate -k $privatekeyfile
> server dns.your.domain.tld
> zone your.domain.tld
> update add host.your.domain.tld 86400 A a.b.c.d
> show
> send

This will add host.your.domain.tld as an A record with IP a.b.c.d to the zone your.domain.tld. You get the drift. The syntax is as you’d expect, and is very well documented in nsupdate(1).

You could also think about handing out pre-written files to your users, or a little script to do it for them (see the sketch below), or handing out puppet manifests to get new machines to add themselves to your DNS.
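
Such a script could be as small as this – a sketch, with the key path and domain as placeholders you’d adapt:

#!/bin/sh
# hypothetical dns-add.sh; usage: ./dns-add.sh myhost a.b.c.d
HOST="$1"
IP="$2"
nsupdate -k "$HOME/.keys/$USER.private" <<EOF
server dns.your.domain.tld
zone your.domain.tld
update add ${HOST}.your.domain.tld 86400 A ${IP}
send
EOF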

Have fun.

SEPA and you

For the average German, SEPA changes quite a lot about how bank transfers work. Until now, we were used to the following:

  • Sender: free-text field
  • Recipient: free-text field, account number, bank code (Bankleitzahl)
  • Payment reference (Verwendungszweck): 378 characters (14 × 27)
  • An optional type marker (salary payment, etc.)
  • Booking date, value date
  • Amount

The text fields are (by now) unchecked, although your bank usually won’t let you enter arbitrary text as the sender.

As the recipient, you usually only notice the transaction once your bank deigns to book it onto your account.

The payment reference as we know it was often a pitiful heap of text, and especially in web interfaces usually almost unreadable, since those don’t respect the field’s fixed-width layout. Above all, though, it was free text that you had to interpret yourself.

With SEPA, all of this becomes more programmatic. Gone is the old format, called DTAUS in Germany, with its low-level definition that existed so there would be specifications for hardware able to read the format directly.

SEPA transfers are XML, with all the advantages and disadvantages that brings.

So if you’ve ever wondered what all those funny fields on a SEPA transfer on your account actually mean, listen up.

The new format for submitting transfers is ISO 20022, “UNIFI” (UNIversal Financial Industry message scheme). What you as an end user send to the bank is called a “Payment Initiation”, abbreviated “pain”. They actually say that without batting an eye.

A pain message contains the following fields, which eventually end up on your statement:

  • Name, as a free-text field
  • IBAN, BIC – the “new” account number and bank code, only now globally valid.
    “International Bank Account Number”, exactly that. For us Germans it is composed of “DE”, two check digits, the old Bankleitzahl and the account number.
    “Bank Identification Code”. Among other things, the bank’s country can be read from the BIC, plus – where used – details such as the bank’s branch. It is only a transitional solution and will become unnecessary for transfers by 2016 or so. Examples:

    • COKSDE33XXX – Kreissparkasse Köln: Cologne Kreissparkasse, Germany. The “33” is the location code, which doesn’t have to consist of digits but may also contain letters. There seems to be a standard for it, but it isn’t public. The “XXX” comes from the KSK not using any branch identifier, while the code has to be 11 characters long here.
    • MALADE51MNZ – Sparkasse Mainz: Good question. It looks like “Mainzer Landesbank”; the 51 surely has something great to say as well, only “MNZ” looks obvious.
    • DEUTDEFFXXX – Deutsche Bank, headquartered in Frankfurt. Branch codes exist here too: Deutsche Bank Cologne, for example, uses DEUTDEDK402 for its branch(es) there.
  • Sequence type: SEPA is context-sensitive, i.e. a payment carries along whether it is a one-off transfer or a recurring payment. That is what this field is for. It also distinguishes whether it is the first, an ongoing or the last payment of a sequence.
  • EREF: the end-to-end customer reference. It gives the payment a unique ID (assigned by the originator). The advantage: if a payment bounces, it still carries exactly this ID, so you don’t have to do any awkward matching.
  • MREF: the mandate reference. Effectively, this is the customer number you have with the payee. It lets you filter your data unambiguously, again without parsing extra free text.
  • CRED: the creditor ID (“Gläubiger-Identifikationsnummer”). This is a number uniquely assigned – by the Deutsche Bundesbank, for example – identifying whoever is collecting the money. It avoids having to parse the free-text field, problems with companies renaming themselves, and so on.
  • SVWZ: the classic payment reference. Fitting for the Twitter generation: 140 characters.
  • Booking date, value date

Thanks to the defined standard, the big advantage is that you can see payments as soon as they are submitted – and not only once they are value-dated.

So now you have an overview of what all those funny fields mean and what you can learn from them – or maybe even put to use. If you have further questions, don’t hesitate to ask.

Simple index of external media on Linux

If you’re not a fan of any kind of web-based or GUI application indexing the files on your external media for you, there’s a far simpler solution for the command-line aficionados out there: use locate.

locate is usually known as the prepared man’s find, as it offers a subset of the functionality (finding files by name) with the advantage of being nearly instantaneous. It does this by having updatedb index your filesystem into a simple database, which locate then queries.

Normally, this does fairly well for your usual administrative tasks like “Where the hell is this file?”.

But, being a nice tool, locate also allows you to generate custom databases. Which is pretty useful when handling external drives and having an easy index of them.

I recommend creating ~/.locatedbs and storing database files there kind of like this:

updatedb -U $mountpoint -o $HOME/.locatedbs/$label

This can be explicitly queried like this:

locate -d $HOME/.locatedbs/$label $pattern

This works pretty well with modern environments where the mountpoint includes the label of the device, as the path in the database is the only (easy) way to tell which drive the file you’re looking for is on:

$ locate -d ~/.locatedbs/imbrium.db win8-usb.img

Of course, the usability here still sucks. Recent versions of locate support setting the environment variable LOCATE_PATH, which specifies (depending on the version: additional) databases to be searched. In the case of Debian and Ubuntu, it’s an additional database path. Thus, by inserting

export LOCATE_PATH=$(echo $HOME/.locatedbs/* | sed 's/ /:/g')

into your shell profile, any future logins will be able to simply use locate to search all indexed external drives.
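
With that in place, the explicit -d from above becomes unnecessary:

$ locate win8-usb.img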

To further increase usability, you’d ideally call an update script shortly before unmounting a drive instead of doing it manually, but I haven’t yet found a convenient way to do so neatly.
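
If you want to script that part away, a minimal wrapper along these lines would do – a sketch, with the script name and label derivation being my assumptions rather than a tested tool:

#!/bin/sh
# hypothetical "uneject": refresh the locate database for a mounted drive, then unmount it
# usage: uneject /media/$USER/imbrium
MOUNTPOINT="$1"
LABEL="$(basename "$MOUNTPOINT")"
mkdir -p "$HOME/.locatedbs"
updatedb -U "$MOUNTPOINT" -o "$HOME/.locatedbs/$LABEL.db"
umount "$MOUNTPOINT"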