pfSense: SLAAC+DHCPv6 prefix delegation

pfSense is pretty awesome, but there is one annoying flaw in its interface configuration: you cannot use more than one IP configuration scheme per interface.
This is not much of an issue in most average use cases, and IP aliasing has a whole different set of configuration options. The latter does require you to create aliases for local IPs if an interface has multiple addresses you want to cover with firewall rules, though.

A rather special use case, and an issue if you use the German provider NetCologne, for example, is that you might want to

  1. Use SLAAC to get an interface IPv6 address
  2. Use DHCPv6 to request a /48 prefix delegation via the autoconfigured IPv6 address

With the GUI, the latter is, to my knowledge, sadly impossible to replicate.

Which is why you have to do a bit of fiddling to get it to run.

Zeroth, ensure you have a valid SLAAC configuration on your WAN interface.

First, you'll need to create a configuration for dhcp6c which only requests your prefix delegation. Use the following as /etc/ipv6-prefix.conf, taking care to replace the interface names if required:

interface pppoe0 {
    send ia-pd 1;
};

id-assoc pd 1 {
    prefix-interface vr1 {
        sla-id 0;
        sla-len 16;
    };
};

The above snippet will request a /48 prefix (64 network bits minus sla-len (16) = 48).

Secondly, you will have to integrate it into the system startup. You can either do this by using the shellcmd package or by adding a script under /usr/local/etc/rc.d/ with the following content:

/usr/local/sbin/dhcp6c -c /etc/ipv6-prefix.conf pppoe0

Manually start it (or reboot, if you're into that way of starting programs … you shouldn't) and voilà: you'll have a prefix. You can then use the prefix you get in your DHCPv6 Server/RA config; you'll need to enter it manually.

Disabling SSL < TLS in Dovecot and Postfix

Since many of you probably haven't done this yet, the relevant Postfix settings (main.cf):


smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3
smtpd_tls_protocols = !SSLv2, !SSLv3
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3
smtp_tls_protocols = !SSLv2, !SSLv3
smtpd_tls_exclude_ciphers = aNULL, DES, 3DES, MD5, DES+MD5, RC4, eNULL, LOW, EXP, PSK, SRP, DSS


And for Dovecot (typically conf.d/10-ssl.conf):

ssl_protocols = !SSLv2 !SSLv3
ssl_cipher_list = ALL:!LOW:!SSLv2:!SSLv3:!EXP:!aNULL
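As a quick sanity check (independent of either daemon), you can ask the local openssl what a cipher exclusion list along those lines leaves enabled:

```shell
# Expand an exclusion list into the concrete ciphers it still allows,
# one per line; anything aNULL/eNULL/LOW/EXP is filtered out
openssl ciphers 'ALL:!LOW:!EXP:!aNULL:!eNULL' | tr ':' '\n' | head -n 5
```

The exact list depends on your openssl build, so pipe it through grep if you're hunting for a specific suite.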

[Bug watch] Erlang and SHA256 SSL certificates

A quick bug watch post to help with Googleability.

If you're using Erlang (e.g. with CouchDB) in a version still in widespread use, like Erlang R14 on Ubuntu 12.04, you might get the following error:

[10:35:50.859 UTC] [<0.8389.46>] [error] gen_server <0.8389.46> terminated with reason: {{{badmatch,{error,{asn1,{'Type not compatible with table constraint',{{component,'Type'},{value,{5,<<>>}},{unique_name_and_value,id,{1,2,840,113549,1,1,11}}}}}}},[{public_key,pkix_decode_cert,2},{ssl_certificate,trusted_cert_and_path,3},{ssl_handshake,certify,7},{ssl_connection,certify,2},{ssl_connection,next_state,3},{gen_fsm,handle_msg,7},{proc_lib,init_p_do_apply,3}]},{gen_fsm,sync_send_all_state_event,[<0.8390.46>,start,infinity]}}
[10:35:50.860 UTC] [<0.8389.46>] [error] CRASH REPORT Process <0.8389.46> with 0 neighbours exited with reason: {{{badmatch,{error,{asn1,{'Type not compatible with table constraint',{{component,'Type'},{value,{5,<<>>}},{unique_name_and_value,id,{1,2,840,113549,1,1,11}}}}}}},[{public_key,pkix_decode_cert,2},{ssl_certificate,trusted_cert_and_path,3},{ssl_handshake,certify,7},{ssl_connection,certify,2},{ssl_connection,next_state,3},{gen_fsm,handle_msg,7},{proc_lib,init_p_do_apply,3}]},{gen_fsm,sync_send_all_state_event,[<0.8390.46>,start,infinity]}} in gen_server:terminate/6

I got this one when trying CouchDB replication via SSL.

The issue? Older Erlang versions do not support SHA256-signed SSL certificates.

It will fail with the above rather useless message, which only gives vague hints at what happened (a type mismatch in the certificate-decoding function).
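For reference, the OID in the error, 1.2.840.113549.1.1.11, is sha256WithRSAEncryption. You can check what a certificate is signed with via openssl; here demonstrated against a throwaway self-signed certificate (the paths are illustrative):

```shell
# Generate a throwaway SHA-256-signed certificate ...
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
    -subj '/CN=example' -keyout /tmp/test-key.pem -out /tmp/test-cert.pem 2>/dev/null

# ... and print its signature algorithm; "sha256WithRSAEncryption" is what
# trips up old Erlang releases
openssl x509 -in /tmp/test-cert.pem -noout -text | grep 'Signature Algorithm' | head -n 1
```

The pragmatic ways out are upgrading Erlang or re-issuing the certificate with a SHA-1 signature.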


Your ticket system needs more monitoring

Often enough, ticket systems have one rather common problem: they do not track actual business processes.

Take my case, for example.

On 2014-07-01, I used an online form to order some internets from NetCologne for my new abode in Cologne. I got an email confirmation a few minutes later, informing me that I'd probably get my line connected on 2014-07-24 and that I'd receive further instructions via snail mail.

The late connection date is due to the line being operated by Deutsche Telekom, one of whose techs has to be requested to do the actual measuring of quality and hooking it up to the local ISP. (Yes, really, but that's a rather different affair.)

I sent out the SEPA mandate a bit late because I kind of forgot it was supposed to be sent out.

Fast forward to 2014-07-21: I haven't received any kind of information yet. I call the hotline. They tell me that "despite what we say, 4-6 weeks can happen", without any further comment on my situation.

Fast forward to 2014-08-11. I call again to ask what the hell is happening; somebody tells me they see there's a problem with my account, that they'd be happy to help, and that they'd do everything necessary.

2014-08-21. I again call, rather chilled and relaxed from a short vacation, and ask if they'd kindly supply me with some kind of information. "There's been a technical problem", they say. "You should be receiving info soon!"

2014-08-28. Call to the hotline. Get forwarded to L2. "Yeah, I found your ticket here. Says the Telekom tech checked the line on 2014-07-24 (!) and said it was okay. The system got stuck in a weird state. You'll just have to go to a store and get your device."

Cue me being slightly flabbergasted. It was after closing time for the stores, so I couldn't go right away. I remembered that I had forgotten to ask for my credentials; I never got anything from NetCologne other than the entry receipt of my order.

Call to the hotline again. "Yes, I can see here that due to a system error, your order confirmation never got sent out. I've asked some people here to speed up the whole process; you should have Internet really soon now!"

Next day, I visit the store. Pick up a router. Unpack it, notice that the splitter for the DSL line is not in the box, try to connect the box anyway, connection doesn't work. Borrow a splitter from a friend, connect it again, still nothing.

2014-09-01. I call the hotline to kindly inquire as to why my modem isn't getting a DSL sync.

"Well, I see here that your line is supposed to be hooked up on 2014-09-22." "Uhm, you told me last week it was hooked up and I just needed to get my box?" "I'm just the tech guy, and it says here your line will be hooked up on 2014-09-22." Redial, ask for accounts, inquire as to wtf happened. "Yes, we had to order a new connection because of reasons, and the Telekom technician delay is the usual three weeks." Cue mildly irritated questioning from my side. "I'm sorry, I can only escalate this to conflict management to see if we can speed up the process." Do so, hang up.

So, what went wrong? One sample scenario:

  1. NetCologne receives a ticket, autoresponder acknowledges reception.
  2. An employee works on the ticket, maybe notices the lack of a mandate, and pushes it back for follow-up.
  3. The employee leaves on vacation; someone else possibly processes the document scan of the mandate, but the ticket system has the ticket locked or similar, preventing an update for "information received".
  4. External callback from the Telekom, line is okay. Noted somewhere in the ticket system.
  5. The employee returns to work; the ticket does not appear actionable.
  6. The user starts enquiring particularly loudly; someone looks up the ticket and notices the confirmation wasn't even sent.
  7. Slight crapping of pants.
  8. Escalate ticket to techies to check the connection.
  9. Techies think $something is wrong with the line, issue an external call to the Telekom.
  10. Customer pissed.

That's just one sample scenario, mind, but something along those lines will most likely have happened.

Why did this even happen? Because there's no externally controlled process management.

The following steps should happen:

  1. A new customer gets accepted, which spawns a checklist containing items like "order confirmation", "line check", "connection delegation", "provisioning configuration" etc.
  2. The checklist should be viewable by anyone and its items acked while referring to the tickets that ack them.
  3. The checklist should escalate when actions exceed the usual timeframes or nobody is working on them.

It shouldn't be particularly hard to implement a system like this or similar, probably even with some OTRS or RT magic for the low-end variant. For detached, ticket-based companies, it's an absolute must that some sort of control exists to ensure processes actually get handled.

Otherwise you have "customers" you fail to convert into money for over two months, who are increasingly pissed for no better reason than "sorry, we forgot about you", even though they've regularly been saying hi.

Amazon and Hachette

Amazon and Hachette are currently in a bit of a turf war over what they expect from each other with regard to ebooks and pricing. The TL;DR version is "Amazon thinks Hachette are grubby moneypinchers (read: wants more money), Hachette thinks Amazon are thieving scumbags (read: wants more money)".

The thing is that Amazon has now escalated this to a shooting war. They officially declared that Amazon is now:

  1. Not selling ebooks from Hachette
  2. Not stocking supplies of Hachette physical books
  3. Not allowing preorders for Hachette books

This leads, in turn, to the following customer effects:

  1. Most Kindle users won't be reading Hachette ebooks (due to a medium-height walled garden)
  2. All Hachette books will be ordered on demand from the publisher, making Amazon's usual one-to-two-day deliveries a utopian prospect

Amazon actually encourages people to use their competitors to buy Hachette books. Many people think that Amazon is shooting themselves in the foot with this tactic, as they're just excluding themselves from the potential revenue.

What people aren't considering is that the humongous spread of Kindles makes getting a Hachette ebook nigh-on impractical for your average customer. They have a lot of books in their Kindle library; the advantage of the "instant buy, instant availability" system is a constant fact for Kindle users. Of course they could try to get a DRM-free ebook (which, depending on your regional market, is a hassle), but Hachette themselves only offer ebooks with Adobe DRM, so that's a no-go for Kindles.

So, if you really want that book offered by Hachette, and own a Kindle, you either have to

  1. Buy a hardcopy version
  2. Get another expensive ereader

(Well, or maybe get a mobile app, but that's a huge YMMV point.)

So, imagine you just want to read a book right now. The Amazon storefront doesn't even show the books as available. Will you order a paperback? No. Will you, on mobile, hassle yourself with getting a copy that might work on your computer? No. You'll just buy another book.

And that's where the power of this boycott lies.

Authenticating sudo with the SSH agent

I recently stumbled upon the rather intriguing idea of using your SSH agent to do… sudo authentication!

Sounds weird, right? But somebody implemented it. I haven't audited the code, but it mostly does what it's supposed to and doesn't appear to be malicious.

What it is, though, is a PAM module that provides an 'auth' module for PAM.1 As we know, the 'auth' module does the whole business of validating that a user is who they claim to be by asking for credentials. Usually, we see e.g. sudo asking the user for their password.

The problem with that: remembering all those sudo passwords for the remote hosts you're administering. Because, after all, you aren't logging in as root directly, and you don't use the same password at the other end all the time, right? Well, except if you're using LDAP, anyway. But even then, you'd still have to enter the password (but it is the same, and you're probably feeling fancy with ansible anyway).

Enter pam_ssh_agent_auth: just include it in your PAM configuration and have sudo keep SSH_AUTH_SOCK. If you now connect with your SSH agent forwarded, PAM will check the public key you specify against your forwarded SSH agent and, if that check succeeds, proceed along the PAM chain, you being happily authed! Entering a password? Only when you unlock the SSH agent.

Now that the concept has been explained, let's think about the consequences.

Security considerations

Is this method inherently insecure?

Well, not per se; if you think using an SSH agent is okay, then using it to replace a password is, in principle, okay too.

Can this authentication be exploited?

There are two possible scenarios I can imagine:

  1. Someone manages to take over the SSH agent.
  2. Someone modifies the specified authorized_keys file.

I personally do not consider taking over the SSH agent a significant risk; you're probably the admin setting this up, so you trust the server and the machine you're connecting from. The only parties on the remote side that could abuse the auth socket are root, your own user and someone wielding an 0day, but being afraid of the last won't get you anywhere. Thus we can safely disregard that.

The only real problem I see is somebody managing to overwrite the authorized_keys file. pam_ssh_agent_auth allows you to specify where the authorized key files are kept; you can put them anywhere you'd like, and there are shorthand macros for the user's home, the system's hostname and the user name itself. A setup I personally like is using $HOME/.ssh/authorized_keys, because it requires no changes anywhere.


Anyone who can somehow modify or add to your authorized_keys file can take over your account and its sudo privileges!

Sample attack scenario:

  1. You're an idiot and ~/.ssh/authorized_keys is world-writable.
  2. Someone else on the system appends their own key to your authorized_keys.
  3. They connect with their own SSH agent forwarded and just do a sudo -l -u $you.
  4. This now works because PAM asks the attacker's SSH agent to unlock their key.

Is this an issue? Only if your users are idiots. Or an 0day, but see above.

The easy way to work around this is to simply use a file only root controls, i.e. create something like /etc/security/sudoers/%u.key for each user. Or just a globally defined one where you pipe new keys in, whatever floats your boat.

But, beyond taking a bit of care, this isn't a particularly viable attack scenario in my case either.

If anyone comes up with a good one, please let me know.

How to implement it

Simple! Just run this Puppet manifest if you're running Debian/Ubuntu and trust me. You probably shouldn't, but please look at the manifest anyway and improve my Puppet-fu by leaving clever comments about how I should approach this 'style' of sharing configuration.

Essentially, you need to do the following steps:

  1. Install pam_ssh_agent_auth; just use my Debian/Ubuntu repos (deb $your_release main) or go to the official site.
  2. Add SSH_AUTH_SOCK to the env_keep defaults in /etc/sudoers.
  3. Add auth sufficient pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys to /etc/pam.d/sudo, ideally before common-auth.
  4. That's it. Open a new connection; sudo -k; sudo -l should work without you having to enter a password.2
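Put together, steps 2 and 3 amount to the following two fragments (a sketch; paths as on Debian/Ubuntu):

```
# /etc/sudoers (edit via visudo)
Defaults env_keep += "SSH_AUTH_SOCK"

# /etc/pam.d/sudo, before the common-auth include
auth sufficient pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys
```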

Simple as that.

  1. If you really don't know what PAM is about, read this article to get a bit of an overview.
  2. If not: that's what the other shell is for, the one you didn't close or reuse just now!

Allowing your users to manage their DNS zone

You've been in this situation before. You're playing host for a couple of friends (or straight-out customers) whom you're giving virtual machines on that blade server you're likely renting from a hosting provider. You've got everything mostly set up right, and have even wrangled libvirt so that your users can connect remotely to restart and VNC their own machines (an article on this is pending).

But then there's the issue of allowing people to update the DNS. If you give them access to a zone file, that sort of works, but you've either got to give them access to the machine running the DNS server, or rig up some rather fuzzy and failure-prone system to transfer the zone files to where they're actually useful. Neither case is ideal.

So here's how to do it right: by using TSIG keys and nsupdate. I assume you're clever enough to replace the obvious placeholder variables. If you aren't, you shouldn't be fiddling with this anyway.

The goal is that users can rather simply use nsupdate on their end without ever having to hassle the DNS admin to enter a host into the zone file for them.

Generating TSIG keys

This is a simple process; you need dnssec-keygen, which comes shipped with bind9utils, for example; you can install that without having to install bind itself, for what it's worth. Then, you run:

# dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST $username

For each user $username you want to give a key to. Simple as that. Sadly enough, be careful not to use anything other than HMAC-MD5, since that's what TSIG wants to see.

You'll end up with two files, namely K${username}+157+${somenumber}.{key,private}. The .key file contains the public key, .private contains the private key.
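The secret the server wants is the Key: line of those files (for HMAC keys, .key and .private carry the same secret). A quick way to pull it out, shown here against a mock file since the real filename embeds a random id:

```shell
# Mock of dnssec-keygen's .private output; the real file is named
# K${username}+157+${somenumber}.private
cat > /tmp/example.private <<'EOF'
Private-key-format: v1.3
Algorithm: 157 (HMAC_MD5)
Key: c2VjcmV0LXNlY3JldC1zZWNyZXQ=
EOF

# The base64 blob after "Key:" is what goes into the server configuration
awk '/^Key:/ { print $2 }' /tmp/example.private
```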

Server configuration

Simply define or modify the following sections in your named (BIND) configuration:

  1. Define the key:
    key "$username." {
      algorithm hmac-md5;
      secret "$(public key - contents of the .key file)";
    };
  2. Allow the key to update the zone:
    zone "$zone" {
      allow-update { key "$username."; };
    };
In PowerDNS, TSIG support is officially experimental; I'm only copy-pasting the instructions here and haven't checked them for correctness. All input examples manipulate the SQL backend.

  1. Set experimental-rfc2136=yes. If you do not change allow-2136-from, any IP can push dynamic updates (as with the BIND setup).
  2. Push the TSIG key into your configuration:
    > insert into tsigkeys (name, algorithm, secret) \
      values ('$username', 'hmac-md5', '$(public key)');
  3. Allow updates by the key to the zone:
    > select id from domains where name='$zone';
    > insert into domainmetadata (domain_id, kind, content) \
      values (X, 'TSIG-ALLOW-2136', '$username');
  4. Optionally, limit updates to a specific IP, X as above:
    > insert into domainmetadata (domain_id, kind, content) \
      values (X, 'ALLOW-2136-FROM', 'a.b.c.d/32');
You're probably getting ready to berate me anyway, elitist schmuck. Do it yourself.

Client usage

Ensure that you supply the private key file to your user. (They don't need the public key.)

Using nsupdate on a client is a rather simple (if not entirely trivial) affair. This is an example session:

nsupdate -k $privatekeyfile
> server dns.your.domain.tld
> zone $zone
> update add $hostname.$zone 86400 A $ip
> show
> send

This will add $hostname.$zone as an A record with IP $ip. You get the drift. The syntax is as you'd expect, and is very well documented in nsupdate(1).

You could also think about handing out pre-written files to your users, or a little script to do it for them, or handing out puppet manifests to get new machines to add themselves to your DNS.
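Such a pre-written file could be generated like this (hostname, zone and IP are purely illustrative; 192.0.2.0/24 is the documentation range):

```shell
# Emit a batch file the user can replay later with:
#   nsupdate -k $privatekeyfile /tmp/add-myhost.nsupdate
cat > /tmp/add-myhost.nsupdate <<'EOF'
server dns.your.domain.tld
zone example.org
update add myhost.example.org 86400 A 192.0.2.10
send
EOF
```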

Have fun.

SEPA and you

SEPA changes quite a lot for the average German as far as bank transfers are concerned. So far, we were used to the following:

  • Originator: free-text field
  • Recipient: free-text field, account number, bank code (Bankleitzahl)
  • Payment reference: 378 characters (14 x 27)
  • Optional type marker (salary payment etc.)
  • Booking date, value date
  • Amount

The text fields are (by now) unchecked, although your bank usually won't allow you to enter arbitrary text as the originator.

As the recipient, you usually only notice the transaction once your bank deigns to book it onto your own account.

The payment reference as we knew it was often a miserable pile of text, usually almost unreadable in web interfaces in particular, since those don't respect the field's fixed-width rendering. Above all, it was free text, and you had to interpret it.

With SEPA, the whole thing becomes more programmatic. Gone is the old format, called DTAUS in Germany, with its low-level definition that provided specifications for hardware able to read the format directly.

SEPA transfers are XML, with all the advantages and disadvantages that entails.

So if you have been wondering what all those funny fields in a SEPA transfer on your account actually tell you, listen up.

The new format for submitting transfers is ISO 20022, "UNIFI" (UNIversal Financial Industry message scheme). What you as an end user send to the bank is called a "Payment Initiation", abbreviated "pain". They really say that without batting an eye.

A PAIN contains the following fields, which end up arriving at your side:

  • Name as a free-text field
  • IBAN, BIC: the "new" account number and bank code, now globally valid.
    "International Bank Account Number": exactly that. For us Germans it is composed as "DE", two check digits, the Bankleitzahl and the account number.
    "Bank Identification Code": among other things, the bank's country can be read from the BIC, plus, if used, details such as the bank's branch. It is only a transitional solution and will become unnecessary for transfers by 2016 or so. Examples:

    • COKSDE33XXX: Kreissparkasse Köln, i.e. COlogne KreisSparkasse, DEutschland. The "33" is the location code, which does not have to consist of digits but can also contain letters. There seems to be a standard behind it, but it isn't public. The "XXX" comes from the KSK not using a branch identification while the code has to be 11 characters long in some contexts.
    • MALADE51MNZ: Sparkasse Mainz. Good question. It looks like "Mainzer Landesbank"; the 51 surely has something great to say as well, only "MNZ" looks obvious.
    • DEUTDEFFXXX: Deutsche Bank, headquartered in Frankfurt. Branch codes exist too: Deutsche Bank Köln, for example, uses DEUTDEDK402 for its branch(es) there.
  • Sequence type: SEPA is context-sensitive, i.e. it is tracked whether a payment is a one-off transfer or a recurring one. That is what this field is for. It also distinguishes whether it is the first, an intermediate or the last transfer of a sequence.
  • EREF: the end-to-end reference. It gives the payment a unique ID (assigned by the originator). Advantage: if a payment comes back, it still carries exactly this ID, so you don't have to do awkward matching.
  • MREF: the mandate reference. This effectively denotes the customer number you have with the payee, so you can filter records unambiguously, again without parsing extra free text.
  • CRED: the creditor ID ("Gläubiger-Identifikationsnummer"). This is a number assigned uniquely by e.g. the Deutsche Bundesbank, identifying who is collecting the money. It avoids parsing the free-text field, problems with companies renaming themselves, etc.
  • SVWZ: the classic payment reference, fittingly for the Twitter generation at 140 characters.
  • Booking date, value date

Thanks to the defined standard, the big advantage is that you can see payments at the moment they are submitted, not only at their value date.

So now you have an overview of what all those funny fields mean, what you can learn from them, and what you can maybe even use. Don't hesitate to ask if you have further questions.
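As an aside, the IBAN's check digits can be verified mechanically via the ISO 13616 mod-97 scheme: move the first four characters to the end, map letters A..Z to 10..35, and the resulting number modulo 97 must be 1. A bash sketch (the IBAN below is the well-known German example IBAN):

```shell
# Validate an IBAN's check digits; the modulo is computed digit by digit
# so no big-number arithmetic is needed
iban_ok() {
    local iban="${1// /}"                       # strip spaces
    local rearranged="${iban:4}${iban:0:4}"     # move country+check to the end
    local digits="" rem=0 i c
    for ((i = 0; i < ${#rearranged}; i++)); do
        c="${rearranged:i:1}"
        case "$c" in
            [0-9]) digits+="$c" ;;
            [A-Z]) digits+="$(( $(printf '%d' "'$c") - 55 ))" ;;  # A=10 .. Z=35
            *) return 1 ;;
        esac
    done
    for ((i = 0; i < ${#digits}; i++)); do
        rem=$(( (rem * 10 + ${digits:i:1}) % 97 ))
    done
    [ "$rem" -eq 1 ]
}

iban_ok "DE89 3704 0044 0532 0130 00" && echo valid   # prints "valid"
```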

Simple index of external media on Linux

If you're not a fan of web-based or GUI applications to index the files on your external media for you, there's a way simpler solution for the command line aficionados out there: use locate.

locate is usually known as the prepared man's find, as it offers a subset of the functionality (finding files by name) with the advantage of being nearly instantaneous. It does this by calling updatedb to index your filesystem into a simple hashed database which locate uses.

Normally, this does fairly well for your usual administrative tasks like "Where the hell is this file?".

But, being a nice tool, locate also allows you to generate custom databases, which is pretty useful for handling external drives and keeping an easy index of them.

I recommend creating ~/.locatedbs and storing database files there, kind of like this:

updatedb -U $mountpoint -o $HOME/.locatedbs/$label

This can be explicitly queried like this:

locate -d $HOME/.locatedbs/$label $pattern

This works pretty well with modern environments where the mountpoint includes the label of the device, as this is the only (easy) way to find out which drive the file you're looking at actually lives on:

$ locate -d ~/.locatedbs/imbrium.db win8-usb.img

Of course, the usability here still sucks. Recent versions of locate support setting the environment variable LOCATE_PATH, which specifies (depending on the version: additional) databases to be searched. In the case of Debian and Ubuntu, it's an additional database path. Thus, by inserting

export LOCATE_PATH=$(echo $HOME/.locatedbs/* | sed 's/ /:/g')

into your shell profile, any future logins will be able to simply use locate to search all indexed external drives.

To further increase usability, you'd ideally call an update script shortly before unmounting a drive instead of doing it manually, but I haven't yet found a convenient way to do so neatly.
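One possible shape for that update step, wrapping the reindex and the unmount into a single call (a sketch assuming the updatedb flags from above and a mountpoint whose basename is the label):

```shell
# Rebuild the per-drive database, then unmount; replaces the manual
# "updatedb, then umount" dance
reindex_and_unmount() {
    local mountpoint="$1"
    local label
    label="$(basename "$mountpoint")"
    mkdir -p "$HOME/.locatedbs"
    updatedb -U "$mountpoint" -o "$HOME/.locatedbs/$label" || return 1
    umount "$mountpoint"
}
```

Hooking this into whatever triggers your unmounts (a udev rule, or a small wrapper around umount) would automate it fully.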