Sunday, December 22, 2013

The UK "Porn" Filter Blocks Kids' Access To Tech, Civil Liberties Websites

It fell to the UK Tories to actually implement the Nanny State. Too bad Nanny Tory does not want kids to read up on tech or civil liberties websites. Read on for a small sample of what the filter blocks, from a blocked-by-default tech writer.

[Updated 2x, scroll down]

Regular readers (at least those of you who also follow me on Twitter) will know that I'm more than a little skeptical of censorship in general. And as evidenced by this tweet, you may have seen that I found the decision to implement a nationwide, on-by-default-but-possible-to-opt-out-of web filtering scheme in the UK to be a seriously stupid idea.

But then I was never very likely to become a UK resident or anything more than a very temporary customer of any UK ISP during visits to the country, so I did not give the matter another thought until today, when this tweet announced that you could indeed check whether your web site was blocked. The tweet points you to http://urlchecker.o2.co.uk/urlcheck.aspx, which appears to be a checking engine for the UK ISP O2, one of the ISPs implementing the blocking regime.

I used that URL checker to find the blocking status of various sites where I'm either part of the content-generating team or that I find interesting enough to visit every now and then. The sites appear in the semi-random order I visited them in on December 22, 2013, starting a little after 16:00 CET:

bsdly.net: I checked my own personal web site first, www.bsdly.net. I was a bit surprised to find that it was blocked in the default Parental control regime. Users of the archive.org Internet Wayback Machine may be able to find one page that contained a reference to a picture of "a blonde chick with a cute pussy", but the intrepid searcher will find that the picture in question was in fact of juvenile poultry and felines, respectively. The site is mainly tech content, with some resources such as the hourly updated list of greytrapped spam senders (see e.g. this blog post for some explanation of that list and its purpose).

nuug.no: Next up I tried the national Norwegian Unix Users' group web site www.nuug.no, with a somewhat odd result - "The URL has not yet been classified. If you would like it to be classified please press Reclassify URL". There was no Reclassify URL option visible in the web interface, but I would assume that in a default to block regime, the site would be blocked anyway. It would be nice to have confirmation of this from actual O2 customers or other people in the UK.

But NUUG hosts a few specific items I care about, such as my NUUG home page with links to slides from my talks and other resources I've produced over the years. Entering http://home.nuug.no and http://home.nuug.no/~peter/pf/ (the path to my PF tutorial material) both produced an "Invalid URL" message. This looks like a bug in the URL checker code, but once again it would be nice to have confirmation from persons who are UK residents and/or O2 customers about the blocking status for those URLs.

usenix.org: Next I tried www.usenix.org, the main site for USENIX, the US-based but actually quite international Unix user group. This, too, turned out to be blocked in the Parental control regime.

ukuug.org and flossuk.org: But if you're a UK resident, your first port of call for finding out about Unix-like systems is likely to be the UK Unix User Group instead, so I checked both www.ukuug.org and flossuk.org, and both showed up as blocked in the Parental control regime (ukuug.org, flossuk.org).

So according to the default filtering settings, the official line appears to be that kids under 12 in the UK should not be taught about free or open source software.

eff.org: You will have guessed by now that I'm a civil liberties man, so the next URL I tried was www.eff.org, which was also blocked by the Parental Control regime. So UK kids apparently need protection from learning about civil liberties and privacy online.

amnesty.org.uk: A little closer to home for UK kids, I thought perhaps a thoroughly benign organization such as Amnesty International would somehow be pre-approved. But no go: I tried the UK web site, amnesty.org.uk, and it, too, was blocked by the Parental Control regime. UK kids apparently need to be shielded from the sly propaganda of an organization that has worked, among other things, for the release of political prisoners and against cruel and unusual punishments such as the death penalty everywhere.

slashdot.org: Next up in my quasi-random sequence was the tech news site slashdot.org, which may at times be informal in tone, but is still so popular that I was somewhat surprised to find that it, too, was blocked by the Parental Control regime.

linuxtoday.com: Another popular tech news site is linuxtoday.com, which, as the name says, has a free and open source software slant. Like slashdot, this one was also blocked by the Parental Control regime.

bsdly.blogspot.com: Circling back to my own turf, I decided to check the site where I publish the most often, bsdly.blogspot.com. By this time I wasn't terribly surprised to find that my writing too has fallen afoul of something or other and is by default blocked by the Parental Control regime.

nostarch.com: Blocking an individual writer most people probably haven't heard about in a default-to-block regime isn't very surprising, but would they not at least pre-approve well known publishers? I tried nostarch.com (home of, among other things, a series of LEGO-themed tech/science books for kids, Manga guides to various sciences, and various BSD and Linux books). No matter, they too were blocked by the Parental Control regime.

blogspot.com: Along the same lines as in the nostarch.com case, if they default to block they may well have an unknown scribe blocked, but would they block an entire blogging site's domain? So I tried blogspot.com. The checker apparently registers that the site has "dynamic content", so even the "default safety" settings may end up blocking it. And of course, it is another one that's blocked by the Parental Control regime.

arstechnica.com: I still couldn't see any clear logic besides a probable default to block, so I tried another popular tech news site, arstechnica.com. I was a bit annoyed, but not too surprised that this too was blocked by the Parental Control regime.

The last four I tried mainly to get confirmation of what I already suspected:

www.openbsd.org: What could possibly be offensive or subversive about the most secure free operating system's website? I don't know, but the site is apparently too risky for minors, blocked by the Parental Control regime as it is.

undeadly.org: The site undeadly.org is possibly marginally better known under the name OpenBSD Journal. It exists to collect and publish news relevant to the OpenBSD operating system, its developers and users. For Nanny only knows what reason, this site was also blocked by the Parental Control regime.

www.freebsd.org: www.freebsd.org is the home site of FreeBSD, another fairly popular free BSD operating system (which, among others, Apple has found useful as a source of code that works better in a public maintenance regime). I thought perhaps the incrementally larger community size would have put this site on Nanny's horizon, but apparently not: FreeBSD.org remains blocked by the Parental Control regime.

www.geekculture.com: How about a little geek humor, then? www.geekculture.com is home to several web comics, and The Joy of Tech remains a favorite, even with the marked Apple slant. But apparently that too, is too much for the children of the United Kingdom: Geekculture.com is blocked by the Parental Control regime.

www.linux.com: And finally, the penguins: By now it should not surprise anyone that www.linux.com, a common starting point for anyone looking for information about that operating system, like the others is blocked by the Parental Control regime.

So summing up, checking a semi-random collection of mainly fairly mainstream and some rather obscure tech URLs shows that far from focusing on its stated main objective, keeping innocent children away from online porn, the UK Internet filter shuts the UK's children out of a number of valuable IT resources, as well as several important civil liberties resources.

And if this is the true face of Parental Controls, I for one would take using controls like these as a sufficient indicator that the parents in question are in fact not qualified to do their parenting without proper supervision.

If this is an indicator of how the collective of United Kingdom Internet Nannies is to maintain their filtering regime, they are most certainly part of a bigger problem than the one they claim to be working to solve.


If you are a UK resident or other victim of automated censorship, I would like to hear from you. Please submit your story in the comments or send me an email at blockage@bsdly.net.


Update 2013-12-23 13:05 CET: A reader alerted me to the fact that the URL Checker is down, and that URL now leads to a page that claims the operators are "in the process of reviewing and updating" their offerings.

Update 2013-12-24 19:30 CET: O2 contacted me via Twitter direct message, pointing me to their FAQ at http://news.o2.co.uk/2013/12/24/parental-control-questions-answered/. As non-responsive responses go, it was fairly useful, if not entirely constructive. The most useful bit of information is possibly that the service as presented is apparently specific to O2 customers, rather than the frequently cited national, Tory-backed regime.

As the FAQ document clearly demonstrates, the underlying problem is that some of their customers, for whatever reason, have chosen to leave the monitoring and mentoring of their children's reading to an automated service.

The world contains a multitude of dangers, and most of us, in the UK or elsewhere, would agree that it is a parent's duty both to protect their offspring and to educate them in how to avoid danger or handle problems they encounter.


Ignorance has yet to help anyone solve a problem

There are several ways to protect and educate, and I feel that the approach offered by O2's service is the wrong approach in several important ways. First off, by limiting children's access to information, it strongly recommends choosing ignorance instead of education as the main defense against the perceived evils of the world.

If a person advised you to chain your children to the wall and burn their library cards, you as a responsible parent would perhaps be reluctant to accept that advice as valid. But O2 has no qualms about offering a commercial service that does just that, only via digital means.

But the engineer in me also compels me to point out that the "Parental Control" service is designed only to attack a specific symptom of a wider problem, and it fails to address that problem. Making matters slightly worse, it applies a technical solution to what is a human, or perhaps social, problem.

The real problem is that some number of parents do not feel up to the task of mentoring and educating their children in safe and sensible use of their gadgets and the information that is accessible through the gadgets. Parents failing, or perceiving that they may be failing, to adequately educate or mentor their children is the real problem here. Fix that problem, and your symptoms go away.

If a significant subset of O2's customers feel they are unable to handle their parenting duties, the problem may very well be that society is failing to adequately support parents' needs during their child-rearing years.

The solution may well be political, and may very well involve matters that are best resolved by making a proper choice at the ballot box after well reasoned debates. In the meantime, O2 is only making matters worse by answering the needs of persons who feel the symptoms of the deeper problem by catering to a perhaps understandable, but in fact utterly counterproductive, drive for ignorance.

Ignorance never helped anybody solve a problem. Children need to be nurtured, educated, mentored and stimulated to explore. Please do not force them into ignorance instead.

Monday, December 16, 2013

Three Books You Too Should Read This Year (Or Early 2014)

For the holiday season, The Grumpy Reader fishes out a selection of recent books you should read even if you think you're too busy.

I'm sure you've had that feeling too: There are times when there's too much coming your way when you're already busy, so some things just fall by the wayside for too long. In my case the victims of my unpredictable schedule were books that publishers sent me for review in one form or the other, and those reviews just never got written as I wanted to in between other projects that were likely less interesting to the public at large.

But enough about me, here by way of making up for not getting around to this before are my slightly compressed thoughts about some important books released this year, just in time for your holiday shopping:


The Practice of Network Security Monitoring: The Best Surveillance Book You'll Read Anytime Soon

When I first heard that Richard Bejtlich was working on a No Starch Press title quite some months back, I immediately told my contacts at No Starch that I'd love to have a review copy, the sooner the better.

If Richard's name does not ring a bell, you may not have followed Internet security writing too closely and you could do worse than head over to Richard's blog at Tao Security and browse his online articles. In addition to prolific blogging and consulting activities, he is also the author of several highly acclaimed books in the field, and every now and then it's possible to sign up for his classes (see the blog reference for links).

The Practice of Network Security Monitoring is one of those books that I've very much enjoyed reading, but also one that for various reasons I found surprisingly difficult to review in a way that I feel does the book and its author justice.

It reads well. Richard spends enough time on the basic concepts of network security monitoring early on that the novice will be encouraged to go on. Once the basic concepts are laid out, the text alternates nicely between short expositions of theory and follow-on hands-on sections, which offer enough detail that techies will have pointers to start exploring further, but are hopefully not so extensive that they scare off readers who most of all want to follow the logic of the many sub-activities in the network security monitoring field.

It offers a lot of useful information in a reasonably compact format. But interesting and useful in this context on a technical level also means that you, dear reader, may be entering an area with a large set of legal pitfalls.

The network security monitoring system described in The Practice of Network Security Monitoring (all of it free software, fortunately) is designed to capture and store all network traffic passing through designated interfaces. That certainly has its uses, and the book offers a few delightful examples of analysis, including one scenario that reconstructs the exact sequence of events in a targeted malware attack.

But the level of detail recorded by these tools, including the content of all traffic, comes with a big warning: While the details will vary from jurisdiction to jurisdiction, setting up and using the tools as described here outside of a strictly controlled lab environment for pure research purposes is likely to be unconditionally illegal or at least require you to obtain specific permission from the relevant authorities or to be a member of a government that has already acquired a specific warrant.

The fact that the book was published at more or less the same time the various revelations about NSA's surveillance activities became public may have helped its sales, but the somewhat charged atmosphere those revelations created also made it a little harder to write this review. The trickle of leaked documents looks set to go on for a while more, but I feel rather confident that The Practice of Network Security Monitoring is likely to be the best technical book about surveillance you read this year or the next.

The Practice of Network Security Monitoring: Understanding Incident Detection and Response by Richard Bejtlich, No Starch Press, July 2013, 376 pp. ISBN: 978-1-59327-509-9. Available Here and at better bookstores.


Sudo Mastery: You're Doing It Wrong, But Not For Want Of Trying

If you're a system administrator or a user of Unix-like systems, you're likely to at least know about the sudo command, which lets ordinary users execute commands with other than usual permissions and privileges. But it's a program that comes with its own set of quasi-mythological misunderstandings.

In fact, as this book aptly demonstrates, most people who use sudo on a daily basis more likely than not are doing it wrong. Contrary to common belief, sudo is not actually 'the program that gets you root access'.

There were no good books about sudo around, so Michael W. Lucas set out to write one as part of his Mastery series (I've covered some of the titles in the series before, see my reviews of SSH Mastery and DNSSEC Mastery).

Like the other titles in the series, Sudo Mastery is a compact book (the PDF version comes to 135 pages) that focuses on an important tool in the sysadmin's toolbox. It's clearly written for a sysadmin audience, but Michael does walk the reader through the basics of the Unix users, groups and permissions based security model and discusses some of its problems before he dives into how to make sudo do its best for you.

The book's subtitle is User Access Control for Real People, and this thinking shows through clearly in the text. Sudo Mastery is written with the working sysadmin in mind, and at most times the description of a new feature comes with an anecdote that clearly stems from practical experience.

At the end of the book, you will have been exposed to the bulk of sudo's features, and you will have learned how to construct your own access control setup that is, for all practical purposes, a Role-Based Access Control system. Or, at the very least, a system that will be more logical and maintainable than what you started with, and one that is far superior to the binary root/not-root game sysadmins and their users play all too often.
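By way of illustration, here is a tiny, hypothetical /etc/sudoers fragment in the role-based spirit the book teaches. The alias and user names are my own invention, not taken from the book:

```
# A minimal "role": these users may control the web server, and nothing else
User_Alias  WEBADMINS = alice, bob
Cmnd_Alias  WEBCTL = /usr/sbin/apachectl
WEBADMINS   ALL = (root) WEBCTL
```

Always edit rules like these with visudo, which checks the syntax before installing the file.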

Like anything else Michael has written, this comes highly recommended. You can get your copy of Sudo Mastery directly from Tilted Windmill press here or through good bookstores.

Sudo Mastery: User Access Control for Real People, Tilted Windmill Press, November 2013. ISBN-10: 1493626205, ISBN-13: 978-1493626205.


Absolute OpenBSD, 2nd edition: The Book About My Favorite Operating System

Regular readers will know that I have a favorite operating system, and it's called OpenBSD. Until April of this year, the most recent widely known book about OpenBSD was Michael W. Lucas' 2003 title Absolute OpenBSD. Then, finally, the much refreshed Absolute OpenBSD, 2nd edition was published.

As that book's technical editor I was close enough to the project that I was a little shy about writing much about the title when it came out, but after not looking at it for some months I can say that the result is definitely worth your time.

I even think that it would be a good idea to hand this book and an OpenBSD CD set to students as their first Unix. OpenBSD is a lot more compact and logically structured than a lot of the competition, and with Michael's 2nd edition to supplement the included man pages and FAQ, there's even a chance they will learn to expect that the system defaults are set to sane values and that there is a perfectly logical reason for everything your system does.

Absolute OpenBSD, 2nd Edition: Unix for the Practical Paranoid by Michael W. Lucas, No Starch Press, April 2013, 536 pp. ISBN: 978-1-59327-476-4. Available from the publisher here and through good bookstores.

If you still haven't done your geek holiday shopping, these are my season's recommendations. Even if you read this some time past the holidays, all of these titles will be valuable additions to the actively used parts of your tech library.

Monday, November 11, 2013

Compatibility Is Hard: CHARTEST.DOC Is From 1989, Was Not Readable By 2003

A late 1980s ASCII table printer test, recently exhumed, reveals the perils and pitfalls of naive belief in document format compatibility.

Most documents, I am frequently told, are trivial things of two pages or less and with an expected useful lifetime measured in weeks or less. That may be true in some cases, but I've been a writer of some kind for long enough to care about keeping information available and editable.

The stuff I write tends to live on for years in some form, and tends to require revision at intervals that are just long enough that it's likely I've forgotten the exact details since last time I went over that particular text. If you're only a tiny bit like me, you'll be nodding intensely at those observations.

And of course as a technical writer, I've had enough exposure to Microsoft Word that I concur emphatically with essentially everything Charlie Stross wrote in his recent blog post, Why Microsoft Word Must Die.

Immediately after reading Charlie's post, I added some thoughts of my own in the comments. Do read the comments, or at least skim some; there are some gems there, including comments from people who were apparently involved in developing Microsoft Word over the years. One of those commenters makes the claim that you can reliably read in current Word versions any files you may have created with earlier versions.

Reading that, I remembered an episode about ten years ago when Gisle Hannemyr asked for help on a mailing list we both subscribed to, for problems some ex-colleagues of his had accessing documentation of approximately 1998-1999 vintage.

The files had been created using whatever version of Microsoft Word was current at the time, but whatever version was current in 2003 was apparently either refusing to load the files or rendering them as complete gibberish. If I remember correctly, the problem was eventually solved by getting hold of a roughly same-vintage machine with an old Word version, loading the files and saving as RTF. That way, you would stand a fighting chance of preserving your work across versions (I've ranted about related matters earlier, read at your own peril).

At the time, I had several old versions of Word stashed away, including a possibly illicit copy of Word 5.0 for MS-DOS that I remembered keeping. I offered to help Gisle's colleagues of course, but as I remember they found a way to read their data without my help.

Anyway, when I looked in the program directory I must by then have copied across a sequence of possibly a dozen different machines, there was also a copy of an extremely simple print test document, called CHARTEST.DOC. Do not click on that link unless you also have a copy of Microsoft Word for DOS around. It will be gibberish except for the sequence that represents an ASCII table.

Why this was interesting is probably better illustrated by the screen dump I made of the directory listing:



The document's time stamp, April 27, 1989, reminded me of a quick solution to an everyday need we had back then.

If you're old enough to remember MS-DOS code pages, you will understand why I found it necessary to generate an ASCII table for a print test.

The place I worked then was in the business of producing Norwegian and other Scandinavian language versions of software and related documentation. This was the MS-DOS 3.something and Microsoft Windows 2.something era, when documentation came mainly in printed form. To produce our camera-ready copy, we needed to make really sure that the machine was properly configured so the Apple LaserWriter attached to its serial port would produce copy that included the two characters IBM famously forgot when they created the initial IBM PC character set ('ø' and the uppercase version, 'Ø', which would otherwise appear on the printed page as cent and yen signs, respectively).
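To make the mixup concrete, here is a small demonstration you can run today, assuming your iconv knows the DOS code pages by the names CP437 and CP865: the very same two bytes come out as 'ø' and 'Ø' under the Nordic code page, but as cent and yen signs under the original IBM PC one.

```shell
# Bytes 0x9B and 0x9D, decoded under two different DOS code pages
printf '\233\235' | iconv -f CP865 -t UTF-8    # Nordic (Norway/Denmark): øØ
printf '\233\235' | iconv -f CP437 -t UTF-8    # original IBM PC set: ¢¥
```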

There were other characters we needed to see as well, some of them accessible in Word only by pressing and holding the Alt key while tapping a three-digit sequence on the numeric keypad.

My solution to the problem at hand was to generate a print sample of all printable ASCII characters, using a Turbo Pascal program that likely read something like this:

program chartest;

var
   outfile : text;
   c       : integer;

begin
   assign(outfile, 'chartest.txt');
   rewrite(outfile);
   for c := 32 to 255 do
      if c <> 127 then    { skip the DEL control character }
         writeln(outfile, c, ' = ', chr(c));
   close(outfile);
end.

It's been almost that long (1989-ish) since I wrote any Turbo Pascal code, and like the other software I'm writing about in this article, I no longer have access to a copy, so there may well be syntax errors in that, but you will get the general idea.

My next step was to load the resulting file into Word, saving as Word's .DOC format, and we used that as a test print before any important Scandinavian-language document was to be printed. The program itself (likely called CHARTEST.PAS, and possibly saved somewhere in the Turbo Pascal directory tree on my machine, or more likely too trivial to merit saving once it had produced the output we wanted) does not survive. Neither does CHARTEST.TXT, but the .DOC file survives because I found it so useful I copied it to the Word program directory.

Then Gisle's request came, and I still had a machine somewhere with Microsoft Word 2000 on it, which was unable to read the file correctly, as the first screenshot shows:


(You can get the raw version from here)

My offer then to help convert probably means I still had other historical versions available at the time, but none of them survive in my archives today. (I left the company in 2008 after selling off my founder's share and was very careful to keep only what I had a clear right to keep).

The PDF file shows expected printer output as I prepared it back then, generated from the PostScript file which I assume is closer to the original date.

Compare the screenshots in the "winword" sequence, winword1.jpg, winword2.jpg, winword3.jpg, winword4.jpg, winword5.jpg, winword6.jpg with what the document looked like in the DOS version, wordscreen1.jpg, wordscreen2.jpg, wordscreen3.jpg, wordscreen4.jpg, wordscreen5.jpg and finally wordscreen6.jpg. You can find a particularly puzzling screenshot chartest-after-convpack.jpg here, with both programs showing an overlapping sequence, or you can grab all the files from here if you like.

The question that lingers in my mind is, when a simple character set table is not an obvious conversion, and a relatively close successor version was clearly unable to read the original file correctly, what other possibly content-deforming incompatibilities will come back to bite us when we need to go back to earlier material?

To my mind, flatly denying the problem like the commenters over at Charlie's blog purporting to be Microsoft insiders do will simply not do.

I invite your comments.

Saturday, October 5, 2013

The Hail Mary Cloud And The Lessons Learned

Against ridiculous odds and even after gaining some media focus, the botnet dubbed The Hail Mary Cloud apparently succeeded in staying under the radar and kept compromising Linux machines for several years. This article, based on my BSDCan 2013 talk, sums up known facts about the botnet and suggests some common-sense measures to be taken going forward.

The Hail Mary Cloud was a widely distributed, low intensity password guessing botnet that targeted Secure Shell (ssh) servers on the public Internet.

The first activity may have been as early as 2007, but our first recorded data start in late 2008. Links to full data and extracts are included in this article.

We present the basic behavior and algorithms, and point to possible policies for staying safe(r) from similar present or future attacks.

But first, a few words about the devil we knew before the incidents that form the core of the narrative.

The Traditional SSH Bruteforce Attack

If you run an Internet-facing SSH service, you have seen something like this in your logs:

Sep 26 03:12:34 skapet sshd[25771]: Failed password for root from 200.72.41.31 port 40992 ssh2
Sep 26 03:12:34 skapet sshd[5279]: Failed password for root from 200.72.41.31 port 40992 ssh2
Sep 26 03:12:35 skapet sshd[5279]: Received disconnect from 200.72.41.31: 11: Bye Bye
Sep 26 03:12:44 skapet sshd[29635]: Invalid user admin from 200.72.41.31
Sep 26 03:12:44 skapet sshd[24703]: input_userauth_request: invalid user admin
Sep 26 03:12:44 skapet sshd[24703]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2
Sep 26 03:12:44 skapet sshd[29635]: Failed password for invalid user admin from 200.72.41.31 port 41484 ssh2
Sep 26 03:12:45 skapet sshd[24703]: Connection closed by 200.72.41.31
Sep 26 03:13:10 skapet sshd[11459]: Failed password for root from 200.72.41.31 port 43344 ssh2


This is the classic bruteforce attack: rapid-fire login attempts from a single source. (And yes, skapet is the Internet-facing host on my home network.)
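As an aside, a quick tally of failed attempts per source address makes this pattern easy to spot. This is just my own sketch, not part of any particular toolset; the log path in the usage comment is an assumption (on OpenBSD the relevant log is typically /var/log/authlog):

```shell
# Count failed ssh logins per source address, busiest sources first.
# On sshd log lines like the ones quoted above, the source address
# is always the fourth field from the end.
count_failed() {
    awk '/Failed password/ { print $(NF-3) }' "$@" | sort | uniq -c | sort -rn
}
# Usage: count_failed /var/log/authlog | head
```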

The Likely Business Plan

These attempts are often preceded by a port scan, but in other cases it appears the miscreants are just blasting away at random. In my experience, with the gateway usually at the lowest-numbered address in a network, the activity usually turns up there first, before moving on to higher-numbered hosts. I'm not really of a mind to offer help or advice to the people running those scripts, but they might consider scanning the Internet from 255.255.255.255 downwards next time. Anyway, looking at the log excerpts, the miscreants' likely plan is
  1. Try for likely user names, hope for guessable password, keep guessing until successful.
  2. PROFIT!
But then the attempts usually come in faster than most of us can type, so with a little help from toolmakers, we came up with an inexpensive first line of defense, easily implemented in perimeter packet filters (aka firewalls).

Traditional Anti-Bruteforce Rules

Rapid-fire bruteforce attacks are easy to head off. I tend to use OpenBSD on Internet-facing hosts, so first we present the technique as it has been available in OpenBSD since version 3.5 (released in 2004), where state tracking options are used to set limits we later act on:

In your /etc/pf.conf, you add a table to store addresses, block access for all traffic coming from members of that table, and finally amend your typical pass rule with some state tracking options. The result looks something like this:

table <bruteforce> persist
block quick from <bruteforce>
pass inet proto tcp to $int_if:network port $tcp_services \
        keep state (max-src-conn 100, max-src-conn-rate 15/5, \
         overload <bruteforce> flush global)

Here, max-src-conn is the maximum number of concurrent connections allowed from one host.

max-src-conn-rate is the maximum allowed rate of new connections, here 15 connections per 5 seconds.

overload <bruteforce> means that any host exceeding either of these limits has its address added to the table.

And, just for good measure, flush global means that for any host added to the overload table, we kill all its existing connections too.

Basically, problem solved - the noise from rapid-fire bruteforcers generally disappears instantly or after a very few attempts. If you are about to implement something like this (and many do -- the bruteforcer section in my PF tutorial appears to be among the more popular ones), you probably need to watch your logs to find useful numbers for your site, and tweak rules accordingly. I have yet to meet an admin who plausibly claims to never have been tripped up by their overload rules at some point. That's when you learn to appreciate having an alternative way in to your systems, such as a separate admin network.
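For housekeeping, the table itself can be inspected and pruned with pfctl's standard table commands; a sketch, assuming the <bruteforce> table from the rules above and root privileges on the gateway:

```shell
pfctl -t bruteforce -T show            # list the trapped addresses
pfctl -t bruteforce -T expire 86400    # drop entries older than 24 hours
```

Running the expire command from cron is a common way to keep the table from growing without bound.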

Traditional Anti-Bruteforce Rules, Linux Style

For those not yet converted to the fine OpenBSD toolset (available in FreeBSD and other BSDs too, with only minor if any variations in details for this particular context), the Linux equivalent would be something like

sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 5 \
--hitcount 15 --rttl --name SSH -j DROP

But be warned: this still lacks the maximum-concurrent-connections limit, comes with the usual iptables warts, and you'd need a separate set of commands for ip6tables.

It's likely something similar is doable with other tools and products too, including possibly some proprietary ones. I've made something of an effort to limit my exposure to the non-free tools, so I can't offer you any more detail. To find out what your present product can do, please dive into the documentation for whichever product you are using. Or come back for some further OpenBSD goodness.

But as you can see, for all practical purposes the rapid-fire bruteforce and flooding problem has been solved with trivial configuration tweaks.

But then something happened.

What's That? Something New!

On November 19th, 2008 (or shortly thereafter), I noticed this in my authentication logs:

Nov 19 15:04:22 rosalita sshd[40232]: error: PAM: authentication error for illegal user alias from s514.nxs.nl
Nov 19 15:07:32 rosalita sshd[40239]: error: PAM: authentication error for illegal user alias from c90678d3.static.spo.virtua.com.br
Nov 19 15:10:20 rosalita sshd[40247]: error: PAM: authentication error for illegal user alias from 207-47-162-126.prna.static.sasknet.sk.ca
Nov 19 15:13:46 rosalita sshd[40268]: error: PAM: authentication error for illegal user alias from 125-236-218-109.adsl.xtra.co.nz
Nov 19 15:16:29 rosalita sshd[40275]: error: PAM: authentication error for illegal user alias from 200.93.147.114
Nov 19 15:19:12 rosalita sshd[40279]: error: PAM: authentication error for illegal user alias from 62.225.15.82
Nov 19 15:22:29 rosalita sshd[40298]: error: PAM: authentication error for illegal user alias from 121.33.199.39
Nov 19 15:25:14 rosalita sshd[40305]: error: PAM: authentication error for illegal user alias from 130.red-80-37-213.staticip.rima-tde.net
Nov 19 15:28:23 rosalita sshd[40309]: error: PAM: authentication error for illegal user alias from 70-46-140-187.orl.fdn.com
Nov 19 15:31:17 rosalita sshd[40316]: error: PAM: authentication error for illegal user alias from gate-dialog-simet.jgora.dialog.net.pl
Nov 19 15:34:18 rosalita sshd[40334]: error: PAM: authentication error for illegal user alias from 80.51.31.84
Nov 19 15:37:23 rosalita sshd[40342]: error: PAM: authentication error for illegal user alias from 82.207.104.34
Nov 19 15:40:20 rosalita sshd[40350]: error: PAM: authentication error for illegal user alias from 70-46-140-187.orl.fdn.com
Nov 19 15:43:39 rosalita sshd[40354]: error: PAM: authentication error for illegal user alias from 200.20.187.222
Nov 19 15:46:41 rosalita sshd[40374]: error: PAM: authentication error for illegal user amanda from 58.196.4.2
Nov 19 15:49:31 rosalita sshd[40378]: error: PAM: authentication error for illegal user amanda from host116-164.dissent.birch.net
Nov 19 15:55:47 rosalita sshd[40408]: error: PAM: authentication error for illegal user amanda from robert71.lnk.telstra.net
Nov 19 15:59:08 rosalita sshd[40412]: error: PAM: authentication error for illegal user amanda from static-71-166-159-177.washdc.east.verizon.net

... and so on. The alphabetic progression of user names went on and on.

The pattern seemed to be that several hosts, in widely different networks, tried to access our system as the same user, minutes apart. By the time any one host came back, the sequence was more likely than not several user names further along. The full sequence (it stopped December 30th) is available here.

Take a few minutes to browse the log data if you like. It's worth noting that rosalita was a server that had a limited set of functions for a limited set of users, and basically no users other than myself ever logged in there via SSH, even if they for various reasons had the option open to them. So in contrast to busier sites where sequences like this might have drowned in the noise, here it really stood out. And I suppose after looking at the data, you can understand my initial reaction.
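For the curious, the eyeballing can be scripted. The sketch below is a hypothetical reconstruction, not the tooling I actually used: it pulls the user name and source host out of sshd/PAM log lines like the ones above and counts distinct hosts per user name, which is what makes the coordination stand out.

```python
# Count how many distinct source hosts tried each user name in
# "illegal user <name> from <host>" log lines.
import re
from collections import defaultdict

LINE = re.compile(r"illegal user (\S+) from (\S+)")

def hosts_per_user(loglines):
    seen = defaultdict(set)
    for line in loglines:
        m = LINE.search(line)
        if m:
            user, host = m.groups()
            seen[user].add(host)
    return {user: len(hosts) for user, hosts in seen.items()}

sample = [
    "Nov 19 15:04:22 rosalita sshd[40232]: error: PAM: authentication error for illegal user alias from s514.nxs.nl",
    "Nov 19 15:46:41 rosalita sshd[40374]: error: PAM: authentication error for illegal user amanda from 58.196.4.2",
    "Nov 19 15:49:31 rosalita sshd[40378]: error: PAM: authentication error for illegal user amanda from host116-164.dissent.birch.net",
]
print(hosts_per_user(sample))  # → {'alias': 1, 'amanda': 2}
```

Run over the full log, numbers like these are what separate one persistent attacker from a swarm taking turns.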

The Initial Reaction

My initial reaction was pure disbelief.

For the first few days I tried tweaking PF rules, playing with the attempts/second values and scratching my head, going, "How do I make this match?"

I spent way too much time on that, and the short version of the answer to that question is, you can't. With the simple and in fact quite elegant state tracking options, you will soon hit limits (especially time limits) that interfere with normal use, and you end up blocking legitimate traffic.

So I gave up on prevention (which really only would have rid me of a bit of noise in my authentication logs), and I started analyzing the data instead, trying to eyeball patterns that would explain what I was seeing. After a while it dawned on me that this could very well be a coordinated effort, using a widely distributed set of compromised hosts.

So there was a bit of reason in there after all. Maybe even a business plan or model. Next, I started analyzing my data, and came up with -

Bruteforcer Business Plan, Distributed Version

The Executive Summary would run something like this: Have more hosts take turns, round robin-ish, at long enough intervals to stay under the radar, guessing for weak passwords.

The plan is much like before, but now we have more hosts on the attacking side, so:
  1. Pick a host from our pool, assign it a user name and password (picked from a list, dictionary or pool).
  2. For each host,
    1. Try logging in to the chosen target with the assigned user name and password.
    2. If successful, report back to base (we theorize); else wait for instructions (again we speculate).
  3. Go to 1.
  4. For each success at 2.2, PROFIT!
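A quick back-of-the-envelope calculation shows why this plan slips under the traditional overload rules. The attempt interval is illustrative (the logs above show one guess every few minutes) and the host count is taken from the first wave:

```python
# Back-of-the-envelope: with enough hosts taking turns, the per-source
# connection rate an overload rule can see stays vanishingly small.
# The interval is illustrative, not a measured constant.
attempt_interval = 180.0       # one guess at the target every ~3 minutes
botnet_hosts = 1193            # hosts seen in the first wave
rate_max, rate_window = 15, 5  # the "max-src-conn-rate 15/5" limit

seconds_between_visits = attempt_interval * botnet_hosts
conns_per_window = rate_window / seconds_between_visits

print(f"each host returns roughly every {seconds_between_visits / 3600:.2f} hours")
print(f"connections per {rate_window}s window per host: {conns_per_window:.2e} "
      f"(limit: {rate_max})")
```

With any single host showing up only every couple of days, no per-source state tracking will ever accumulate enough to trip.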

You're The Target

Let's recap, and take a step back. What have we learned?

To my mind at least, it all boils down to the basics:
  • Your Unix computer (Linux, OpenBSD, FreeBSD or other) is a desirable, powerful thing.
  • If your password is weak, you will be 0WN3D, sooner rather than later.
  • There's a whole fleet out there, and they're coordinated.
At this point I thought I had something useful, so I started my first writeup for publication. I had just started a new job at the time, and I think I mentioned the oddities to some of my new colleagues (that company is unfortunately defunct, but the original linked articles give some information). Anyway, I wrote and published, hoping to generate a little public attention for myself and my employer. And who knows, maybe even move a few more copies of that book I'd written the year before.


Initial Public Reaction

On December 2, 2008, I published the first blog post in what would become a longish sequence, A low intensity, distributed bruteforce attempt, where I summarized my findings. It's slightly more wordy than this piece, but if I've piqued your interest so far, please go ahead and read. And as to a little public attention, I got my wish. The post ended up slashdotted, and I became the first among my colleagues to have their name on the front page of Slashdot.

That brought
  • more disbelief, (see slashdot and other comments) but also
  • confirmation, via comments and email, that others were seeing the same thing, and that the first occurrences may have been seen up to a year earlier (November-ish 2007).

The slow bruteforcers were not getting in, so I just went on collecting data. I estimated they'd be going on well past new year's if they were going to reach the end of the alphabet.


On December 30th, 2008, The Attempts Stopped

The attempts came to an end, conveniently while I was away on vacation. The last entries were:

Dec 30 11:03:08 rosalita sshd[51108]: error: PAM: authentication error for illegal user sophia from 201.161.28.9
Dec 30 11:05:08 filehut sshd[54932]: error: PAM: authentication error for illegal user sophia from 201.161.28.9
Dec 30 11:06:35 rosalita sshd[51116]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net
Dec 30 11:09:03 filehut sshd[54981]: error: PAM: authentication error for illegal user sophia from static-98-119-110-139.lsanca.dsl-w.verizon.net

That is, not even completing a full alphabetic cycle.

By then they had made 29916 attempts, all failed. You can find the full listing here.

They tried 6100 user IDs (list by frequency here). More than likely you can guess the top one without even looking.

The attempts came from a total of 1193 different hosts (list by frequency here).

As I said earlier, there were no successful penetrations. Zero.


Common characteristics

The slashdot story brought comments and feedback, with some observations from other sites. Not a lot of data, but enough that the patterns we had observed were confirmed. All attempts used password authentication; no other authentication methods were tried.

For the most part the extended incident consisted of attempts on an alphabetic sequence of 'likely' user names, but all sites also saw at least one long run of root only attempts. This pattern was to repeat itself, and also show up in data from other sources.

There would be anything from seconds to minutes between attempts, but attempts from any single host would come at much longer intervals.

First Round Observations, Early Conclusions

Summing up what we had so far, here are a few observations and attempts at early conclusions.

At the site where I had registered the distributed attempts, the Internet-reachable machines all ran either OpenBSD or FreeBSD. Only two FreeBSD boxes were contacted.

The attackers were hungry for root, so having PermitRootLogin no in our sshd config anywhere Internet facing proved to be a good idea.

We hadn't forced our users to keys only, but a bit of luck and John the Ripper (/usr/ports/security/john) saved our behinds.

The number of attempts per user name had decreased over time (as illustrated by this graph), so we speculated in the second article Into a new year, slowly pounding the gates (on slashdot as The Slow Bruteforce Botnet(s) May Be Learning) that success or not was measured at a command and control site, with resources allocated accordingly.

With the sequence not completed, we thought they'd given up. After all, the odds against succeeding seemed monumental.

And a couple of slashdotted blog posts couldn't have hurt, could they?


But Of Course They Came Back

As luck would have it, whoever was out there had not totally admitted defeat just yet. In the early hours CET, April 7th, 2009, the slow brutes showed up again:

Apr  7 05:02:07 rosalita sshd[4739]: error: PAM: authentication error for root from ruth.globalcon.net
Apr  7 05:02:15 rosalita sshd[4742]: error: PAM: authentication error for root from ip-206-83-192-201.sterlingnetwork.net
Apr  7 05:02:54 rosalita sshd[4746]: error: PAM: authentication error for root from cyscorpions.com
Apr  7 05:02:59 rosalita sshd[4745]: error: PAM: authentication error for root from smtp.bancomorada.com.br
Apr  7 05:03:10 rosalita sshd[4751]: error: PAM: authentication error for root from 82.192.86.217
Apr  7 05:03:25 rosalita sshd[4754]: error: PAM: authentication error for root from 66.135.60.203
Apr  7 05:03:52 rosalita sshd[4757]: error: PAM: authentication error for root from rainha.florianonet.com.br
Apr  7 05:04:00 rosalita sshd[4760]: error: PAM: authentication error for root from 72.11.144.34
Apr  7 05:04:34 rosalita sshd[4763]: error: PAM: authentication error for root from s1.serverhex.com
Apr  7 05:04:38 rosalita sshd[4765]: error: PAM: authentication error for root from mail.pitnet.com.br


Starting with 2318 attempts at root before moving on to admin and proceeding with the alphabetic sequence. The incident played out pretty much like the previous one, only this time I was sure I had managed to capture all relevant data before my logs were rotated out of existence.

The data is available in the following forms: Full log here, one line per attempt here, users by frequency here, hosts by frequency here.

I couldn't resist kicking up some more publicity, and indeed we got another slashdot storm out of the article The slow brute zombies are back, on slashdot as The Low Intensity Brute-Force Zombies Are Back.

And shortly afterwards, we learned something new -

Introducing dt_ssh5, Linux /tmp Resident

Of course there was a piece of malware involved.

A Linux binary called dt_ssh5 did the grunt work.

The dt_ssh5 file was found installed in /tmp on affected systems. The reason our perpetrators chose to target that directory is likely that /tmp tends to be world-readable and world-writable.

Again, this points us to the three basic lessons:
  1. Stay away from guessable passwords
  2. Watch for weird files (stuff you didn't put there yourself) anywhere in your file system, even in /tmp.
  3. Internalize the fact that PermitRootLogin yes is a bad idea.
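Lesson 2 can be partially automated. The hedged sketch below flags executable regular files in a directory, the way a dt_ssh5 dropped in /tmp would show up; it demonstrates on a throwaway directory rather than the real /tmp, and the planted file is a harmless stand-in.

```python
# Flag executable regular files sitting in a world-writable directory,
# the way /tmp/dt_ssh5 would stand out. Demonstrated on a temp directory.
import os
import stat
import tempfile

def executable_files(directory):
    """Return names of regular files with the owner-execute bit set."""
    suspects = []
    for name in sorted(os.listdir(directory)):
        st = os.lstat(os.path.join(directory, name))
        if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_IXUSR:
            suspects.append(name)
    return suspects

with tempfile.TemporaryDirectory() as fake_tmp:
    # plant a harmless stand-in for something like /tmp/dt_ssh5
    path = os.path.join(fake_tmp, "dt_ssh5")
    with open(path, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(path, 0o755)
    # an ordinary non-executable file for contrast
    open(os.path.join(fake_tmp, "notes.txt"), "w").close()
    found = executable_files(fake_tmp)

print(found)  # → ['dt_ssh5']
```

A proper integrity checker (mtree, AIDE and friends) does this far better, of course; the point is only that "stuff you didn't put there yourself" is cheap to look for.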

dt_ssh5: Basic Algorithm

The discovery of dt_ssh5 made for a more complete picture. A rough algorithm suggested itself:

  1. Pick a new host from our pool, assign a user name and password
  2. For each host,
    1. Try user name and password
    2. if successful
      1. drop the dt_ssh5 binary in /tmp; start it
      2. report back to base
      else wait for instructions
  3. Go to 1.
  4. For each success at 2.2, PROFIT!

I never got myself a copy, so the actual mechanism for communicating back to base remains unclear.


The Waves We Saw, 2008 - 2012

We saw eight sequences (complete list of articles in the References section at the end),

From - To                                    Attempts  User IDs  Hosts  Successful Logins
2008-11-19 15:04:22 - 2008-12-30 11:09:03       29916      6100   1193                  0
2009-04-07 03:56:25 - 2009-04-12 21:01:37       12641      2491    104                  0
2009-09-30 21:15:36 - 2009-10-15 13:42:07        9998         1   1071                  0
2009-10-28 23:58:35 - 2010-01-22 09:56:24       44513      8110   4158                  0
2010-06-17 01:55:34 - 2010-08-11 13:23:01       23014      3887   5568                  0
2011-10-23 04:13:00 - 2011-10-29 05:40:07        4773       944    338                  0
2011-11-03 20:56:18 - 2011-11-26 17:42:19        4907      2474    252                  0
2012-04-01 12:33:04 - 2012-04-06 14:52:11        4757      1081     23                  0


The 2009-09-30 sequence was notable for trying only root, the 2012-04-01 sequence for being the first to attempt access to OpenBSD hosts.

We may have missed earlier sequences; early reports place the first similar attempts as far back as 2007.


For A While, The Botnet Grew

From our point of view, the swarm stayed away for a while and came back stronger, for a couple of iterations, possibly after tweaking their code in the meantime. Or rather, the gaps in our data represent times when it focused elsewhere.

Clearly, not everybody was listening to online rants about guessable passwords.

For a while, the distributed approach appeared to be working.

It was (of course) during a growth period that I coined the phrase "The Hail Mary Cloud".

Instantly, a myriad of "Hail Mary" experts joined the insta-punditry on slashdot and elsewhere.

It Went Away Or Dwindled

Between August 2010 and October 2010, things either started going badly for The Hail Mary Cloud, or possibly they focused elsewhere.

I went on collecting data.

There wasn't much to write about, except possibly that the botnet's command and control was redistributing effort based on past success. Aiming at crackable hosts elsewhere.


And Resurfaced In China?

Our last sighting so far was in April 2012. The data is preserved here.

This was the first time we saw Hail Mary Cloud style attempts at accessing OpenBSD systems.

The majority of attempts were spaced at least 10 seconds apart, and until I revisited the data recently, I thought only two hosts in China were involved.

In fact, 23 hosts made a total of 4757 attempts at 1081 user IDs, netting 0 successful logins.

I thought the new frequency data interesting enough to write about, so I wrote up If We Go One Attempt Every Ten Seconds, We're Under The Radar, and netted another slashdotting. I took another look at the data later and slightly amended the conclusions; the article has been corrected with the proper data extracted.


Then What To Do?

The question anybody reading this far will be asking is, what should we do in order to avoid compromise by the password guessing swarms? To my mind, it all boils down to common sense systems administration:

Mind your logs. You can read them yourself, or train a robot to. I use logsentry; other monitoring tools can be taught to look for anomalies (failed logins, etc).

Keep your system up to date. If not OpenBSD, check openssh.com for the latest version, check what your system has and badger the maintainer if it's outdated.

And of course, configure your applications such as sshd properly -

sshd_config: 'PermitRootLogin no' and a few other items

These two settings in your sshd_config will give you the most bang for the buck:

PermitRootLogin no
PasswordAuthentication no

Make your users generate keys, add the *.pub to their ~/.ssh/authorized_keys files.

For a bit of background, Michael W. Lucas: SSH Mastery (Tilted Windmill Press 2013) is a recent and very readable guide to configuring your SSH (server and clients) sensibly. It's compact and affordable too.


Keep Them Out, Keep Them Guessing

At this point, most geeks would wax lyrical about the relative strengths of different encryption schemes and algorithms.

Being a simpler mind, I prefer a different metric for how good your scheme is, or the effectiveness of its obfuscation (also see entropy):

How many bytes does a would-be intruder have to get exactly right?

I've summed up the answer to that question in this table:

Authentication method          Number of bytes
Password                       Password length (varies; how long is yours?)
Alternate Port                 Port number (2 bytes; it's a 16 bit value, remember)
Port Knocking                  Number of ports in sequence * 2 (still a 16 bit value)
Single Packet Authentication   2 bytes (the port) plus max 1440 (IPv4/Ethernet) or 1220 (IPv6/Ethernet)
Key Only                       Number of bytes in key (depending on key strength, up to several kB)
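To put rough numbers on that metric, here is a small illustration (my own arithmetic, not figures from the table) of the search space each method leaves a guessing swarm, computed as alphabet size raised to the number of symbols that must be exactly right:

```python
# Illustrative search-space arithmetic: alphabet_size ** symbols_to_guess.
two_byte_port = 2 ** 16             # one TCP port: 65536 possible values
knock_3_ports = two_byte_port ** 3  # a three-port knock sequence
password_8 = 95 ** 8                # 8 printable-ASCII characters
key_256_bit = 2 ** 256              # 32 bytes of key material

print(f"alternate port:  {two_byte_port:.1e}")
print(f"3-port knock:    {knock_3_ports:.1e}")
print(f"8-char password: {password_8:.1e}")
print(f"256-bit key:     {key_256_bit:.1e}")
```

An alternate port or a short knock sequence buys you surprisingly little compared to even a decent password, and nothing at all compared to a key.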


You can of course combine several methods (with endless potential for annoying your users), or use two factor authentication (OpenSSH supports several schemes).



Keys. You've Got To Have Keys!

By far the most effective measure is to go keys only for your ssh logins. In your sshd_config, add or uncomment

PasswordAuthentication no

Restart your sshd, and have all users generate keys, like this:

$ ssh-keygen -C "userid@domain.tld"

There are other options to play with, see ssh-keygen(1) for inspiration.

Then add the *.pub to their ~/.ssh/authorized_keys files.

And I'll let you in on a dirty little secret: you can even match on interface in your sshd config for things like these.



Why Not Use Port Knocking?

Whenever I mention the Hail Mary Cloud online, two suggestions always turn up: The iptables example I mentioned earlier (or link to the relevant slide), and "Why not use port knocking?". Well, consider this:

Port knocking usually means having all ports closed, but with a daemon reading your firewall's logs for a predetermined sequence of ports. Knock on the correct ports in sequence, and you're in.

Another dirty little secret: It's possible to implement port knocking with only the tools in an OpenBSD base system. No, I won't tell you how.

Executive Summary: Don't let this keep you from keeping your system up to date.

To my mind port knocking gives you:
  1. Added complexity or, one more thing that will go wrong. If the daemon dies, you've bricked your system.
  2. An additional password that's hard to change. There's nothing magical about TCP/UDP ports. It's a 16 bit number, and in our context, it's just another alphabet. The swarm will keep guessing. And it's likely the knock sequence (aka password) is the same for all users.
  3. You won't recognize an attack until it succeeds, if even then. Guessing attempts will be indistinguishable from random noise (try a raw tcpdump of any internet-facing interface to see the white noise you mostly block or drop anyway), so you will have no early warning.
Port knocking proponents seem to have sort of moved on to single packet authentication instead, but even those implementations still contain the old port knocking code intact.

If you want a longer form of those arguments, my April 11, 2012 rant Why Not Use Port Knocking? was my take (with some inaccuracies, but you'll live).



There's No Safety In High Ports Anymore

Another favorite suggestion is to set your sshd to listen on some alternate port instead of the default port 22/TCP.

People who did so have had a few years of quiet logs, but recent reports show that whoever is out there has the resources to scan alternate ports too.

Once again, don't let running your sshd on an alternate port keep you from keeping your system up to date.

Of course I've ranted about this too, in February 2013, There's No Protection In High Ports Anymore. If Indeed There Ever Was. (which earned me another slashdotting).

Reports with logs of such activity at alternate ports trickle in from time to time, but of course any site with a default deny packet filtering policy will not see any traces of such scans unless you go looking specifically at the mass of traffic that gets dropped at the perimeter.



Final thoughts, for now

Microsoftish instapundits were quick to assert that ssh is insecure.

They're wrong. OpenSSH (which is what essentially everyone uses) is maintained as an integral part of the OpenBSD project, and as such is a very thoroughly audited mass of code. And it keeps improving with every release.

I consider the Hail Mary Cloud an example of distributed, parallel problem solving, conceptually much like SETI@Home but with different logic and of course a more sinister intent.

Computing power is cheap now, getting cheaper, and even more so when you can leverage other people's spare cycles.

The huge swarm of attackers concept is, as I understand it, being re-used in the recent WordPress attacks. We should be prepared for swarm attacks on other applications as soon as they reach a critical mass of users.

There may not be a bullseye on your back yet (have you looked lately?), but you are an attractive target.

Fortunately, sane system administration practices will go a long way towards thwarting intrusion attempts, as in
  • keep your system up to date,
  • allow only what's necessary for the task at hand and
  • keep watching your logs for weirdness.
Keep it simple, stay safe.

UPDATE 2013-11-21: A recent ACM Conference on Computer and Communication Security paper, "Detecting stealthy, distributed SSH brute-forcing," penned by Mobin Javed and Vern Paxson, references a large subset of the data and offers some real analysis, including correlation with data from other sites (Spoiler alert: in some waves, almost total overlap of participating machines). One interesting point from the paper is that apparently attacks matching our profile were seen at the Lawrence Berkeley National Laboratory as early as 2005.

And in other news, it appears that GitHub has been subject to an attack that matches the characteristics we have described. A number of accounts with weak passwords were cracked. Investigations appear to be still ongoing. Fortunately, GitHub appears to have started offering other authentication methods.

UPDATE 2014-09-28: Since early July 2014, we have been seeing similar activity aimed at our POP3 service, with usernames taken almost exclusively from our spamtrap list. The article Password Gropers Take the Spamtrap Bait has all the details and log data as well as references to the spamtrap list.

UPDATE 2014-12-10: My Passwords14 presentation, Distributed, Stealthy Brute Force Password Guessing Attempts - Slicing and Dicing Data from Recent Incidents has some further data as well as further slicing and dicing of the earlier data (with slightly different results). 

UPDATE 2016-08-10: The POP3 gropers never went away entirely and soon faded into a kind of background noise. In June of 2016, however, they appeared to have hired themselves out to a systematic hunt for Chinese user names. The article Chinese Hunting Chinese over POP3 in Fjord Country has further details, and as always, links to log data and related files.

References

The slides for the talk this article is based on live at http://home.nuug.no/~peter/hailmary2013/, with a zipped version including all data at http://home.nuug.no/~peter/hailmary2013.zip (approx. 26MB) for your convenience.

Mobin Javed and Vern Paxson, "Detecting stealthy, distributed SSH brute-forcing," ACM International Conference on Computer and Communication Security (CCS), November 2013.

The blog posts (field notes) of the various incidents, data links within:

Peter N. M. Hansteen, (2008-12-02) A low intensity, distributed bruteforce attempt (slashdotted)

Peter N. M. Hansteen, (2008-12-06) A Small Update About The Slow Brutes

Peter N. M. Hansteen, (2008-12-21) Into a new year, slowly pounding the gates (slashdotted)

Peter N. M. Hansteen, (2009-01-22) The slow brutes, a final roundup

Peter N. M. Hansteen, (2009-04-12) The slow brute zombies are back (slashdotted)

Peter N. M. Hansteen, (2009-10-04) A Third Time, Uncharmed (slashdotted)

Peter N. M. Hansteen, (2009-11-15) Rickrolled? Get Ready for the Hail Mary Cloud! (slashdotted)

Peter N. M. Hansteen, (2011-10-23) You're Doing It Wrong, Or, The Return Of The Son Of The Hail Mary Cloud

Peter N. M. Hansteen, (2011-10-29) You're Doing It Wrong, Returning Scoundrels

Peter N. M. Hansteen, (2012-04-06) If We Go One Attempt Every Ten Seconds, We're Under The Radar (slashdotted)

Peter N. M. Hansteen, (2012-04-11) Why Not Use Port Knocking?

Peter N. M. Hansteen, (2013-02-16) There's No Protection In High Ports Anymore. If Indeed There Ever Was. (slashdotted)

Other Useful Texts

Marcus Ranum: The Six Dumbest Ideas in Computer Security, September 1, 2005

Michael W. Lucas: SSH Mastery, Tilted Windmill Press 2013 (order direct from the OpenBSD bookstore here)

Michael W. Lucas: Absolute OpenBSD, 2nd edition No Starch Press 2013 (order direct from the OpenBSD bookstore here)

Peter N. M. Hansteen, The Book of PF, 3rd edition, No Starch Press 2014, also the online PF tutorial it grew out of, several formats http://home.nuug.no/~peter/pf/, more extensive slides matching the most recent session at http://home.nuug.no/~peter/pf/newest/

OpenBSDs web http://www.openbsd.org/ -- lots of useful information.


If you enjoyed this: Support OpenBSD!

If you have enjoyed reading this, please buy OpenBSD CDs and other items, and/or donate!

Useful links for this are:

OpenBSD.org Orders Page: http://www.openbsd.org/orders.html

OpenBSD Donations Page: http://www.openbsd.org/donations.html.

OpenBSD Hardware Wanted Page: http://www.openbsd.org/want.html.

Remember: Free software takes real work and real money to develop and maintain.

If you want to support me, buy the book! (if you want to give the OpenBSD project a cut of that, this is the link you want).

Saturday, May 11, 2013

DNSSEC Mastery, Or How To Make Your Name Service Verifiable And Trustworthy

A DNSSEC book for the working sysadmin, likely to put you ahead of the pack in securing an essential Internet service.

I have a confession to make. Michael W. Lucas is a long time favorite of mine among tech authors. When Michael descends on a topic and produces a book, you can expect the result to contain loads of useful information, presented along with humor and real-life anecdotes so you will want to explore the topic in depth on your own systems.

In DNSSEC Mastery (apparently the second installment in what could become an extensive Mastery series -- the first title was SSH Mastery, reviewed here -- from Michael's own Tilted Windmill Press), the topic is how to make your own contribution to making the Internet name service more reliable by having your own systems present verifiable, trustworthy information.

Before addressing the book itself, I'll spend some time explaining why this topic is important. The Domain Name System (usually referred to as DNS or simply 'the name service', even if nitpickers would be right that there is more than one) is one of the old-style Internet services that were created to solve a particular set of problems (humans are a lot better at remembering names than strings of numbers) in the early days of networking, when security was not really a concern.

Old-fashioned DNS moves data via UDP, the connectionless no-guarantees-ever protocol, mainly because the low protocol overhead in most cases means the answer arrives faster than it would have otherwise. Reliable delivery was sacrificed for speed, and in general, the thing just works. DNS is one of those things that makes the Internet usable for techies and non-techies alike.

The other thing that was sacrificed, or more likely never even considered important enough to care about at the time, was any hope of reliably verifying that the information received via the DNS service was in fact authentic and correct.

When you ask an application to look up a name, say you want to see if anything's new at bsdly.blogspot.com or if you want to send me mail to be delivered at bsdly.net, the answer comes back, not necessarily from the host that answers authoritatively for the domain, but more likely from the cache of a name server near you, and consists mainly of one or more IP addresses, with no guarantee other than that it is, indeed, a record type that contains one or more IP addresses that appear to match your application's query.

Or to put it more bluntly, with traditional DNS, it's possible for a well positioned attacker to feed you falsified information (i.e. leading your packets somewhere they don't belong or to somewhere you never intended, potentially along with your confidential data), even if the original DNS designers appear to have considered the scenario rather unlikely back in the nineteen-eighties.

With the realization during the 1990s that the Internet was becoming mainstream and that non-techies would rely on it for such things as banking services came support for cryptographically enhanced versions of several of the protocols that take care of the bulk of Internet traffic payloads, and even the essential and mostly ignored (at least by non-techies) DNS protocol was enhanced several times over the years. Around the turn of the century came the RFCs that describe cryptographic signatures as part of the enhanced name service, and finally in 2005 the trio of RFCs (4033, 4034 and 4035) that form the core of the modern DNSSEC specification were issued.

But up until quite recently, most if not all DNSSEC implementations were either incomplete or considered experimental, and getting a working DNSSEC setup in place has been an admirable if rarely fulfilled ambition among already overworked sysadmins.

Then, at what seems to be exactly the right moment, Michael W. Lucas publishes DNSSEC Mastery, a compact and extremely useful guide to creating your own DNSSEC setup, avoiding the many pitfalls and scary manoeuvres you will find described in the HOWTO-style DNSSEC guides you're likely to encounter after a web search on the topic.

The book is aimed at the working sysadmin who already has at least basic operational knowledge of running a name service. Starting with one DNSSEC implementation that is known to be complete and functional (ISC BIND 9.9 -- Michael warns early on very clearly that earlier versions will not work -- if your favorite system doesn't have that packaged yet, you can build your own or start bribing or yelling at the relevant package maintainer), this book takes a very practical, hands on approach to its topic in a way that I think is well matched to the intended audience.

Keeping in mind that the one thing a working sysadmin is always short on is time, it is likely a strong advantage that this book is so compact. With 12 chapters, it comes in at just short of 100 pages in the PDF version I used for most of this review. With the stated requirement that the reader needs to be reasonably familiar with running a DNS service, the introductory chapters fairly quickly move on to give an overview of public key cryptography as it applies to DNSSEC, with pointers to wordier sources for those who would want to delve into details, before starting the steps involved in setting up secure name service using ISC BIND 9.9 or newer.

Always taking a practical approach, DNSSEC Mastery covers essentially all aspects of setting up and running a working service, including such topics as key management, configuring and debugging both authoritative and recursive resolvers, various hints for working with or around strengths or deficiencies in various client operating systems, how the new world of DNSSEC influences how you manage your zones and delegations, and did I mention debugging your setup? DNSSEC is a lot less forgiving of errors than your traditional DNS, and Michael includes both some entertaining examples and pointers to several useful resources for testing your work before putting it all into production. And for good measure, the final chapter demonstrates how to distribute data you would not trust to old fashioned DNS: ssh host key fingerprints and SSL certificates.

As I mentioned earlier, this title comes along at what seems to be the perfect time. DNSSEC use is not yet as widespread as it perhaps should be, in part due to incomplete implementations or lack of support in several widely used systems. The free software world is ahead of the pack, and just as the world is getting to realize the importance of a trustworthy Internet name service, this book comes along, aimed perfectly at the group of people who will need an accessible-to-techies book like this one. And it comes at a reasonable price, too. If you're in this book's target group, it's a recommended buy.

The ebook is available in several formats from Tilted Windmill Press, Amazon and other places. A printed version is in the works, but was not available at the time this review was written (May 11, 2013).

Note: Michael W. Lucas gives tutorials, too, like this one at BSDCan in Ottawa, May 15, 2013.

Title: DNSSEC Mastery: Securing The Domain Name System With BIND
Author: Michael W. Lucas
Publisher: Tilted Windmill Press (April 2013)

Michael W. Lucas has another, somewhat chunkier book out this year too, Absolute OpenBSD, 2nd edition, a very good book about my favorite operating system. It would have been reasonable to expect a review here of that title too, except that I served as the book's technical editor, and as such a review would be somewhat biased.

But if you're interested in OpenBSD and haven't got your copy of that book yet, you're in for a real treat. If a firewall or other networking is closer to your heart, you could take a look at my own The Book of PF and the PF tutorial (or here) that it grew out of. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set; see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013, on The Hail Mary Cloud And The Lessons Learned. There will be no PF tutorial at this year's BSDCan, fortunately my staple tutorial item was crowded out by new initiatives from some truly excellent people. (I will, however, be bringing a few copies of The Book of PF and if things work out in time, some other items you may enjoy.)

Tuesday, May 7, 2013

The Term Hackathon Has Been Trademarked In Germany. Now Crawl Back Under That Rock, Please.

Trademarking somebody else's idea behind their back is both a bad idea and highly immoral. If it wasn't your idea, you don't trademark and you don't patent. It really is that simple, people.

The news that the term hackathon had been trademarked in Germany reached me late last week, via this thread on openbsd-misc. The idea sounded pretty ludicrous to me at the time, but I was too busy with other stuff that couldn't wait to react properly, and a few distractions later, I'd forgotten about the whole thing.

Then today, via the Twitter stream, came the news that an outfit trading under the name Young Targets (how cute) had now started sending invoices at EUR 2500 a pop to anybody in Germany who dared use the term. One example has been preserved here by Hannover-based doctape, who had hosted an informal developer meetup earlier this year.

It may come as a surprise to a select few, but if there is somebody, somewhere, who is entitled to make money off that fairly well-known term, it is not that group of Germans. The term hackathon has been in use for a decade at least, and like many other good things it springs from the free software movement. The exact origin of the term is not clear, but one of the more prominent contenders for the first original use is the OpenBSD project. As you can see from the project's hackathons page, informal developer gatherings have most likely been called just that since at least 1999.

And as anyone with an Internet connection and minimal searching skills will find out, hackathons have been quite crucial in keeping the project moving forward and offering tech goodies everybody uses, all for free and under a permissive license anybody can understand.


These items include the Secure Shell client and server used by 97% of the Internet (OpenSSH), the much praised OpenBSD packet filter PF and a whole host of other useful software that's developed as integral parts of the OpenBSD system but tend to find their way into other products such as those offered by Apple, Blackberry and quite a few others, including Linux distributions.


My brief and not too exhaustive search of mailing list archives tonight seems to turn up this message from Theo de Raadt to openbsd-misc dated July 1st, 2001 as the earliest public reference to a hackathon, but reading Theo's message again today I'm pretty convinced that the term was in common use even back then. If anyone can come up with evidence of use earlier than this, I'd love to hear from you, of course (mail to peter at bsdly dot net, preferably with the word hackathon somewhere in the subject, will be read with interest, or leave a comment below if you prefer).

I'm no lawyer at the best of times, but trademarking a term that both originated elsewhere and has been in general use for more than a decade seems to me at least highly immoral, and if it's not illegal, it should be. Trademarking a free software term and proceeding to charge EUR 2500 a pop for its use? It will be in your best interest to stay out of my physical proximity, Meine Damen und Herren.

Hot on the heels of what must have been a hectic night for the newly targeted young Berliners comes an announcement that states that they kinda, sorta will consider not charging sufficiently non-profity people for the use anyway, in the fluffiest terms I have ever heard come out of a German.

I'll offer our new targets some practical advice: Stop your nonsense right now, and make a real effort to track down the originators of the hackathon concept. It's likely you will find that person is either Theo de Raadt or somebody else closely associated with the OpenBSD project around the last turn of the century. If you cannot unregister the trademark, transfer the rights, free of charge, to the concept's originator.

Then either return any fees collected from your wrongful registration, or, at your victims' option, donate the equivalent sum to OpenBSD or a charity of your individual victims' choice.

Doing the right thing this late in the game and after messing up this thoroughly most likely won't save you from being the target of some sort of mischief from young hotheads (note that I strongly caution against using extra-legal tactics in this matter), but at least you, members and employees of Young Targets, can hope that this embarrassing episode will be forgotten soon enough for you to resume some semblance of careers in a not too distant future. Please go hide under a rock for now, after you've done the right thing as outlined above.

For anyone else interested in the matter, I strongly urge you to go to the OpenBSD project's donations page to donate, grab some CD sets and/or other swag from the orders page, and if you think you can help out with one or more items listed on the hardware wanted page, that will be very welcome for the project too.

It should be noted that I do not serve in any official capacity for the OpenBSD project. The paragraphs above represent my opinion only, and what I have outlined here should not be considered any kind of offer or representation on behalf of the OpenBSD project.

If you're interested in OpenBSD in general, you have a real treat coming up in the form of Michael W. Lucas' Absolute OpenBSD, 2nd edition. If a firewall or other networking is closer to your heart, you could take a look at my own The Book of PF and the PF tutorial (or here) that it grew out of. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set; see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013, on The Hail Mary Cloud And The Lessons Learned, with a preview planned for the BLUG meeting a couple of weeks before the conference. There will be no PF tutorial at this year's BSDCan, fortunately my staple tutorial item was crowded out by new initiatives from some truly excellent people. (I will, however, be bringing a few copies of The Book of PF and if things work out in time, some other items you may enjoy.)

Saturday, May 4, 2013

Keep smiling, waste spammers' time

When you're in the business of building the networks people need and the services they need to run on them, you may also be running a mail service. If you do, you will sooner or later need to deal with spam. This article is about how to waste spammers' time and have a good time while doing it.

Assembling the parts

To take part in the fun and useful things in this article, you need a system with PF, the OpenBSD packet filter. If you're reading this magazine, you are likely to be running all important things on a BSD already, and all the fully open source BSDs by now include PF (as do the commercialized variants sold by Apple and Blackberry); developed by OpenBSD, it has also been ported to the other BSDs. On OpenBSD it's the packet filter, and if you're running FreeBSD, NetBSD or DragonFly BSD it's likely to be within easy reach, either as a loadable kernel module or as a kernel compile-time option.

Getting started with PF is surprisingly easy. The official documentation such as the PF FAQ is very comprehensive, but you may be up and running faster if you buy The Book of PF or do what more than 200,000 others have done before you: Download or browse the free forerunner from http://home.nuug.no/~peter/pf. Or do both, if you like.

Network design issues
A PF setup can be, and to my mind should be, quite unobtrusive. For the activities in this article it does not matter much where you run your PF filtering, as long as it is somewhere in the default path of your incoming SMTP traffic. A gateway with PF is usually an excellent choice, but if it suits your needs better, it is quite feasible to do the filtering needed for this article on the same host your SMTP server runs on.

Enter spamd
OpenBSD's spamd, the spam deferral daemon (not to be confused with the program with the same name from the SpamAssassin content filtering system), first appeared in OpenBSD 3.3. The original spamd was a tarpitter with a very simple mission in life. Its spamd-setup program would take a list of known bad IP addresses, that is, the IP addresses of machines known to have sent spam recently, and load it into a table. The main spamd(8) program would then have any SMTP traffic from hosts in that table redirected to it, and spamd would answer those connections s-l-o-w-l-y, by default one byte per second.

A minimal PF config
As man spamd will tell you, the bare minimum to get spamd running in a useful mode (shown here in the ruleset syntax of OpenBSD 4.7 and later) is

table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"
pass in on egress proto tcp from any to any port smtp \
        rdr-to 127.0.0.1 port spamd
pass in on egress proto tcp from <nospamd> to any port smtp
pass in log on egress proto tcp from <spamd-white> to any port smtp
pass out log on egress proto tcp to any port smtp

Note: When you get around to upgrading to OpenBSD 5.8, you will need to do a quick search and replace to turn the rdr-to occurrences in those rules into divert-tos instead. That mechanism is slightly more efficient for local use (but it also means that the spamd you're using has to be on the local machine).
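For reference, here is a sketch of what the first ruleset above would look like after that search and replace. I have not tested this exact fragment; as the note says, it assumes OpenBSD 5.8 or newer and spamd running on the local machine:

```
table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"
pass in on egress proto tcp from any to any port smtp \
        divert-to 127.0.0.1 port spamd
pass in on egress proto tcp from <nospamd> to any port smtp
pass in log on egress proto tcp from <spamd-white> to any port smtp
pass out log on egress proto tcp to any port smtp
```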

Or, in the pre-OpenBSD 4.7 syntax still in use on some systems,

table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"
no rdr inet proto tcp from <spamd-white> to any \
       port smtp
rdr pass inet proto tcp from any to any \
       port smtp -> 127.0.0.1 port spamd

This means, essentially, that any SMTP traffic from hosts that are not already in the table spamd-white will be redirected to localhost, port spamd, where you have set up the spam deferral daemon spamd to listen for connections. Enabling spamd, on the other hand, is as easy as adding spamd_flags="" to your /etc/rc.conf.local if you run OpenBSD, or /etc/rc.conf if you run FreeBSD (note that on FreeBSD, spamd is a port, so you need to install it before proceeding; also, on recent FreeBSDs, the rc.conf lines are obspamd_enable="YES" to enable spamd and obspamd_flags="" to set any further flags), and starting it with

$ sudo /usr/libexec/spamd

or if you are on FreeBSD,

$ sudo /usr/local/libexec/spamd

It is also worth noting that if you add the "-d" (debug) flag to your spamd flags, spamd will generate slightly more log information, of the type shown in the log excerpts later in this article.

While earlier versions of spamd required a slightly different set of redirection rules and ran in blacklists-only mode by default, spamd from OpenBSD 4.1 onwards runs in greylisting mode by default. Let's have a look at what greylisting means and how it differs from other spam detection techniques before we explore the finer points of spamd configuration.

Content versus behavior: Greylisting
When the email spam deluge started happening during the late 1990s and early 2000s, observers were quick to note that in at least some cases the messages could be fairly easily classified by looking for certain keywords, and that the bulk of the rest fit well into familiar patterns.

Various kinds of content filtering have stayed popular and are the mainstays of almost all proprietary and open source antispam products. Over the years the products have developed from fairly crude substring-match mechanisms into multi-level rule-based systems that incorporate a number of sophisticated statistical methods. Generally the products are extensively customizable, and some even claim the ability to learn based on the users' preferences.

Those sophisticated and even beautiful algorithms do have a downside, however: For each new trick a spam producer chooses to implement, the content filtering becomes incrementally more complex and computationally expensive.

In sharp contrast to content filtering, which is based on message content, greylisting is based on studying spam senders' behavior at the network level. The 2003 paper by Evan Harris noted that the vast majority of spam appeared to be sent by software specifically developed to send spam messages, and that those systems typically operated in a 'fire and forget' mode, only trying to deliver each message once.

The delivery software on real mail servers, however, is a proper SMTP implementation, and since the relevant RFCs state that you MUST retry delivery when you encounter certain classes of delivery errors, real mail servers will in almost all cases retry 'after a reasonable amount of time'.

Spammers do not retry. So if we set up our system to say essentially

"My admin told me not to talk to strangers"

- we should be getting rid of anything the sending end does not consider important enough to retry delivering.

The practical implementation is to record for each incoming delivery attempt at least
  1. sender's IP address
  2. the From: address
  3. the To: address
  4. time of first delivery attempt matching 1) through 3)
  5. time delivery of retry will be allowed
  6. time to live for the current entry
At the first attempt, the delivery is rejected with a temporary error code, typically "451 temporary local problem, try again later", and the data above is recorded. Any subsequent delivery attempts matching fields 1) through 3) that happen before the time specified in field 5) are essentially ignored and treated to the same temporary error. When a delivery matching fields 1) through 3) is attempted after the specified time, the IP address (or in some implementations, the whole subnet) is whitelisted, meaning that any subsequent deliveries from that IP address will be passed on to the mail service.

The first release of OpenBSD's spamd to support greylisting was OpenBSD 3.5. spamd's greylisting implementation operates only on individual IP addresses, and by default sets the minimum time before a delivery attempt passes to 25 minutes, the time to live for a greylist entry to 4 hours, while a whitelisted entry stays in the whitelist for 36 days after the delivery of the last message from that IP address. With a properly configured setup, machines that receive mail from your outgoing mail servers will automatically be whitelisted, too.
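Those timings can be changed if the defaults don't suit you: spamd's -G option takes the three values as passtime:greyexp:whiteexp, in minutes, hours and hours respectively. As a sketch, an OpenBSD /etc/rc.conf.local line that simply spells out the defaults described above would be:

```
# spamd greylisting timings, passtime:greyexp:whiteexp --
# 25 minutes before a retry may pass, greylist entries live 4 hours,
# whitelist entries live 864 hours (36 days). These are the defaults.
spamd_flags="-G 25:4:864"
```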

The great advantage of the greylisting approach is that mail sent from correctly configured mail servers will be let through. New correspondents will experience an initial delay before the first message gets through and their IP address is added to the whitelist. The initial delay will vary depending on the combination of your minimum time before passing and the sender's retry interval. Regular correspondents will find that once they have cleared the initial delay, their IP addresses are kept in the whitelist as long as email contact is a regular affair.

And the technique is amazingly effective in removing spam. Reductions of 80% to 95% or better in the number of spam messages are frequently cited, but unfortunately only a few reports with actual numbers have been published. An often-cited report is Steve Williams' message on openbsd-misc (available among other places at marc.info), where Steve describes how he helped a proprietary antispam device cope with an unexpected malware attack. He notes quite correctly that the blocked messages were handled without receiving the message body, so their apparently metered bandwidth use was reduced.

Even after more than four years, greylisting remains extremely effective. Implementing greylisting greatly reduces the load on your content filtering systems, but since messages sent by real mail servers will be let through, it will sooner or later also let a small number of unwanted messages through, and unfortunately it does not eliminate the need for content filtering altogether. You will also occasionally encounter sites that do not play well with greylisting; see the references for tips on how to deal with those.

Do we need blacklists?
With greylisting taking care of most of the spam, is there still a place for blacklists? It's a fair question. The answer depends in a large part on how the blacklists you are considering are constructed and how much you trust the people who generate them and the methods they use.

The theory behind all good blacklists is that once an IP address has been confirmed as a source of spam, it is unlikely that there will be any valid mail sent from that IP address in the foreseeable future.

With a bit of luck, by the time the spam sender gets around to trying to deliver spam to addresses in your domain, the spam sender will already be on the blacklist and will in turn be treated to the s-l-o-w SMTP dialogue.

Knowing how a host makes it into a blacklist is important, but a clear policy for checking that the entries are valid and for removing entries is essential too. Once spam senders are detected, it is likely that their owners will do whatever it takes to stop the spam sending. Another reason to champion 'aggressive maintenance' of blacklists is that it is likely that IP addresses are from time to time reassigned, and some ISPs do in fact not guarantee that a certain physical machine will be assigned the same IP address the next time it comes online.

Your spamd.conf file contains a few suggested blacklists. You should consider carefully which ones to use. Take the time you need to look up the web pages listed in the list descriptions in the spamd.conf file and then decide which lists fit your needs. If you decide to use one or more blacklists, edit your spamd.conf to include those and set up a cron job to let spamd-setup load updated blacklists at regular intervals.
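As a sketch, a root crontab entry along these lines will do; the hourly interval and the minute offset are my choice, not something mandated by spamd, and the path is where OpenBSD installs spamd-setup:

```
# refresh spamd's blacklists from spamd.conf once an hour
27 * * * * /usr/libexec/spamd-setup
```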

The lists I consider the most interesting are the nixspam list, with a 4-day expiry, and the uatraps list, with a 24-hour expiry. The nixspam list is maintained by ix.de, based on their logs of hosts that have verifiably sent spam to their mail servers. The uatraps list is worth looking into too, mainly because it is generated automatically by greytrapping.

Behavior based response: Greytrapping
Greytrapping is yet another useful technique that grew out of hands-on empirical study of spammer behavior, taken from the log data available at ordinary mail servers. You have probably seen spam messages offering lists of "millions of verified email addresses" available. However, verification goes only so far. You can get a reasonable idea of the quality of that verification if you take some time to actually browse mail server logs for failed deliveries to addresses in your domain. In most cases you will find a number of attempts at delivering to addresses that either have never existed or at least have no valid reason to receive mail.

The OpenBSD spamd developers saw this too. They also realized that what addresses are deliverable or not in your own domain is something you have complete control over, and they formulated the following rule to guide a new feature to be added to spamd:
"if we have one or more addresses that we are quite sure will never receive valid email, we can safely assume that any mail sent to those addresses is spam"
That feature was dubbed greytrapping, and it was introduced in spamd in time for the OpenBSD 3.7 release. The way it works is, if a machine that is already greylisted tries to deliver mail to one of the addresses on the list of known bad email addresses, that machine's IP address is added to a special local blacklist called spamd-greytrap. The address stays in the spamd-greytrap list for 24 hours, and any SMTP traffic from hosts in that blacklist is treated to the tarpit for the same period.

This is the way the uatraps list is generated. Bob Beck put a list of addresses he has referred to as 'ghosts of usenet postings past' on his local greytrap list, and started exporting the IP addresses he collects automatically to a freely available blacklist. As far as I know Bob has never published the list of email addresses in his spamtrap list, but the machines at University of Alberta appear to be targeted by enough spammers to count. At the time this article was written, the uatraps list typically contained roughly 120,000 addresses, and the highest number of addresses I have seen reported by my spamd-setup was just over 180,000 (it peaked later at just over 670,000 addresses). See Figure 1 for a graphical representation of the number of hosts in the uatraps list over the period February 2006 through early March 2008.

Figure 1: Hosts in uatraps

By using a well maintained blacklist such as the uatraps list you are likely to add a few more percentage points to the amount of spam stopped before it reaches your content filtering or your users, and you can enjoy the thought of actively wasting spammers' time.

A typical log excerpt for a blacklisted host trying to deliver spam looks like this:

Jan 16 19:55:50 skapet spamd[27153]: 82.174.96.131: connected (3/2), lists: uatraps
Jan 16 19:59:33 skapet spamd[27153]: (BLACK) 82.174.96.131: <bryonRoe@boxerdelasgargolas.com> -> <schurkoxektk@ehtrib.org>
Jan 16 20:01:17 skapet spamd[27153]: 82.174.96.131: From: "bryon Roe" <bryonRoe@boxerdelasgargolas.com>
Jan 16 20:01:17 skapet spamd[27153]: 82.174.96.131: To: schurkoxektk@ehtrib.org
Jan 16 20:01:17 skapet spamd[27153]: 82.174.96.131: Subject: vresdiam
Jan 16 20:02:33 skapet spamd[27153]: 82.174.96.131: disconnected after 403 seconds. lists: uatraps


This particular spammer hung around at a rate of 1 byte per second for 403 seconds (six minutes, forty-three seconds), going through the full dialogue all the way up to the DATA part before my spamd rejected the message back to the spammer's queue.

Figure 2: Connection lengths measured at bsdly.net's spamd

That is a fairly typical connection length for a blacklisted host. Statistics from my sites (see Figure 2) show that most connections to spamd last from 0 to 3 seconds, a few hang on for about 10 seconds, and the next peak is at around 400 seconds. Then there's a very limited number that hang around for anywhere from 30 minutes to several hours, but those are too rare to be statistically significant (and damned near impossible to graph sensibly in relation to the rest of the data).

Interaction with a running spamd: spamdb
Your main interface to the contents of your spamd related data is the spamdb administration program. The command

$ sudo spamdb

without any parameters will give you a complete listing of all entries in the database, whether WHITE, GREY or others. In addition, the program supports a number of different operations on entries in spamd's data, such as adding or deleting entries or changing their status in various ways. For example,

$ sudo spamdb -a 192.168.110.12

will add the host 192.168.110.12 to your spamd's whitelist or update its status to WHITE if there was an entry for that address in the database already. Conversely, the command

$ sudo spamdb -d 192.168.110.12

will delete the entry for that IP address from the database.

For greytrapping purposes, you can add or delete spamtrap email addresses by using a command such as

$ sudo spamdb -T -a wkitp98zpu.fsf@datadok.no

to add that address to your list of spamtrap addresses. To remove the address, you substitute -d for the -a. The -t flag lets you add or delete entries for TRAPPED addresses manually.

Hitting back, poisoning their well: Summary of my field notes
Up until July 2007, I ran my spamd installations with greylisting, supplemented by hourly updates of the uatraps blacklist and a small local list of greytrapping addresses like the one in the previous section, which is obviously a descendant of a message-id, probably harvested from a news spool or from some unfortunate malware victim's mailbox. Then something happened that made me take a more active approach to my greytrapping.

My log summaries showed me an unusually high number of attempted deliveries to non-existent addresses in the domains I receive mail for. Looking a little closer at the actual logs showed spam backscatter: somebody, somewhere had sent a large number of messages with made-up addresses in one of our domains as the From: or Reply-To: addresses, and when the To: address wasn't deliverable either, the bounce messages were sent back to our servers.

The fact that they were generating bounces to the spam messages indicates that any copies of those messages directed at actually deliverable addresses in those domains would have been delivered to actual users' mailboxes, not too admirable in itself.

Another variety that showed up when I browsed the spamd logs was this type:

Jul 13 14:36:50 delilah spamd[29851]: 212.154.213.228: Subject: Considered UNSOLICITED BULK EMAIL, apparently from you
Jul 13 14:36:50 delilah spamd[29851]: 212.154.213.228: From: "Content-filter at srv77.kit.kz" <postmaster@srv77.kit.kz>
Jul 13 14:36:50 delilah spamd[29851]: 212.154.213.228: To: <skulkedq58@datadok.no>

which could only mean that the administrators at that system had not yet learned that spammers no longer use their own From: addresses.

Roughly at that time it struck me:
  1. Spammers, one or more groups, are generating numerous fake and nondeliverable addresses in our domains.
  2. Adding those generated addresses to our local list of spamtraps is mainly a matter of extracting them from our logs.
  3. If we could make the spammers include those addresses in their To: addresses too, it would become even easier to stop incoming spam and shift the spammers to the one-byte-at-a-time tarpit. Putting the trap addresses on a web page we link to from the affected domains' home pages will attract the address-slurping robots sooner or later.
or the short version: Let's poison their well!

(Actually in the first discussions about this with my BLUG user group friends, we referred to this as 'brønnpissing' in Norwegian, which translates as 'urinating in their well'. The more detailed descriptions of the various steps in the process can be tracked via blog entries at http://bsdly.blogspot.com, starting with the entry dated Monday, July 9th, 2007, Hey, spammer! Here's a list for you!.)

Over the following weeks and months I collected addresses from my logs and put them on the web page at http://www.bsdly.net/~peter/traplist.shtml.

After a while, I determined that harvesting the newly generated soon-to-be-spamtrap addresses directly from our greylist data was more efficient and easier to script than searching the mail server logs. Using spamdb, you can extract the current contents of the greylist with

$ sudo spamdb | grep GREY

which produces output in the format

GREY|96.225.75.144|Wireless_Broadband_Router|<aguhjwilgxj@bn.camcom.it>|<bsdly@bsdly.net>|1198745212|1198774012|1198774012|1|0
GREY|206.65.163.8|outbound4.bluetie.com|<>|<leonard159@datadok.no>|1198752854|1198781654|1198781654|3|0
GREY|217.26.49.144|mxin005.mail.hostpoint.ch|<>|<earle@datadok.no>|1198753791|1198782591|1198782591|2|0

where GREY is what you think it is, the IP address is the sending host's address, the third field is what the sender identified as in the SMTP dialogue (HELO/EHLO), the fourth is the From: address, and the fifth is the To: address. The next three are date values for first contact, when the status will change from GREY to WHITE, and when the entry is set to expire, respectively. The final two fields are the number of times delivery has been blocked from that address and the number of connections passed for the entry.

For our purpose, extracting the made up To: addresses in our domains from backscatter bounces, it is usually most efficient to search for the "<>" indicating bounces, then print the fifth field. Or, expressed in grep and awk:

$ sudo spamdb | grep "<>" | awk -F\| '{print $5}' |  tr -d '<>' | sort | uniq


will give you a sorted list of unique intended bounce-to addresses, in a format ready to be fed to a corresponding script for feeding to spamd. The data above and the command line here would produce

earle@datadok.no
leonard159@datadok.no

- in some situations, the list will be a tad longer than in this illustration. This does not cover the cases where the spammers apparently assume that any mail with From: addresses in the local domain will go through, even when it comes from elsewhere. Extracting the fourth column instead

$ sudo spamdb | grep GREY | awk -F\| '{print $4}' | grep mydomain.tld | tr -d '<>' | sort | uniq


will give you a list of From: addresses in your own domain to weed out a few more bad ones from.
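The two extractions above can be combined into one helper that reads spamdb-style data on standard input and emits a deduplicated candidate list. This is just a sketch under my own naming - the function name is not part of any shipped tool, and in production you would pipe live `sudo spamdb` output into it:

```shell
#!/bin/sh
# Sketch: extract spamtrap candidates from spamdb-style input on stdin.
# Usage: sudo spamdb | extract_candidates yourdomain.tld
extract_candidates() {
    dom="$1"
    grep '^GREY' | awk -F'|' -v dom="$dom" '
        $4 == "<>" { print $5 }   # bounce-to addresses from backscatter
        $4 ~ dom   { print $4 }   # forged From: addresses in our own domain
    ' | tr -d '<>' | sort | uniq
}
```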

After a while, I started seeing very visible and measurable effects. At short intervals, we see spam runs targeting the addresses in the published list, working their way down in more or less alphabetical order. For example, in my field notes dated November 25, 2007, I noted

"earlier this month the address capitalgain02@gmail.com 
 started appearing frequently enough that it caught my 
 attention in my greylist dumps and log files.

 The earliest contact as far as I can see was at 
 Nov 10 14:30:57, trying to spam wkzp0jq0n6.fsf@datadok.no 
 from 193.252.22.241 (apparently a France Telecom customer). 
 The last attempt seems to have been ten days later, at 
 Nov 20 15:20:31, from the Swedish machine 217.10.96.36.

 My logs show me that during that period 6531 attempts 
 had been made to deliver mail from capitalgain02@gmail.com 
 via bsdly.net, from 35 different IP addresses, to 131 different 
 recipients in our domains. Those recipients included three 
 deliverable addresses, mine or aliases I receive mail for. 
 None of those attempts actually succeeded, of course."

It is also worth noting that even a decrepit Pentium III 800MHz (since replaced with a Pentium 4 box, donations of more recent hardware gratefully accepted) at the end of the unexciting DSL line to my house has been able to handle about 190 simultaneous connections from TRAPPED addresses without breaking into a sweat. For some odd reason, the number of simultaneous connections at the other sites I manage, which have better bandwidth, has never been as high as at my home gateway.

During the months I've been running the trapping experiment, the number of spamtrap addresses in the published list has grown to more than 10,000 (by May 4th, 2013, the list had grown to 24431 entries). Oddly enough, my greylist scans still turn up a few new ones every few days.

Meanwhile, my users report that spam in their mailboxes is essentially non-existent. On the other side of the fence, there are indications that it may have dawned on some of the spammers that generating random addresses in other people's domains might end up poisoning their own well, so they started introducing patterns to be able to weed out their own made-up addresses from their lists. I take that as a confirmation that our harvesting and republishing efforts have been working rather well.

The method they use is to put some recognizable pattern into the addresses they generate. One such pattern is to take the victim domain name, prepend "dw" and append "m" to make up the local part and then append the domain, so starting from sia.com we get dwsiam@sia.com.

There is one other common variation on that theme, where the prepend string is "lin" and the append string is "met", producing addresses like linhrimet@hri.de. Then, when they use that new, very recognizable address to try to spam my spamtrap address malseeinvmk@bsdly.net, another set of recognition mechanisms is activated, and the sending machine is quietly added to my spamd-greytrap. (We've since seen other patterns come and go; anyone scanning the list at http://www.bsdly.net/~peter/traplist.shtml will find examples of them all.)
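If you want to experiment with spotting these generated addresses yourself, a minimal sketch might look like the following. The function name and the hardcoded pattern list are illustrative only (covering just the two prepend/append schemes described above), not part of any shipped tool:

```shell
#!/bin/sh
# Sketch: check whether an address matches one of the known
# spammer-generated patterns described in the text.
matches_spammer_pattern() {
    addr="$1"
    local_part="${addr%@*}"
    domain="${addr#*@}"
    name="${domain%%.*}"                   # e.g. "sia" from "sia.com"
    case "$local_part" in
        "dw${name}m")    return 0 ;;       # dw<name>m@<domain>
        "lin${name}met") return 0 ;;       # lin<name>met@<domain>
    esac
    return 1
}
```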

And finally, there are clear indications that spammers use slightly defective relay checkers that tend to conclude that a properly configured spamd is an open relay, swelling my greylists temporarily. We already know that the spammers do not use From: addresses they actually receive mail for, and consequently they will never know that those messages were in fact never delivered.

If you've read this far and you're still having fun, you can find other anecdotes I would have had a hard time believing myself a short time back in my field notes at . By the time the magazine has been printed and distributed (or by the time you find this revised article online), there might even be another few tall tales there.

You might also want to read

The Book of PF, 3rd Edition, by Peter N. M. Hansteen, No Starch Press  2014 (covers both pre-4.7 and post-4.7 syntax), available in better bookshops or from the publisher

The Next Step in the Spam Control War: Greylisting, by Evan Harris. Available at http://greylisting.org/articles/whitepaper.shtml

Maintaining A Publicly Available Blacklist - Mechanisms And Principles, April 14, 2013 describes the maintenance regime for the published version of my spamd-greytrap list

In The Name Of Sane Email: Setting Up OpenBSD's spamd(8) With Secondary MXes In Play - A Full Recipe, May 28, 2012, offers another, more OpenBSD-centric, recipe for setting up a spamd based system.



This article originally appeared in BSD Magazine #2, June 2008. This re-publication has suffered only minor updates and edits.

If you're interested in OpenBSD in general, you have a real treat coming up in the form of Michael W. Lucas' Absolute OpenBSD, 2nd edition. If a firewall or other networking is closer to your heart, you could give my own The Book of PF and the PF tutorial (or here) it grew out of a try. You can even support the OpenBSD project by buying the books from them at the same time you buy your CD set; see the OpenBSD Orders page for more information.

Upcoming talks: I'll be speaking at BSDCan 2013, on The Hail Mary Cloud And The Lessons Learned, with a preview planned for the BLUG meeting a couple of weeks before the conference. There will be no PF tutorial at this year's BSDCan; fortunately, my staple tutorial item was crowded out by new initiatives from some truly excellent people.