anti-patterns in security
A proposal to deal with linkspammers: set up a central blackhole where you send everything you spam-rate, and use the feed of URIs from it as an input to your automated spam filter. It has its own web page here. Unfortunately, as I point out, there's a serious flaw in the scheme.
Basically, if we’re going to filter out anything containing links that have already been reported as spam to a feed that’s open to the world at large, we’ve created a means for anyone to censor anything, across the whole ‘sphere. To make it happen: generate spam comments – deliberately, obviously spammy ones that will be immediately recognised and deleted – but include the target URI as part of the payload.
Spammers have been using text taken from old books, or harvested from the Web at random, to fool spam filters for years; more recently, they’ve taken to harvesting text from the very site they’re spamming and including that with the spam. So this doesn’t change existing practice or code very much. Adding legitimate links might well boost your chances of getting the commercial payload through; you’d have to think carefully about whether that was a good or a bad thing, because the attack works best if the spam is detected and submitted to the feed automatically.
Anyway, we run off thousands of spam comments containing links to – say – Sourcewatch, RealClimate, or whoever, as well as links to enralgemypen1s.tv. The spam filters sweep them into the distributed filter feed. Now, anything containing those links is banned from any site that uses the feed to prime its spam filter; and, of course, once one site’s filter starts automatically sweeping them up, their concentration in the crapfeed will only go up.
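To see why the feedback loop is so nasty, here’s a minimal sketch of the flaw – the class and method names are invented for illustration, not any real filter’s API – showing a filter that primes itself from a shared feed of spam-reported URIs:

```python
import re

# Hypothetical sketch: a naive filter that trusts a shared feed of
# URIs harvested from spam-rated comments.
URI_RE = re.compile(r"https?://[^\s\"'<>]+")

class SharedFeedFilter:
    def __init__(self):
        self.bad_uris = set()   # primed from the shared feed of reported URIs

    def report_spam(self, comment):
        # Every URI found in a spam-rated comment goes into the feed --
        # including any legitimate link the spammer bundled in as payload.
        self.bad_uris.update(URI_RE.findall(comment))

    def is_spam(self, comment):
        return any(uri in self.bad_uris for uri in URI_RE.findall(comment))

f = SharedFeedFilter()
# The attacker posts deliberately obvious spam naming the target site:
f.report_spam("BUY CHEAP PILLS http://enralgemypen1s.tv "
              "great analysis at http://www.realclimate.org")
# An innocent comment citing the target is now swept up too:
print(f.is_spam("Good climate piece at http://www.realclimate.org"))  # True
```

Once one subscriber’s filter starts auto-reporting comments that mention the target, every report pushes the URI deeper into the feed, so the ban reinforces itself across every subscribing site.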
Part of the problem here is not looking across the layers. We’ve already had this problem in internetworking, where various schemes to filter out attack traffic dynamically have foundered because the enemy realised that if they could get IP packets routed with arbitrary spoofed source addresses, they could just as easily spoof somebody’s real address – and use that not only to escape blackhole routing, but to have whatever consequences followed sent straight to somebody else. In fact, if you did this to something like a group of anycast DNS servers, you could generate truly epic volumes of traffic heading for the target, traffic coming from sources they couldn’t possibly blacklist.
For example, one IP abuse problem is bogon packets – ones that come from or go to networks that shouldn’t be in the Internet Routing Table, because their addresses aren’t allocated to anyone or are reserved for private or special-purpose use. These are a problem partly because more than one network in more than one location could be using the same addresses, so bad things could happen with the routing protocols, and partly because squatting in bogon space is one way of avoiding responsibility for the stuff you send – spam, malware, denial-of-service attack traffic, etc. Fortunately, you can set your router filters to deny anything from or to any of the networks in the bogon feed provided by Team Cymru and forget all about it.
This works, however, because the bogon feed uses a whitelist approach based on the lists of assigned networks prepared by IANA and the regional registries. (Can anyone guess where I got the idea for the Vfeed from?) Out of the totality of possible IPv4 networks, it subtracts everything that’s been released for use – what’s left is the possible bogon space. If it received the contents of other networks’ bitbuckets, you could bet that someone would hook up a BGP session and inject the whole of Google’s address space, or worse, the whole of Level(3), Akamai Technologies, or the London Internet Exchange’s, and watch half the Internet disappear from their route view and everybody else’s like blips off a radar screen.
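The subtraction is simple enough to sketch. Assuming a toy allocation list standing in for the real IANA/RIR data, the whitelist approach amounts to computing the complement of the allocated prefixes within the whole IPv4 space – Python’s ipaddress module handles the CIDR arithmetic:

```python
from ipaddress import ip_network

# Toy stand-in for the IANA/RIR allocation data; the real lists are far longer.
ALLOCATED = [ip_network("8.0.0.0/8"), ip_network("192.0.0.0/8")]

def bogon_space(allocated):
    """Subtract the allocated prefixes from all of IPv4; whatever
    remains is the possible bogon space."""
    remaining = [ip_network("0.0.0.0/0")]
    for prefix in allocated:
        nxt = []
        for net in remaining:
            if prefix.subnet_of(net):
                # Split net into the pieces that don't contain prefix.
                nxt.extend(net.address_exclude(prefix))
            elif net.subnet_of(prefix):
                pass  # entirely allocated: drop it
            else:
                nxt.append(net)
        remaining = nxt
    return sorted(remaining)

bogons = bogon_space(ALLOCATED)
# Unallocated space is bogon; allocated space never appears in the list:
print(any(ip_network("5.1.2.0/24").subnet_of(b) for b in bogons))   # True
print(any(ip_network("8.8.8.0/24").subnet_of(b) for b in bogons))   # False
```

The point is that the feed is derived from authoritative allocation data, not from anyone’s reports – so there is nothing for an attacker to inject.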
Fortunately, if it were prepared that way, nobody competent would dare use it – and the incompetent usually don’t bother filtering bogon packets, which is why bogons exist in the first place.
Similarly, e-mail spam filters used to send warning messages back to the originating user, until it was noticed that you could set an entirely spurious Reply-To field and deluge all kinds of other people with backscatter that they couldn’t blacklist, because it came from legitimate mail servers.
But the lesson seems to need re-learning quite a bit. Explosives detection would seem to be a field full of promise, both in the negative version of the attack (you spray the smell around, then smuggle the explosives, because the increased concentration of nitrates from them isn’t detectable against the background contamination) and in the positive version (you spread the smell around the passengers so they trip the detectors, causing havoc, and eventually causing the operators to turn the detector sensitivity down – and then you smuggle the explosives). Any attempt to achieve security by comparing a target stream with another depends on the independence of the two streams, just as any attempt to increase bandwidth by combining two parallel streams depends on their independence.
Come to think of it, I’m slightly surprised that I haven’t seen David Cameron poster spam, as that’s a legitimate source that generates content from anonymous Web input.