Archive for January, 2010
Jean-Dominique Merchet writes about the Soviet withdrawal from Afghanistan, and argues that the political preparation and the overall strategy of it give cause for optimism. I’ve thought this before, and then doubted it.
There seems to be something of an outbreak of optimism after the London conference last week; here’s McChrystal, saying he expects that by December, a significant chunk of the Taliban will have gone quiet – notably, he seems to expect a lot of semi-supporters to jack it in quietly. Well, two Friedman units, eh. A significant tribe has apparently agreed to sign up with the Afghan government.
Ahmed Rashid has a must-read on talks during 2009. He argues that the major diplomatic story in this is that the Saudi and Afghan intelligence services have developed their own contacts to the Taliban, bypassing the Pakistani ISI.
al Sahwa discusses the Taliban shadow administration, which has at least some presence in 33 of Afghanistan’s 34 provinces and is a key element in any guerrilla army. NATO has picked a head of its civilian activities in Afghanistan.
Obviously, if part of the strategy is to bypass the ISI as interlocutors, then it’s going to be indispensable to compensate Pakistan on the other side of the table. As said here:
It would be a really fruitful thing for the Obama administration to start involving itself in an India-Pakistan peace process
However, this doesn’t seem to be happening. You’d think the point that there are limits to what Pakistan can do without pushing public tolerance too far, and that they need time to consolidate after last year’s fighting, would be obvious in the context of a “surge” based on counter-insurgency principles.
The Economist has a rather good story from Waziristan.
This feels wrong – but it looks like Prinny did the right thing. I wonder what Nigel Lawson or Monckton will say the next time they meet the heir to the throne?
A little more Haitian logistics.
That’s what the container terminal looked like this week (from here). Nathan Hodge of Wired has two good pieces, about progress reopening the harbour and bringing in a huge barge full of drinking water, and surveying the bottom of the harbour, which may not be in the same place any more.
Meanwhile, we get links. I have to say I really don’t know what to say about that suggestion, except that sending anything that needs refrigerating sounds no more helpful than sending anything that needs making up with water…
I’d also like to send the blog’s good wishes to an old friend who’s actually being deployed by their NGO.
Perhaps we shouldn’t be so hard on the British wanktank movement. Here’s an example of the US version. It turns out that the people responsible for a break-in at U.S. Senator Mary Landrieu’s office, who apparently posed as telecoms company technicians in an attempt to “tamper” with the phones, were led by a character who was an “Undergraduate Fellow of the Foundation for the Defense of Democracies”. He later acted as “assistant director” of a program at Trinity Washington University whose purpose was:
to introduce students in liberal arts colleges to concepts in intelligence studies and potential careers in intelligence
Is it out of place to note that the FDD is one of the stops on Alexander Meleagrou-Hitchens’s World Tour of Wanktankery? And does anyone wonder if rent-a-quoting seems a tad dull compared to actually getting your G. Gordon Liddy on? Give him time; yer man also acted as the Operations Officer of a Department of Defense irregular warfare fellowship program, which is close enough to being a media “terrorism expert” for folk music.
I’m amused by the descriptions here and here; their defence is that they “wanted to know how they would react if the phones were inoperative”. According to witnesses, they fiddled with a phone, called it or pretended to on a mobile device, and announced that they couldn’t reach it, presumably in order to claim that there was a fault and they were there to fix it.
Strangely, for people with absolutely no evil intent, they seem to have replicated a 1950s MI5 bugging operation: declare a “fault” on the phones, then arrive posing as telephone engineers. Of course, the key element was that the real thing could ask the phone company to stage a deliberate fault, something this lot appear to have missed.
It does make you wonder why we’re having a USA Day thanks to Boris Johnson. Surely it can have nothing to do with Dan Ritterband‘s past directorship of Policy Exchange, which shares an address with the Centre for Social Cohesion, Douglas Murray’s aggressive neo-con thinktank and AMH’s current gig.
Ritterband’s LinkedIn profile describes him as Communications Manager of the Conservative Party; he’s also Boris’s marketing director, and a major figure in Michael Howard’s 2005 election campaign and David Cameron’s leadership campaign. Which means that he’s responsible for a significant percentage of the most vomitous public speech of the last decade.
(If you’ve ever wondered how Bill Roggio gets his access, btw, wonder no more.)
Patrick “unseasonably mild” Wintour’s predictably friendly piece on Blair going before the Iraq inquiry is unintentionally disturbing:
No prime minister is indifferent to his or her legacy, and however much he feels stale controversies are being aired with little new public evidence, he knows tomorrow will be important for him, and his future public life as world statesman, Middle East envoy, spiritual healer and businessman.
Spiritual healer? As if the “world statesman” bit wasn’t hilarious enough, or the “businessman” bit, as opposed to “boardroom table ornament”. The whole piece is sourced to “friends”, British newspaper and especially Blairite code for “his PR people wanted to get this out”, so presumably he actually believes this or at least tolerates his media advisers saying it.
Meanwhile, in his defence, he argues that Iraq was a regime that had already used WMD, and therefore we can’t permit a regime like Iran to have them (around 10:11). The rest is here, if you care for a trip down memory lane. Alternatively, you could just vote Conservative.
So, are the Americans really “prioritising foreign soldiers over aid” in Haiti? Thankfully, the national press tried to answer this question with facts. Well, not really. Spencer Ackerman and Laura Rozen actually asked intelligent questions rather than the usual “Two days after the giant earthquake destroyed all port facilities, critics asked why UN aid was still taking so long to arrive in the stricken region. After all, this 70kg journalist and his 88g sat phone got through just fine…anyone been raped and speak English?” stuff.
Apparently there are about 140 air movements through Port-au-Prince daily on average, of which 50% are allocated to NGOs and the rest to US and other government aircraft. The rate reached 200 on the Sunday following the earthquake; this is despite there being no radar or radio navigation aids since the earthquake.
A logistics system is a linear production process. Computer people would prefer to think of it as a loop construct. Therefore, the total capacity is determined by throughput – by the rate at which it loops. That, in turn, is set by the slowest element of the process.
In this one, goods are loaded on aircraft, which then take off, fly to Haiti, land, unload, take off, and return. Now, the aircraft can leave from many, many different airports, so I think we can rule out that step as the limiting factor. For landing, the minimum separation between planes is the limiting factor; the standard 3 miles’ horizontal separation is a minute’s flying time at 180 mph (156 knots – the approach to Heathrow is flown to 4 miles out at 160kts), and the wake-turbulence separation behind a heavy aircraft like a 747 is three minutes. So the maximum separation will be four minutes or thereabouts; a lot of the aircraft being used can slow down much faster on final approach and some are less heavy, so it’s probably somewhat less.
One aircraft every three minutes on one runway gives us 20 movements an hour. If they aren’t going to pile up there, they’ve got to leave at the same rate, so that’s 10 in, 10 out, and we’d reach 140 movements in 7 hours. Considering that the landing lights don’t work and the control is visual, they can’t be far off operating at capacity. But that’s still not the last word.
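The arithmetic above can be sketched in a few lines; the separation time and the 140-movement daily figure are from the text, and the fixed-separation assumption is my simplification:

```python
# Back-of-envelope runway capacity from the separation figures above.
# Single runway, fixed time between movements (a simplifying assumption;
# real separation varies with aircraft type and conditions).

def movements_per_hour(separation_minutes):
    """How many take-offs and landings one runway handles per hour."""
    return 60 // separation_minutes

hourly = movements_per_hour(3)     # one movement every three minutes
arrivals_per_hour = hourly // 2    # half landing, half departing
hours_to_140 = 140 / hourly        # daily average reported from Port-au-Prince

print(hourly, arrivals_per_hour, hours_to_140)   # 20 10 7.0
```

Which is why 140 movements a day, flown visually in daylight only, is already close to the ceiling.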
You may be able to land a plane every three minutes, but you probably can’t unload it in three minutes. And you’ve got to return the empties, as well; you can temporarily up the rate by using more of the movements budget for landings, but eventually this will mean you have to stop to send aircraft back. The best performance is achieved by operating at the highest average capacity. So, the slowest part of the process is probably unloading and turn-around more generally.
This brings up another issue. If we start the day with 10 aircraft arriving an hour, that then spend 3 hours on the ground, 30 aircraft will be there before the first one leaves; we’ll need at least 30 parking spots.
In fact, because unloading is the most restrictive step in the process, it’s optimal to always have a queue. Otherwise, there will be moments when the most scarce resource in the whole thing – a forklift truck and its driver – will be waiting for cargo to move, at which point we’re operating below capacity and wasting time.
In practice, this will be the operational limiting factor; it doesn’t matter if a plane has to wait to unload, but it does if the next one can’t leave the runway and the one after that has to go around and divert. So, we’ve arrived. Our limiting factors are forklifts and parking.
The USAF air traffic controllers announced a maximum two-hour turnaround on the night after the earthquake, and further insisted that all aircraft arriving in the airfield circuit have enough fuel to go to their destinations without refuelling, there being (obviously) none to spare in Haiti.
You get 11 hours of daylight there at this time of year, and presuming that it’s still visual-only, that’s 12 movements an hour, six in, six out. That’s about 0.25 Heathrows. With a maximum turnaround of two hours, this implies that there are 12 parking spaces available on the ramp. That should also explain what happened to that MSF flight; apparently another aircraft went technical on the ground, blew its two-hour slot, MSF had to go around, and David Aaronovitch and Noam Chomsky were at once united in blowhardry, not for the first time.
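The parking arithmetic here, and in the earlier 30-spot example, is just Little’s law: the average number of aircraft on the ground equals the arrival rate times the average turnaround time. A minimal sketch using the figures from the text:

```python
def ramp_slots_needed(arrivals_per_hour, turnaround_hours):
    """Little's law: aircraft on the ground = arrival rate x dwell time."""
    return arrivals_per_hour * turnaround_hours

# 10 arrivals an hour spending 3 hours on the ground -> 30 parking spots
print(ramp_slots_needed(10, 3))   # 30
# 6 arrivals an hour under the two-hour USAF turnaround limit -> 12 spots
print(ramp_slots_needed(6, 2))    # 12
```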
Actually, come to think of it, you can get the NOTAMs for Port-au-Prince by going here and searching for MTPP. It turns out that an instrument flight plan is mandatory…just like at Heathrow, and they want to know how much weight, how many items of rolling stock (!), and how many passengers you have. Oh, and:
M0004/10 – QXXXX PALLETS DOWNLOAD AT MTPP WILL NOT BE RETURN TO USERS. 17 JAN 02:15 2010 UNTIL UFN. CREATED: 17 JAN 02:14 2010
In return for putting up with that, here’s some logistics porn.
Major offshore petroleum discharge systems (OPDS) components are: the OPDS tanker with booster pumps and spread mooring winches; a recoverable single anchor leg mooring (SALM) to accommodate tankers of up to 70,000 deadweight tons; ship to SALM hose lines; up to 4 miles of 6-inch (internal diameter) conduit for pumping to the beach; and two BTUs to interface with the shoreside systems.
Deploying the anchor element requires counterflooding the ship onto one beam so far over that the decks are awash. Relatedly, you know a navy is serious when it calls out ugly ships manned by civilians. The French amphibious command ship Siroco is on her way; RFA Largs Bay is too, replacing this entry on the RN Blog as the UK’s lead response.
Rather less depressing: Wired reports on the array of open-source IT tools for disaster relief getting their first use in earnest in Haiti. I remember when your main source for things like Google Earth overlays of aerial photos was Kathryn Cramer, and that was in the United States. However, there’s something I saw that’s worth drawing attention to.
Here’s Bill Woodcock on NANOG, talking sense:
They’ve already got that, but “faster” only in the sense that it’s already done… They’re limited to a few STM1s, which were quickly overwhelmed by the relief workers. This is a common problem in disaster relief, we saw it particularly when we were working in Indonesia and Thailand during the tsunami… An area that had quite modest Internet usage, and infrastructure which may not be great, but is sufficient to its present requirements, gets a flood of relief workers in who all want to use Skype simultaneously, and determine that the perfectly-functional and previously-sufficient Internet is “broken” and needs to be reengineered.
The existing chain of microwave relays is the Haitian ISPs’ fix for the problem of Teleco having a monopoly fiber landing and setting astronomical prices on access to it.
I’m not interested in reengineering anything, but I am interested in making sure that if aid money goes to the incumbent to fix their fiber, at least the community gets something out of it in the form of the monopoly being broken. Otherwise the fiber being fixed does no one any good, because they still won’t be able to use it, same as before the earthquake.
It’s very easy to spend money and make things worse than they were before
He’s referring to the Haitian submarine cable landing, which was destroyed, although the fibre itself may still be present, and the fact that they did have alternative connectivity to the Dominican Republic by microwave link. I do like the point about relief workers with MacBooks (and corporate preening PR men back at headquarters pressing for teh videos for the nine o’clock news) as a denial-of-service attack, however.
The NANOG community has been helping in various ways, including by finding ways for the engineer in charge of their NAP to get his family out of the country, diesel for the backup generators, and such.
Fortunately, most of the useful stuff except for mapping is low-bandwidth, voice and messaging. However, that usually means GSM or satellite, with the result that radio spectrum allocation gets to be a problem. Who knew that “disaster area spectrum allocation specialist” is a job title?
Well, this is hardly surprising; the FBI was in the habit of pretending to be on a terrorism case every time they wanted telecoms traffic data. Their greed for call-detail records is truly impressive. Slurp! Unsurprisingly, the lust for CDRs and the telcos’ eagerness to shovel them in rapidly got the better of their communications analysis unit’s capacity to crunch them.
Meanwhile, Leah Farrell wonders about the problems of investigating “edge-of-network” connections. Obviously, these are going to be the interesting ones. Let’s have a toy model; if you dump the CDRs for a group of suspects, 10 men in Bradford, and pour them into a visualisation tool, the bulk of the connections on the social network graph will be between the terrorists themselves, which is only of interest for what it tells you about the group dynamics. There will be somebody who gets a lot of calls from the others, and they will probably be important; but as I say, most of the connections will be between members of the group because that’s what the word “group” means. If the likelihood of any given link in the network being internal to it isn’t very high, then you’re not dealing with anything that could be meaningfully described as a group.
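A quick simulation of that toy model; the group of ten is from the text, but the population size, call counts and in-group calling probability are numbers I’ve invented purely for illustration:

```python
import random

random.seed(0)
POPULATION = 100_000    # everyone else in the phone book (invented figure)
GROUP = range(10)       # the ten suspects
P_INTERNAL = 0.8        # chance a suspect's call stays in-group (invented)

edges = []
for caller in GROUP:
    for _ in range(40):  # 40 calls each in the CDR dump (invented)
        if random.random() < P_INTERNAL:
            # most calls go to other members of the group
            callee = random.choice([n for n in GROUP if n != caller])
        else:
            # the rest go to somebody in the wider population
            callee = random.randrange(10, POPULATION)
        edges.append((caller, callee))

internal = sum(1 for _, callee in edges if callee < 10)
print(f"{internal} of {len(edges)} edges are internal to the group")
```

Run it and the overwhelming majority of edges land inside the group, exactly as the model predicts; the interesting leads are the thin minority poking out of the edge.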
By definition, though, if you’re trying to find other terrorists, they will be at the edge of this network; if they weren’t, they’d either be in it already, or else they would be multiple hops away, not yet visible. So, any hope of using this data to map the concealed network further must begin at the edge of the sub-network we know about. And the principle that the ability to improve a design occurs primarily at the interfaces – which are also the prime location for screwing it up – points this way too.
But there’s a really huge problem here. The modelling assumptions are that a group is defined by being significantly more likely to communicate among itself than with any other subset of the phone book, that the group is small relative to the world around it, and that it is boring; everyone has roughly similar phoning behaviour, and therefore who they call is the question that matters. I think these are reasonable.
The problem is that it’s exactly at the edge of the network that the number of possible connections starts to curve upwards, and that the density of suspects in the population falls. Some more assumptions: an average node talks to x others, with calls distributed among them on a well-behaved curve. Therefore, the set of possibilities is multiplied by x for each link you follow outwards; even if you pick only the top 10% of the calling distribution, you’re going to fall off the edge as the false positives pile up. After three hops with x=8, we’re looking at 512 contacts from the top 10% of the calling distribution alone.
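The blow-up is simple exponentiation: if each node contributes x onward contacts even after pruning, the candidate pool multiplies by x per hop. With the text’s figures, plus an invented base rate for genuine suspects to show how badly the false positives swamp everything:

```python
def candidates_after_hops(contacts_per_node, hops):
    """Possible leads after following links outward from one known node."""
    return contacts_per_node ** hops

leads = candidates_after_hops(8, 3)
print(leads)                            # 512, as in the text

base_rate = 1 / 10_000                  # invented prior: genuine suspects
expected_true = leads * base_rate
print(leads, round(expected_true, 3))   # 512 0.051
```

In other words, three hops out you hold 512 names and, on this (invented but generous) prior, a fraction of one real suspect; essentially everything on the screen is a false positive.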
In fact, it’s probably foolish to assume that suspects would be in the top 10% of the distribution; most people have mothers, jobs, and the like, and you also have to imagine that the other side would deliberately try to minimise their phoning or, more subtly, to flatten the distribution by splitting their communications over a lot of different phone numbers. Actually, one flag of suspicion might be people who were closely associated by other evidence who never called each other, but the false positive rate for that would be so high that it’s only realistically going to be hindsight.
Conclusions? The whole project of big-scale database-driven social network analysis is based on the wrong assumptions, which are drawn either from military signals intelligence or from classical policing. Military traffic analysis works because it assumes that the available signals are a subset of a much bigger total, and that this total is large compared to the world. This makes sense because that’s what the battlefield of electronic warfare is meant to look like – cleared of civilian activity, dominated by one side or the other’s military traffic. Working from the subset of enemy traffic that gets captured, it’s possible to infer quite a lot about the system it belongs to.
Police investigation works because it limits the search space and proceeds along multiple lines of enquiry; rather than pulling CDRs and assuming the three commonest numbers must be suspects, it looks for suspects based on the witness and forensic evidence of the case, and then uses other sources of data to corroborate or refute suspicion.
To summarise, traffic analysis works on the assumption that there is an army out there. We can only see part of it, but we can make inferences about the rest because we know there is an army. Police investigation works on the observation that there has been a crime, and the assumption that probably, only a small number of people are possible suspects.
So, I’m a bit underwhelmed by projects like this. One thing that social network datamining does, undoubtedly, achieve is to create handsome data visualisations. But this is dangerous; it’s an opportunity to mistake beauty for truth. (And they will look great on a PowerPoint slide!)
Another, more insidious, more sinister one is to reinforce the assumptions we went into the exercise with. Traffic-analysis methodology will produce patterns; our brains love patterns. But the surge of false positives means that once you get past the first couple of hops, essentially everything you see will be a false positive result. If you’ve already primed your mind with the idea that there is a sinister network of subversives everywhere, techniques like this will convince you even further.
Unconsciously, this may even be the purpose of the exercise – the latent content of Evan Kohlmann. At the levels of numbers found in telco billing systems, everyone will eventually be a suspect if you just traverse enough links.
Which reminded me of Evelyn Waugh, specifically the Sword of Honour trilogy. Here’s his comic counterintelligence officer, Colonel Grace-Groundling-Marchpole:
Colonel Marchpole’s department was so secret that it communicated only with the War Cabinet and the Chiefs of Staff. Colonel Marchpole kept his information until it was asked for. To date that had not occurred and he rejoiced under neglect. Premature examination of his files might ruin his private, undefined Plan. Somewhere, in the ultimate curlicues of his mind, there was a Plan.
Given time, given enough confidential material, he would succeed in knitting the entire quarrelsome world into a single net of conspiracy in which there were no antagonists, only millions of men working, unknown to one another, for the same end; and there would be no more war.
Want a positive idea? One reading of this and this would be that the failure of intelligence isn’t a failure to collect or analyse information about the world – or rather, it is, but one caused by a failure to collect and analyse information about ourselves.
A proposal to deal with linkspammers – set up a central blackhole where you send everything that you spam-rate, and use the feed of URIs from it as an input to your automated spam filter. Its own web page is here. Unfortunately, as I point out, there’s a serious flaw here.
Basically, if we’re going to filter out anything containing links that have already been reported as spam into a feed that’s open to the world at large, we’ve created a means for anyone to censor anything, across the whole ‘sphere. To make it happen: generate spam comments, deliberately spammy spam comments that will be immediately recognised and deleted, but include the target URI as part of the payload.
Spammers have been using text taken from old books, or harvested from the Web at random, in order to fool spam filters for years; more recently, they’ve taken to harvesting text from the site they’re spamming, and including that with the spam, so as to fool the spam filter. So this doesn’t change existing practice or code very much. And adding legitimate links might well boost your chances of getting the commercial payload through; you’d have to think carefully about whether this was a good or a bad thing, as this attack will work best if the spam is detected and submitted automatically.
Anyway, we run off thousands of spam comments containing links to – say – Sourcewatch, RealClimate, or whoever, as well as links to enralgemypen1s.tv. The spam filters sweep them into the distributed filter feed. Now, anything containing those links is banned from any site that uses the feed to prime its spam filter; and, of course, once one site’s filter starts automatically sweeping them up, their concentration in the crapfeed will only go up.
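The attack fits in a few lines. This is a toy sketch of the flaw, not anyone’s real implementation; the feed, filter and URLs are all hypothetical:

```python
# A shared blackhole of spam-rated URIs, as in the proposal above.
crapfeed = set()

def report_spam(comment_links):
    """Any site that spam-rates a comment feeds its links to everyone."""
    crapfeed.update(comment_links)

def filter_allows(comment_links):
    """A site priming its filter from the feed rejects known-bad links."""
    return not any(link in crapfeed for link in comment_links)

# The attacker posts obvious spam that bundles the target URI as payload;
# some site's filter catches it and reports every link it contains:
report_spam({"http://enralgemypen1s.tv", "http://realclimate.org"})

# Now a perfectly legitimate comment citing the target is censored
# on every site that trusts the feed:
print(filter_allows({"http://realclimate.org"}))   # False
```

The attacker never has to compromise anything: the spam being *caught* is the delivery mechanism.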
Part of the problem here is not looking across the layers; we’ve already had this problem in internetworking, where various schemes to filter out attack traffic dynamically have foundered because the enemy realised that if they could get IP packets routed with arbitrary source addresses, they could just as easily set somebody else’s real addresses, and further, they could use this not just to escape blackhole routing, but to have whatever consequences followed sent straight to somebody else. In fact, if you did this to something like a group of anycast DNS servers, you could generate truly epic volumes of traffic heading for the target, traffic that came from sources they couldn’t possibly blacklist.
For example, one IP abuse problem is bogon packets – ones that come from or go to networks that shouldn’t be in the Internet Routing Table, because their addresses aren’t allocated to anyone or they’re reserved for private or special purpose use. These are a problem partly because this means that more than one network in more than one location could be under the same address, and therefore that bad things could happen with the routing protocols, and partly because squatting in bogon space is one way of avoiding responsibility for stuff you send, like spam, malware, denial-of-service attack traffic, etc. Fortunately, you can set your router filters to deny anything from or to any of the networks in the bogon feed provided by Team Cymru and forget all about it.
This works, however, because the bogon feed uses a whitelist approach based on the lists of assigned networks prepared by IANA and the regional registries. (Can anyone guess where I got the idea for the Vfeed from?) Out of the totality of possible IPv4 networks, it subtracts everything that’s been released for use – what’s left is the possible bogon space. If it received the contents of other networks’ bitbuckets, you could bet that someone would hook up a BGP session and inject the whole of Google’s address space, or worse, the whole of Level(3), Akamai Technologies, or the London Internet Exchange’s, and watch half the Internet disappear from their route view and everybody else’s like blips off a radar screen.
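The whitelist-and-subtract construction is easy to demonstrate. A toy sketch: the “allocated” list here is a two-entry stand-in for the real IANA/RIR registry data, and the prefixes are assumed to be either disjoint from or wholly inside each remaining piece, which holds for registry-style data:

```python
import ipaddress

# Start from the entire IPv4 space...
bogons = [ipaddress.ip_network("0.0.0.0/0")]

# ...and subtract everything released for use (toy stand-in list).
allocated = [ipaddress.ip_network("8.0.0.0/8"),
             ipaddress.ip_network("192.0.0.0/8")]

for net in allocated:
    bogons = [piece
              for b in bogons
              for piece in (b.address_exclude(net)
                            if net.subnet_of(b) else [b])]

# Whatever is left over is the possible bogon space.
print(len(bogons), "bogon prefixes remain")
```

The point of doing it this way round is exactly the one in the text: the output can only ever shrink as more space is allocated, so nobody can inject Google’s prefixes into it.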
Fortunately, if it was prepared in that way, nobody competent would dare use it, and the incompetent usually don’t bother filtering bogon packets…which is why they exist.
Similarly, e-mail spam filters used to send warning messages back to the originating user, until it was noticed that you could set entirely spurious Reply-To fields and deluge all kinds of other people with crap that they couldn’t blacklist because it came from legitimate mail servers.
But the lesson seems to need re-learning quite a bit; explosives detection would seem to be a field full of promise, both in the negative version of the attack (you spray around the smell, and then you smuggle the explosives, because the increased concentration of nitrates from them isn’t detectable against the background contamination) and in the positive version (you spread the smell around the passengers, so they trip the detectors, causing havoc, and eventually causing them to turn the detector sensitivity down – and then you smuggle the explosives). Any attempt to achieve security by comparing a target stream with another depends on the independence of the two streams, just as any attempt to increase bandwidth by combining two parallel streams depends on their independence.
Come to think of it, I’m slightly surprised that I haven’t seen David Cameron poster spam, as that’s a legitimate source that generates content from anonymous Web input.