Archive for the ‘class’ Category

So, why did we get here? Back in the mists of time, in the US Bell System, there used to be something called a Business Office, by contrast to a Central Office (i.e. what we call a BT Local Exchange in the UK), whose features and functions were set down in numerous Bell System Practice documents. Basically, it was a site where the phone company took calls from the public, either for its own account or on behalf of a third party. Its practices were defined by Bell System standardisation, and its industrial relations were defined by the agreement between AT&T and the unions, which specified the pay and conditions for the various trades and workplace types inside the monster telco. If something was a Business Office according to the book, the union agreement covering those offices would apply.

In the Reaganite 80s, after the Bell System was broken up, someone realised that it would be possible to get rid of the union rules if they could re-define the site as something else. Not only could they change the rules, but they could move the site physically to a right-to-work state or even outside the USA. This is, it turns out, the origin of the phrase “call centre”.

In the UK, of course, call centres proliferated in parallel with utility privatisation and financial deregulation. A major element in the business case for privatisation was getting rid of all those electricity showrooms and BT local offices and centralising customer service functions into call centres. At the same time, of course, privatisation created the demand for customer service in that it was suddenly possible to change provider and therefore to generate a shit-load of admin. Banks were keen to get rid of their branches and to serve the hugely expanding credit card market. At another level, IT helpdesks made their appearance.

On the other hand, hard though it is to imagine it now, there was a broader vision of technology that expected it all to be provided centrally – in the cloud, if you will – down phone lines controlled by your favourite telco, or by the French Government, or perhaps Rupert Murdoch. This is one of the futures that didn’t happen, of course, because PCs and the web happened instead, but you can bet I spent a lot of time listening to people as late as the mid-2000s still talking about multimedia services (and there are those who argue this is what stiffed Symbian). But we do get a sneak-preview of the digital future that Serious People wanted us to have, every time we have to ring the call centre. In many ways, call centres are the Anti-Web.

In Britain, starting in the 1990s, they were also part of the package of urban regeneration in the North. Along with your iconic eurobox apartments and AutoCAD-shaped arts centre, yup, you could expect to find a couple of gigantic decorated sheds full of striplighting and the precariat. Hey, he’s like a stocky, Yorkshire Owen Hatherley. After all, it was fairly widely accepted that even if you pressed the button marked Arts and the money rolled in, there was a limit to the supply of yuppies and there had to be some jobs in there as well.

You would be amazed at the degree of boosterism certain Yorkshire councils developed on this score, although you didn’t need top futurist Popcorn Whatsname to work out that booming submarine cable capacity would pretty quickly make offshoring an option. Still, if Bradford didn’t make half-arsed attempts to jump on every bandwagon going, leaving it cluttered with vaguely Sicilian failed boondoggles, it wouldn’t be Bradford.

Anyway, I think I’ve made a case that this is an institution whose history has been pathological right from the start. It embodies a fantasy of managing a service industry in the way the US automakers were doing at the same time – and failing, catastrophically.

What is it that makes call centres so uniquely awful as social institutions? This is something I’ve often touched on at Telco 2.0, and also something that’s been unusually salient in my life recently – I moved house, and therefore had to interact with getting on for a dozen of the things, several repeatedly. (Vodafone and Thames Water were the best, npower and Virgin Media the worst.) But this isn’t just going to be a consumer whine. In an economy that is over 70% services, the combination of service design, technology, and social relations that makes these things so awful is something we need to understand.

For example, why does E.ON (the electricity company, the UK arm of the German utility of the same name) want you to tell their IVR what class you are before they do anything else? This may sound paranoid, but when I called them, the first question I had to answer was whether I owned my home or was a tenant. What on earth did they want to know that for?

Call centres provide a horrible experience to the user. They are famously awful workplaces. And they are also hideously inefficient – some sites experience levels of failure demand, that is to say calls generated by a prior failure to serve, of over 50% of total inbound calls. Manufacturing industry has long recognised that rework is the greatest enemy of productivity, taking up disproportionate amounts of time and resources and inevitably never quite fixing the problems.

So why are they so awful? Well, I’ll get to that in the next post. Before we can answer that, we need to think about how they are so awful. I’ve made a list of anti-patterns – common or standard practices that embody error – that make me angry.

Our first anti-pattern is queueing. Call centres essentially all work on the basis of oversubscription and queueing. On the assumption that some percentage of calls will go away, they save on staff by queueing calls. This is not the only way to deal with peaks in demand, though – for example, rather than holding calls, there is no good technical reason why you couldn’t instead have a call-back architecture, scheduling a call back sometime in the future.
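
Just to make the distinction concrete, here's a minimal sketch of what a call-back architecture might look like – hypothetical names throughout, nothing vendor-specific. The point is that you park a record of the promise to call someone back, not the person themselves:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch: park a promise to call back, not a human on hold.
@dataclass(order=True)
class Callback:
    due: float                         # epoch seconds of the promised call-back
    number: str = field(compare=False)
    reason: str = field(compare=False)

class CallbackBook:
    """Time-ordered book of promised call-backs; agents work it from the top."""
    def __init__(self):
        self._heap = []

    def promise(self, number, reason, slot):
        heapq.heappush(self._heap, Callback(slot, number, reason))
        return slot                    # tell the caller when to expect the call

    def next_due(self):
        return heapq.heappop(self._heap) if self._heap else None
```

The peak in demand still has to be served, but it lands on the schedule rather than on the customers' afternoons.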

Waiting on hold is interesting because it represents an imposition on the user – because telephony is a cool medium in McLuhan’s terminology, one that demands the listener’s participation, your attention is monopolised while you sit pointlessly in the queue. In essence, you’re providing unpaid labour. Worse, companies are always tempted to impose on you while you wait – playing music on hold (does anybody actually like this?), or worse, nagging you about using the web site. We will see later on that this is especially pointless and stupid.

And the existence of the queue is important in the social relations of the workplace. If there are people queueing, it is obviously essential to get to them as soon as possible, which means there is a permanent pressure to speed up the line. Many centres use the queue as an operational KPI. It is also quality-destroying, in that both workers’ and managers’ attention is always focused on the next call and how to get off the current call in order to get after the queue.

A related issue is polling: that is to say, repeatedly checking on something rather than being informed pro-actively when it changes. This is of course implicit in the queueing model. It represents a waste of time for everyone involved.
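
The contrast is easy to see in code. A hedged sketch, with invented names and no particular stack in mind: the polling version burns a check every interval whether or not anything has happened, the push version costs nothing until something actually changes.

```python
import time

# Polling: the interested party burns time asking "has it changed yet?"
def poll_until(done, interval_s=60):
    while not done():
        time.sleep(interval_s)        # a wasted check, every interval, for everyone

# Push: the system holds a list of watchers and tells them when state changes.
class Ticket:
    def __init__(self):
        self._watchers = []
    def watch(self, callback):
        self._watchers.append(callback)
    def resolve(self):
        for notify in self._watchers:
            notify(self)              # one event, zero wasted checks
```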

Repetition is one of the most annoying of the anti-patterns, and it is caused by statelessness. It is always assumed that this interaction has never happened before, will never happen again, and is purely atomised. They don’t know what happened in the last call, or even earlier in the call if it has been transferred. As a result, you have to provide your mother’s maiden name and your account number, again, and they have to retype it, again. The decontextualised nature of interaction with a call centre is one of the worst things about it.

Pretty much every phone system these days uses SIP internally, so there is no excuse for not setting a header with a unique identifier that could be used to look up data in all the systems involved – and indeed given out to the user as a ticket number in case they need to call again, or, why not, used to share the record of the call with them.
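
SIP already stamps every call with a globally unique Call-ID header, so the plumbing exists; all that's missing is the will. A sketch of the idea – the X-Ticket-ID header name is my invention for illustration, not any standard:

```python
import uuid

def tag_invite(raw_invite):
    """Stamp a raw SIP INVITE with a hypothetical X-Ticket-ID header, so the
    IVR, the CRM, and the agent's screen can all key their records to it."""
    ticket = str(uuid.uuid4())
    headers, sep, body = raw_invite.partition("\r\n\r\n")
    return ticket, headers + f"\r\nX-Ticket-ID: {ticket}" + sep + body
```

Read the ticket back to the caller at the end of the call, and the next call can start from the record of this one instead of from nothing.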

That point leads us to another very important one. Asymmetric legibility characterises call centres, and it’s dreadful. Within, management tries to maintain a panopticon glare at the staff. Without, the user faces an unmapped territory, in which the paths are deliberately obscure, and the details the centre holds on you are kept secret. Call centres know a lot about you, but won’t say; their managers endlessly spy on the galley slaves; you’re not allowed to know how the system works.

So no wonder we get failure demand, in which people keep coming back because it was so awful last time. A few companies get this, and use first-call resolution (the percentage of cases that are closed first time) as a KPI rather than call rates, but you’d be surprised how few. Obviously, first-call resolution has a whole string of social implications – it requires re-skilling of the workforce and devolution of authority to them. No wonder it’s rare.
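
For what it's worth, the metric itself is trivial – the hard part is the social change, not the arithmetic. A crude sketch, assuming repeat calls about the same issue are a reasonable proxy for failure demand:

```python
from collections import defaultdict

def first_call_resolution(calls):
    """calls: (customer_id, issue_id) pairs, one per inbound call.
    FCR = share of issues that never generated a second call."""
    tally = defaultdict(int)
    for customer, issue in calls:
        tally[(customer, issue)] += 1
    if not tally:
        return 0.0
    return sum(1 for n in tally.values() if n == 1) / len(tally)

# first_call_resolution([("A", 1), ("A", 1), ("B", 2)]) -> 0.5
```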

Now, while we were in the queue, the robot voice kept telling us to bugger off and try the Web site. But this is futile. Inappropriate automation and human/machine confusion bedevil call centres. If you could solve your problem by filling in a web form, you probably would have done. The fact you’re in the queue is evidence that your request is complicated, that something has gone wrong, or generally that human intervention is required.

However, exactly this flexibility and devolution of authority is what call centres try to design out of their processes and impose on their employees. The product is not valued, therefore it is awful. The job is not valued by the employer, and therefore, it is awful. And, I would add, it is not valued by society at large and therefore, nobody cares.

So, there’s the how. Now for the why.

I bet you thought I was kidding. But try this lede:

Taped to the inside of a Sainsbury’s window in King’s Lynn, a printout of a map reminds teenagers of the town’s restrictions. Next to it, a notice on Norfolk Constabulary headed paper spells out the terms of a dispersal order: within the marked area, groups of two or more youngsters can be broken up by police not only if they have caused intimidation, harassment, alarm or distress to members of the public but also if their behaviour is deemed likely to do so. Initially, the order focused mainly on the area around the supermarket and adjacent bus station, but when groups of young people who were deemed to be behaving antisocially relocated, it was extended to cover most of the town centre. Drinking in groups, verbal abuse and reckless or dangerous cycling are among the antisocial activities listed.

It must be deeply weird to grow up with this stuff. Years ago I blogged that in the future, the government would introduce universal ASBO conscription – everyone would be given an ASBO at birth, and the restrictions would be removed progressively as they demonstrated that they could behave responsibly, in a manner that balanced the rights they were granted.

But in this case, they’ve implemented pretty much that. Of course some idiot will show up to say that they shouldn’t misbehave, but note that the terms of the order give the police essentially total discretion. After all, if you can’t think of a reason off the top of your head why three young people might not potentially, at some point in the indefinite future, annoy any hypothetical citizen, you simply lack imagination and you’ve got no business being on the force.

PS, what would we say if, say, a government in central Europe declared a “Roma dispersal zone” across one of its cities? Probably not much, although the EU was in fact pretty aggressive about it during the accession process and British representatives in it were no different. But you see what I mean.

Swinging off a discussion at Jamie Kenny’s of climate deniers, I wonder what Jamie thinks about Steve LeVine’s thesis here that China’s emerging culture of mass protest, the famous Mass-Group Incidents or MGIs, may have major and positive consequences for Chinese energy policy and therefore for the world.

It’s surely time we started calling the MGIs a movement; they are big, they are angry, they are common and increasingly so. Also, they seem to be getting more simultaneous as well as more frequent. The range of issues involved is enormous, from pay to police violence via public corruption and land appropriation. And they’re effective – the Chinese Communist Party, although it has more than enough brute force to crush them, often seems to semi-tolerate mass protests by trimming policy or sacking discredited officials. I’ve suggested before that the top level of the Party may actually see them as a useful force in disciplining the industrial bosses and territorial proconsuls who rule below it. The emperor may be far and the mountains may be high, but that’s the last thing you want when an enraged mob is trying to burn down the Public Security Bureau offices.

Beyond that, it’s conventional to say that the Party wants stability above all and that the organising principle of Chinese politics is Hobbesian fear of chaos. JK would probably point out that they’re damn right – if you had China’s history, you’d be obsessed by chaos because there’s been so much of it and it was so fucking chaotic. Anyway, Jamie is the blogosphere’s MGI expert and therefore I’d like his opinion.

LeVine’s argument is that forecasts of China’s economic and energy future tend to arrive at an enormous and prolonged boom in coal-fired generation. They do this by projecting current rates of growth into the future. This scares the shit out of everyone with any sense, as it’s this huge, epochal belch of CO2 (and a lot of other stuff besides) that will eventually fuck us all up. Of course, if the CAGRs for coal consumption were wrong, quite a few assumptions would need to be reviewed.
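
The mechanical move those forecasts make is just compound growth, which is worth seeing in the open because of how sensitive the result is to the assumed rate. Entirely hypothetical numbers, in billions of tonnes of coal a year:

```python
def project(base, cagr, years):
    """Compound a current growth rate blindly into the future -
    the move the standard forecasts make. Hypothetical numbers below."""
    return base * (1 + cagr) ** years

print(project(3.0, 0.08, 20))   # ~14.0: the epochal-belch scenario
print(project(3.0, 0.04, 20))   # ~6.6: halve the CAGR, roughly halve the outcome
```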

LeVine argues that it’s the other stuff you get with coal, especially the low-grade brown coal China uses a lot of, that will intervene. Basically, he reckons, air pollution, power-plant development, and mining will become a major and rising source of serious MGIs and will result in the Party restraining the coal industry before the mob does it for them.

LeVine points out that Chinese interests were quite restrained during last year’s rush of coal-related mergers and acquisitions – which is interesting when you think that if the Party wanted them to, they could bid almost without limit thanks to SAFE’s enormous foreign exchange reserves.

Further, and I seem to remember James Hansen making this point, there are real constraints on how much coal the Chinese economy can get through, in that moving that much coal from mines and ports to power stations will fairly soon use up most of the State Railways’ freight capacity. As most of this coal is going to drive the machine tools in all those export processing factories…well, either the bulk haul trainload of coal moves or the intermodal linertrain of containers of exports moves. Are you feeling lucky, punk? Building a completely new railway is of course the sort of thing that gets people in an MGI mood.

From a technocratic perspective, as Joe Romm explains here, restrictions on all the other stuff coal-fired power stations shit into the atmosphere are basically as good as a ban on them.

The question is therefore whether “green MGIs” are a serious possibility. It’s not actually necessary that the MGIs be specifically about what Greenpeace would call a green issue, of course. Rioting over pay or safety down the mines, over ethnic resentment in the coalfields, or over land appropriation for new power stations or railway lines would do as well. But it’s worth noting that environmental protests happened in the 1980s in Eastern Europe and the Soviet Union and acted as a sort of gateway drug to dissidence more broadly. Not that people who are willing to burn down the police headquarters and run the mayor out of town when they feel their interests are insufficiently recognised need one.

Relatedly, and also via LeVine, meet the Unitec Model 5 pneumatic hacksaw, guaranteed by the manufacturer to slice through a 24″ pipeline in one blow and only 16lbs dead weight to tote away from the scene of the crime. And it’s nothing but good American workmanship, too. Mesh wireless is so pre-Iraq by comparison, don’t you think?

This LA Times story about the Boeing 787 Dreamliner (so called because it’s still a dream – let’s get the last drop from that joke before it goes into service) and the role of outsourcing is fascinating. It is partly built on a paper by a senior Boeing engineer, one Hart-Smith, which makes, among other things, this point:

Among the least profitable jobs in aircraft manufacturing, he pointed out, is final assembly — the job Boeing proposed to retain. But its subcontractors would benefit from free technical assistance from Boeing if they ran into problems, and would hang on to the highly profitable business of producing spare parts over the decades-long life of the aircraft. Their work would be almost risk-free, Hart-Smith observed, because if they ran into really insuperable problems they would simply be bought out by Boeing.

Even in its own financial terms, the whole thing didn’t make sense, because the job of welding together the subassemblies and hooking up the wires doesn’t account for much of the profit involved. Further, the supposedly high-margin intellectual-property element of the business – the research, development, and design of the plane – is only a profit centre after the plane has been built; until then, it requires enormous amounts of investment to get right. The outsourcers were expecting the lowest-margin element of the company, assembly, to carry the costs of developing new products. Whether they were funded with equity or with debt, this implies that the systems integrator model, for aircraft at least, fundamentally restricts innovation.

This is one of the points I’d like to bring out here. Hart-Smith’s paper – you can read it here – is much stronger on this than the LA Times was willing to be. It’s a fascinating document in other ways, too. For a start, the depth of outsourcing Boeing tried to achieve with the 787 is incompatible with many of the best practices used in other industries. Because the technical interfaces invariably become organisational and economic ones, it’s hard to guarantee that modules from company X will fit with the ones from Y, and if they don’t, the adjustment mechanism is a lawsuit at the financial level, but at the technical level, it’s rework. The dodgy superblock has to be re-worked to get it right, and this tends to land up with the manufacturer. Not only does this defeat the point of outsourcing in the first place, it collides head-on with the huge importance of avoiding expensive rework.

Further, when anything goes wrong, the cost migrates remorselessly to the centre. The whole idea of systems integration and outsourcing is that the original manufacturer is just a collection of contracts, the only location where all the contracts overlap. Theoretically, as near to everything as possible has been defined contractually and outsourced, except for a final slice of the job that belongs to the original manufacturer. This represents, by definition, all the stuff that couldn’t be identified clearly enough to write a contract for it, or that was thought too risky/too profitable (depends on which end you look at it) for anyone to take the contract on. If this was finance, rather than industry, it would be the equity tranche. One of the main reasons why you can’t contract for something, of course, is that you don’t know it’s going to happen. So the integrator essentially ends up holding all the uncertainty, in so far as they can’t push it off onto the customer or the taxpayer.

This also reminded me a little of Red Plenty – one of the problems is precisely that it’s impossible to ensure that all the participants’ constraints are mutually compatible. There are serious Pareto issues. There may be something like an economic law that implies that, given that there are some irreducible uncertainties in each contractual relationship, which can be likened to unallocated costs, they flow downhill towards the party with the least clearly defined role. You could call it Harrowell’s U-Bend. (Of course, in the macroeconomy, the party with the least well defined role is government – who you gonna call?)

Anyway, Hart-Smith’s piece deserves a place in the canon of what could be termed Sarcastic Economics.

I suspect that the problems he identifies have wider consequences in the economy. Given that it’s always easier to produce more or less of a given good than it is to produce something different, the degree to which it’s possible to reallocate capital has a big impact on how quickly it’s possible to recover from a negative shock, and how bad the transition process is. I would go so far as to argue that it’s most difficult to react to an economic shock by changing products, it’s next most difficult to react by producing more (you could be at a local maximum and need to invest more capital, for example), and it’s easiest to react by producing less, and that therefore there’s a structural bias towards deflationary adjustment.

Hart-Smith’s critique holds that the whole project of retaining product development, R&D, and commercial functions like sales in the company core, and contracting everything else out actually weakens precisely those functions. Rather than being able to develop new products quickly by calling on outside resources, the outside resources suck up the available capital needed to develop new products. And the U-bend effect drags the costs of inevitable friction towards them. Does this actually reduce the economy’s ability to reallocate capital at the macrolevel? Does it strengthen the deflationary forces in capitalism?

Interestingly, there’s also a presentation from Airbus knocking about which gives their views on the Dreamliner fiasco. Tellingly, they seem to think that it was Boeing’s wish to deskill its workforce as far as possible that underlies a lot of it. Which is ironic, coming from an enormous aerospace company. There’s also a fascinating diagram showing that no major assembly in the 787 touches one made by the same company or even the same Boeing division – exactly what current theories of the firm would predict, but then, if it worked we wouldn’t be reading this.

Assembly work was found to be completed incorrectly only after assemblies reached the FAL. Root causes are: oversight not adequate for the high level of outsourcing in assembly and integration; qualification of low-wage, trained-on-the-job workers who had no previous aerospace experience.

I wonder what the accident rate was like. Two questions for the reader: 1) How would you apply this framework to the cost overruns on UK defence projects? 2) Does any of this remind you of rail privatisation?

Americans' self-estimations of the wealth distribution

(Via here.)

Apparently the interesting bit is:

the extent to which the public vastly overestimates the prosperity of lower-income Americans. The public thinks the 4th quintile has more money than the median quintile actually has. And the public thinks the 5th quintile has vastly more wealth than it really has…You can easily see how this could have a giant distorting effect on our politics. Poor Americans are simply much, much, much needier than people realize and this is naturally going to lead to an undue slighting of their interests.

The other interesting bit is the political breakdown. If you look at the second chart, which represents what the sample thought would be an ideal distribution, two things become obvious – one, the ideals are not very different, two, they are all significantly more egalitarian than the reality. People who admitted to voting for George Bush wanted to redistribute wealth quite radically – even when you compare their preferences with their illusory beliefs about the distribution, they want a very significant change. Compare them with reality, well…

The rest is pretty obvious – the least egalitarian group is those earning more than $100,000, the most egalitarian those earning less than $50,000; women are more egalitarian than men as a group, but no more so than declared John Kerry voters.

The problem is clearly much wider than their estimates of the lowest quintile’s wealth – in fact, although the people studied were unaware of quite how bad things were, they are clearly very well aware of inequality and they want it to change. And this sweeps right across the board. Even if they were unaware of the full poverty of the poor, they were well aware of the rich.

What they need evidently isn’t Blairism – scrape a bit off the top and pay it out to the poorest, with all kinds of interlocking and perverse conditions drawn up by the Yglesians of this world. This is why universalism is important – it’s possible that the whole discourse of “targeting” as applied to social policy reinforces the delusion that the poor are actually rich.

The Book

Red Plenty is a fictionalised history, or possibly a work of hard historical science fiction, which covers what it describes as the “fifties’ Soviet dream” but which might be better termed the Soviet sixties – the period from Khrushchev’s consolidation of power to the first crackdown on the dissidents and the intervention in Czechoslovakia. This is a big book in a Russian way – it’s always been a science-fiction prerogative to work with the vastness of space, the depth of history, and the wonder and terror of science and technology, but it’s also been fairly common that science-fiction has had a bit of a problem with people. The characters who re-fire the S-IVB main engine for translunar injection, with nothing but a survival pack of big ideas for use on arrival, tend to vanish in the cosmos. At its best, this has given the genre a disturbingly calm new perspective – chuck out your literary chintz, the rocket equation will not be fooled. At worst, well, OH NO JOHN RINGO.

Red Plenty covers a lot of big ideas, some serious hardware and even more serious software, and great swaths of the Soviet Union. But you will also need to be prepared to meet quite a lot of difficult but rewarding people, rather like the geneticist character Zoya Vaynshteyn does at the party Leonid Kantorovich’s students throw in Akademgorodok. In that sense, it has a genuinely Russian scale to it. The characters are a mixture of historical figures (as well as Kantorovich, you will spend some time in Nikita Khrushchev’s interior monologue), pure fictions, and shadow characters for some historical ones. (Emil Shaidullin roughly represents Gorbachev’s adviser Abel Aganbegyan; Vaynshteyn the historical geneticist Raissa Berg.)

So what are they up to?

Rebooting Science

Kantorovich, a central figure of the book, is remembered as the only Soviet citizen to win a Nobel Prize in economics, and the inventor of the mathematical technique of linear programming. As a character, he’s a sort of Soviet Richard Feynman – an egghead and expert dancer and ladies’ man, a collaborator on the nuclear bomb, and a lecturer so cantankerous his students make a myth of him. Politically, it’s never clear if he’s being deliberately provocative or completely naive, or perhaps whether the naivety is protective camouflage.

A major theme of the book is the re-creation of real science in the Soviet Union after the Stalinist era; biology has to start up afresh, economics has to do much the same, and everyone is working in a large degree of ignorance about the history of their fields. Some things simply can’t be restarted – as Spufford points out, despite all the compulsory Marxism-Leninism, even genetics hadn’t been erased as thoroughly as independent Marxist thought, and nobody in charge was willing to even think of opening that particular can of worms. On the other hand, the re-opening of economics as a field of study led to what the biologists would have called an adaptive radiation. Pioneers from engineering, maths, biology and physics began to lay spores in the new territory.

Comrades, let’s optimise!

The new ecosystem was known as cybernetics, a word given a much wider meaning than it had in the West. Kantorovich’s significance in this is that his work provided both a theoretical framework and a critical technology – if the problem was to allocate the Soviet Union’s economic resources optimally, could it be possible to solve this by considering the economy as a huge system of linear production functions, and then optimising the lot? The idea had been tried before, in the socialist calculation debate of the 1920s, although without the same mathematical tools.
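
With modern tooling the core trick fits in a dozen lines, which is worth seeing because it makes the shadow-price idea concrete. A toy allocation problem in the Kantorovich style – two goods, two scarce inputs, every number invented:

```python
from scipy.optimize import linprog

value = [-3.0, -5.0]        # value per unit of each good (negated: linprog minimises)
inputs = [[1.0, 2.0],       # labour needed per unit of each good
          [3.0, 1.0]]       # steel needed per unit of each good
stocks = [100.0, 90.0]      # total labour and steel available

plan = linprog(c=value, A_ub=inputs, b_ub=stocks, bounds=[(0, None)] * 2)
print(plan.x)               # optimal output of each good: [16. 42.]
# The duals on the two constraints are Kantorovich's "objectively determined
# valuations" - shadow prices saying what one more unit of labour or steel
# would be worth to the plan.
```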

This is one of those events whose significance has changed a great deal over time. The question was whether it was possible for a planned economy to achieve an optimal allocation of resources. The socialists thought so; their critics held that it was impossible, and elaborated a set of criteria for optimal allocation very similar to the ones that are familiar as the standard assumptions in the economic theory of the firm in perfect competition. These days, it’s often presented as if this was a knockout argument. From the firm in perfect competition, we hop to Hayek’s idea that a market economy is better at making use of dispersed, implicit knowledge. Basta. We won.

The socialists weren’t without intellectual originality. In fact, they did actually formulate a mathematical rebuttal to the firm in perfect competition – the Lange model, which demonstrated that optimal allocation was a possibility in theory. The Hayekian critique wasn’t considered that great at the time – it was thought a much better point that the barrier to effective planning was a practical one, not a fundamental one. And even then, it was well known that the standard assumptions don’t, actually, describe any known economy. It would simply be impossible to process all the data with the technology available. Even with the new tools of linear optimisation, who was going to do all those sums, especially as the process is an iterative rather than a formal one? Stalin and Hitler had their own way of solving these arguments – no man, no problem – and the whole thing ended up moot for some time.

Computers: a technical fix

But if it had been impossible to run the numbers with pen and paper in 1920, or with Hollerith machines and input-output tables in 1940, what about computers in 1960? Computers could blast through millions of iterations for hundreds of thousands of production processes in tens of thousands of supply chains; computers were only likely to get better at it, too. Red Plenty is about the moment when it seemed that the new territory of cybernetics was going to give rise to a synthesis between mathematics, market-socialist thinking, and computing that would replace GOSPLAN and deliver Economics II: True Communism.

After all, by the mid-60s it was known that the enormous system of equations could be broken down into its components, providing that the constraints in each sub-system were consistent with the others. If each production unit had its own computer, and the computers in each region or functional organisation were networked, and then the networks were….were internetworked? In fact, the military was already using big computer networks for its command-and-control systems, borrowing a lot of ideas from the US Air Force’s SAGE; by 1964, there were plans for a huge national timesharing computer network, for both military and civilian use, as a horizontal system cutting across all the ministries and organisations. Every town would get a data centre.
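
The decomposition idea can be sketched as price coordination: the centre never needs to see the units' internals, it just posts a price for the shared resource and nudges it until demand matches supply. A toy sketch with invented numbers – this is dual decomposition in modern terms, not a reconstruction of any actual Soviet scheme:

```python
units = [   # (profit per unit of output, shared input per unit, capacity)
    (5.0, 1.0, 60.0),
    (4.0, 2.0, 50.0),
]
SUPPLY = 80.0               # total stock of the shared input

price = 0.0
for k in range(1, 2001):
    # Each unit plans locally: run at capacity iff output still pays
    # at the posted price for the shared input.
    demand = sum(cap * use for profit, use, cap in units
                 if profit - price * use > 0)
    # The centre only moves the price toward balance (diminishing steps).
    price = max(0.0, price + 0.05 / k * (demand - SUPPLY))

print(round(price, 1))      # ~2.0: the price at which the marginal unit breaks even
```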

The Economics Fairy Strikes Again

But, of course, it didn’t happen. There’s a good paper on the fate of the Soviet internetworkers here; Spufford has a fascinating document on the end of indigenous general-purpose computer development in the USSR here. Eventually, during the 1970s, it became increasingly obvious that the Soviet economy was not going to catch up with and outstrip anyone, let alone the United States, and the Austrian economists were retroactively crowned as having obviously been right all along, and given their own chance to fail. Spufford frames the story as a Russian fairytale; perhaps we can say that in fact, economics is the fairytale, or rather the fairy. Successive groups of intellectuals have fought their way through the stacks of books, past the ideological monsters, and eventually reached the fairy’s grotto, to be granted their greatest wish. And it’s always the same one – a chance to fail.

Why did the Soviet economists fail? Red Plenty gives a spectacular sweep through the Soviet economy as it actually was; from the workings of GOSPLAN, to the management of a viscose factory, to the world of semi-criminal side payments that actually handled the problems of day-to-day survival. In the 1990s, the descendants of one half of the socialist calculation debate swept into Russia as advisers paid by the Thatcher Foundation. Arriving on the fairy’s magic cloud, they knew little of how the Soviet economy worked in practice, and duly got their opportunity to fail. The GOSPLAN officials of the 60s were reliant on data that was both completely unreliable, being the product of political bargaining more than anything else, and typically slightly less than a year out of date. And the market socialists were just as reliant on the management of Soviet industry for the production cost data they needed to make sure all those budget constraints really were consistent.

That’s a technical explanation. But there are others available. Once communism was achieved the state was meant to wither away, and not many of the people in charge of it were at all keen on this as a pension plan. Without the power to intervene in the economy, what was the point of the Party, again? Also, what was that stuff about letting people connect computers to the telephone network and pass messages from factory to factory? Where will it end? The central government, the Politburo, GOSPLAN, STAVKA – they would never accept it.

Another, more radical, explanation is that the eventual promise of Red Plenty was to render not so much the top of the pyramid as the middle management redundant. The rapid industrialisation had created a new management class who had every intention of getting rich and staying that way. (This was the Yugoslavs’ take on the Soviet Union – the new class had simply taken over from the capitalists.) What would happen to their bonuses, and their prerogative to control the planners by telling them what they wanted to hear?

And yet another is that the whole project was flawed. Even if it was possible to discern the economy’s underlying cost-structure, write the software, and optimise the whole thing, how would this system deal with dynamic economics? How would it allocate investment? How would it cope with technological change? It’s no help to point out that, in fact, a lot of the questions are nowhere near being solved in any economics.

Soviet History

One view of the USSR’s history is as a succession of escape attempts. The NEP of the mid-20s, Nikolai Voznesensky’s term at GOSPLAN in the 1940s, the Soviet 60s. Each saw a real effort to get away from a political economy which was in many ways a wild caricature of the Industrial Revolution, screwing down the labour share of income in order to boost capital investment and hence industrial output, answering any protest against this with the pistol of the state. As well as trying new economic ideas, they also saw surges of creativity in other fields. They were all crushed.

Arguably, you could say the same thing about perestroika. The people who signed the Alma-Ata protocol to arrange the end of the Soviet Union and the dismissal of Gorbachev were not, in fact, heroic dissidents, but rather career communist bureaucrats, some of whom went on to become their own little Stalins. Spufford says in the endnotes to Red Plenty that part of the book’s aim is a prehistory of perestroika – one view of the characters is that many of them are developing into the people who will eventually transform the country in the 1980s. Green politics was an important strand in the great dissident wave, right across the USSR and Central Europe; Zoya Vaynshteyn’s genetic research, which turns up some very unpleasant facts, is a case in point. Valentin, the programmer and cadre, is going to retain his self-image as a bohemian hacker into the future. Another Party figure in the book is the man who refuses to get used to violence, which will also turn out to be important in 1989.

Anyway, go read the damn book.

So we’ve discussed GCHQ and broad politics and GCHQ and technology. Now, what about a case study? Following a link from Richard Aldrich’s Warwick University homepage, here’s a nice article on FISH, the project to break the German high-grade cypher network codenamed TUNNY. You may not be surprised to know that key links in the net were named OCTOPUS (Berlin to Army Group D in the Crimea and Caucasus) and SQUID (Berlin to Army Group South). Everyone always remembers the Enigma break, but FISH is historically important because it was the one for which Bletchley Park invented the COLOSSUS computers, and also because of the extremely sensitive nature of the traffic. The Lorenz cyphersystem was intended to provide secure automated teleprinter links between strategic-level headquarters – essentially, the German army group HQs, OKW and OKH, the U-boat command deployed to France, and key civilian proconsuls in occupied Europe. The article includes a sample decrypt – nothing less than AG South commander von Weichs’ strategic appreciation for the battle of Kursk, as sent to OKH, in its entirety.

Some key points, though. It was actually surprisingly late in the day that the full power of FISH became available – it wasn’t enough to build COLOSSUS, it was also necessary to get enough of them working to fully industrialise the exploit and break everything that was coming in. This was available in time for Normandy, but a major driver of the project must have been its value as a form of leverage on the Americans (and the Russians). The fate of the two Colossi that the reorganised postwar GCHQ saved from the parts dump is telling – one of them was used to demonstrate that an NSA project wouldn’t work.

Also, COLOSSUS represented a turning point in the nature of British cryptanalysis. It wasn’t just a question of automating an existing exploit; the computers were there to implement a qualitatively new attack on FISH, replacing an analytical method invented by Alan Turing and John Tiltman with a statistical method invented by William Tutte. Arguably, this lost something in terms of scientific elegance – “Turingismus” could work on an intercept of any length, while Tutte’s statistical method required masses of data to crunch, and machines to crunch it, to work on any practical timescale. But that wasn’t the point. The original exploit relied on a common security breach to work – you began by looking for two messages of similar length that began with the same key-indicator group.

Typically, this happened if the message got corrupted by radio interference or the job was interrupted and the German operators were under pressure – the temptation was just to wind back the tape and restart, rather than set up the machine all over again. In mid-1943, though, the Germans patched the system so that the key indicator group was no longer required, being replaced by a codebook distributed by couriers. The statistical attack was now the only viable one, as it depended on the fundamental architecture of FISH. Only a new cypher machine would fix it.
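
The reason two messages on the same settings were gold dust is simple algebra: XOR the two ciphertexts together and the shared keystream cancels out, leaving the XOR of the two plaintexts, which language statistics can tease apart. A simplified illustration – Tunny actually worked in 5-bit teleprinter code, plain bytes here for clarity:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN ON THE LEFT FLANK"
p2 = b"RESUME RADIO SILENCE UNTIL NOON "
key = os.urandom(len(p1))           # one keystream, fatally used twice

c1, c2 = xor(p1, key), xor(p2, key)
assert xor(c1, c2) == xor(p1, p2)   # the key has vanished entirely
```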

The symbolic figure here is Tommy Flowers, the project chief engineer, a telecoms engineer borrowed from the Post Office research centre who later designed the first all-electronic telephone exchange. Max Newman, Alan Turing’s old tutor and the head of the FISH project, had shown Flowers a copy of On Computable Numbers, which Flowers read but didn’t understand – he was a hacker rather than a logician, after all. He was responsible for the shift from electromechanical technology to electronics at Bletchley, which set both Newman and Turing off towards their rival postwar stored-program computing projects.

Another key point from the book is the unity of cryptography and cryptanalysis, and the related tension between spreading good technology to allies and hoping to retain an advantage over them. Again, the fate of the machines is telling – not only did the FISH project run on, trying to break Soviet cypher networks set up using captured machines, but it seems that GCHQ encouraged some other countries to use the ex-German technology, in the knowledge that this would make their traffic very secure against everyone but the elect. Also, a major use of the surviving computers was to check British crypto material, specifically by evaluating the randomness of the keystreams involved, a task quite similar to the statistical attack on FISH.

Finally, FISH is exhibit A for the debate as to whether the whole thing has been worthwhile. What could have been achieved had the rest of the Colossi been released from the secret world, fanning out to the universities, like the scientists from Bletchley did themselves? Max Newman took racks of top-quality valves away from Bletchley when he moved to Manchester University, and used them in the very first stored-program, digital, Turing-complete computer; Alan Turing tried to do the same thing, but with a human asset, recruiting Tommy Flowers to work on the Pilot-ACE at NPL. (Flowers couldn’t make it – he had to fix the creaking UK telephone network first.) Instead, the machines were broken up and the very existence of the whole project concealed.

On the other hand, though, would either Newman or Turing have considered trying to implement their theories in hardware without the experience, to say nothing of the budget? The fact that Turing’s paper was incomprehensible to one of the most brilliant engineers of a brilliant generation doesn’t inspire confidence, and of course one of the divides that had to be crossed between Cambridge and GPO Research in Dollis Hill was one of class.

Via Bruce Schneier’s, an interesting paper in PNAS on false positives and looking for terrorists. Even if the assumptions of profiling are valid, and the target-group really is more likely to be terrorists, it still isn’t a good policy. Because the inter-group difference in the proportion of terrorists is small relative to the absolute scarcity of terrorists in the population, profiling means that you hugely over-sample the people who match the profile. Although it magnifies the hit-rate, it also magnifies the false positive rate, and because a search carried out on someone matching the profile is one not carried out elsewhere, it increases the chance of missing someone.

In fact, if you profile, you need to balance this by searching non-profiled people more often.
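
The arithmetic is worth doing once by hand. Hypothetical numbers throughout, and generously granting the profilers their premise that the profiled group really is ten times more likely to contain a terrorist:

```python
population = 100_000_000
profiled = population * 0.10        # 10% of people match the profile
base_rate = 1e-6                    # terrorists per head of population
profiled_rate = 10 * base_rate      # grant the profile its claimed 10x lift
hit_rate, false_rate = 0.9, 0.001   # search sensitivity and false-positive rate

terrorists = profiled * profiled_rate
true_hits = terrorists * hit_rate                   # -> 90
false_hits = (profiled - terrorists) * false_rate   # -> ~10,000

print(true_hits, round(false_hits))
# 90 real hits buried in ~10,000 false positives - and while you dig through
# them, the unprofiled 90% of the population goes unsearched.
```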

The operators of Deepwater Horizon disabled a lot of alarms in order to stop false alarms waking everyone up at all hours. Shock! In some ways, though, that was better than this story about a US hospital, from comp.risks. There, a patient died when an alarm was missed. Why? Too many alarms, beeps, and general noise, and people had turned off some devices’ alarms in order to get rid of them.

Unlike Transocean, they had a solution – remove the off switches, because that way, they’ll damn well have to listen. At least the oil people didn’t think that would work. Of course, they didn’t think that if your warning system goes off so often that nobody can sleep when nothing unusual is going on, there’s something wrong with the system.

Adam Greenfield responds, and anyone who uses the BOAC speedbird as their avatar is probably worth listening to:

“But that becomes a political problem, something almost all geeks seem incapable of understanding, probably because it’s a social rather than a technical problem.”

Well, “geeks” may be incapable of understanding that, Cian, but that happens to be where we start. I mean, you guys’d know this if you actually bothered to look into what happens at a walkshop instead of taking the lazy way out and slagging it as a “kool kids” thing. The whole point, as far as I’m concerned, is to take a good close materialist look at how communities, institutions and individuals contest public space and the public sphere.

In this case, sure, the lens we’re using is technological. But the concerns predominantly have to do with accountability, agency and control, and the language is everyday. Come join us on a walkshop sometime and contribute your insight, and I think you’d be hard pressed to come away with any other conclusion.

I think what I’m getting at here is that in many ways, the power-relationships in our cities aren’t embedded in architecture so much as in software, as it were. Sometimes it really is software, too – the social services’ disastrous computer system that played a role in the death of Baby P, and did so by imposing a sort of dysfunctional and extreme-Taylorist workplace on the social workers, or the systems that allocate tax-credits and then sometimes demand repayments that essentially amount to the recipients’ entire economic surplus.

But it’s broader than that – it’s about people’s expectations, levels of economic security, and the strategies they adopt to cope with life. After all, everyone adapts in some way, it’s just that some local optimisations cut off more options than others.

It’s also about how institutions adapt to people; one difference between having visible, hardware favelas and having them in software is that it’s easier to think that it’s just another damn fool, or someone who is In Need of Care, although the flip is that it’s also easier just to adopt a hardware fix and build a fucking great wall…

In Victorian England, the poor risked going to debtors’ prisons. In contemporary America, the poor face a different form of lockup.

Its walls are built out of predatory mortgage loans, rent-to-own contracts, payday lending, instant tax “refunds,” the repo man, the old-fashioned pawn shop, bait-and-switch debt consolidators and a rogues’ gallery of scam artists.