Archive for the ‘computer’ Category
The Book
Red Plenty is a fictionalised history, or possibly a work of hard historical science fiction, which covers what it describes as the “fifties’ Soviet dream” but which might be better termed the Soviet sixties – the period from Khrushchev’s consolidation of power to the first crackdown on the dissidents and the intervention in Czechoslovakia. This is a big book in a Russian way – it’s always been a science-fiction prerogative to work with the vastness of space, the depth of history, and the wonder and terror of science and technology, but it’s also fairly common for science fiction to have a bit of a problem with people. The characters who re-fire the S-IVB main engine for translunar injection, with nothing but a survival pack of big ideas for use on arrival, tend to vanish in the cosmos. At its best, this has given the genre a disturbingly calm new perspective – chuck out your literary chintz, the rocket equation will not be fooled. At worst, well, OH NO JOHN RINGO.
Red Plenty covers a lot of big ideas, some serious hardware and even more serious software, and great swaths of the Soviet Union. But you will also need to be prepared to meet quite a lot of difficult but rewarding people, rather like the geneticist character Zoya Vaynshteyn does at the party Leonid Kantorovich’s students throw in Akademgorodok. In that sense, it has a genuinely Russian scale to it. The characters are a mixture of historical figures (as well as Kantorovich, you will spend some time in Nikita Khrushchev’s interior monologue), pure fictions, and shadow characters for some historical ones. (Emil Shaidullin roughly represents Gorbachev’s adviser Abel Aganbegyan; Vaynshteyn the historical geneticist Raissa Berg.)
So what are they up to?
Rebooting Science
Kantorovich, a central figure of the book, is remembered as the only Soviet citizen to win a Nobel Prize in economics, and the inventor of the mathematical technique of linear programming. As a character, he’s a sort of Soviet Richard Feynman – an egghead and expert dancer and ladies’ man, a collaborator on the nuclear bomb, and a lecturer so cantankerous his students make a myth of him. Politically, it’s never clear if he’s being deliberately provocative or completely naive, or perhaps whether the naivety is protective camouflage.
A major theme of the book is the re-creation of real science in the Soviet Union after the Stalinist era; biology has to start up afresh, economics has to do much the same, and everyone is working in a large degree of ignorance about the history of their fields. Some things simply can’t be restarted – as Spufford points out, despite all the compulsory Marxism-Leninism, even genetics hadn’t been erased as thoroughly as independent Marxist thought, and nobody in charge was willing to even think of opening that particular can of worms. On the other hand, the re-opening of economics as a field of study led to what the biologists would have called an adaptive radiation. Pioneers from engineering, maths, biology and physics began to lay spores in the new territory.
Comrades, let’s optimise!
The new ecosystem was known as cybernetics, which was given a wider meaning than the same word had in the West. Kantorovich’s significance in this is that his work provided both a theoretical framework and a critical technology – if the problem was to allocate the Soviet Union’s economic resources optimally, might it be possible to solve it by considering the economy as a huge system of linear production functions, and then optimising the lot? The idea had been tried before, in the socialist calculation debate of the 1920s, although without the same mathematical tools.
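For anyone who has never seen one, here is what “optimising the lot” means in miniature: a toy allocation problem in the spirit of Kantorovich’s original plywood-trust calculation, solved with SciPy’s linprog. The products, capacities and values are invented for illustration, and nothing remotely like this code ran in 1960.

```python
# Toy allocation problem: choose output levels for two products to maximise
# total value, subject to machine-hour and timber constraints. All numbers
# are invented; linprog minimises, so the objective is negated to maximise.
from scipy.optimize import linprog

value = [-4.0, -3.0]          # value per unit of products A and B (negated)
A_ub = [[2.0, 1.0],           # machine-hours used per unit of A, B
        [1.0, 2.0]]           # cubic metres of timber per unit of A, B
b_ub = [100.0, 80.0]          # machine-hours and timber available

res = linprog(c=value, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("plan:", res.x, "value:", -res.fun)   # optimal plan and the value it yields
```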
The socialist calculation debate is one of those events whose significance has changed a great deal over time. The question was whether it was possible for a planned economy to achieve an optimal allocation of resources. The socialists thought so; their critics held that it was impossible, and elaborated a set of criteria for optimal allocation very similar to the ones that are familiar as the standard assumptions in the economic theory of the firm in perfect competition. These days, it’s often presented as if this was a knockout argument. From the firm in perfect competition, we hop to Hayek’s idea that a market economy is better at making use of dispersed, implicit knowledge. Basta. We won.
The socialists weren’t without intellectual originality. In fact, they did formulate a mathematical rebuttal grounded in the very theory of the firm in perfect competition – the Lange model, which demonstrated that optimal allocation under planning was at least a theoretical possibility. The Hayekian critique wasn’t considered that strong at the time – it was thought a much better point that the barrier to effective planning was a practical one, not a fundamental one. And even then, it was well known that the standard assumptions don’t, actually, describe any known economy. It would simply have been impossible to process all the data with the technology available. Even with the new tools of linear optimisation, who was going to do all those sums, especially as the process is iterative rather than a single closed-form calculation? Stalin and Hitler had their own way of settling such arguments – no man, no problem – and the whole thing ended up moot for some time.
Computers: a technical fix
But if it had been impossible to run the numbers with pen and paper in 1920, or with Hollerith machines and input-output tables in 1940, what about computers in 1960? Computers could blast through millions of iterations for hundreds of thousands of production processes in tens of thousands of supply chains; computers were only likely to get better at it, too. Red Plenty is about the moment when it seemed that the new territory of cybernetics was going to give rise to a synthesis between mathematics, market-socialist thinking, and computing that would replace GOSPLAN and deliver Economics II: True Communism.
After all, by the mid-60s it was known that the enormous system of equations could be broken down into its components, providing that the constraints in each sub-system were consistent with the others. If each production unit had its own computer, and the computers in each region or functional organisation were networked, and then the networks were….were internetworked? In fact, the military was already using big computer networks for its command-and-control systems, borrowing a lot of ideas from the US Air Force’s SAGE; by 1964, there were plans for a huge national timesharing computer network, for both military and civilian use, as a horizontal system cutting across all the ministries and organisations. Every town would get a data centre.
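To get a feel for the decomposition idea, here is a deliberately crude sketch: two “enterprises” each pick their own plan given a shadow price on a shared input, and a coordinator nudges the price until their combined demand fits the supply. The numbers, the diminishing-returns value curves and the adjustment loop are all invented; this is a caricature of price-guided, two-level planning, not anything GOSPLAN was actually offered.

```python
# Toy price-guided decomposition (invented numbers, not any real Soviet scheme).
# Two enterprises share one scarce input, "steel". Each, told a price for
# steel, chooses the plan that maximises its own surplus; the coordinator
# raises the price while demand exceeds supply and lowers it otherwise.

def enterprise(a, steel_per_unit, capacity, price):
    """Output maximising a*sqrt(x) - price*steel_per_unit*x on [0, capacity]."""
    if price <= 0:
        return capacity
    x = (a / (2.0 * price * steel_per_unit)) ** 2
    return min(x, capacity)

STEEL_SUPPLY = 100.0
price, step = 1.0, 0.01

for _ in range(5000):                     # crude iterative price adjustment
    x1 = enterprise(a=40.0, steel_per_unit=2.0, capacity=400.0, price=price)
    x2 = enterprise(a=30.0, steel_per_unit=1.0, capacity=400.0, price=price)
    demand = 2.0 * x1 + 1.0 * x2
    price = max(0.01, price + step * (demand - STEEL_SUPPLY))

print(f"shadow price {price:.2f}; plans x1={x1:.1f}, x2={x2:.1f}; steel used {demand:.1f}")
```

The point of the toy is only that the big problem splits into small ones, coordinated through a handful of shared numbers – which is why networking the enterprises’ computers seemed like the missing piece.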
The Economics Fairy Strikes Again
But, of course, it didn’t happen. There’s a good paper on the fate of the Soviet internetworkers here; Spufford has a fascinating document on the end of indigenous general-purpose computer development in the USSR here. Eventually, during the 1970s, it became increasingly obvious that the Soviet economy was not going to catch up with and outstrip anyone, let alone the United States, and the Austrian economists were retroactively crowned as having obviously been right all along, and given their own chance to fail. Spufford frames the story as a Russian fairytale; perhaps we can say that in fact, economics is the fairytale, or rather the fairy. Successive groups of intellectuals have fought their way through the stacks of books, past the ideological monsters, and eventually reached the fairy’s grotto, to be granted their greatest wish. And it’s always the same one – a chance to fail.
Why did the Soviet economists fail? Red Plenty gives a spectacular sweep through the Soviet economy as it actually was; from the workings of GOSPLAN, to the management of a viscose factory, to the world of semi-criminal side payments that actually handled the problems of day-to-day survival. In the 1990s, the descendants of one half of the socialist calculation debate swept into Russia as advisers paid by the Thatcher Foundation. Arriving on the fairy’s magic cloud, they knew little of how the Soviet economy worked in practice, and duly got their opportunity to fail. The GOSPLAN officials of the 60s were reliant on data that was both completely unreliable, being the product of political bargaining more than anything else, and typically slightly less than a year out of date. And the market socialists were just as reliant on the management of Soviet industry for the production cost data they needed to make sure all those budget constraints really were consistent.
That’s a technical explanation. But there are others available. Once communism was achieved, the state was meant to wither away, and not many of the people in charge of it were at all keen on this as a pension plan. Without the power to intervene in the economy, what was the point of the Party, again? Also, what was that stuff about letting people connect computers to the telephone network and pass messages from factory to factory? Where will it end? The central government, the Politburo, GOSPLAN, STAVKA – they would never accept it.
Another, more radical, explanation is that the eventual promise of Red Plenty was to render not so much the top of the pyramid as the middle management redundant. The rapid industrialisation had created a new management class who had every intention of getting rich and staying that way. (This was the Yugoslavs’ take on the Soviet Union – the new class had simply taken over from the capitalists.) What would happen to their bonuses, and their prerogative to control the planners by telling them what they wanted to hear?
And yet another explanation is that the whole project was flawed. Even if it was possible to discern the economy’s underlying cost-structure, write the software, and optimise the whole thing, how would this system deal with dynamic economics? How would it allocate investment? How would it cope with technological change? It’s no help to point out that, in fact, a lot of these questions are nowhere near being solved in any school of economics.
Soviet History
One view of the USSR’s history is as a succession of escape attempts. The NEP of the mid-20s, Nikolai Voznesensky’s term at GOSPLAN in the 1940s, the Soviet 60s. Each saw a real effort to get away from a political economy which was in many ways a wild caricature of the Industrial Revolution, screwing down the labour share of income in order to boost capital investment and hence industrial output, and answering any protest against this with the pistol of the state. As well as trying new economic ideas, they also saw surges of creativity in other fields. They were all crushed.
Arguably, you could say the same thing about perestroika. The people who signed the Alma-Ata protocol to arrange the end of the Soviet Union and the dismissal of Gorbachev were not, in fact, heroic dissidents, but rather career communist bureaucrats, some of whom went on to become their own little Stalins. Spufford says in the endnotes to Red Plenty that part of the book’s aim is a prehistory of perestroika – one view of the characters is that many of them are developing into the people who will eventually transform the country in the 1980s. Green politics was an important strand in the great dissident wave, right across the USSR and Central Europe; Zoya Vaynshteyn’s genetic research, which turns up some very unpleasant facts, is a case in point. Valentin, the programmer and cadre, is going to retain his self-image as a bohemian hacker into the future. Another Party figure in the book is the man who refuses to get used to violence, which will also turn out to be important in 1989.
Anyway, go read the damn book.
So we’ve discussed GCHQ and broad politics and GCHQ and technology. Now, what about a case study? Following a link from Richard Aldrich’s Warwick University homepage, here’s a nice article on FISH, the project to break the German high-grade cypher network codenamed TUNNY. You may not be surprised to know that key links in the net were named OCTOPUS (Berlin to Army Group D in the Crimea and Caucasus) and SQUID (Berlin to Army Group South). Everyone always remembers the Enigma break, but FISH is historically important because it was the one for which Bletchley Park invented the COLOSSUS computers, and also because of the extremely sensitive nature of the traffic. The Lorenz cyphersystem was intended to provide secure automated teleprinter links between strategic-level headquarters – essentially, the German army group HQs, OKW and OKH, the U-boat command deployed to France, and key civilian proconsuls in occupied Europe. The article includes a sample decrypt – nothing less than AG South commander von Weichs’ strategic appreciation for the battle of Kursk, as sent to OKH, in its entirety.
Some key points, though. It was actually surprisingly late in the day that the full power of FISH became available – it wasn’t enough to build COLOSSUS, it was also necessary to get enough of them working to fully industrialise the exploit and break everything that was coming in. This was available in time for Normandy, but a major driver of the project must have been its value as a form of leverage on the Americans (and the Russians). The fate of the two Colossi that the reorganised postwar GCHQ saved from the parts dump is telling – one of them was used to demonstrate that an NSA project wouldn’t work.
Also, COLOSSUS represented a turning point in the nature of British cryptanalysis. It wasn’t just a question of automating an existing exploit; the computers were there to implement a qualitatively new attack on FISH, replacing an analytical method invented by Alan Turing and John Tiltman with a statistical method invented by William Tutte. Arguably, this lost something in terms of scientific elegance – “Turingismus” could work on an intercept of any length; Tutte’s statistical method required masses of data to crunch, and machines to crunch it on, to work on any practical timescale. But that wasn’t the point. The original exploit relied on a common security breach to work – you began by looking for two messages of similar length that began with the same key-indicator group.
Typically, this happened if the message got corrupted by radio interference or the job was interrupted and the German operators were under pressure – the temptation was just to wind back the tape and restart, rather than set up the machine all over again. In mid-1943, though, the Germans patched the system so that the key indicator group was no longer required, being replaced by a codebook distributed by couriers. The statistical attack was now the only viable one, as it depended on the fundamental architecture of FISH. Only a new cypher machine would fix it.
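The value of a depth is easy to show in miniature. The sketch below uses a generic Vernam-style cipher with a random stand-in keystream and invented messages, not real Lorenz wheel patterns: because encryption is just modulo-2 addition of key to plaintext, adding two ciphertexts sent on the same key makes the key cancel, leaving the sum of the two plaintexts for the cryptanalyst to prise apart with probable words.

```python
# Vernam-style stream cipher: ciphertext = plaintext XOR keystream (bitwise).
# If two messages are sent "in depth" on the same keystream, XORing the two
# ciphertexts cancels the key, leaving P1 XOR P2 for the cryptanalyst.
# The key and the German-ish messages are invented for illustration.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"PANZER ARMEE OBERKOMMANDO MELDET"
p2 = b"ANGRIFF BEGINNT MORGEN UM NULL U"
keystream = os.urandom(len(p1))          # stand-in for the Lorenz wheel stream

c1, c2 = xor(p1, keystream), xor(p2, keystream)

assert xor(c1, c2) == xor(p1, p2)        # the keystream has dropped out entirely
```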
The symbolic figure here is Tommy Flowers, the project chief engineer, a telecoms engineer borrowed from the Post Office research centre who later designed the first all-electronic telephone exchange. Max Newman, Alan Turing’s old tutor and the head of the FISH project, had shown Flowers a copy of On Computable Numbers, which Flowers read but didn’t understand – he was a hacker rather than a logician, after all. He was responsible for the shift from electromechanical technology to electronics at Bletchley, which set both Newman and Turing off towards their rival postwar stored-program computing projects.
Another key point from the book is the unity of cryptography and cryptanalysis, and the related tension between spreading good technology to allies and hoping to retain an advantage over them. Again, the fate of the machines is telling – not only did the FISH project run on, trying to break Soviet cypher networks set up using captured machines, but it seems that GCHQ encouraged some other countries to use the ex-German technology, in the knowledge that this would make their traffic very secure against everyone but the elect. Also, a major use of the surviving computers was to check British crypto material, specifically by evaluating the randomness of the keystreams involved, a task quite similar to the statistical attack on FISH.
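For a flavour of what “evaluating the randomness of the keystreams” means at its very simplest, here is a crude frequency check: a chi-squared statistic over byte values, run on a made-up sample. The real evaluation of British crypto material was far more sophisticated than this; a test like this only catches gross bias.

```python
# Crude keystream sanity check: chi-squared test of byte frequencies against
# the uniform distribution. Only catches gross bias; the sample is made up.
import os
from collections import Counter

def chi_squared_uniform(stream: bytes) -> float:
    expected = len(stream) / 256.0
    counts = Counter(stream)
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in range(256))

sample = os.urandom(100_000)              # stand-in for a keystream under test
stat = chi_squared_uniform(sample)
# For 255 degrees of freedom, roughly 310 is about the 1% point; values far
# above that would suggest the "random" stream is biased.
print(f"chi-squared over byte values: {stat:.1f}")
```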
Finally, FISH is exhibit A for the debate as to whether the whole thing has been worthwhile. What could have been achieved had the rest of the Colossi been released from the secret world, fanning out to the universities, as the Bletchley scientists themselves did? Max Newman took racks of top-quality valves away from Bletchley when he moved to Manchester University, and used them in the very first stored-program, digital, Turing-complete computer; Alan Turing tried to do the same thing, but with a human asset, recruiting Tommy Flowers to work on the Pilot-ACE at NPL. (Flowers couldn’t make it – he had to fix the creaking UK telephone network first.) Instead, the machines were broken up and the very existence of the whole project concealed.
On the other hand, though, would either Newman or Turing have considered trying to implement their theories in hardware without the experience, to say nothing of the budget? The fact that Turing’s paper was incomprehensible to one of the most brilliant engineers of a brilliant generation doesn’t inspire confidence, and of course one of the divides that had to be crossed between Cambridge and GPO Research in Dollis Hill was one of class.
A thought: it’s surprising how much you can learn about computer science from cooking, and about cooking from computer science.
The von Neumann architecture – when you’re cooking, there is a central processing unit, which is the top of the stove; mass storage, which is the fridge and the cupboard; a user interface, which is your attention span; RAM, which is the work space; an output device, which is the table; and a network interface, which is the cook’s relationship with those around him or her. At any given time, any one of these elements can be the system’s rate-limiting factor – but it is a timeless, placeless truth that there is always one that is the system’s bottleneck.
More RAM is always welcome – Whether it’s fridge I/O, stovetop processing cycles, the interface with the cook, the queue of jobs waiting to be written to the table, or congestion in the social network, it’s always the free space in RAM that acts as a buffer for the whole system. If you’ve got enough RAM, you can cope with most problems without anything dire happening, by either queueing things up or else pre-fetching them from the cupboard ahead of time.
But if you go below a certain threshold level, the system tends to become increasingly unstable and you risk a crash and possibly dinner loss.
Throwing hardware at the problem works…until it doesn’t – You can only go so far in clearing space around the kitchen – if your demand for space goes too high, you need a bigger kitchen. Therefore, we need to pay close attention to scaling.
Amdahl’s Law and the trade-offs of parallelisation – Doing things in parallel allows us to achieve extremely high performance, but it does so at the expense of simplicity. You can see this most clearly in classic British cooking – many different high-grade ingredients all require different forms of cooking and cook at different rates, but must all arrive at the same time on the plate. Of course, as Amdahl’s law states, when you parallelise a process, it’s the elements you can’t parallelise that are the limiting factor. You can’t cook the filling of the pie and the pastry in parallel.
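The arithmetic is simple enough to put in a few lines. A toy calculation with invented numbers, for a roast dinner in which some fraction of the work is stubbornly serial:

```python
# Amdahl's law: if a fraction p of the work can be parallelised across n
# cooks, the best possible speedup is 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Say 70% of a Sunday roast can be done in parallel (veg, trimmings), but
# the joint itself is stubbornly serial. The numbers are invented.
for n in (1, 2, 4, 8, 64):
    print(f"{n} cooks: speedup {amdahl_speedup(0.7, n):.2f}")
# Even with 64 cooks the speedup tops out near 1 / 0.3, i.e. about 3.3x.
```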
Distributed processing is great…until it isn’t – Similarly, distributing tasks among independent nodes allows us to scale up easily and to achieve greater reliability. However, these goals are often in conflict. The more cooks you have in the kitchen, the harder it is to maintain consistency between them, and the more critical it is that you get the networking element of the problem right. Strange emergent properties of the system may surprise you, and it seems to be a law that the consumption of drink scales O(log n) with the number of cooks.
Test-driven development – only fools rely on a priori design to guarantee the quality of their sauce. It’s absolutely necessary to build in tests at every step of the cooking process, both to maintain quality, and to stay agile in the face of unforeseen problems and user demands.
The only way to avoid coding bugs is to avoid coding – Ever since the days of Escoffier, cooks have known the importance of using well-known and well-tried recipes as modular building blocks. Escoffier started off with five mother sauces, on which he built the entire enterprise of French haute cuisine. So use the standard libraries, and don’t try to invent a new way of making white sauce – just type from sauces import roux
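The sauces module is imaginary, of course, but the spirit of it can be sketched: the well-tried building block is written once, and everything else composes it rather than reinventing it. Every name and quantity below is made up.

```python
# A hypothetical 'sauces' module, in the spirit of 'from sauces import roux':
# the well-tried building block is written once and everything else reuses it.
def roux(butter_g: float, flour_g: float) -> str:
    return f"roux ({butter_g}g butter, {flour_g}g flour, cooked out)"

def bechamel(milk_ml: float) -> str:
    base = roux(butter_g=25, flour_g=25)
    return f"bechamel: {base} + {milk_ml}ml milk, whisked and simmered"

def mornay(milk_ml: float, cheese_g: float) -> str:
    return f"{bechamel(milk_ml)} + {cheese_g}g grated cheese"

print(mornay(milk_ml=500, cheese_g=100))
```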
Love the Unix core utilities – Look around your kitchen. What utensils do you actually pick up and use every time you cook? Obviously, you need to invest in the things you actually use, rather than expensive shiny gadgets you don’t fully understand. And you need to master the technique of using them. Get a big sharp knife.
Shea’s Law – Shea’s law states that “The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.” This is always true of cooking. If you can’t get back from the fridge around the inevitable spectators in time to complete a frying loop, or two flavours fail to get together, or you catastrophically fall out with someone in the kitchen, bad things happen.
Loop constructs are fundamental to everything – Perhaps the most important decision you will make is whether to minimise how long each step of the process takes, or to minimise the number of steps in the process. But the number of different operations on the CPU – the stove – is the main driver of complexity.
Everyone underestimates the problems of deployment – How will your recipe work in another kitchen, or in the same kitchen under different circumstances?
The hacker ethos – If you have to know what line 348 will be before you open a text editor, you’ll never get started. Similarly, you will get nowhere by wondering what temperature in degrees you should saute onions. Chuck your code in the oven, and see if it returns a roast chicken! Also, the fun of having a secret recipe is actually the fun of sharing it with others.
Junk food is bad for you, but sometimes it is unavoidable – Software produced by huge sinister corporations and stuffed with secret additives is likely to make you fat and stupid. But sometimes you need a pizza, or a really fancy presentation graphic, or a pretty music library application for a mobile device. Everyone does it – the thing is to maintain a balanced diet.
Further, after the last post, a BT futurologist says we’re living in science fiction. And what particular works does she mention? Blade Runner, Judge Dredd and Soylent Green.
Well.
In the world of Halting State, meanwhile, the Germans have had a wee probby with their electronic health cards. Partly it’s due to a reasonably sensible design; they decided to store information on the card, rather than on a remote system, and to protect it using a public-key infrastructure.
Data on the cards would have been both encrypted for privacy, and signed for integrity, using keys that were themselves signed by the issuing authority, whose keysigning key would be signed by the ministry’s root certification authority, operated by the equivalent of HM Stationery Office.
Not just any PKI, either; it would have been the biggest PKI in the world. Unfortunately, a hardware security module failed – with the keysigning key for the root CA on it, and there are NO BACKUPS. This means that all the existing cards will have to be withdrawn as soon as any new ones are issued: a new root KSK will have to be created, and every existing card will fail validation against anything signed under it.
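What that means mechanically can be sketched with plain signatures rather than full X.509 certificates, using Python’s cryptography library and invented names and data. The root keysigning key vouches for the issuer’s key, the issuer vouches for the card, and a verifier walks the chain back up to whatever root it trusts; replace the root key and every card issued under the old one is orphaned.

```python
# Sketch of the health-card trust chain (plain signatures, not full X.509;
# names and data invented). Cards validate only against the root key that
# anchored the chain when they were issued; lose that key and every
# existing card fails against its replacement.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def pub_bytes(key):
    return key.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)

root_ksk = ec.generate_private_key(ec.SECP256R1())        # ministry root CA
issuer = ec.generate_private_key(ec.SECP256R1())          # card-issuing authority

issuer_cert_sig = root_ksk.sign(pub_bytes(issuer), ec.ECDSA(hashes.SHA256()))
card_data = b"patient: Erika Mustermann; insurance no: 123456"
card_sig = issuer.sign(card_data, ec.ECDSA(hashes.SHA256()))

def card_is_valid(root_public, issuer_public_der, issuer_sig, data, data_sig):
    try:
        root_public.verify(issuer_sig, issuer_public_der, ec.ECDSA(hashes.SHA256()))
        serialization.load_der_public_key(issuer_public_der).verify(
            data_sig, data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(card_is_valid(root_ksk.public_key(), pub_bytes(issuer), issuer_cert_sig,
                    card_data, card_sig))                  # True

new_root = ec.generate_private_key(ec.SECP256R1())         # the HSM dies, new root KSK
print(card_is_valid(new_root.public_key(), pub_bytes(issuer), issuer_cert_sig,
                    card_data, card_sig))                  # False: cards must be reissued
```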
It’s certainly an EPIC FAIL, and alert readers will notice that it’s a sizeable chunk of the plot of Charlie’s novel. But it’s a considerably less epic fail than it might have been; if the system had been a British-style massive central database, and the root CA had been lost or compromised, well…as it is, no security violation or data loss has occurred and the system can be progressively restored, trapping the old cards and issuing new ones.
In that sense, it’s actually reasonably good government IT; at least it failed politely.
Something else that came up at OpenTech; is there any way of getting continuing information out of the government? This is especially interesting in the light of things like Who’s Lobbying? and Richard Pope and Rob McKinnon’s work in the same direction; it seems to me that the key element in this is getting information on meetings, specifically meetings with paid advocates, i.e. lobbyists. Obviously, this has some pretty crucial synergies with the parliamentary bills tracker.
However, it’s at best interesting to know who had meetings with whom at some point in the past, just as it is at best interesting to know who claimed what on expenses at some point in the past; it’s not operationally useful. Historians are great, but for practical purposes you need the information before the next legislative stage or the next committee meeting.
I asked Tom Watson MP and John “not the Sheffield Wednesday guy” Sheridan of the Cabinet Office if the government does any monitoring of lobbyists itself; you’d think they might want to know who their officials are meeting with for their own purposes. Apparently there are some resources, notably the Hospitality Register for the senior civil service. (BTW, it was a bit of a cross section of the blogosphere – as well as Watson and a myriad of geeks, Zoe Margolis was moderating some of the panels. All we needed was Iain Dale to show up and have Donal Blaney threaten to sue everyone, and we’d have had the full set.)
One option is to issue a bucketful of FOIA requests covering everyone in sight, then take cover; carpet-bomb disclosure. But, as with the MPs’ expenses, this gives you a snapshot at best, which is of historical interest. As Stafford Beer said, it’s the Data-Feed you need.
So I asked Francis Davey, MySociety’s barrister, if it’s legally possible to create an enduring or repeating FOIA obligation on a government agency, so they have to keep publishing the documents; apparently not, and there are various tricks they can use to make life difficult, like assuming that the cost of doing it again is the same as doing it the first time, totalling all the requests, and billing you for the lot.
Am I daft, or was the Apple iBook G4, 12″ screen, the least annoying computer of my experience?
Thinking about contacts, and also reading this, it struck me that if there is anything in computing that needs a manifesto, it’s Polite Software.
As in: it behaves helpfully towards others, exporting and importing data in standard formats correctly (and if there is a common incorrect way of doing something, it provides the option of doing it that way, as KDE does with “Microsoft-style” groupware notifications). It doesn’t get in the way: if it’s doing something, it doesn’t interrupt whatever else you’re doing by grabbing the UI thread, and it segregates anything involving an external process so it doesn’t hang on a network connection. It never loses other people’s work. It doesn’t make you repeat yourself: if you have to go back one step, all the values you entered are preserved, which most Web applications fail to do. And it tells the truth: error messages are descriptive and don’t say you did something that you didn’t, and logs are kept and are easily available.
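One of those points, not hanging on a network connection, deserves a concrete sketch. This is a toolkit-agnostic toy rather than any particular framework’s API: the slow fetch runs on a worker thread and posts its result to a queue that the interface loop polls, so the user-facing thread never blocks on the socket.

```python
# Minimal sketch of "don't grab the UI thread": the slow network call runs
# on a worker thread and hands its result back via a queue the UI polls,
# so typing and clicking never wait on the socket. URL is only an example.
import queue
import threading
import urllib.request

results: "queue.Queue[str]" = queue.Queue()

def fetch_in_background(url: str) -> None:
    def worker():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                results.put(f"fetched {len(resp.read())} bytes")
        except OSError as exc:
            results.put(f"fetch failed: {exc}")          # tell the truth in errors
    threading.Thread(target=worker, daemon=True).start()

fetch_in_background("https://example.org/")
while True:                                              # stand-in for a UI event loop
    try:
        print(results.get(timeout=0.1))
        break
    except queue.Empty:
        pass                                             # the UI stays responsive here
```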
Why is contact management implemented so poorly in every software package I’ve ever encountered? It’s almost as bad as the all-time worst application, voicemail. Outlook, Gmail, KDE Kontact, MS Entourage, Mozilla Thunderbird; they’ve all been carefully pessimised to incorporate every possible pain in the arse. For a start, file formats and vendor lock-in. There is a perfectly good, easy to parse, free standard accepted the world over: the vCard.
But still, so often, it doesn’t bloody work. Most Microsoft products will only import them one at a time from individual files, which is useless if you have any number of contacts. I recently finished digitising and re-checking a huge pile of business cards accumulated from my journo days, and I finished up with 348 contacts classified as “business”. Now, Kontact will happily export them as a vCard file of version 2.1 or 3.0; but Nokia devices will only read the first contact.
And the killer detail? The phones themselves store their contacts as a multi-contact vCard! But this is an implementation detail. I have never seen any contacts app that doesn’t have a horribly ugly user interface, that doesn’t organise your contacts in hierarchical directories – because people are always part of zero or one groups, right? – and that doesn’t imagine that friends are alphabetical.
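As a stopgap for devices that will only read the first entry, splitting the export is trivial. A minimal sketch, assuming well-formed BEGIN:VCARD and END:VCARD markers, with invented file names:

```python
# Split a multi-contact vCard export (say, contacts.vcf from Kontact) into
# one .vcf file per contact, for devices that only read the first entry.
# Assumes well-formed BEGIN:VCARD / END:VCARD markers; file names are invented.
from pathlib import Path

def split_vcards(path: str, out_dir: str = "split") -> int:
    Path(out_dir).mkdir(exist_ok=True)
    cards, current = [], []
    for line in Path(path).read_text(encoding="utf-8", errors="replace").splitlines():
        marker = line.strip().upper()
        if marker == "BEGIN:VCARD":
            current = [line]
        elif marker == "END:VCARD" and current:
            current.append(line)
            cards.append("\n".join(current) + "\n")
            current = []
        elif current:
            current.append(line)
    for i, card in enumerate(cards, 1):
        Path(out_dir, f"contact_{i:03d}.vcf").write_text(card, encoding="utf-8")
    return len(cards)

print(split_vcards("contacts.vcf"), "contacts written")
```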
Social network sites are no solution. I hate them with a passion. They are closed-minded data sinks, whose business model is either “spam the buggers with ads” or “sell the company and all the data to someone who will spam the buggers with ads”. And I have yet to see one that doesn’t have most of the antifeatures I just described. And I want one copy of the data to be on my local machine, thank you.
Now, I think part of the problem is that all the applications I named are either e-mail clients or they incorporate an e-mail client. Perhaps we ought to disassociate the ideas of “contacts” and “e-mail”? Perhaps a contacts app should handle all the possible means of communicating with the contacts?
And too many of them confuse the task of searching through the contacts with displaying the details of each one. Search is good, but why is there no visual interface for contacts? Can’t we display them in a way that lets you see relationships between them? This relates to the organisation issue; I don’t want to select categories, I’d rather give a list of tags, or perhaps have both groups and tagging, or maybe tags and related names, and let the groups emerge.
That implies that the backend will have to be a database, rather than a flat file or a directory of vCards. SQLite would do perfectly well (Apple uses it for your messages in Mail.app). I’m aware that KDE is working on a common database backend (Akonadi) for these things, but at the moment it’s a waste of space, and the related project Nepomuk has the dread word “semantic” in it (i.e. a lot of stuff which we’re not really able to define in a meaningful fashion let alone implement).
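To make the backend point concrete, here is the sort of schema I have in mind, using Python’s built-in sqlite3: a plain many-to-many tags table, so a contact can carry any number of tags and “groups” are just queries. The table and column names are invented for illustration.

```python
# Contacts in SQLite with free tagging instead of hierarchical folders:
# a contact can have any number of tags, and groups emerge from queries.
# Schema and example data invented for illustration.
import sqlite3

db = sqlite3.connect("contacts.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS contacts (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT,
    phone TEXT
);
CREATE TABLE IF NOT EXISTS tags (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS contact_tags (
    contact_id INTEGER REFERENCES contacts(id),
    tag_id     INTEGER REFERENCES tags(id),
    PRIMARY KEY (contact_id, tag_id)
);
""")

db.execute("INSERT INTO contacts (name, email) VALUES (?, ?)",
           ("Leonid Kantorovich", "lk@example.org"))
for tag in ("business", "mathematicians"):
    db.execute("INSERT OR IGNORE INTO tags (name) VALUES (?)", (tag,))
    db.execute("""INSERT OR IGNORE INTO contact_tags (contact_id, tag_id)
                  SELECT c.id, t.id FROM contacts c, tags t
                  WHERE c.name = ? AND t.name = ?""",
               ("Leonid Kantorovich", tag))
db.commit()

# "Groups" are just queries over tags:
for row in db.execute("""SELECT c.name FROM contacts c
                         JOIN contact_tags ct ON ct.contact_id = c.id
                         JOIN tags t ON t.id = ct.tag_id
                         WHERE t.name = 'business'"""):
    print(row[0])
```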
The UI? I like the idea of plotting the contacts by their similarity or difference, maybe on a half sphere centred on the user, so their relationships become apparent. In KDE you could make this a .part for Kontact, so you could flip between the detail view and the graphical overview.
There should be a special term for the phase in the adoption of an idea between the point at which everyone accepts its desirability, and the point at which it wins over other ideas politically. This isn’t the same as the point of implementation; it’s quite possible for your idea to go into practice, but still to be in the queue elsewhere. So here we are; from E-Health Insider, it looks like the NHS NPfIT is looking at throwing away the disastrous Cerner and iSoft systems and issuing new tenders. In fact, some trusts in the South East have been permitted to sort their own problems out.
However, David Nicholson (the very model of a modern managerialist) is in charge and he for some reason won’t let all the other trusts do this. Even though it is clearly sensible, and is being done, it’s still in the special gap of political unacceptability. I thought this was interesting:
Nicholson said a key problem that the NPfIT programme had faced throughout was the unique requirements of the NHS and what it is trying to achieve. “There is no system off the shelf we could go for.”
Yet the programme was set up so that the NHS IT community, to say nothing of the NHS clinicians, and even less of the patients, had absolutely no input to it. Both Cerner and iSoft are trying to adapt off-the-shelf products from the US. And the attempts to save by outsourcing were disastrous.
“The Lorenzo product is being developed at Morecambe Bay, so we’re really optimistic that something will come out of that, but it’s not inevitable,” he went on. “And I think we’ll know over the next few months whether these products will actually be able to deliver the things they promised to do.”
That might have been an idea before you bought them, eh. Further, note that he thinks Lorenzo still might get somewhere because of in-house development work…
The other issue he said was being focused on is how to deliver products more quickly and to give trusts more flexibility. Answering questions on the Summary Care Record, the NHS boss said it was possible to de-couple the Summary Care Record from the wider CRS development and simplify it.
This is damning to the entire project. If the record formats can be standardised without the rest of the system, there is no reason for “the system” as sold to Tony Blair to exist. Every trust could have its own system as long as they used the standard.
Remember, the only way to kill a zombie is to aim for the head. By the way, it’s not as if the Americans don’t have Bad Medical IT as well.