Remember that thumbsucker I did on the Great Firewall? Well, here’s some data, via this post (thanks, Jamie). It seems that Fang Binxing, China’s Chief Bellhead, president of Beijing University of Posts and Telecommunications, and king of the Great Firewall, really is in trouble because of his special relationship with Bo Xilai. He briefly surfaced on the web to threaten to sue a Japanese newspaper which thinks he was detained for investigation. Then the former head of Google in China (who obviously isn’t neutral here) prodded him, and he denied having the power to block the offending story.
The FT, meanwhile, thinks Zhou Yongkang, the head of the security establishment, is on the way out. That shouldn’t be overstated because he’s due to retire, but he has been doing a rubber-chicken circuit of second-division official appearances, and his key responsibilities have been taken over by others.
Fang is supposedly being replaced by Yan Wangjia, CEO of Beijing Venustech, who was responsible for engineering the Great Firewall. Her company’s Web site is convincing on that score. Here’s the announcement that they got the contract to provide China Mobile with a 10 gigabit DPI system:
Recently, Venustech successfully won the bid for centralized firewall procurement project of China Mobile in 2009 with its 10G high-end models of Venusense UTM, thus becoming the first company of its kind to supply high-end security gateway to telecom operators.
It is said this centralized firewall procurement project is the world’s largest single project of high-end 10G security gateway procurement ever implemented, drawing together most of world-renowned communication equipment vendors and information security vendors such as Huawei and Juniper. Through the rigorous test by China Mobile, Venusense UTM stood out, making Venustech the only Chinese information security vendor in this bid.
Looking around, it sounds like they are the hardware vendor of the Great Firewall, specialising in firewall, intrusion detection, and deep-packet inspection kit for the governmental, educational, and enterprise sectors “and of course the carriers”. Well, who else needs a 10Gbps, horizontally scaling DPI box but a carrier? Note the careful afterthought there. Also, note that they’re the only people in the world who don’t think Cisco is a leading network equipment vendor.
Quietly, the Eurofighter project seems to be running into trouble. First of all, Dassault got the Indian contract, and the Indians claim that the Rafale is dramatically cheaper. Further, they weren’t impressed by the amount of stuff that is planned to come in future upgrades, whose delivery is still not certain. These upgrades are becoming a problem, as the UK, Germany, and Italy aren’t in agreement about their schedule or about which ones they want. Also, a Swiss evaluation report was leaked that is extremely damning about the Gripen and somewhat less so about the Eurofighter.
This is going to have big consequences for European military-industrial politics. So is the latest wobble on F-35.
The fact that a majority of this year’s graduates from USAF basic pilot training are assigned to drone squadrons has got quite a bit of play in the blogosphere. Here, via Jamie Kenny, John Robb (who may still be burying money for fear of Obama or may not) argues that the reason they still do an initial flight training course is so that the pilot-heavy USAF hierarchy can maintain its hold on the institution. He instead wants to recruit South Korean gamers, in his usual faintly trendy dad way. Jamie adds the snark and suggests setting up a call centre in Salford.
On the other hand, before Christmas, the Iranians caught an RQ-170 intelligence/reconnaissance drone. Although the RQ-170 is reportedly meant to be at least partly stealthy, numerous reports suggest that the CIA was using it among other things to get live video of suspected nuclear sites. This seems to be a very common use case for drones, which usually have a long endurance in the air and can be risked remaining over the target for hours on end, if the surveillance doesn’t have to be covert.
Obviously, live video means that a radio transmitter has to be active 100% of the time. It’s also been reported that one of the RQ-170’s main sensors is a synthetic-aperture radar. Just as obviously, using radar involves transmitting lots of radio energy.
It is possible to make a radio transmitter less obvious, for example by saving up information and sending it in infrequent bursts, and by making the transmissions as directional as possible, which also requires less power and reduces the zone in which it is possible to detect the transmission. However, the nature of the message governs its form. Live video can’t be burst-transmitted because it wouldn’t be live. Similarly, real-time control signalling for the drone itself has to be instant, although engineering telemetry and the like could be saved and sent later, or only sent on request. And the need to keep a directional antenna pointing precisely at the satellite sets limits on the drone’s manoeuvring. None of this really works for a mapping radar, though, which by definition needs to sweep a radio beam across its field of view.
Even if it was difficult to acquire it on radar, then, it would have been very possible to detect and track the RQ-170 passively, by listening to its radio emissions. And it would have been much easier to get a radar detection with the advantage of knowing where to look.
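To put rough numbers on why the passive approach wins (these are illustrative figures of my own, not real RQ-170 or Iranian radar parameters): a one-way intercept of the drone’s own transmissions falls off as 1/R², while a radar echo from a small, stealthy target falls off as 1/R⁴.

```python
import math

def friis_rx_power(pt_w, gt, gr, wavelength_m, r_m):
    """One-way received power (Friis equation): falls off as 1/R^2."""
    return pt_w * gt * gr * wavelength_m**2 / ((4 * math.pi * r_m)**2)

def radar_echo_power(pt_w, g, wavelength_m, rcs_m2, r_m):
    """Monostatic radar return: falls off as 1/R^4."""
    return pt_w * g**2 * wavelength_m**2 * rcs_m2 / ((4 * math.pi)**3 * r_m**4)

# Illustrative numbers only -- nothing here is a real RQ-170 parameter.
wavelength = 0.03  # metres, i.e. 10 GHz / X-band

# Passive: listening to a 10 W downlink leaking through a sidelobe (gain ~1)
# Radar: a 1 kW search radar against a low-observable 0.01 m^2 target
for r_km in (10, 50, 200):
    r = r_km * 1e3
    p_passive = friis_rx_power(10, 1, 10, wavelength, r)
    p_radar = radar_echo_power(1e3, 1e3, wavelength, 0.01, r)
    print(f"{r_km:>4} km: passive intercept {p_passive:.1e} W, "
          f"radar echo {p_radar:.1e} W")
```

At any given range the intercepted signal is many orders of magnitude stronger than the radar echo, which is why a drone that radiates continuously is so much easier to find by listening than by illuminating it.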
There has been a lot of speculation about how they then attacked it. The most likely scenario suggests that they jammed the command link, forcing the drone to follow a pre-programmed routine for what to do if the link is lost. It might, for example, be required to circle a given location and wait for instructions, or even to set a course for somewhere near home, hold, and wait for the ground station to acquire them in line-of-sight mode.
Either way, it would use GPS to find its way, and it seems likely that the Iranians broadcast a fake GPS signal for it. Clive “Scary Commenter” Robinson explains how to go about spoofing GPS in some detail in Bruce Schneier’s comments, and points out that the hardware involved is cheap and available.
The military GPS signal is encrypted, so preparing a fake version of it would mean breaking the crypto. More likely, the Iranians either jammed the military signal, forced the drone to fall back on the civilian one, and spoofed that; or else they picked up the real signal at the location they wanted to fake and re-broadcast it somewhere else. The second technique was known as “meaconing” during the second world war, when the RAF Y-Service used it against German radio navigation; we would now call it a replay attack with a fairly small time window. (In fact, it’s still called meaconing.) Because GPS is based on timing, there would be a limit to how far off course they could push the drone this way without producing either impossible data or messages that failed crypto validation, but this is a question of degree.
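The timing constraint is easy to put numbers on. A GPS receiver turns signal delay into range at the speed of light, so every microsecond of replay delay adds roughly 300 metres of apparent range to that satellite. A back-of-envelope sketch (mine, not from any of the linked discussions):

```python
C = 299_792_458.0  # speed of light, m/s

def pseudorange_shift(delay_s):
    """Extra apparent range injected by replaying a GPS signal delay_s late."""
    return C * delay_s

# A 1-microsecond delay shifts the apparent satellite range by ~300 m;
# a 1-millisecond delay gives ~300 km, which starts to look impossible
# to the receiver's own sanity checks.
for delay in (1e-6, 1e-4, 1e-3):
    print(f"replay delay {delay * 1e3:.3f} ms "
          f"-> range error {pseudorange_shift(delay) / 1e3:.1f} km")
```

This is the sense in which it’s “a question of degree”: small offsets move the computed fix plausibly, while large ones produce ranges the receiver can reject.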
It’s been suggested that Russian hackers have a valid exploit of the RSA cipher, although the credibility of this suggestion is unknown.
The last link is from Charlie Stross, who basically outlined a conceptual GPS-spoofing attack in my old Enetation comments back in 2006, as a way of subverting Alistair Darling’s national road-pricing scheme.
Anyway, whether they cracked the RSA key or forced a roll-back to the cleartext GPS signal or replayed the real GPS signal from somewhere else, I think we can all agree it was a pretty neat trick. But what is the upshot? In the next post, I’m going to have a go at that…
Among the failings highlighted by the federation, which represents 136,000 officers, were chronic problems, particularly in London with the hi-tech digital Airwave radio network. Its failings were one reason why officers were “always approximately half an hour behind the rioters”. This partly explained, it said, why officers kept arriving at areas from where the disorder had moved on.
The Airwave network was supposed to improve the way emergency services in London responded to a crisis after damning criticism for communication failures following the 7 July bombings in 2005.
It is being relied upon to ensure that police officers will be able to communicate with each other from anywhere in Britain when the Olympics come to London next summer. The federation wants a review into why the multibillion-pound system collapsed, leaving officers to rely on their own phones.
“Officers on the ground and in command resorted, in the majority, to the use of personal mobile phones to co-ordinate a response,” says the report.
It sounds like BB Messenger over UMTS beats shouting into a TETRA voice radio, as it should being about 10 years more recent. Not *this* crap again!
There’s surely an interesting story about how the UK managed to fail to procure a decent tactical radio for either its army or its civilian emergency services in the 1990s and 2000s. Both the big projects – the civilian (mostly) one that ended up as Airwave and the military one that became BOWMAN – were hideously troubled, enormously overbudget, and very, very late. Neither product has been a great success in service. And it was a bad time for slow procurement, as rapid technological progress (from 9.6Kbps circuit-switched data on GSM in 1998 to 7.2Mbps HSPA in 2008, from Ericsson T61s in 2000 to iPhones in 2008) meant that a few years’ delay would leave you far behind the curve.
And it’s the UK, for fuck’s sake. We do radio. At the same time, Vodafone and a host of M4-corridor spin-offs were radio-planning the world. Logica’s telecoms division, now Acision, did its messaging centres. ARM and CSR and Cambridge Wireless were designing the chips. Vodafone itself, of course, was a spinoff from Racal, the company that sold army radios for export, the official ones being kit nobody would import in a fit. BBC Research’s experience in making sure odd places in Yorkshire got Match of the Day all right went into it more than you might think.
Presumably that says something about our social priorities in the Major/Blair era? That at least industrially, for once we were concentrating on peaceful purposes (but also having wars all over the place)? Or that we weren’t concentrating on anything much industrially, and instead exporting services and software? Or that something went catastrophically wrong with the civil service’s procurement capability in the 1990s?
It’s the kind of story Erik Lund would spin into something convincing.
The Libyan rebels are making progress, as well as robots. Some of them are reported to be within 40 miles of Tripoli, those being the ones who the French have been secretly arming, including with a number of light tanks. Now that’s what I call protecting civilians.
They are also about to take over the GSM network in western Libya like they did in the east. How do I know? I’m subscribed to the Telecom Tigers group on LinkedIn and so I get job adverts like these two.
ZTE BSC Job: URGENT send cv at [e-mail] for the job position or fw to your friends : Expert Telecom Engineer ZTE BSC.Location:Lybia,Western Area,1300USD/day,start immediate
URGENT send cv at [e-mail] for the job position or fw to your friends : ERICSSON MGW/BSS/BSC 2G/RAN Implementation Senior Expert Engineer.Location:Lybia,Gherian,Western Mountains,1300-1500 USD/day
In fact, one of the ads explicitly says that the job is in the rebel zone and the other is clear enough. What the rebels are planning to do is clear from the job descriptions:
must be able to install a ZTE latest generation BSC – platform to be integrated with 3rd party switching platform,solid knowledge of ZTE BSC build out and commissioning to connect up to 200 existing 2G/3G sites
To put it another way, they want to unhook the existing BTSs – the base stations – from Libyana and link them to a core system of their own, and in order to do this they need to install some Chinese-made Base Station Controllers (BSCs – the intermediary between the radio base stations and the central SS7 switch in GSM).
Here’s the blurb for the Ericsson post:
Responsible for commissioning and integrating an Ericsson 2G BSS network (2048-TRX Ericsson BSC plus Ericsson BTSs) in a multi-vendor environment. Will be responsible for taking the lead and ownership of all BSS commissioning and integration, leading the local team of BSS engineers, and managing the team through to completion of integration.
Experience of Ericsson MGW implementation, and integration of MGW with BSS, is highly desirable. Experience of optical transmission over A-interface.
Compilation, creation and coordination of BSC Datafill. This will include creating, generating, seeking and gathering of all Datafill components (Transport, RF Frequencies, neighbor relations, handovers, Switch parameters, ABIS mapping, etc.) based on experience and from examination of existing network configuration and data. Loading of Datafill into the BSC to facilitate BTS integration.
Working with the MSC specialists to integrate the BSC with the MSC. Providing integration support to BTS field teams; providing configuration and commissioning support to the BSC field team.
So they’ve got some Ericsson BSCs, the base stations are Ericsson too, and an MSC (Mobile Switching Centre, the core voice switch) has been found from somewhere – interesting that they don’t say who made it. That’ll be the “3rd party switching platform” referred to in the first job. They’re doing VoIP at some point, though, because they need a media gateway (MGW) to translate between traditional SS7 and SIP. They need engineers to integrate it all and to work out what the various configurations should be by studying what Gadhafi’s guys left. (It’s actually fairly typical that a mobile network consists of four or so different manufacturers’ kit, which keeps a lot of people in pies dealing with the inevitable implementation quirks.)
The successful candidate will also have some soft skills, too:
Willing to work flexible hours, excellent interpersonal skills and the ability to work under pressure in a challenging, diverse and dynamic environment with a variety of people and cultures.
You can say that again. Apparently, security is provided for anyone who’s up for the rate – which doesn’t include the full board and expenses that are also promised.
They already have at least one candidate.
Amusingly for a comment on scalability, I couldn’t post this on D^2’s thread because Blogger was in a state. Anyway, it’s well into the category of “comments that really ought to be posts” so here goes. So various people are wondering how the New York Times managed to spend $50m on setting up their paywall. D^2 reckons that they’re overstating, for basically cynical reasons. I think it’s more fundamental than that.
The complexity of the rules makes it sound like a telco billing system more than anything else – all about rating and charging lots and lots of events in close to real time against a hugely complicated rate-card. You’d be amazed how many software companies are sustained by this problem. It’s expensive. The NYT is counting pages served to members (easy) and nonmembers (hard), differentiating between referral sources, and counting different pages differently. Further, it’s got to do it quickly. Latency from the US West Coast (their worst-case scenario) to nytimes.com is currently about 80 milliseconds. User-interface research suggests that people perceive a response as instant at 100ms – web surfing is a fairly latency-tolerant application, but when you consider that the server itself takes some time to fetch the page, and that the data rate in the last mile restricts how quickly it can be served, there’s a very limited budget of time for the paywall to do its stuff without annoying the hell out of everyone.
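Putting the post’s numbers together – the 80ms round trip and the 100ms “feels instant” threshold are from above, while the server-side render time is my own assumed figure for illustration:

```python
# Back-of-envelope latency budget for the paywall check.
perceived_instant_ms = 100  # users notice delays beyond ~100 ms
network_rtt_ms = 80         # US West Coast to nytimes.com, as measured above
server_render_ms = 15       # assumed time to fetch and serve the page itself

paywall_budget_ms = perceived_instant_ms - network_rtt_ms - server_render_ms
print(f"time left for rating/charging logic: {paywall_budget_ms} ms")
```

On those (admittedly rough) assumptions, the rating-and-charging step gets single-digit milliseconds before the site starts to feel slow.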
Although the numbers of transactions won’t be as savage, doing real-time rating for the whole NYT website is going to be a significant scalability challenge. Alexa reckons 1.45% of global Web users hit nytimes.com, for example. By comparison, Salesforce.com is 0.4%, and that’s already a huge engineering challenge (because it’s much more complicated behind the scenes). There are apparently 1.6bn “Internet users” – I don’t know how that’s defined – so that implies that the system must scale to 268 transactions/second (or about 86,400 times the daily reach of my blog!)
A lot of those will be search engines, Internet wildlife, etc., but you still have to tell them to fuck off, so they’re part of your scale and scope calculations. That’s about a tenth of HSBC’s online payments processing in 2007, IIRC, or a twentieth of a typical GSM Home Location Register. (The usual rule of thumb for those is 5 kilotransactions/second.) But – and it’s the original big but – you need to provision for the peak. Peak usage, not average usage, determines scale and cost. Even if your traffic distribution were weirdly well-behaved and followed a normal distribution, you’d encounter an above-95th-percentile day once in every 20. And network traffic doesn’t behave; it’s usually more, ahem, leptokurtic. So we’ve got to multiply that by their peak/mean ratio.
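For what it’s worth, here’s the arithmetic as a sketch. The reach and user figures are the ones quoted above; the peak-to-mean ratio of 5 is an assumed illustrative figure, not anything from the NYT:

```python
# Rough reproduction of the back-of-envelope scaling estimate above.
internet_users = 1.6e9   # the quoted "Internet users" figure
daily_reach = 0.0145     # Alexa's 1.45% of web users hitting nytimes.com
seconds_per_day = 86_400

mean_tps = internet_users * daily_reach / seconds_per_day
print(f"mean rate: {mean_tps:.1f} transactions/s")  # roughly the 268/s above

peak_to_mean = 5  # assumed for illustration; web traffic is spiky
provisioned_tps = mean_tps * peak_to_mean
print(f"provision for: {provisioned_tps:.0f} transactions/s")
```

Even a modest peak-to-mean assumption pushes the provisioning target well past a thousand rated transactions per second, which is squarely telco-billing territory.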
And it’s a single point of failure, so it has to be robust (or at least fail to a default-open state but not too often). I for one can’t wait for the High Scalability article on it.
So it’s basically similar in scalability, complexity, and availability to a decent sized MVNO’s billing infrastructure, and you’d be delighted to get away with change from £20m for that.
This LA Times story about the Boeing 787 Dreamliner (so called because it’s still a dream – let’s get the last drop from that joke before it goes into service) and the role of outsourcing is fascinating. It is partly built on a paper by a senior Boeing engineer which makes, among other things, this point:
Among the least profitable jobs in aircraft manufacturing, he pointed out, is final assembly — the job Boeing proposed to retain. But its subcontractors would benefit from free technical assistance from Boeing if they ran into problems, and would hang on to the highly profitable business of producing spare parts over the decades-long life of the aircraft. Their work would be almost risk-free, Hart-Smith observed, because if they ran into really insuperable problems they would simply be bought out by Boeing.
Even in its own financial terms, the whole thing didn’t make sense, because the job of welding together the subassemblies and hooking up the wires doesn’t account for much of the profit involved. Further, the supposedly high-margin intellectual-property element of the business – the research, development, and design of the plane – is only a profit centre after the plane has been built. Until then, it soaks up enormous amounts of investment. The outsourcers were expecting the lowest-margin element of the company, assembly, to carry the costs of developing new products. Whether they were funded with equity or with debt, this implies that the systems-integrator model, for aircraft at least, fundamentally restricts innovation.
This is one of the points I’d like to bring out here. Hart-Smith’s paper – you can read it here – is much stronger on this than the LA Times was willing to be. It’s a fascinating document in other ways, too. For a start, the depth of outsourcing Boeing tried to achieve with the 787 is incompatible with many of the best practices used in other industries. Because the technical interfaces invariably become organisational and economic ones, it’s hard to guarantee that modules from company X will fit with the ones from Y, and if they don’t, the adjustment mechanism at the financial level is a lawsuit – but at the technical level, it’s rework. The dodgy superblock has to be re-worked to get it right, and this tends to land back with the manufacturer. Not only does this defeat the point of outsourcing in the first place, it ignores the huge importance of avoiding expensive rework.
Further, when anything goes wrong, the cost migrates remorselessly to the centre. The whole idea of systems integration and outsourcing is that the original manufacturer is just a collection of contracts, the only location where all the contracts overlap. Theoretically, as near to everything as possible has been defined contractually and outsourced, except for a final slice of the job that belongs to the original manufacturer. This represents, by definition, all the stuff that couldn’t be identified clearly enough to write a contract for it, or that was thought too risky/too profitable (depends on which end you look at it) for anyone to take the contract on. If this was finance, rather than industry, it would be the equity tranche. One of the main reasons why you can’t contract for something, of course, is that you don’t know it’s going to happen. So the integrator essentially ends up holding all the uncertainty, in so far as they can’t push it off onto the customer or the taxpayer.
This also reminded me a little of Red Plenty – one of the problems is precisely that it’s impossible to ensure that all the participants’ constraints are mutually compatible. There are serious Pareto issues. There may be something like an economic law that implies that, given that there are some irreducible uncertainties in each contractual relationship, which can be likened to unallocated costs, they flow downhill towards the party with the least clearly defined role. You could call it Harrowell’s U-Bend. (Of course, in the macroeconomy, the party with the least well defined role is government – who you gonna call?)
Anyway, Hart-Smith’s piece deserves a place in the canon of what could be termed Sarcastic Economics.
I suspect that the problems he identifies have wider consequences in the economy. Given that it’s always easier to produce more or less of a given good than it is to produce something different, the degree to which it’s possible to reallocate capital has a big impact on how quickly it’s possible to recover from a negative shock, and how bad the transition process is. I would go so far as to argue that it’s most difficult to react to an economic shock by changing products, it’s next most difficult to react by producing more (you could be at a local maximum and need to invest more capital, for example), and it’s easiest to react by producing less, and that therefore there’s a structural bias towards deflationary adjustment.
Hart-Smith’s critique holds that the whole project of retaining product development, R&D, and commercial functions like sales in the company core, and contracting everything else out actually weakens precisely those functions. Rather than being able to develop new products quickly by calling on outside resources, the outside resources suck up the available capital needed to develop new products. And the U-bend effect drags the costs of inevitable friction towards them. Does this actually reduce the economy’s ability to reallocate capital at the macrolevel? Does it strengthen the deflationary forces in capitalism?
Interestingly, there’s also a presentation from Airbus knocking about which gives their views on the Dreamliner fiasco. Tellingly, they seem to think that it was Boeing’s wish to deskill its workforce as far as possible that underlies a lot of it. Which is ironic, coming from an enormous aerospace company. There’s also a fascinating diagram showing that no major assembly in the 787 touches one made by the same company or even the same Boeing division – exactly what current theories of the firm would predict, but then, if it worked we wouldn’t be reading this.
Assembly work was found to be completed incorrectly only after assemblies reached the FAL. Root causes are: oversight not adequate for the high level of outsourcing in assembly and integration; qualification of low-wage, trained-on-the-job workers that had no previous aerospace experience.
I wonder what the accident rate was like. A question to the reader: 1) How would you apply this framework to the cost overruns on UK defence projects? 2) Does any of this remind you of rail privatisation?