Archive for the ‘geekage’ Category

slightly less lazyweb

So I couldn't just drop the OAuth library into the plugins directory, because I didn't have sudo rights there; instead I wgot it, zipped it, uploaded it as a plugin via the web interface, and changed the include to point there. And now the end of the bifurcation era may be in sight. Not there yet, but the first 25 posts have been transferred, out of a mere 2,725 posts and 1,943 comments (before we even get to the Enetation ones; I've got the dump file somewhere, and that'll make a Python project for one of these days).

Meanwhile, has anyone got experience of running XBMC or one of the many other Linux media centres on a cheapo Android tablet or netbook device, preferably with the content somewhere on a network? I’m thinking of building a not-a-hifi system that lets me have different music rooms – just because I can.

So OpenSUSE 11.4 was out this week. As the Jedi said here:

gah! suse is never totally easy

Indeed. I thought I'd do an online upgrade, so I scheduled it for when I was in the office and therefore had a fast Internet link available. I applied all the remaining 11.3 updates, configured the three additional repos, did a "zypper ref" and then a "zypper dup", paged through the Flash player licence, and watched it report 500-odd MB of packages to grab. Much churning later, it started reporting missing packages, which I installed manually. Eventually it finished, and I ran "zypper verify" to check it out. This reported that vim-data was missing, so I installed it and went for a reboot.

Oh dear: the new distro apparently didn't know what an ext4 filesystem was, and although I could still start 11.3 from the boot menu, KDE wasn't working. So, back at home, I downloaded the ISO image (2 hours 20-odd minutes at home), burned a disc, and prepared for a clean install, which failed with a message about running out of processes in this runlevel. You guessed it: dodgy install media. Wiped and downloaded again. I check the MD5 hash. It's a miss. I start the download again and go out. I come back to find the laptop has rebooted and got to the failure point in 11.4. How? What? I restart in Windows and discover that 678 of 695MB had been fetched before something happened. It dawns on me that Microsoft had force-rebooted the bastard through Windows Update, although I'd set it to do nothing of the sort. I'm getting seriously pissed off now. I download it again, from a different mirror (ox.ac.uk rather than Kent Uni's mirrorservice.org). More hours. I check the MD5 hash. What do you know, it's wrong, and it's the same hash as last time. As an experiment, I burn it anyway, boot it, and run the media check utility.

Which fails at 63%, block 226192, in exactly the same location as the first time around. Riight: it looks like Novell has pushed a crappy image out to all the damn mirrors. Well, I can still get a Linux shell in 11.3, so I run it up, hook an ethernet cable to the Linksys box, run dhclient, and repeat the command-line distro upgrade. Although zypper still thinks all the dependencies are in place, when I tell it to "zypper dup" it still manages to find 258 package changes left over from the original upgrade. It takes an age, but eventually completes, and it's shutdown -r now time. And everything now works, right down to hibernated browser tabs.

Except for Python packages, of course. Pythonistas tend to dote on easy_install, but I'm still annoyed that I have to update this stuff out of sync with my Linux environment, especially as it lives in my root partition. Would it be so hard to put everything in PyPI into an RPM repository and never worry about it ever again? This is actually an important lesson about the mobile app stores, and about the original app store itself, Firefox extensions. Freedom goes with structure.

Lessons from this: once an upgrade shows any signs of weirdness, abort it and start again. And don't expect an online upgrade to work first time – this happened to me with a past OpenSUSE upgrade, come to think of it, but I clearly learned nothing.

The version of Nokia's Share Online application that shipped with my E71 has a problem. I was trying to upload photos from Berlin over O2 Germany's data network to my Flickr account, and it unexpectedly returned an authentication error; when I looked at "your recent photos and videos", I got photos belonging to Flickr user mrspin, then to three other users. In fact, I get a different user every time.

I could still reach my own Flickr page via the web browser, and the problem is not O2.de- or roaming-specific; it happens here in the UK as well. What I think is happening is something like this: 3UK uses a lot of NAT in its data network, as mobile operators often do, and something about Share Online doesn't handle this well. Specifically, I reckon it's using the device's IPv4 address as part of an identifier – as addresses in 3UK's netblock are rapidly reused for other users, someone else may log in from IP address x.x.x.x, and a subsequent request from me then gets bound to the wrong account.

Oddly, the browser isn’t affected. I suspect, therefore, that Share Online is doing some sort of weird magic rather than just using the DNS and Flickr’s own authentication mechanism – perhaps it doesn’t resolve flickr.com every time, or honour the Flickr cookie correctly? After all, a Web authentication mechanism should cope with the same user logging in from multiple IP addresses. That should be obvious.

Fortunately, when I tried to write to the account, the authentication failed – as it should, since I was effectively trying to log in to the wrong account. This suggests that Share Online doesn't actually re-authenticate against flickr.com/yourname for read-only requests, but instead caches replies matched to IP addresses somewhere in the network. As mobile operators reuse IP addresses heavily, and make extensive use of non-routable (RFC 1918) addresses which aren't globally unique inside their networks, this is a really bad idea. Something is obviously cached, as the problem persists from my own WLAN as well.
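To make the suspected failure mode concrete, here's a toy model in Python: a cache keyed on the client's source IP, sitting behind a carrier NAT that hands the same address to different subscribers in quick succession. The names and addresses are invented; this is a sketch of the bug I think exists, not of Nokia's actual code.

# replies cached against the client's source IP address
cache = {}

def recent_photos(source_ip, account):
    # wrong: the source IP is treated as if it identified the user
    if source_ip in cache:
        return cache[source_ip]
    photos = "photos belonging to " + account
    cache[source_ip] = photos
    return photos

# two subscribers leave the operator's NAT on the same (RFC 1918) address
print(recent_photos("10.128.4.7", "mrspin"))  # mrspin populates the cache...
print(recent_photos("10.128.4.7", "me"))      # ...and I get mrspin's photos back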

I suspect this used to work because the percentage of a typical operator's IP address space actually in use was low, so there was a good chance the same address wouldn't be reused for the same application before the cache expired. That's no longer true.

There appears to be a new version of the application out, so I'll try it and let you know.

I don’t yet know if I can, for example, see content marked as “private” from other users, or of course if they can see mine.

firefox

Whining about Firefox crashes. Here’s one day last week:

Start 0930
1515 – 5 groups, 111 tabs. Pressed page down key; CRASH. Resume successful.
1538 – Hang. RAM usage peaks at 66%, CPU 1 goes to 100%
1541 – Running, very slowly. Resource utilisation still very high
1542 – Hang
1550 – Cache cleared, normal ops resumed
1813 – Hang. RAM goes from 23% to 40%, CPU 1 to 100%
1828 – Memory leak – top shows RAM usage at 55%, but the system is using the swapfile heavily
1845 – Memory leak – RAM 84%. CPU 1 2%
1910 – RAM usage down to 63%, but still crappy
1917 – Memory leak – RAM 80%, CPU 1 3%

Next day:
1110 – Hang. CPU 1 103%, RAM 44.2%
1115 – Hang. Firefox process killed from command line. Fails to launch, “existing process already running” error message. Kill -9 from command line.
1335 – Hang CPU 1 99%, RAM 33%
1337 – Wow; it’s recovered to tolerably normal functioning.

I agree 111 tabs is a lot, but you can see why I’m pissed off.

My heart sank when I saw these words: Firefox user interface guru. And yes, he’s had an idea. A suggestion: rather than a fancy new UI, how about having a crack at stability? FF 3, and the later FF 2s, were and are crashy, hangy, and inconsistent. It regularly (daily) gets its knickers in a twist and either fails to blit the screen, hangs, reads from the keyboard buffer extremely slowly, or just crashes without error messages, warnings, logs or anything else. And the “Save and Quit” function doesn’t work, which is probably connected with the fact that most crashes at least let you restore the tabs, but some lose even that.

If they want a new idea, what about having a crack at whatever is to Firefox as Firefox was to Mozilla, a lightweight, fast, rugged cut-down version of the bloated original? They could keep only the rendering engine and things like SSL, and make everything else an extension. Personally, I’d use Konqueror if it had equivalents for the various extensions I use. Anyway, Mozilla thinks Firefox is an operating system. And the thing about operating systems is that stability, security, and affordances for applications are the first and indeed only things that matter. Fancy user interfaces can be applied later.

Need to get contacts out of a Nokia phone and into something else? There are various standard things – the “switch mode” on the newer ones may help. But the PC Suite client (which still only seems to exist for Windows) doesn’t let you export data from the phone as anything but a .nbu (Nokia Back Up) file. This clearly isn’t good; but the other day I had to get contacts out of a Nokia 6500 with no screen.

So I reboot into Windows, hook up the gadget by USB cable, run the software client, and back up the data on the phone to the computer. Opening the .nbu in a text editor, you'll find a lot of base64-encoded data (that's any photos you backed up), but towards the end of the file are all the contacts that were saved to the phone's persistent memory, in standard vCard format, just separated by a line containing two tabs, a plus sign, and a letter. Like this:
+ x
BEGIN:VCARD
VERSION:3.0
N;ENCODING=QUOTED-PRINTABLE;CHARSET=UTF-8:;Alice;;;
TEL;VOICE:001111111
END:VCARD
+ x
BEGIN:VCARD
VERSION:3.0
N;ENCODING=QUOTED-PRINTABLE;CHARSET=UTF-8:Anne;;;;
TEL;VOICE:002222222
END:VCARD

(Phone numbers changed to protect the guilty)

If you just split out the vCards from the file, whatever you try to import them with will choke after the first contact. Get rid of the lines containing the funny characters and tabs, however, and you will find that a perfect multiple import is possible. Depending on how many contacts you have, you may want to do this with a scripting language, or just find and replace.
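Here's a rough Python sketch of the scripting-language route, assuming the backup was saved as backup.nbu and that the vCards appear in it as plain text, as they did for me; pulling out just the BEGIN:VCARD…END:VCARD blocks disposes of the separator lines as a side effect.

import re

# the .nbu is mostly binary, but the vCard text survives a lenient decode
with open("backup.nbu", encoding="latin-1", errors="ignore") as f:
    blob = f.read()

# keep only the vCard blocks; everything between them (the separators) is dropped
cards = re.findall(r"BEGIN:VCARD.*?END:VCARD", blob, re.DOTALL)

with open("contacts.vcf", "w", encoding="utf-8") as out:
    out.write("\n".join(cards) + "\n")

print("extracted", len(cards), "contacts")

Import the resulting contacts.vcf into whatever you like.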

James Wimberley has a good post regarding changing the electricity grid to support dynamic demand-response, where things like Dutch cold-stores or your fridge over-cool when electricity is plentiful and cut out when the grid is under strain. It’s hugely important in adapting the electrical system to use stuff like gigawatt-size wind farms; essentially, it’s a way to store large volumes of electricity, with the added feature that you lose no power in the process. Not using the power is always 100 per cent efficient.

Anything that lets you buck entropy has to be good, but there are some serious problems to get over. James dislikes some versions of this as being too Stalinist – the grid reliability controller reaches out and turns everyone down a notch. It’s not Stalinism he’s thinking of, though, really, but rather Charlie Stross’s third great evil of modernity, high technocracy. Stalinists would have planned your energy requirements in advance.

He suggests instead that what we really need is a device that controls all the appliances behind it according to rules you set up, and that receives a feed of data about electricity prices, marginal CO2 emissions, and load on the grid. Which presupposes a standard for announcing grid data onto the Web; RSS for power stations.
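Something like this, in other words – a toy Python sketch of the rules box, with the feed format and the thresholds entirely made up for illustration, since the whole point is that no such standard exists yet:

# one rule per appliance; each sees the latest reading from the grid data feed
def fridge_rule(grid):
    # coast when the grid is strained, over-cool when power is cheap and clean
    if grid["load_factor"] > 0.95:
        return "off"
    if grid["price_p_per_kwh"] < 5 and grid["marginal_co2_g_per_kwh"] < 300:
        return "overcool"
    return "normal"

def dishwasher_rule(grid):
    # only run when the price drops below the threshold the user set
    return "run" if grid["price_p_per_kwh"] < 8 else "wait"

RULES = {"fridge": fridge_rule, "dishwasher": dishwasher_rule}

def tick(reading):
    # one pass over the appliances for each update from the feed
    return {name: rule(reading) for name, rule in RULES.items()}

print(tick({"price_p_per_kwh": 4.2, "marginal_co2_g_per_kwh": 250, "load_factor": 0.8}))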

But I'm sceptical about a few things. As far as I can tell, demand response is much more useful for managing the grid than for shaving down demand overall. Which is great, but it falls foul of one of my beefs with much of the official green movement: the macro-micro issue. You can't open a newspaper without being lectured by the lifestyle pages about fairly marginal changes; you rarely see anything about moving big chunks of the energy budget.

And I think the geekosphere is guilty of this: Wattson, AMEE and friends are cool, but it's all about shaving percentages here and there. Consider this post about the huge economic returns on US Federal energy-efficiency R&D; it's all about building components, fridge compressors and the like. There's a piece of management-consulting quasi-wisdom that says it's always better to remove waste (muda) from a process than to optimise it. But who wants to discuss lumps of building material?

On the good side, though, I notice that prices for my favourite pet project are coming down gradually. And I like this quote a lot:

The Roadster is faster than anything on the road, including the young guy in a fuel cell SUV who improbably challenged me to a Saturday night race on Hollywood Blvd.

Oh yes, gleeful leftie hacker tournament after the BNP did a 0.16-megarecord datafart. My effort contains absolutely no personally identifying data; it's made from this guy's counts by region and population data from National Statistics, to show the number of BNP activists per 100 citizens in each UK region. People kept asking for that kind of information, so I made it. Note that the g-spreadsheet guy used classifications that don't quite map to NatStats' regions, so I decided to assume that his "South Central England" was the West Midlands and his "Midlands" was the East Midlands, and to total Yorks & Humber and the North-East to match his "North East England".
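For what it's worth, the arithmetic is trivial once the two tables line up; the fiddly bit is exactly that region mapping. Here's a Python sketch – the counts and populations are left as inputs to be filled in from the spreadsheet and from National Statistics, as I'm not reproducing any of the leaked data here.

# his labels mapped to the NatStats region(s) I took them to cover
REGION_MAP = {
    "South Central England": ["West Midlands"],
    "Midlands": ["East Midlands"],
    "North East England": ["Yorkshire and the Humber", "North East"],
    # the rest map one-to-one
}

def per_hundred(counts, populations):
    # counts keyed by his labels, populations keyed by NatStats region
    rates = {}
    for label, activists in counts.items():
        regions = REGION_MAP.get(label, [label])
        population = sum(populations[r] for r in regions)
        rates[label] = 100.0 * activists / population
    return rates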

Update: Well, in the end I used his numbers by county to create a table that matches the regions. Here’s a new and correct visualisation that shows Yorkshire where it should be, in the lead. Ernst Wilhelm Bohle lives!


|

To the developers of Mozilla Ubiquity: what are you thinking?

The whole idea is that you interact with Web things through a command line in the browser that is close to natural language; it’s all a bit like this post about the world’s favourite command line. But at the moment it’s not particularly useful for anything. Leave aside the fact that ’email firstname.lastname@example.com’ creates an e-mail message with a blank to: field and the string ‘firstname.lastname@example.com’ in the message body (yes, they actually did that).

But it's missing something crucial: |. If you're familiar with command lines in general, you'll recognise it – it's the character commonly called a pipe. If you're not, it's called a pipe because, on a command-line interface, it's normally used to "pipe" the output of whatever command was issued before it into the input of the command that comes after it. For example, on a Unix/Linux machine, you could run wget -qO- https://yorksranter.wordpress.com/ | grep Worstall | mail 'firstname.lastname@example.com' and dear old Firstname Lastname gets an e-mail containing every line on the front page of my weblog that mentions Worstall. We ran one command (wget, with -qO- telling it to write the page to standard output rather than a file) with an argument (my URL), sent its output to a second command (grep) with a search string (Worstall), and then sent the result of that to a third command (mail).

We could keep chaining them together, and we could use logical operators as well, which means the Unix shell is a programming language in its own right. Alternatively, in some command lines and in every programming language I've ever heard of, you can wrap a command in (brackets), and it's as if its output were written there. This is the whole point of a command line; it's the main reason anyone would bother with one in this day and age. As far as I can work out from a brief perusal of the voluminous hype and much scantier documentation, Ubiquity is meant to help you do multiple tasks on the Web efficiently. But without either a | or a (), it's no use at all.
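To show what I mean by the (brackets) version, here's the same chain written inside-out in Python rather than in the shell. It assumes a local SMTP server, and fetch, grep and send_mail are stand-ins I've made up for this sketch, not Ubiquity verbs.

from urllib.request import urlopen
from email.message import EmailMessage
import smtplib

def fetch(url):
    # wget: pull the page down as text
    return urlopen(url).read().decode("utf-8", errors="replace")

def grep(pattern, text):
    # grep: keep only the lines that mention the pattern
    return "\n".join(line for line in text.splitlines() if pattern in line)

def send_mail(body, to_addr):
    # mail: wrap the text in a message and hand it to the local MTA
    msg = EmailMessage()
    msg["From"] = "me@localhost"  # placeholder sender
    msg["To"] = to_addr
    msg["Subject"] = "Worstall sightings"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

# the pipeline, inside-out: the brackets do the job of the pipes
send_mail(grep("Worstall", fetch("https://yorksranter.wordpress.com/")), "firstname.lastname@example.com")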

An unremarked-on aspect of last week's 1.5% interest rate cut: are we already living in a near-real-time planned economy, as Stafford Beer foresaw? It sounds like I must be joking. But how else are we to interpret Sir Terry Leahy's trip to see the Bank of England and the Treasury? Tesco boasts that one in every eight pounds spent in the UK passes through its tills; that bit is always in the papers. What rarely gets mentioned, except to the trade, is the huge management-information system behind them.

If you wanted close to real-time information about the consumer economy, I can’t think of anything that would work better. After all, even at KwikSave you’d get a daily cycle of cashflow information. And Tesco runs a hell of a lot of deliveries; their visualisation dashboard must provide a fearsome amount of data on what flows where. Chuck in the Clubcard voluntary-surveillance stuff. The tills are, unlike Kwikkies, presumably networked.

Back at the start of the…well, at the start of the latest frantic wave of the world financial crisis, I messaged Dsquared to say that I had the impression his workplace was suddenly full of Bakelite consoles springing out of forgotten compartments. In fact, of course, the people going into manual reversion for the first time in 30 years were the good folk at HM Treasury. According to David Scott and Alexei Leonov's memoir, during the launch of an Apollo spacecraft there was a T-handle in front of the pilot. Turn it one-quarter of the way and the launcher's computer and those of the capsule were locked out, with the rocket's control systems slaved to the capsule's controls, so you could fly the whole thing by hand.

Well, with the financial intermediaries choked up reprocessing all the stuff they shipped off the balance sheet – or rather, into the twilight zone – someone has to control these things. Ordnung muß sein. Now we’re well into T-handle country. What are we going to do with it?