Archive for the ‘brains’ Category

Dan Lockton would probably be interested in this…

The robot is this Bristol Robotics Lab project; both the people in the thread at Jamie Zawinski’s where I saw it, and everyone I’ve shown it to, immediately think it looks like a cat.

In fact, in a sense, they do recognise it as a cat – it’s roughly the right size and shape, it’s in the right place, and if you wave a piece of string in its whiskers it responds much like a cat. But actually, as the project Web site tells us:

The robot was designed to reproduce the behaviour of rats as they use their whiskers to explore their environment. To get a clearer picture of how rats use their whiskers we filmed them using high speed video cameras (500fps) and manually tracked the position of each whisker in the array on a frame by frame basis. Software based automatic tracking is still very much in its infancy though there are a number of groups (including our Sheffield partners) who are now working toward such an application.

The data from this whisker tracking allowed us to quantify the kinematics of whiskers as the rats explored novel environments. From this we found that following a whisker making contact with an object there was a very rapid (~13ms) change in the velocity profile of the ‘whisking’, or movement pattern of the whiskers. We also observed that the rat will tend to move, or orient, its nose toward the exact point of contact.

Our hypotheses were that the rat was trying to optimise the force applied by the whiskers making contact with the object as well as bringing as many additional whiskers as possible, and its nose for smelling, to bear on that point. The orienting behaviour we see as an example of a higher level control loop through the brain, very similar in nature to how we as visually dominant animals rapidly orient, or saccade, the fovea of our eyes toward interesting events detected by our peripheral vision.

To this end we designed our robot to mimic both the low level contact mediated adaptation of the whisker motion pattern and the ability to orient its ‘nose’ towards points in three dimensional space. Designing the physical robot to be capable of mimicking these behaviours allows us to test different computational models of the underlying brain structures which can control it.

So it’s a ratbot, and interestingly enough, it’s an example of a hardware simulation of a biological phenomenon.
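
Out of idle curiosity, here’s roughly what that contact-mediated adaptation might look like as code. This is purely my own sketch, not anything from the project: it assumes a single whisker’s tracked angle per frame at the quoted 500fps, and treats a sudden departure from the recent whisking rhythm, within the ~13ms window mentioned above, as a contact; the three-sigma threshold and the function itself are invented for illustration.

```python
import numpy as np

FPS = 500                # frame rate of the high-speed video quoted above
WINDOW_MS = 13           # the ~13 ms response window from the project's data
WINDOW_FRAMES = max(2, round(WINDOW_MS * FPS / 1000))   # roughly 6-7 frames

def detect_contact(whisker_angle_deg, threshold_sigma=3.0):
    """Return the index of the first frame at which the whisking velocity
    departs sharply from its recent baseline, or None if it never does.

    whisker_angle_deg: 1-D array of one whisker's angle, one value per frame.
    """
    velocity = np.gradient(whisker_angle_deg) * FPS      # degrees per second
    for i in range(WINDOW_FRAMES, len(velocity)):
        baseline = velocity[i - WINDOW_FRAMES:i]
        spread = baseline.std()
        if spread == 0:
            continue
        # A jump several standard deviations outside the recent rhythm is
        # treated, crudely, as a contact-triggered change in the profile.
        if abs(velocity[i] - baseline.mean()) > threshold_sigma * spread:
            return i
    return None
```

The real system presumably does something far more sophisticated, and in hardware; the point is only that the behaviour described in the quote comes down to a fast, local decision rule rather than anything needing a whole-brain model.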

But this is also an example of an interesting design phenomenon: if you want objects to be immediately comprehensible, it helps to use the patterns Dan details here – notably, in this case, similarity, mimicry, and role-playing. Everyone knows how to act around a cat, because cats have trained us to; a robot, not so much. And this part of the project will be unavoidably cattish:

We hope to be able to demonstrate the validity of the proposed brain model by the robot being able to chase an object (perhaps a remote controlled car) moving through its whisker field

Aww. Also, our associations with cat-like and dog-like behaviour stimulate our curiosity towards such machines, in part because we project it onto them. Just as making a new smartphone an interesting object to handle speeds up learning to use it, a robot that encourages curiosity and interaction towards itself will speed up its users’ learning.

I suspect that the response to the Bristol Scratchbot would have been rather different had we been told in advance that it was emulating a rat. (Special note – much of the robot was made on a rapid prototyper.)

There is a fascinating paper here on how people came to believe that there was a link between Iraq and Al-Qa’ida. Essentially, if you give people enough free-floating emotional energy, they are likely to decide that if you care so much, then there must be an explanation for the holes in your logic. It’s called inferred justification, and it surely explains why the global Right are so keen on content-free mobilisation. Keep ’em teabagging, in short; it stops them thinking.

Something similar is at work in this quote in a Conor Foley post at Crooked Timber:

Crime offers the imagery with which to express feelings of loss and social decay generated by these other processes and to legitimate the reaction adopted by many residents: private security to ensure isolation, enclosure and distancing from those considered dangerous

Strategies of neuro-politics; how do you keep other people from thinking, and indeed keep yourself from thinking? In the first study I mentioned, only 2 per cent of the people interviewed altered their beliefs based on new information, and 14 per cent of those who said they believed in a link between Al-Qa’ida and pre-war Iraq in the survey later denied it.

So what are we going to do about it? If the best idea anyone has on the Left is a High Pay Commission, we’re not getting anywhere. I’m against this for a couple of reasons: first, the obvious work-around is to take the money as profits instead, reorganising at least part of the company if necessary so that the super-high earners are shareholders or partners. It wasn’t many years ago that Goldman Sachs was still a partnership, after all.

Second, it doesn’t do very much for the office cleaners, even if it manages to offend the investment bankers. It doesn’t even bring in any tax revenue, nor does it hold out any hope of higher wages for the poor rather than marginally lower ones for the rich.

But what it does do is provide a focus for indignation; something to get worked up about, or in other words, a piece of politics-without-thinking.

If that’s no good, neither is the guy who’s trying to bill companies for the time he spent consuming their products; a clever conceit, and probably fun, but tragically art-knobber at bottom. (But then, as the inventor of ContentFree Comment, who am I to talk?) As Owen Hatherley remarks in a cracking interview:

Criticising consumerism is what people do when they can’t quite stomach criticising capitalism.

So, what to do? I was impressed by this guy’s style – as well as the .38 and the giant TEABAGGERS = FAIL sign, check out all those neat data visualisations on his banner! If Habermas and Hunter S. Thompson had collaborated, wouldn’t it have looked a little like that – the gonzo public sphere? But clearly this isn’t practical, or even desirable, on a large scale.

The big question, I think, is how to define the Left as the side that’s fighting for positive liberty, and to work out how we operationalise that. Chris Dillow is right that stronger unions, not high pay commissions, are the answer to that particular problem. But I’m also interested in things like this – politicised DIY, basically – and this, and of course neurogenesis.

We won’t get anywhere, however, as long as the incredible revolution in our understanding of cognition is reduced to a set of buzzwords (nudge, Taleb, etc.) used by the Tories to misdirect attention away from the ugly truth.

Is neurogenesis perhaps the most interesting scientific discovery of the times? I rather think it is. The government minister’s version: until quite recently, we thought that once you passed a certain early age, that was it for your supply of neurons, and you would only lose them. Paradoxically, that wasn’t incompatible with learning, as the ones you use more are preferentially conserved, and a sort of evolutionary process might therefore be at work. I remember being taught this at school in the early 1990s.

The City of Bradford Metropolitan Council can probably be forgiven this; the theory that adult brains do not regenerate was only decisively falsified in 1989. We now know that new brain cells are created throughout life at a surprisingly high rate, and in fact your brain is constantly being replaced. It’s a top field of research, and new discoveries are frequent. For example, we know that neurogenesis is somehow associated with the olfactory system (new neurons crawl along blood vessels to the olfactory bulb, then move on to their new roles elsewhere in the brain, a bit like geeks flocking into the one interesting session at the conference), that its regulation is involved in depression and Alzheimer’s disease, both of which seem to involve abnormally low levels of it, and that various external factors influence it.

Learning new things, socialising, taking physical exercise, and falling in love (or lust) all increase the rate at which new neurons are produced. More medically, neurons are produced from stem cells, which opens up the possibility of acting directly on the process. We don’t know yet what the consequences of overdoing it would be; science fiction is, however, working on it.

Lab monkeys demonstrate unusually, indeed pathologically, low levels of neurogenesis, which is believed to be caused by a sterile and boring environment; in fact, Elizabeth Gould, one of the discoverers of adult neurogenesis, had to redesign the lab in order to verify that this was so.

Fascinatingly, childhood poverty reduces neurogenesis, and it does this by increasing levels of chronic stress. Transient stress seems to regulate neurogenesis up – hardly surprising, given that this is how we often learn – but permanent insecurity makes you stupid, depressed, and vulnerable to dementia.

At the moment, the government is terribly keen on “happiness” and especially on administering cognitive-behavioural therapy to the poor. Unfortunately, the hard scientific facts seem to suggest that they would be much better advised to concentrate on a sort of Attleean agenda of economic security and broadening culture, of whatever kind. Over the last 30 or so years, we’ve had a rash of economists (mostly) claiming to offer tough, quantitative answers to society’s questions, in opposition to a Left that deals in vague generalities or rabble-rousing. But the answers from science – real science, with radiation and monkeys and scalpels – are diametrically opposed to the ones from half-science.

Economics, in academia, is coping reasonably well with its own scientific revolution, the onslaught of Tversky and Kahneman; its policy-advising function is largely a failure, hopelessly trapped by a dead weight of hacks and ideologues. But there is now a second wave of intellectual disruption heading for it from the life sciences. I was discussing the cognitive-bias revolution on a mailing list recently, and there was talk about what a new school of thought aiming to incorporate the new insights should call itself. It’s not a trivial issue; the Friedmanites’ triumph had much to do with their marketing: “Free to Choose”, “rational expectations”, “economic rationalism” in Australia. My suggestion was “realistic economics”. Nobody wants to be on the side of unrealism, after all, which is what pre-Kahneman economics offers.

What have we here? Via Spencer Ackerman: David Wurmser, trying to sketch the wiring in his head on a really big piece of paper.

The spider chart was meant “to create a strategic picture, and that strategic picture is the foundation of policy change,” Wurmser said. “It helped you visualize, because if you saw, say, twenty relationships between X and Y, and twenty between Y and Z, then there’s at least a suspicion that Z and X are interacting through Y.” A map like that could bring insight, but there were perils in surmising too much.

Suppose X and Y were Dick Cheney and Colin Powell. Twice they served in senior posts under presidents named Bush. In the early 1990s, they worked at the same address and were spotted together on international flights. They communicated frequently, encrypting their secrets….

That’ll have been back when they still trusted him with the felt tip pens, I suppose. It reminds me a lot of this post from last August, regarding surrealism, rolling news, and TV anchor Glenn Beck’s “methodology”, which seems to have been identical to Wurmser’s.

The problem with this sort of semi-random links-and-ties analysis is twofold: not only is your brain predisposed by millions of years of evolution to impose patterns on raw data, which means you’re bound to find patterns if you look for them, but the spurious ones we inevitably perceive come from somewhere. Specifically, they come from our preconceptions, our prejudices, and perhaps most of all from the ones we don’t want to admit to. Just as you’d only dump the whole logs from a computer program in order to trace a bug, you don’t free-associate in order to make plans.
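
To see just how cheaply those spurious patterns arrive, here’s a toy sketch, entirely my own and nothing to do with the article quoted above: wire up a couple of hundred entities with purely random links, then apply the “twenty relationships between X and Y, twenty between Y and Z” rule from the quote.

```python
import random
from itertools import combinations

# Toy model: N entities, each pair directly "linked" with small probability p.
N, p = 200, 0.02
random.seed(1)

adj = {x: set() for x in range(N)}
for x, y in combinations(range(N), 2):
    if random.random() < p:
        adj[x].add(y)
        adj[y].add(x)

direct = sum(len(neighbours) for neighbours in adj.values()) // 2

# Wurmser-style inference: X and Z look "suspicious" if some Y links to both,
# even though no direct X-Z link exists anywhere in the data.
suspicious = sum(
    1 for x, z in combinations(range(N), 2)
    if z not in adj[x] and adj[x] & adj[z]
)

print(f"real links: {direct}   inferred X-Z-through-Y suspicions: {suspicious}")
```

Since the links here are pure noise, every one of those inferred relationships is meaningless, and they typically outnumber the real links several times over; your preconceptions then get to choose which of them feel significant.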

So as well as generating lots of time-sucking, budgetivorous false positives, this kind of thinking actually tends to make us behave even more stupidly, because it strengthens all the least rational forces within us.

I really mean this, by the way, and I’d love to hear from anyone who has comments about its potential implementation.

I liked this comment from Chris “Chris” Williams regarding Arthur C. Clarke:

What future? A better one than we’ve got: a worse one than we’d have had without him. Several million fanboys and girls grew up exposed to clear prose, opposition to nationalism, scepticism about organised religion, faith in technology, faith in humanity, and some great comedy.

“The guest of honour pressed a button (which wasn’t connected to anything). The chief engineer threw a switch (which was).” – or thereabouts. From Travel by Wire. All there at the start.

Which amused me; especially as the same post got linked by the Adam Smith Institute. Ha – I can’t imagine two commercially deployed technologies whose development had less to do with Teh Market than satellite communications and GSM. Even though there is fierce competition in both fields, a lot of that competition is down to the fact that the GSM founding engineers designed it in, working for ASI-tastic organisations like nationalised Nordic telcos and the European Commission.

Satellites, well…you do know Bell Labs (itself hardly the most Thatcherite operation, and one Reaganism killed off pretty sharpish) actually considered launching the first comsat on a Soviet rocket? Beyond mockery, what I’m driving at is that Clarke delivered a solid disrespect for ideology as well as religion and nationalism and Western arrogance – surely, the Indian-engineer archetype must have something to do with all his Dr Chandras, next to the IITs and the unintended consequences of IBM being kicked out of India in the 70s? (And what would the ASI make of *that*?)

The political landscapes he delivered were always nicely sceptical of state bureaucracies (2001: A Space Odyssey can be read as an attack on the security-bureaucratic complex) and also of big business. He missed the revival of small business, but then, who didn’t? And his major political flaw was that he was too optimistic about technocratic cooperation – he seemed to believe that politics stopped in low Earth orbit, and Space Station One is essentially the European Union at L-5. Just as you can’t have non-political bread, you certainly can’t have non-political spaceflight; but of all the political mistakes you could make, it’s a pretty minor one compared with some of the others on offer during his career.

From the 1930s to today, he could have variously believed in die-hard opposition to Indian autonomy, to say nothing of independence, that Stalin was an honourable gentleman, that what we really need is a strong leader to discipline the feminine masses, that white people were smarter than other people, that the US intelligence services were engaged in a conspiracy to downplay Soviet power and that therefore we need many more nuclear weapons, that burning the North Sea oil reserves in order to support sterling at an exchange rate high enough to flatten the export sector was a good idea, that the UN is a secret Zionist conspiracy to take your guns, that what we really need is a restored Caliphate, or that invading Iraq was wise. And this is far from an exhaustive list. Literally no other period of human history has offered a richer cornucopia of delusions; as George Orwell said, no ordinary man could be such a fool.

The Clarkean vision was that perhaps we might be able to imbue reality with the inspiration and excitement various groups of us applied to the list of ideological manias above. Rather than pluricontinentalism or bimetallism or conservatism, we might consider the renal parasites of cephalopods, the neurological basis or otherwise of psychoanalysis, or viewing the surface of Venus in the infrared. Nothing is mere; so said Richard Feynman. It finally poses the question: is a sceptical utopia possible?

It’s been bloody difficult to see anything of this year’s championship play-offs; apparently there’s some sort of rugby union event going on. But I did manage to see Hull and Wigan last night. Which posed a problem – which of them do I hate more?

I still get abusive comments on this post from time to time; but then there’s this, too. It’s a pity I’m committed to despising Hull, really – it’s quite a club, after all.

Anyway, last night’s game rocked the Pennines; it was incredibly close, and extremely loud in a way you hardly ever hear outside RL and (sometimes) football. There was some brilliant rugby, too – Gareth Raynor’s first try, for example. Chasing a grubber kick to the flag, he stole up on Leuluai, who was hoping to shield the ball off the park, and managed to touch down reaching around his legs; a pickpocket’s try. Wigan got ahead and ended up hanging on for the last 10 minutes in a succession of frantic drives.

It struck me that sport (as well as a lot of other human activities) is a way of manipulating time; the mark of a really good match, spectating or playing, is that at first time hurtles past (what, half-time already?) and then slows to a tension-ridden crawl. There is some science to this; it’s been suggested that the brain has a variable clock speed, increasing the rate at which it samples reality at moments of crisis and therefore giving the sensation of time passing very slowly.

Something similar occurs with Rugby League; the game’s administrators see their thought processes slow to a crawl at moments of crisis, so suddenly it’s 2007 and we haven’t had a World Cup for seven years. The difference is that everyone else experiences the intervening period as passing incredibly slowly. This is really getting embarrassing; if you remember how good the Tongan, PNG, and Western Samoan sides were in 1995, and how good the corresponding rah-rah sides have been this year, it’s a disaster that they have had no meaningful international rugby since 2000.