Tuesday, February 19, 2008


For the last few days at work I’ve been going through a lot of old books on contemporary technology – robotics, computing, communications, medicine, and so on. I won’t bore you with explaining why this is (I realise that this post is borderline ‘shop’ as it is) but I’ve been pondering the problems facing anyone – or anything – that tries to make predictions about the way in which technology will progress.

These problems all seem to boil down to not wanting to look thick in front of your future self; it’s one of those situations where you can actually hear futureyou laughing derisively. You don’t want to completely miss out possible future developments, because you don’t want to look blinkered and trapped in the present/past. If you make a prediction that goes wrong, however, you end up looking like a pillock. I mean, who can forget the bloke from the US patent office who, in the late nineteenth century, supposedly declared that everything that could be invented had been invented – or when Bill Gates supposedly stated that he couldn’t see why a home computer would ever need more than 640K of RAM.

With these technology books, the writers are afraid of making predictions that look stupid, but leaving out ideas (or technologies) that are known but unproven runs the risk of looking ignorant – which makes the book seem out of date pretty much from the moment of publication. These fears manifest in a peculiar focus, not on technologies on the cusp of success, but on technologies so far off that they are hypothetical at best. These ideas don’t easily look out of date – the odds are, they suppose, that talking about quantum computers, or Tokamak fusion reactors, as ideas that haven’t been realised yet isn’t going to make your book look dated anytime in the next 30 years.

The problem is that these fantastical technological ideas either look dry and uninteresting on the page, or get eclipsed by more mundane developments in the near future. Virtual Reality, for example, is held up as the future of human-computer interaction in pretty much all of these books – despite the fact that, after 20 years of hype and predictions, all that VR has managed is to make a lot of people feel really nauseous*. Meanwhile, Nintendo and Apple’s developments with touch panels and motion sensors have completely blindsided all of these books.

It seems to me that the best idea is just to go for a scattergun approach – mention everything that looks vaguely plausible (if that’s possible) and hope that people only remember the ones you got right.


*Yes, I’m aware that I just sort of made a prediction about the future. I don’t care, futureme thinks I’m stupid anyway.


And finally, a little link (well, actually it’s a pretty big link). This is a well-written and interesting article about how philosophy – and, equally, all intellectual disciplines – should shun technical jargon and buzzwords in favour of expressing their ideas in everyday English. The idea is that if they do, the intelligent layman will be able to read their stuff all the way to the end. Needless to say, I got to about the fourth paragraph before I got an urge to listen to ‘The Village Green Preservation Society’ by The Kinks, and lost my ability to take anything in.