Still No Title Yet…
The terminal is making that particular noise — a kind of mechanical stutter — as the roll of paper inches forward. I’m in the social sciences building at Sussex University. It’s 1980 and I’m trying to write a program that can make sense of an image. Not a photograph, obviously — we’re nowhere near that. A line drawing. A cube, maybe, or a pyramid. Something with clean edges and no shadows and absolutely nothing that moves or changes or catches the light differently depending on the time of day. The whole point is to reduce the mess. Get rid of everything that complicates the problem. Find the pure logical structure underneath. I’m twenty-two. I find this both thrilling and faintly unsatisfying in a way I can’t yet articulate.

What I’m dimly aware of, though, is that something is shifting. Not just in our undergraduate computing room, but in the whole field. The people who’d been trying to build machine intelligence by stripping reality down to its essentials were running into a wall. It turns out that simplifying the problem doesn’t solve the problem — it creates a different, simpler problem. And meanwhile, other people – the kinds of semi-mythical people who write the books you read at university – were starting to suggest something almost perverse: that the mess wasn’t the obstacle. That the mess was the point. That you learn to see not by constructing an internal map of a simplified world, but by being in the world — moving through it, touching it, having two eyes that see from slightly different angles, noticing what changes when you move and what doesn’t. I didn’t have the vocabulary for this then. Internal validity versus external validity. Ecological perception. Embodied cognition. It made my head spin. It made me want to run away and become a musician.

Back in my room that evening — probably with a mug of something and the faint sound of someone else’s music through the wall — I’m reading Douglas Hofstadter.
Gödel, Escher, Bach: An Eternal Golden Braid had come out the previous year and was the coolest book in existence. Not that I wanted to be a nerdy philosopher like him: I wanted to be the idea, not think it. Hofstadter wrote in dialogues — Lewis Carroll-style logical fables in which the Tortoise and Achilles, occasionally joined by a Crab, would enact the abstract argument of whichever chapter they preceded.

The one that lodged itself somewhere permanent in my thinking involved the Crab and the Tortoise in a competition to build an audio system that couldn’t be made to self-destruct. The premise: if you could find the resonant frequency of any system and play it back at sufficient volume, you could cause the system to shake itself apart. The challenge was to build something that couldn’t be got at in this way. But Hofstadter’s point, the point of Gödel’s Incompleteness Theorem, was that any sufficiently complex system contains the means of its own undoing. The attempt to build a perfect logical defence generates, inevitably, the very vulnerability it’s trying to exclude. I found this electrifying. I still do.

I’ve just bought the book again, forty-five years later, having lost my copy somewhere along the way. It arrived this week — a slightly too-new paperback that doesn’t yet have the right smell. I’m such an old fart. The weight and the touch of the paper are wrong. The cover graphics have been reproduced at the wrong resolution or the wrong colour or something. It’s like I’m trying to re-live that experience through stimulated recall – the smell and the touch of the book instead of the smell and the taste of Proust’s madeleine. It nearly gets you there but always stops short.

Forty-five years later I’m in my workroom in Camden, and there’s an AI on my screen.
The flashing, seemingly hand-drawn, orange asterisk is, I imagine, meant to be a friendly version of that more ominous science fiction trope for computer intelligence: the flashing cursor that evokes considered thought. The clatter of the printed word on a roll of paper was more indifferent to my custom.

I’ve just spent an hour talking to it about methodology — specifically about whether the research network I’ve spent the last decade building can itself be framed as a practice research project through the establishment of the research centre. The AI is helping me think through the writing of this blog post. It’s suggested structures, been gratingly flattering, questioned my assumptions, and pushed back on a couple of things in ways that were more useful than annoying.

I’m reminded of another moment in 1980 when I wrote my own copy of a famous early AI program called Eliza. When Joseph Weizenbaum wrote Eliza – which emulated Carl Rogers’ style of therapy by reflecting the client’s comments back at them as questions – he was deeply disturbed by how quickly and easily people anthropomorphised the computer and ascribed a deep level of understanding and connection to the machine. A friend of mine stayed all night in the computer lab in Sussex arguing with Eliza about her relationship with her father.

This AI – Anthropic’s Claude – has a much more sophisticated way of not understanding while, at the same time, presenting a very human-sounding façade. One really annoying thing about AI is that it reminds you that the deep and lasting pleasure of creativity does not come from having the thing that you’ve created but from the process of creating it.

But I’m also waiting. There’s quite a lot of waiting.
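For the technically curious, it’s worth seeing just how little there is to Eliza’s trick: swap the pronouns in whatever the client says, then hand it back as a question. This is a toy reconstruction for illustration, not Weizenbaum’s actual program (which used a richer set of pattern-matching scripts):

```python
# A toy Eliza-style reflector: swap first- and second-person words,
# then turn the statement back into a question.
# Illustrative reconstruction only, not Weizenbaum's code.
SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    # Lowercase, drop trailing punctuation, and flip each word if it
    # appears in the swap table; otherwise pass it through unchanged.
    words = statement.lower().rstrip(".!?").split()
    flipped = [SWAPS.get(w, w) for w in words]
    return "Why do you say " + " ".join(flipped) + "?"

print(reflect("I am angry with my father."))
# -> Why do you say you are angry with your father?
```

That the whole illusion of being listened to can rest on a lookup table and a question mark is, of course, exactly what disturbed Weizenbaum.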
The rhythm of working with AI, it turns out, is not so different from the rhythm of working with the computers of my school days — which is to say, it is nothing like a conversation and everything like sending off a pile of punch cards to a lab at Imperial College and getting something back a week later that is either a one-line error message or a
