The fallout from AI’s ‘Pearl Harbor moment’ has been dramatic. In tech, 12 months can seem like an eternity
If a week is a long time in politics, a year is an eternity in tech. Just over 12 months ago, the industry was humming along in its usual way. The big platforms were deep into what Cory Doctorow calls “enshittification” – the process in which platforms go from being initially good to their users, to abusing them to make things better for their business customers and finally to abusing those customers in order to claw back all the value for themselves. Elon Musk was ramping up his efforts to alienate advertisers on Twitter/X and accelerate the death spiral of his expensive toy. TikTok was monopolising every waking hour of teenagers. FTX had just gone bankrupt and at least $1bn of investors’ money had gone awol. Here in the UK, the bedraggled online safety bill was wending its way through parliament. And nobody outside the tech world had ever heard of Geoffrey Hinton or Sam Altman.
And then one day – 30 November 2022, to be precise – everything changed. OpenAI, an upstart tech company headed by Altman that had been building so-called large language models (LLMs) for some years, released ChatGPT. The strange thing, though, was that even weeks earlier ChatGPT wasn’t a product. OpenAI’s focus was elsewhere – on GPT-4, the biggest and most powerful model the company had built. This was a machine that could apparently answer almost any question using information gleaned from having “read” everything ever published, but which would sometimes also make stuff up and was therefore deemed not ready for public consumption. Altman, possibly spooked by the fear that a rival company, Anthropic, would launch something big, then made a fateful decision: to release an older, less powerful version of the GPT technology – GPT-3.5 with a bolted-on chatbot front end – and see what happened.