James W. Phillips

Mind games: why AI must be regulated

Illustration: John Broadley 
issue 28 October 2023

During my time in No. 10 as one of Dominic Cummings’s ‘weirdos and misfits’, my team would often speak with frontline artificial intelligence researchers. We grew increasingly concerned about what we heard. Researchers at tech companies believed they were much closer to creating superintelligent AIs than was being publicly discussed. Some were frightened by the technology they were unleashing. They didn’t know how to control it; their AI systems were doing things they couldn’t understand or predict; they realised they could be producing something very dangerous.

This is why the UK’s newly established AI Taskforce is hosting its first summit next week at Bletchley Park where international politicians, tech firms, academics and representatives of civil society will meet to discuss these dangers.

Without oversight, the range of possible harms will only grow in ways we can’t foresee

Getting to the point of ‘superintelligence’ – when AI exceeds human intelligence – is the stated goal of companies such as Google DeepMind, Anthropic and OpenAI, and they estimate that this will happen in the short term.
