During my time in No. 10 as one of Dominic Cummings’s ‘weirdos and misfits’, my team would often speak with frontline artificial intelligence researchers. We grew increasingly concerned about what we heard. Researchers at tech companies believed they were much closer to creating superintelligent AIs than was being publicly discussed. Some were frightened by the technology they were unleashing. They didn’t know how to control it; their AI systems were doing things they couldn’t understand or predict; they realised they could be producing something very dangerous.
This is why the UK’s newly established AI Taskforce is hosting its first summit next week at Bletchley Park where international politicians, tech firms, academics and representatives of civil society will meet to discuss these dangers.
Getting to the point of ‘superintelligence’ – when AI exceeds human intelligence – is the stated goal of companies such as Google DeepMind, Anthropic and OpenAI, and they estimate that this will happen in the short term.