I refuse to get Amazon Alexa, and never use Siri, because I find the concept of human-style interactions with robots somewhere between the unhealthy and the grotesque. They are also almost always more hassle than they're worth, because they don't actually 'understand' what you're telling them.
But I don't find them sinister, and I find myself sceptical of the growing panic about AI since GPT-4 launched in March. A fortnight ago, the British scientist Geoffrey Hinton, 75, made a dramatic exit from Google so that he could speak freely about the dangers of the technology he'd helped create. His fears seem to revolve around the 'hive mind' function of AI, whereby everything one robot learns, they all learn. For Hinton, that knowledge replicates too much, too quickly, especially given that AI is being trained not just on language but on video.
‘I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,’ he said.
