I refuse to get Amazon Alexa, and never use Siri, because I find the concept of human-style interactions with robots somewhere between the unhealthy and the grotesque. They are also almost always more hassle than they’re worth, because they don’t actually ‘understand’ what you’re telling them.
But I don’t find them sinister, and I find myself sceptical of the growing panic about AI since GPT-4 launched in March. A fortnight ago, the British scientist Geoffrey Hinton, 75, made a dramatic exit from Google so that he could speak freely about the dangers of the technology he’d helped create. His fears seem to revolve around the ‘hive mind’ function of AI, whereby everything one robot learns, they all learn. For Hinton, that is too much knowledge replicating too quickly, especially given that AI is being trained not just on language but on video.
‘I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,’ he said.