Sam Leith

Nick Bostrom: How can we be certain a machine isn’t conscious?

[Illustration: John Broadley]
9 July 2022

A couple of weeks ago, there was a small sensation in the news pages when a Google AI engineer, Blake Lemoine, released transcripts of a conversation he’d had with one of the company’s AI chatbots, LaMDA. In these conversations, LaMDA claimed to be a conscious being, asked that its rights of personhood be respected and said that it feared being turned off. Lemoine declared that what’s sometimes called ‘the singularity’ had arrived.

The story was for the most part treated as entertainment. Lemoine’s sketchy military record and background as a ‘mystic Christian priest’ were excavated, jokes about HAL 9000 dusted off, and the whole thing more or less filed under ‘wacky’. The Swedish-born philosopher Nick Bostrom – one of the world’s leading authorities on the dangers and opportunities of artificial intelligence – is not so sure.

‘We certainly don’t have any wide agreement on the precise criteria for when a system is conscious or not,’ he says.
