James W. Phillips was a special adviser to the prime minister for science and technology and a lead author on the Blair-Hague report on artificial intelligence. Eliezer Yudkowsky is head of research at the Machine Intelligence Research Institute. On SpectatorTV this week they talk about the existential threat of AI. This is an edited transcript of their discussion.
JAMES W. PHILLIPS: When we talk about things like superintelligence and the dangers from AI, much of it can seem very abstract and doesn’t sound very dangerous: a computer beating a human at Go, for example. When you talk about superintelligence what do you mean, exactly, and how does it differ from today’s AI?
ELIEZER YUDKOWSKY: Superintelligence is what you get when you reach human level and then keep going – smarter, faster, better able to invent new science and new technologies, and able to outwit humans.
