Should we fear AI? James W. Phillips and Eliezer Yudkowsky in conversation

[Illustration: John Broadley] 
Issue: 15 July 2023

James W. Phillips was a special adviser to the prime minister for science and technology and a lead author on the Blair-Hague report on artificial intelligence. Eliezer Yudkowsky is head of research at the Machine Intelligence Research Institute. On SpectatorTV this week they talk about the existential threat of AI. This is an edited transcript of their discussion.

JAMES W. PHILLIPS: When we talk about things like superintelligence and the dangers from AI, much of it can seem very abstract and doesn’t sound very dangerous: a computer beating a human at Go, for example. When you talk about superintelligence what do you mean, exactly, and how does it differ from today’s AI?

ELIEZER YUDKOWSKY: Superintelligence is when you get to human level and then keep going – smarter, faster, better able to invent new science and new technologies, and able to outwit humans.
