Sam Altman, the recently fired (and rehired) chief executive of OpenAI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’
He is right. There is almost no debate about regulating high-risk virology, whereas the world is in a moral panic about artificial intelligence. The recent global summit at Bletchley Park essentially focused on how to make us safe from Hal the malevolent computer. Altman has called for regulation to stop AI going rogue one day, telling Congress: ‘I think if this technology goes wrong, it can go quite wrong… we want to be vocal about that. We want to work with the government to prevent that from happening.’
In contrast to that still fairly remote risk, the threat the world faces from research on viruses is far more immediate. There is strong evidence that Covid probably started in a laboratory in Wuhan. To summarise: a bat sarbecovirus acutely tuned to infecting human beings but not bats, which contains a unique genetic feature of a kind frequently inserted by scientists, caused an outbreak in the one city in the world where scientists were conducting intensive research on bat sarbecoviruses. That research involved bringing the viruses from distant caves, recombining their genes and using them to infect human cells and humanised transgenic mice; three of the scientists fell ill, but no animals in the city did.
