Brendan McCord

How can we develop AI that helps, rather than harms, people?


In every technological revolution, we face a choice: build for freedom or watch as others build for control. With AI, the stakes couldn’t be higher. It already mediates 20 per cent of our waking hours through smartphones, automated systems, and digital interfaces. Soon it will touch nearly every aspect of human existence. While AI promises to liberate us for higher pursuits by “extending the number of important operations which we can perform without thinking,” history – from the iron cage of Soviet bureaucracy to modern Chinese surveillance – serves as a stark warning that automation can just as easily erode our freedoms and condition us to accept social control passively.

AI threatens to become an “autocomplete for life,” offering pre-packaged responses that slowly transform us into passive and dependent sheep

Today’s debate about AI’s future is dominated by competing visions of control. Doomsayers, like some of those at this week’s AI Action Summit in France, advocate for strict controls (even “pauses” on all development) that would forfeit progress while inviting tyranny.


Brendan McCord is founder and Chair of the Cosmos Institute, an academy developing philosopher-builders to create AI that benefits people
