Artificial Intelligence (AI) has surged in popularity in recent months. ChatGPT alone swelled to more than 100 million users in a matter of weeks, capturing the imagination of a public for whom the technology had previously been consigned to the realm of science fiction. Scores of companies, from software businesses to manufacturers, are racing to find fresh ways to build its functionality into their operations.
But amidst the excitement, there is also a worry: are we going too far, too fast? Twitter’s owner Elon Musk warned this week that AI could lead to ‘civilisation destruction’. Regulators, alarmed at this explosion in activity, are scrambling to react. They face a serious dilemma: do they opt for light-touch rules that give the nascent AI sector enough breathing space to grow, or do they aim for tough legislation that stops bad AI getting out of hand?
There is an even bigger problem: an apparently low-risk AI given an anodyne task that proves catastrophic.
The EU is hoping to be first out of the gate with its proposed rules, and is seeking to strike a balance between these two poles by differentiating between what it calls ‘limited-risk’ and ‘high-risk’ AI and applying different strictures to each.