It’s an interesting and unusual word, agentic. For a start, some language enthusiasts dislike it as a mulish crossbreed of Latin and Greek. Also, its etymology is obscure. It appears to derive from 20th-century psychology: one of its first usages can be found in a study of the infamous 1960s Milgram experiments at Yale University, in which volunteers were persuaded to administer electric shocks, of increasing and horrible severity, to innocent ‘learners’ (actually actors). The experiment revealed that most of us would deliver a lethal jolt of electricity to an innocent human being, if only told to do so by a man in a white coat with a clipboard.
And if that sounds sinister, maybe that’s fitting, because the word agentic has been co-opted into the lexicon of artificial intelligence to describe a new form of AI that many will find ominous.