It’s an interesting and unusual word, agentic. For a start, some language enthusiasts dislike it as a mulish crossbreed of Latin and Greek. Its etymology is also obscure. It appears to derive from 20th-century psychology: one of its earliest uses can be found in the literature around the infamous 1960s Milgram experiments at Yale University, in which volunteers were persuaded to deliver electric shocks of increasing and horrible severity to innocent ‘learners’ (actually actors, who received no shocks at all). The experiment revealed that most of us would administer a lethal jolt of electricity to an innocent human being, merely because a man in a white coat with a clipboard told us to.
And if that sounds sinister, maybe that’s fitting, because the word agentic has been co-opted into the lexicon of artificial intelligence to describe a new form of AI that many will find ominous.
