It’s an interesting and unusual word, agentic. For a start, some language enthusiasts dislike it as a mulish crossbreed of Latin and Greek. Its etymology is also obscure. It appears to derive from 20th-century psychology: one of its first usages can be found in a study of the infamous 1960s Milgram experiments at Yale University, in which volunteers were persuaded to administer electric shocks, of increasing and horrible severity, to innocent ‘learners’ (actually actors). The experiment revealed that most of us would deliver what we believed to be a lethal shock to an innocent human being, if only told to do so by a man in a white coat with a clipboard.
And if that sounds sinister, maybe that’s fitting, because the word agentic has been co-opted into the lexicon of artificial intelligence to describe a new form of AI that many will find ominous.