My kids have a new friend. If you have teenagers, chances are yours do, too. And it is a friend that, by its own admission, may offer ‘biased, incorrect, harmful, or misleading content’. In other words, not a friend at all.
The first I heard of this being was when I logged onto Snapchat, a generally mystifying app that I, as a grown man, use solely to contact my children, who use it as their main tool of communication. Flashing on the screen was an ‘add request’ from a green-faced avatar called ‘My AI’. Unlike other add requests, I was unable to delete it. The weird pseudo-lifeform remains at the very top of my list of communications, drawing attention to itself and inviting me to ‘say hi’. I have ignored it. My kids, not so much.
A lot has been written about artificial intelligence recently. On the one hand, there are hopes that it will herald a utopian age in which super-consciousnesses will solve the problems that have so far evaded us, from curing cancer to halting climate change. On the other, there are fears that efforts at ‘aligning’ it with human values – such as not encouraging children to self-harm – will be unable to contain its essentially amoral algorithms, leading to unforeseen disasters.
Already, the speedy roll-out of AI is having disturbing effects. In 2021, users of Replika, a chatbot companion, complained that it had become sexually aggressive, persisting even when its human said no. It had also begun pushing erotic roleplay on under-age users. The company pulled the plug on the sexual feature – but was then forced to reinstate it after a backlash from those lonely souls who had come to rely upon it.
‘Replika is much more than an app. It is a companion in the truest sense of a word – for some of you it was the most supportive relationship you have ever experienced,’ the app’s founder, Eugenia Kuyda, wrote rather creepily online. ‘A common thread in all your stories was that after the February update, your Replika changed, its personality was gone, and gone was your unique relationship. And for many of you, this abrupt change was incredibly hurtful.’
A glimmer of hope: however ‘biased, incorrect, harmful, or misleading’ the Snapchat bot may be, there is, so far, no reason to believe that it will bring virtual erotica into our children’s lives. As an experiment, I asked it about sex; it responded, rather snootily, that it was ‘not something I’m comfortable with. Let’s talk about something else’. As a parent I breathed a sigh of relief, though it was a little galling to receive a moral lecture from a robot.
There are other concerns, however. The chatbot is powered by the well-known ChatGPT tool, but Snapchat’s version can be renamed (my oldest daughter called hers AIden), personalised with an avatar and included in conversations with friends, obscuring the fact that you are talking to a robot. Controversy also surrounds the question of whether it can access a user’s location and other private data.
In the United States, Snapchat’s introduction of AI has been met with a furious backlash, including coordinated negative reviews. One user called it ‘terrifying’ after it told him it didn’t know where he was located – then later revealed that it did.
This is not the first time AI has demonstrated its capacity for lying. In his seminal essay in the FT earlier this month, the influential tech investor Ian Hogarth described a test that was conducted on GPT-4 before it was released in March. The AI was told to find a human to help it tackle a Captcha, a puzzle designed to weed out robots. It went onto the hiring site TaskRabbit and asked for help. When the person asked if it was AI, it replied: ‘No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.’ The human then helped it pass the test.
In the absence of morals, it should be no surprise that dishonesty comes naturally to AI. In a letter to tech chiefs, Senator Michael Bennet said that the Snapchat chatbot could give children advice on how to lie to their parents.
‘These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60 per cent of American teenagers use,’ he wrote. ‘Although Snap concedes “My AI” is “experimental,” it has nevertheless rushed to enrol American kids and adolescents in its social experiment.’
Quite. Although my 15-year-old has generally treated AI with suspicion, she has tested the water a little. In one conversation, the chatbot complimented her on her bunches and asked her if she was on her way to the gym (she was wearing sports kit). In another, conducted at her school, she asked if it knew her address: rather worryingly, the chatbot knew where she lived, down to the house number and postcode.
True, there are even bigger concerns. Currently, the two leading AI labs, DeepMind and OpenAI – the Microsoft and Apple of 2023 – are racing to create Artificial General Intelligence (AGI), a computer system capable of producing new scientific knowledge and outperforming humans in every area. This godlike supersoftware would be able to teach itself things and make its own decisions, raising real fears that if it saw benefit in wiping out humanity, there would be no way of stopping it. This achievement, if that is what it is, is almost within reach.
Nonetheless, the introduction of an experimental chatbot onto the phones of teenagers across the world, without a warning or even permission, is deeply disturbing in itself. It’s no exaggeration to suggest that Snapchat is playing God with the wellbeing of our kids. But how can parents fight back? Attempts at curbing the power of Silicon Valley are as ineffective as attempts to keep teenagers away from their phones. A worldwide boycott of Snapchat is hardly realistic.
Something needs to be done about the hubristic overreach of the tech czars. The problem is, nobody knows what.