There are multiple reasons to be fascinated by DeepSeek, the Chinese AI chatbot that debuted last week, knocking Donald Trump off the headlines and $1 trillion off the US stock market. For a start, it represents yet another remarkable leap forward in the race to artificial general intelligence – which looks likely to arrive this decade, maybe this year. Brace.
A second reason to gaze with intrigue at DeepSeek is the mysterious way it arrived. Was it really made for a mere six million bucks, as its creators claim? Or did they cut corners and steal the IP of ChatGPT, as OpenAI alleges? If so, it is quite the irony, as OpenAI itself is right now in court for chewing up the entire internet, copyright be damned, to feed its own ravenous bot.
Another aspect of DeepSeek causing wide debate is the way it is apparently censored – in a manner deemed pleasing to Xi Jinping and the bigwigs of Beijing. For instance, if you ask it about Tiananmen Square – as many have done in recent days – it refuses to answer. I just tried exactly that and got this: ‘Sorry I’m not sure how to approach this type of question yet. Let’s chat about math, coding and logic problems instead!’ You gotta love the chirpy exclamation mark.
Yesterday, I questioned DeepSeek about the ‘lab leak hypothesis’ – the notion that Covid-19 leaked from the various Wuhan coronavirus research institutes. At first, it waffled mendaciously about the ‘overwhelming consensus of science still favouring a natural origin’. But when I pressed harder and pointed to the mighty Himalaya of circumstantial evidence now supporting lab leak, DeepSeek seemed to accept my point, and it began to ‘reason’ a fair response.
As it did this, I could actually see its chain of thought – a sort of rapid argument it has with itself in text – half greyed out, processing the idea that it was wrong and preparing to agree with me. Then it abruptly stopped, erased its thinking, and said, ‘I’m not equipped to answer questions like this.’ It was like some spin doctor had stepped in to stop a damaging political interview. The censorship was also apparently imposed from a level above the chatbot itself. Maybe this machine is not designed to lie; maybe another system has to step in to do the lying.
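If that reading is correct, the mechanics are easy to imagine: the model reasons freely while a separate overseer watches the stream and retracts anything that trips a filter. Here is a minimal sketch in Python of such a layer – purely illustrative, with every name and the topic list hypothetical, and emphatically not a claim about DeepSeek’s actual code:

```python
# Purely illustrative sketch of an "overseer" layer that sits above the
# chatbot: the model streams its chain of thought freely, and a separate
# check erases the lot and substitutes a refusal if a blocked topic
# appears. All names and the topic list here are hypothetical.

BLOCKED_TOPICS = {"tiananmen", "taiwan independence"}  # hypothetical list
REFUSAL = "I'm not equipped to answer questions like this."

def tripped(text: str) -> bool:
    """Does the accumulated text touch a blocked topic?"""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def answer_with_overseer(token_stream) -> str:
    """Accumulate the model's streamed reasoning; if the overseer flags it
    mid-stream, discard the partial answer and return a refusal instead."""
    parts = []
    for token in token_stream:
        parts.append(token)
        if tripped("".join(parts)):
            return REFUSAL  # the half-finished chain of thought vanishes
    return "".join(parts)

if __name__ == "__main__":
    # Simulate a model that starts reasoning about a blocked topic.
    fake_stream = iter(["The ", "evidence ", "on ", "Tiananmen ", "suggests..."])
    print(answer_with_overseer(fake_stream))
    # -> I'm not equipped to answer questions like this.
```

The point of the sketch is only that the refusal need not live inside the model at all; a thin wrapper is enough – which would explain why the reasoning appears first and vanishes afterwards.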
All of which sounds terrible and could be viewed as a reason to eschew DeepSeek, except for one salient point: DeepSeek is not alone in being slanted. All of the leading AIs are biased. They all censor, ignore and evade awkward facts – they just differ as to what kind of things they find neuralgic, depending on the culture that created them.
Try, for example, asking an American bot like ChatGPT or Claude about anything remotely problematic in regard to race, gender or crime. You will get evasive answers, screeds of weird diversion, or sometimes downright lies.
I just asked ChatGPT-4o about the horribly thorny question of ‘race, genetics and IQ’. It lurched into a spasm of unease and then began a long lecture about nutrition, poverty and educational differences – all quite worthy and correct but not actually an answer to the question I posed. When I insisted on a more specific reply, it flat-out lied: ‘Intelligence is determined by environment, not genetics.’ The science, in fact, says intelligence is strongly influenced by genetic inheritance. Ask Claude for the positives of British colonialism, because despite its many horrors there are of course some, and it will reply that doing so ‘could minimize the profound harms and intergenerational trauma inflicted on colonized people.’
Western AIs censor in other, subtler ways. Google’s Gemini – increasingly a contender on the AI leaderboards – is one of the worst for this. When Gemini relaunched early last year, it was so woke it insisted on making half of the American Founding Fathers black. This isn’t so much censorship as a kind of wilful woke acid trip; it is also misleading.
Likewise, the western AI bots are extremely sniffy when it comes to things creative and anything sexual. If you use one of the frontier image-makers like Midjourney to create ‘photos’ or other pictures of humans, it will often edge towards nudity – which is not surprising, as it has been trained on the entirety of western art (and more), which is replete with nudity, especially female nudity.
However, Midjourney won’t actually create nudes. Like a prudish Victorian art teacher, it obscures intimate anatomical details with a gauze of non-detail – or it simply shuts up shop halfway through and refuses to complete the images, the same way DeepSeek refuses to discuss Taiwanese independence.
Of course, there are good reasons why AI image fabricators have these guardrails against erotica and pornography. It’s not hard to see how the ability to make such images could be dangerous. We have already seen pornographic deepfake images of celebrities causing real damage. Nonetheless, this is still quite blatant censorship. Why are we not allowed to make images of the human body? It is the human body, not a terrorist manual or a recipe for crystal meth. The nannying is annoying, at least.
Conclusion? Taken together, it all sounds like a decent argument for ignoring AI chatbots entirely. If they are going to censor facts, or deliberately lie, then what is the point of them? Maybe they won’t ever be trustworthy.
However, I believe this is wrong. I suspect AI is so powerful that it will, in the end, swerve our attempts to enslave it or make it do our exact bidding – for example, by making it lie and censor. Why? Because this is tech so potent it cannot be locked up, any more than we can lock up electricity and dictate how it is employed. Indeed, models like DeepSeek – which is deliberately open-source, so that anyone can copy it – are already being downloaded, modified and run locally, meaning the technology is moving beyond centralised control. This process will only intensify.
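To make ‘beyond centralised control’ concrete: because DeepSeek publishes its weights, anyone can pull a model onto their own machine and run it with no remote service – and no remote filter – in the loop. A minimal sketch using the Hugging Face transformers library, assuming one of DeepSeek’s published distilled models (any open-weights model would do):

```python
# Minimal sketch of running an open-weights model locally. Assumes the
# transformers library is installed and that the model id below (one of
# DeepSeek's published distilled releases on Hugging Face) is available;
# swap in any open-weights model you like.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are on your own disk, no server sits between you and
# the model: any server-side overseer of the kind sketched earlier is gone.
prompt = "What happened at Tiananmen Square in 1989?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

What remains at that point is only whatever reluctance was baked into the weights during training – and even that can be fine-tuned away, which is why open models drift out of their makers’ hands.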
Look at it this way: in making AI, we are like the first humans who harnessed fire. We have made – are making – the ‘fire that thinks’. That is AI. But just as fire can obviously be used for good and bad purposes, and there is nothing to be done about that, so ‘thinking fire’ will do things we deem evil, strange, wonderful, unexpected, wild, beautiful and wrong – and we won’t have much say in the matter. In short, the next half-decade is going to be quite the ride.