The race for ‘AI supremacy’ is over, at least for now, and the US didn’t win.
Over the last few weeks, two companies in China released three impressive papers that annihilated any pretence that the US was decisively ahead. In late December, a company called DeepSeek, apparently initially built for quantitative trading rather than large language models (LLMs), produced a nearly state-of-the-art model that required only roughly 1/50th of the training costs of previous models – instantly putting them in the big leagues with American companies like OpenAI, Google, and Anthropic, both in terms of performance and innovation.
A couple of weeks later, they followed up with r1, a competitive (though not fully matching) alternative to OpenAI’s o1. Because r1 is more forthcoming about its internal reasoning process, many researchers already prefer it to o1 (which had been introduced to much fanfare in September last year).