A few months back I wrote a Spectator piece about a phenomenal new ‘neural network’ – a subspecies of artificial intelligence – which promises to revolutionise art and how humans interact with art. The network is called Dall-e 2, and it remains a remarkable chunk of not-quite-sentient tech. However, such is the astonishing, accelerating speed of development in AI that Dall-e 2 has already been overtaken. And then some.
Just last week a British company called Stability AI launched an artificial intelligence model which has been richly fed, like a lean greyhound given fillet steak, on several billion images, equipping it to make brand new images when prompted by a linguistic message. It is called Stable Diffusion and it is revolutionary in multiple ways, perhaps the most important being this: unlike other models, the ‘owners’ are letting anyone use Stable Diffusion from the get-go (with intrinsic restrictions on sexual or prejudicial imagery and so on).