‘I hope I’m wrong’: the co-founder of DeepMind on how AI threatens to reshape life as we know it

 

Where’s the off switch? … Illustration: Mojo Wang

Halfway through my interview with the co-founder of DeepMind, the most advanced AI research outfit in the world, I mention that I asked ChatGPT to come up with some questions for him. Mustafa Suleyman is mock-annoyed, because he’s currently developing his own chatbot, called Pi, and says I should have used that. But it was ChatGPT that became the poster child for the new age of artificial intelligence earlier this year, when it showed it could do everything from composing poetry about Love Island in the style of John Donne to devising an itinerary for a minibreak in Lisbon.

The trick hadn’t really worked, or so I thought – ChatGPT’s questions were mostly generic talking points. I’d asked it to try a bit harder. “Certainly, let’s dive into more specific and original questions that can elicit surprising answers from Mustafa Suleyman,” it had trilled. The results still weren’t up to much. Even so, I chuck one at him as he sits in the offices of his startup in Palo Alto on the other end of a video call (he left DeepMind in 2019). “How do you envision AI’s role in supporting mental health care in the future?” I ask – and suddenly, weirdly, I feel as if I’ve got right to the heart of why he does what he does.

“I think that what we haven’t really come to grips with is the impact of … family. Because no matter how rich or poor you are, or which ethnic background you come from, or what your gender is, a kind and supportive family is a huge turbocharge,” he says. “And I think we’re at a moment with the development of AI where we have ways to provide support, encouragement, affirmation, coaching and advice. We’ve basically taken emotional intelligence and distilled it. And I think that is going to unlock the creativity of millions and millions of people for whom that wasn’t available.”

It’s not what I was expecting – AI as BFF – but it’s all the more startling because of what Suleyman has already told me about his background. Born in 1984 in north London to a Syrian father and English mother, he grew up in relative poverty and then, when he was 16, his parents separated and both moved abroad, leaving him and his little brother to fend for themselves. He later won a place at Oxford to study philosophy and theology, but dropped out after a year.

“I was frustrated with it being very theoretical. I was an entrepreneur at heart. I was running a fruit juice and milkshake stall in Camden Town while I was at Oxford. So I was coming back through the summer to make money because I was completely skint. And I was also doing the charity at the same time.” (Suleyman, although a “strong atheist”, was helping a friend set up the Muslim Youth Helpline, designed to make counselling and support available to young Muslims in a culturally sensitive way.) “So it was kind of three things simultaneously. And it just felt like I was doing this ivory tower thing when really I could be making money and doing good.”

Now 39, he’s still not in touch with his dad, and lives alone in California. Reflecting on what he hopes AI can offer – “a boost to what you can do, the way that you feel about yourself” – he says: “I certainly didn’t have that. And I think that many don’t.” But is interaction with a chatbot a realistic replacement for companionship, support, even love? It’s hard not to find the idea a bit chilling. “It’s not intended to be a substitute. But that doesn’t mean it’s useless. I think it can fill in the gaps where people are lacking. It’s going to be a tool for helping people get stuff done. Right? It’s going to be very practical.”


This is one aspect of the sunlit uplands of AI; the shadow side is largely what preoccupies Suleyman in his new book, written with the researcher Michael Bhaskar and ominously titled The Coming Wave. Even if you have followed debates about the dangers of artificial intelligence, or just seen Black Mirror, it’s a genuinely mind-boggling read, setting out the ineluctable forces soon to completely transform politics, society and even the fabric of life itself over the next decade or two. I tell Suleyman that it’s “sobering”. “I mean, that’s a polite way to put it,” he says. “And, you know, it was hard to write – it was gut wrenching in a way. And it was only because I had time to really reflect during the pandemic that I mustered the courage to make the case. And, obviously, I hope I’m wrong.”

Me too. The Coming Wave distils what is about to happen in a forcefully clear way. AI, Suleyman argues, will rapidly reduce the price of achieving any goal. Its astonishing labour-saving and problem-solving capabilities will be available cheaply and to anyone who wants to use them. He memorably calls this “the plummeting cost of power”. If the printing press allowed ordinary people to own books, and the silicon chip put a computer in every home, AI will democratise simply doing things. So, sure, that means getting a virtual assistant to set up a company for you, or using a swarm of builder bots to throw up an extension. Unfortunately, it also means engineering a run on a bank, or creating a deadly virus using a DNA synthesiser.

The most extraordinary scenarios in the book come from the realm of biotech, which is already undergoing its own transformation thanks to breakthroughs such as Crispr, the gene-editing technology. Here, AI will act as a potent accelerant. Manufactured products, Suleyman tells us, could one day be “grown” from synthetic biological materials rather than assembled, using carbon sucked out of the atmosphere. Not only that, but “organisms will soon be designed and produced with the precision and scale of today’s computer chips and software”. If this sounds fanciful, it’s just a bit further along a trajectory we’ve already embarked on. He points out that companies such as The Odin are already selling home genetic engineering kits including live frogs and crickets for $1,999 (£1,550). You can even buy a salamander bioengineered to express a fluorescent protein for $299 – though when I visit the website, they’re out of stock.

Glow-in-the-dark pets aside, many of these developments hold enormous promise: of curing disease, charting a way through the climate crisis, creating “radical abundance”, as Suleyman puts it. But four aspects of the AI revolution create the potential for catastrophe. First, the likelihood of asymmetric effects. We’re familiar with this in the context of warfare – a rag-tag band of fighters able to hamstring a powerful state using guerrilla tactics. Well, the same principle will apply to bad actors in the age of AI: an anonymous hacker intent on bringing down a healthcare system’s computers, say, or a Unabomber-like figure equipped with poison-tipped drones the size of bees.

Second, there’s what Suleyman terms hyper-evolution: AI is capable of refining design and manufacturing processes, with the improvements compounding after each new iteration. It’s incredibly hard to keep up with this rate of change and make sure safeguards are in place. Lethal threats could emerge and spread before anyone has even clocked them.

Then there’s the fact that AI is “omni-use”. Like electricity, it’s a technology that does everything. It will permeate all aspects of our lives because of the benefits it brings, but what enables those benefits also enables harms. The good will be too tempting to forgo, and the bad will come along with it.

Finally, there’s “autonomy”. Unique among technologies so far in human history, AI has the potential to make decisions for itself. Though this may evoke Terminator-style nightmares, autonomy isn’t necessarily bad: autonomous cars are likely to be much safer than ones driven by humans. But what happens when autonomy and hyper-evolution combine? When AI starts to refine itself and head off in new directions on its own? It doesn’t take much imagination to be concerned about that – and yet Suleyman believes the dangers are too often dismissed with the wave of a hand, particularly among the tech elite – a habit he calls pessimism aversion.

He likes to think of himself as someone who confronts problems rather than rationalising them away. After he left Oxford he worked in policy for the then mayor of London Ken Livingstone, before helping NGOs arrive at a common position during the Copenhagen climate summit. It wasn’t until 2010 that he got into AI, creating DeepMind with the coding genius Demis Hassabis, the brother of a school friend, and becoming chief product officer. DeepMind’s mission was to develop artificial general intelligence, AI with human-like adaptability. Four years later it was acquired by Google for £400m, making Suleyman and his colleagues unimaginably rich.


For a while DeepMind’s efforts seemed nerdy and abstract. It tried to beat people at board games. Among its achievements was using AI to thrash the champion Go player Lee Se-dol. But it was always about more than that (Hassabis recently said: “I’ve always been interested in the nature of the universe, nature of reality, consciousness, the meaning of life, all of these big questions. That’s what I wanted to spend my life working on”). In 2020, it unveiled a program that could figure out the structure of proteins, one of the most fiendish problems in science. Painstaking research over many decades had described the shape of about 190,000 of these complex molecules – which include insulin and haemoglobin – information that’s vital for understanding how they function, and coming up with targeted drug treatments. By 2022 DeepMind had worked out another 200m.

But given omni-use and the havoc that AI might be about to wreak, does Suleyman ever feel guilty about the part he’s played in its development? No, because he sees technological change as arising from the “collective creative consciousness”. “That’s not a way of disowning responsibility. It’s just an honest assessment: very rarely does an invention get held in a kind of private space for very long.” At the same time, he does believe he can nudge the sector towards greater public-spiritedness. “What I’ve always tried to do is attach the idea of ethics and safety to AGI. I wrote our business plan in 2010, and the front page had the mission ‘to build artificial general intelligence, safely and ethically for the benefit of everyone’.” He reckons this early stand set the tone. “I think it has really shaped how a lot of the other AI labs formed. OpenAI [the creator of ChatGPT] started as a nonprofit largely because of a reaction to us having set that standard.”

The Coming Wave is partly an effort to continue this role of shaping and bolstering the industry’s conscience. In it, he sets out 10 strategies for the “containment” he sees as being necessary to keep humanity on the narrow path between societal collapse on the one hand and AI-enabled totalitarianism on the other. These include increasing the number of researchers working on safety (including “off switches” for software threatening to run out of control) from 300 or 400 currently to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out a system of international treaties to restrict and regulate dangerous tech.

Given the range of terrifying scenarios he has sketched out, Suleyman comes across as relatively optimistic. For the moment, at least, there’s a sympathetic administration running the most powerful government in the world. Last month he was invited to the White House alongside Amazon, Microsoft and Google to sign up to an oversight regime, albeit a voluntary one. Given tech’s poor record at self-regulation, this may seem a little underpowered. But he’s reassured the Biden administration is alive to the problem. “I would say that they’re taking the threat very seriously. And they are making a lot of progress on it.” How does he think that would go under Trump if he wins again? “How do I think it would go under Trump?” He gets the giggles. “I mean, you know … that’s American politics.”

For Suleyman, the only powers realistically capable of acting to contain AI are states, and despite the gallows humour, he’s deeply worried about how fragile they are becoming. This doesn’t seem very Silicon Valley, somehow, at least as represented by Ayn Randian radicals such as Peter Thiel and Elon Musk. “I mean, I couldn’t disagree more with their politics. I’m extremely against them, if I’m extreme about anything. I think this idea that we need to dismantle the state, we need to have maximum freedom – that’s really dangerous. On the other hand, I’m obviously very aware of the danger of centralised authoritarianism and, you know, even in its minuscule forms like nimbyism. That’s why, in the book, we talk about a narrow corridor between the danger of dystopian authoritarianism and this catastrophe caused by openness. That is the big governance challenge of the next century: how to strike that balance.”

“I think we’re in a generational transition at the moment,” he continues. “There’s the existing leaders who are in their mid-50s, have spent 25 years in Silicon Valley and risen up through the ranks either as founders or executives. And they have an outlook which is probably closer to the libertarian tendency. So Zuck [Mark Zuckerberg, co-founder of Facebook] and Larry and Sergey [Page and Brin, the co-founders of Google]. But there’s this new crop leading AI companies, like Sam [Altman, CEO of OpenAI] and myself. And I think that we’ve been talking about the potential dangers for a long time.”

Navigating the politics of running a company hasn’t always been easy for this new generation. And to some extent Suleyman now finds himself at the margins of it. In 2019 he took a “break” from DeepMind, and in 2021 the Wall Street Journal revealed he had been stripped of management responsibilities at the company because of complaints about his style. He apologised and told the paper that he “accepted feedback that, as a co-founder … I drove people too hard and at times my management style was not constructive”.

When I bring it up he seems unfazed, or maybe just practised at answering the question. “This was five years ago now. I was young, super hard-charging, very driven, working 14 hours a day. And it’s intense. You know, you just get better. You learn from these things.” For what it’s worth, Suleyman comes across to me as friendly and open – confident and clear but not pompous. Working under him, of course, may be a different experience. But he does seem to have a genuine desire to do good. As for DeepMind, he says he doesn’t miss it, being less interested in fundamental research than putting things into production – like his chatbot, Pi, which is designed to be friendlier and more supportive than ChatGPT – a “hype man” as he puts it. And he doesn’t seem to have fallen out with Hassabis: “We always have dinner when either one of us is in Palo Alto or London.”

Isn’t he worried The Coming Wave’s grim predictions will just set off another round of dismissals – that pessimism aversion he talked about? “The book is a provocation,” he says. He wants attention for his containment plan, and suggestions to improve it. Does that mean we shouldn’t take his warnings at face value? “I think the cool thing about what I’m doing is that I’m predicting something. And I think a lot of people don’t have the courage to predict things. I don’t think I’m wrong, but we do have time to intervene.” It sounds as though, having seen one version of the future, he’s desperately trying to change the timeline, like a real-life Marty McFly. “Exactly,” he laughs. Let’s hope he’s not too late.

 

Source: The Guardian