AI bosses are feeling the high-stakes pressure

Asked if he worried about "ending up like Robert Oppenheimer," Google DeepMind's CEO said that he loses sleep over the idea.

Google DeepMind CEO Demis Hassabis said there's "probably too much" pressure on AI leaders.
  • The CEOs of Google DeepMind and Anthropic spoke about feeling the weight of responsibilities in a recent interview.
  • The executives advocated for the creation of regulatory bodies to oversee AI projects.
  • Both AI leaders agree that people should better grasp and prepare for the risks posed by advanced AI.

When asked if he ever worried about "ending up like Robert Oppenheimer," Google DeepMind's CEO Demis Hassabis said that he loses sleep over the idea.

"I worry about those kinds of scenarios all the time. That's why I don't sleep very much," Hassabis said in an interview alongside Anthropic CEO Dario Amodei with The Economist editor in chief Zanny Minton Beddoes.

"I mean, there's a huge amount of responsibility on the people — probably too much — on the people leading this technology," he added.

Hassabis and Amodei agreed that advanced AI, whenever it arrives, could carry destructive potential.

"Almost every decision that I make feels like it's kind of balanced on the edge of a knife — like, you know, if we don't build fast enough, then the authoritarian countries could win," Amodei said. "If we build too fast, then the kinds of risks that Demis is talking about and that we've written about a lot, you know, could prevail."

"Either way, I'll feel that it was my fault that, you know, that we didn't make exactly the right decision," the Anthropic CEO added.

Hassabis said that while AI appears "overhyped" in the short term, he worries that the mid-to-long-term consequences remain underappreciated. He promotes a balanced perspective — to recognize the "incredible opportunities" afforded by AI, particularly in the realms of science and medicine, while becoming more keenly aware of the accompanying risks.

"The two big risks that I talk about are bad actors repurposing this general purpose technology for harmful ends — how do we enable the good actors and restrict access to the bad actors?" Hassabis said. "And then, secondly, is the risk from AGI, or agentic systems themselves, getting out of control, or not having the right values or the right goals. And both of those things are critical to get right, and I think the whole world needs to focus on that."

Both Amodei and Hassabis advocated for a governing body to regulate AI projects, with Hassabis pointing to the International Atomic Energy Agency as one potential model.

"Ideally it would be something like the UN, but given the geopolitical complexities, that doesn't seem very possible," Hassabis said. "So, you know, I worry about all the time, and we just try to do at least, on our side, everything we can in the vicinity and influence that we have."

Hassabis views international cooperation as vital.

"My hope is, you know, I've talked a lot in the past about a kind of a CERN for AGI type setup, where basically an international research collaboration on the last sort of few steps that we need to take towards building the first AGIs," Hassabis said.

Both leaders urged a better understanding of the sheer force for change they expect AI to be — and for societies to begin planning accordingly.

"We're on the eve of something that has great challenges, right? It's going to greatly upend the balance of power," Amodei said. "If someone dropped a new country into the world — 10 million people smarter than any human alive today — you know, you'd ask the question, 'What is their intent? What are they actually going to do in the world, particularly if they're able to act autonomously?'"

Anthropic and Google DeepMind did not immediately respond to requests for comment from Business Insider.

"I also agree with Demis that this idea of, you know, governance structures outside ourselves — I think these kinds of decisions are too big for any one person," Amodei said.
