Why should leading AI labs join an open coalition of states and labs to create a global AI lab, and so reconcile their mission with competitive pressures?

Leading AI labs stand unable to reconcile their mission to create safe, powerful, and beneficial AI with mounting competitive, funding, and geopolitical pressures. As several of their CEOs have suggested, they may need to come together in an open coalition of states and AI labs to build an open, democratic, public-private, and decentralized global AI lab, along with an international AI safety agency that will strictly manage and oversee all dangerous lifecycle components of AI and AGI development, while safeguarding the innovation and oversight roles of states and labs.

Last Thursday, Time magazine published a very insightful article, followed by another one, about Anthropic's governance structure and how well it is suited to advance and reconcile the company's three key goals.

According to statements by its CEO, those goals are (1) to stay at the cutting edge of capability, so as to matter in the ongoing race for AGI; (2) to prevent its technology (and others') from producing accidents, misuse, or loss of control; and (3) to ensure that control of the most potent AGIs will be largely democratized.

According to their statements, OpenAI and Google DeepMind widely share those goals as well, as we'll elaborate below. All three recognize the dire need, and the challenge, of evolving their governance structures in due time, and all three promote a new international governance structure in the hope of reconciling those three indivisible goals.

Calls by leading AI labs for a global democratic governance of AGI.

While mostly overlooked by mainstream media, for over one year Anthropic, like OpenAI and, to a lesser extent, Google DeepMind, has repeatedly pointed to the dire need for strong global coordination and regulation to stave off the reckless winner-take-all race they are in, and to ensure that the enormous power and wealth that the most potent future AIs will generate are shared democratically among humans.

OpenAI's CEO, Sam Altman, stated that control over OpenAI and advanced AI should eventually be distributed among all citizens of the world. He stated that “we shouldn’t trust” OpenAI if its board "years down the road will not have sort of figured out how to start” transferring its power to "all of humanity." After OpenAI’s governance crisis, he repeated that people shouldn’t trust OpenAI unless it democratizes its governance, and later reiterated that all of humanity should be shaping the future of AI.

On February 24th, OpenAI stated in its revised mission, “We want the benefits of, access to, and governance of AGI to be widely and fairly shared.”

In the wake of OpenAI’s proposal of a public-private “$7 trillion AI supply chain plan,” Altman called again for international governance at the UAE World Government Summit, but clarified that “it is not up to them” to define such constituent processes. He therefore called on states, such as the UAE, to convene a summit aimed at the creation of an “IAEA for AI,” to which the UAE's Ministry of AI replied affirmatively. He even stated that if humanity jointly decided that pursuing “AGI” was too dangerous, OpenAI would stop all “AGI” development: "We'd respect that,” he replied.

In March 2023, Altman stated that his "platonic ideal" of building a global governance of AI would be a global constituent assembly for AI, akin to the U.S. Constitutional Convention of 1787, that would establish a federal intergovernmental organization to manage AI in a decentralized and participatory way, according to the subsidiarity principle.

Last July, Google DeepMind published a detailed "exploration" of the feasibility of creating four new IGOs for AI, including a Frontier AI Collaborative, an "international public-private partnership" to "develop and distribute cutting-edge AI systems, or to ensure such technologies are accessible to a broad international coalition."

Its CEO, Demis Hassabis, was interviewed last February. Confronted with the fact that Google is governed as a typical corporation, legally bound to maximize profit for its shareholders, and with the prospect of such a company being in control of transformative AGI, he gave a reply that concluded: "In five or ten years, as we get closer to AGI, we'll see how the technology develops and what stage the world is in, and the institutions in the world like the UN and so on, which we engage with a lot, I think we have to see how that goes and the engagement goes in the next few years."

Anthropic’s CEO, Dario Amodei, has given fewer interviews but has been as vocal as Altman in advocating for a global democratic governance of AI as the only way to avoid immense safety risks and enormous concentrations of power.
In an interview last August (seven minutes in from this frame), he clearly specified that solving the technical half of the AGI alignment problem would, by definition, create an immense undemocratic concentration of power unless the global governance half of AGI alignment were also solved, and that eventually some global body should be in charge of all advanced AI companies.

After he explained the non-profit structure controlling the company, he was asked who would be in control if Anthropic found itself at the forefront of achieving world-changing breakthroughs in AGI. He replied, "That doesn't imply that Anthropic or any other entity should be the entity that makes decisions about AGI on behalf of humanity. I would think of those as different things. If Anthropic does play a broad role, then you'd want to widen that body to a whole bunch of different people from around the world. Or maybe you construe this as very narrow, and then there's some broad committee somewhere that manages all the AGIs of all the companies on behalf of anyone."

He ended by saying, "I don't know. I think my view is that you shouldn't be overly constructive and utopian. We're dealing with a new problem here. We need to start thinking now about the governmental bodies and structures that could deal with it."

From Calls to Reality, as Time is Running Out.

Given the acceleration of AI capabilities and investments, and ever-shrinking timelines to AGI and superintelligence, OpenAI's pledge to transfer such power “years down the road” and the global democratic governance pledges of Anthropic and Google DeepMind sound more and more like empty promises, unless they are turned very soon into precise timelines and modalities for the transfer of power to humanity.

As hinted by Altman and implied by Amodei and Google DeepMind, it is not up to them to bring forward such democratic global governance: they are private companies, not legitimate geopolitical actors, and they are bound by the enormous geopolitical, economic, and national security interests of their host country, which currently appears determined to secure its lead above all other concerns.

In November, Anthropic's Long-Term Benefit Trust, a non-profit body governed by five US individuals, will nominate three out of five board members of the for-profit arm. This will create unpredictable dynamics, placing, in the best of scenarios, five board members and Anthropic's host country in control of its potentially world-changing future AGI.

Do we really have time to resolve those AI governance conundrums?

The first article quoted above recounts how Amodei told the US Senate that systems powerful enough to “create large-scale destruction” and change the balance of power between nations could exist as soon as 2025. Amodei also believes that the chance of an AI-induced civilizational catastrophe is “somewhere between 10-25%.” Google DeepMind's CEO Hassabis believes AGI "could be just a few years, maybe a decade away.”

“Today’s systems are not anywhere close to posing an existential risk,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “But in one, two, five years? There is too much uncertainty. That is the issue. We are not sure this won’t pass some point where things get catastrophic.” Altman stated two weeks ago that he would not rule out that we are "close" to achieving AGI capabilities that exceed the scientific innovation capabilities of OpenAI itself - basically the definition of a level of AI capability that would cause an intelligence explosion and could enable an AI-powered global dictatorship.

Hence, the time to resolve those enormously impactful governance problems is now.

What can one or all of those labs do to resolve this conundrum?

They can't do so directly, but they could participate in an open, wide, public-private, transnational initiative to build global AI safety organizations and a joint global AI lab.

They could participate in an initiative to create an international version of the "AI Safety Institutes" being developed by various countries and their security agencies, and to create a public-private, global, public-interest AI lab that is democratically governed and partly decentralized.

OpenAI’s Chief Scientist, Ilya Sutskever, stated, "It will be important that AGI is somehow built as a cooperation between multiple countries.” Yoshua Bengio called for a multilateral network of AI labs, analyzing in fine detail the right balance of global and national authority over them.

While such an initiative will require all or nearly all AI superpowers to join it in order to achieve its safety goals, it can plant the seed for a fair and democratic global governance of AI, enable AI labs to innovate without undue pressure to sacrifice safety, and allow participant states to benefit economically through joint ownership of the most advanced AI.

The Global Public Benefit AI Lab

The Global Public Benefit AI Lab will be a $15+ billion, open, partly-decentralized, democratically-governed joint venture of states and suitable tech firms, aimed at achieving and sustaining a solid global leadership or co-leadership in human-controllable AI capability, technical alignment research, and AI safety measures.

The Lab is one of three agencies of a new intergovernmental organization being built by the Harnessing AI Risk Initiative, a venture to catalyze a critical mass of globally diverse states into a global constituent process to create a new democratic IGO and joint venture, open to all states and firms to join on equal terms, that will jointly build the most capable safe AI and reliably ban unsafe ones.

  • The Lab will pool the capabilities and resources of member states and private partners, and distribute dividends and control among member states and directly to their citizens, all the while stimulating and safeguarding private initiative for innovation and oversight.

  • The Lab will be primarily funded via project finance, buttressed by pre-licensing and pre-commercial procurement from participating states and client firms.

  • The Lab will seek to achieve and sustain a resilient “mutual dependency” in its wider supply chain vis-à-vis superpowers and future public-private consortia, through joint investments, diplomacy, trade relations, and the strategic industrial assets of participant states, while remaining open to merging with them on equal terms, as detailed in our recent article on The Yuan.

Follow these links for more on the Global Public Benefit AI Lab and the Harnessing AI Risk Initiative, of which it is an integral part.

Rufo Guerreschi