Building a democratic global governance of AI through a public-private consortium aimed at "mutual assured dependency" vis-a-vis superpowers.

A public-private initiative led by a wide coalition of globally diverse states - possibly via the timely convergence of OpenAI's calls for a democratic global governance of AI and its "$7 trillion AI supply chain" plan - could result in an open and democratic public-private consortium and federal intergovernmental organization to manage AI for humanity. By achieving and sustaining a solid "mutual assured dependency" on its wider AI supply chain, vis-a-vis superpowers, it would foster democratic re-globalization, peace, abundance and liberty, and enable humanity to stave off the immense risks of AI to human safety and of concentration of power, and to realize its astounding potential.


In a recent video interview, the former CEO of ASML, the Dutch world leader in chip lithography, suggested that EU states should join forces to achieve and sustain a solid "mutual dependency" in the supply chain of AI and AI chips - which have emerged as the key asset of economic and military competition - rather than pursue the "full autonomy" that China is seeking in response to US export controls.

Not only would full autonomy be too hard to achieve, he stated, but a solid "mutual dependency" would be much more efficient for the world economy, foster peace, and deliver all the benefits of autonomy.

States that came together in such an international public-private initiative - possibly extending beyond the EU - would achieve and sustain full digital sovereignty and boost their economies, without needing full industrial autonomy.

Such an initiative would benefit from the participation of states like Germany and the Netherlands, since they host globally unique assets in the AI supply chain, such as ASML, Zeiss, Trumpf and a few other firms, which are in turn dependent in many ways on foreign and US firms. Taiwan has unique AI chip manufacturing capacity and know-how, and South Korea is rapidly catching up.

Last month, the Wall Street Journal reported on a plan by OpenAI to expand the "global infrastructure and supply chains for chips, energy and data centers" in support of the needs of AI and other industries, which "could require raising as much as $5 trillion to $7 trillion" - a plan largely confirmed by OpenAI. Such amounts likely refer to upper bounds over a ten-year time frame, so the initial cash investments may be on the order of a few tens or hundreds of billions, with further funding tied to performance.

OpenAI's plan could become the very initiative for "mutual dependency" suggested by the former CEO of ASML, and achieve much more, if the public-private consortium it is proposing is structured to bindingly remain open to all states, superpowers and firms on equal terms, to include an international AI safety agency, and to be governed in globally democratic and participatory ways.

Altman called last March for a global constituent assembly "akin to the U.S. Constitutional Convention of 1787" to establish a federal intergovernmental organization to manage AI in a decentralized and participatory way, according to the subsidiarity principle.

Far from an extemporaneous statement, it was largely confirmed in later video interviews. He later stated that control over OpenAI and advanced AI should eventually be distributed among all citizens of the world, and that "we shouldn't trust" OpenAI unless its board, "years down the road", has "figured out how to start transferring its power" to "all of humanity".

He stated OpenAI would stop ("We'd respect that") all "AGI" development if humanity jointly decided it was too dangerous. After OpenAI's governance crisis, he repeated that people shouldn't trust OpenAI unless it democratizes its governance, and repeated at the World Economic Forum that it should be all of humanity shaping AI. On Feb 24th, OpenAI stated in its revised mission: "We want the benefits of, access to, and governance of AGI to be widely and fairly shared."

Regardless of whether OpenAI follows up on these stated intentions for a global democratization of AI, other states and firms could pursue such an initiative, as proposed by the Harnessing AI Risk Initiative. And such an initiative could eventually merge with OpenAI's.

Given the acceleration in AI capabilities, investment and concentration in recent months, and OpenAI's own public-private "$7 trillion AI supply chain" plan, we believe that Altman's pledge to transfer such power "years down the road" sounds more and more like an empty promise, unless it is turned very soon into precise timelines and modalities for the transfer of power to humanity. Yet, as he appropriately stated this month at the World Government Summit, it is not up to OpenAI to define such constituent processes, so he called on states, such as the UAE, to convene a summit aimed at the creation of an "IAEA for AI".

OpenAI's plan to bring in wealthy carbon-rich states, like the UAE, as major funders could have huge benefits for fighting climate change, if these states committed to a binding phase-out of carbon in return for a major shareholder position in the massive clean energy infrastructure foreseen by the plan - most likely based on nuclear fusion, whose innovation would be turbo-charged by such investments and by AI.

OpenAI is not the only leading lab going in that direction. Google DeepMind's CEO stated last week - at the end of this 5-minute video segment - that he sees their governance merging into a UN-like organization in the next few years as we get closer to "AGI", in line with their July paper calling for a public-private international Frontier AI Collaborative. Similar proposals have come from top scientists and NGOs, including Yoshua Bengio's proposal for a decentralized multilateral network of AI labs, and others' proposal for a multinational AGI consortium.

Hence, there is a huge historical opportunity for an open coalition of pioneering states and NGOs to promote the convergence of OpenAI's $7 trillion plan with its global democratic vision, or to build such a plan in its place if OpenAI does not, via the Harnessing AI Risk Initiative.

The Initiative would act as convenor of a critical mass of globally diverse states and relevant firms to design and jump-start a global constituent assembly, establishing an intergovernmental organization and public-private consortium to build a shared democratic governance, infrastructure and ecosystem for AI. Similar coalitions led to successful initiatives such as the International Thermonuclear Experimental Reactor (ITER), Airbus, and the International Criminal Court.

Such an initiative would be in the best interest of all of the world's citizens and states, countering the ongoing deglobalization in advanced technologies, greatly spurring the world economy, and encouraging a democratic re-globalization that includes China.

Such an initiative would greatly benefit all participating non-superpower states, as it would avert a future in which they must choose economic, military, safety, cultural and value dependence on one of two superpowers.

Such an initiative would also benefit the superpowers, China and the US, enabling them to turn their technological leadership from an instrument of a breakneck military and AI arms race into soft-power leadership in a more democratic, safe, and prosperous world, in a win-win for all.

If eventually joined by the US and China, concurrently, the initiative would enable them to step back from their breakneck AI arms race, and make it plausible to entrust such a new intergovernmental organization with a trustworthy and widely trusted global enforcement mechanism against dangerous AI research and development, which will be increasingly needed.

The risks and opportunities are similar to those that led the US and the Soviet Union in 1946 to propose - with their Baruch Plan and Gromyko Plan - a new independent UN agency to manage all nuclear stockpiles and all nuclear weapons and energy research. They failed to agree on a middle ground, and today the nuclear threat is bigger than ever. We now have a second chance with AI. We can harness the risk of AI to turn it into an unimagined blessing for humanity, and set a governance model for other dangerous technologies and global challenges.

If advanced AI remains highly dependent on compute power, as it has for the last seven years, a "mutual assured dependency" would be set in place for AI and AI chips, providing a stable balance among superpowers akin to the mutual assured destruction (MAD) achieved for nuclear weapons, but with the added benefits of multilateral control, shared benefits, and better mechanisms to prevent dangerous proliferation, catastrophic accidents or loss of control, via "compute caps" and more.

Rufo Guerreschi