Open Call for the Harnessing AI Risk Initiative (v.1)

Trustless Computing Association

(Published on January 15th, 2024)


We call on a critical mass of globally diverse states and other vision-aligned entities to carefully design the mandate and rules of an open transnational constituent assembly for AI and digital communications. Its purpose would be to create a new federal intergovernmental organization that can reliably and durably manage the immense risks of such technologies, both to human safety and in terms of unaccountable concentration of power and wealth, and realize their potential to usher us into an era of unimagined prosperity, safety and well-being.

Given the inherently global nature of the threats and opportunities, such an organization should set globally trusted AI safety standards, develop a USD 20+ billion public-private world-leading safe AI lab and a shared digital communications infrastructure, enforce global bans on unsafe AI development and use, and develop globally trusted governance-support systems.

Awareness of the enormity and urgency of the risks has reached a majority of experts, business leaders, states and citizens. Calls for inclusive global governance are multiplying from every quarter. An urgent call by top scientists for a sweeping AI Treaty last November was followed by a similar one by Pope Francis.

Nonetheless, current global governance initiatives by AI and digital superpowers, such as the United States' Guidelines for Secure AI Development, China's Global AI Governance Initiative, the European Union's AI Act, and India's global digital public infrastructure and sovereign AI initiatives, while welcome first steps, are severely lacking in scope, democratic participation and inclusivity, and remain highly disjointed and fragmented.

While it is undeniable that leading national security agencies and AI labs currently hold the rare and essential expertise needed to successfully regulate highly potent, complex, secretive and fast-moving leading-edge AI technologies, we believe it is paramount that such a constituent process and the resulting organization maximize expertise, timeliness and agility on the one hand, and participation, democratic process, neutrality and inclusivity on the other. Such a balance is crucial for the organization to be sufficiently trusted to:

  • Ensure broad adoption of, and compliance with, global bans on and oversight of dangerous AI;

  • Enhance AI safety through global diversity and transparency in safety standards; 

  • Achieve a fair distribution of AI’s power, benefits and wealth; and

  • Effectively mitigate the risks of global military instability due to power competition.

Current intergovernmental organizations like the G7, G20, G77, EU, and UN bodies cannot take on this task, due to their lack of mandate or global representativeness, or their over-reliance on unanimous decision-making. Meanwhile, single states lack the political strength and strategic autonomy to table alternative proposals in such all-important domains.

Hence, there is a historic role for a few pioneering states, NGOs and business leaders to act as convenors of such an assembly, as happened in 1946 with the Baruch and Gromyko Plans for nuclear weapons and fission energy; in 1786 with the US Annapolis Convention for US federal democracy; in the 1980s with the International Thermonuclear Experimental Reactor (ITER) for nuclear fusion energy; and in the 1990s with the Coalition for the International Criminal Court for global criminal justice.

We are taking OpenAI's CEO Sam Altman literally at his word when he called for a global constituent assembly for AI akin to the U.S. Constitutional Convention of 1787 and based on the federal principle of subsidiarity. Far from an extemporaneous statement, Altman repeatedly called for participatory democratic global governance of AI in subsequent interviews. Echoing similar suggestions by Anthropic's CEO Dario Amodei, he even pledged last June to convert OpenAI's governance structure into a globally democratic and participatory body, and repeated that pledge after OpenAI's governance crisis last December.

The resulting federal organization and shared public infrastructure should aim to safeguard and enhance the innovation and oversight roles of private and state initiatives by guaranteeing interoperability, safety, privacy and security standards, thereby fostering a true free market of innovations and ideas.

We call on all to contribute their time, expertise or resources to realize such a Harnessing AI Risk Initiative: to harness the immense risks of AI, turn digital innovations into a key tool to usher humanity into an era of unimagined material and emotional well-being, and establish a model of global governance suitable for other dangerous technologies and global challenges.