Open Call for the Harnessing AI Risk Initiative (v.2)

Trustless Computing Association

(Published on January 31st, 2024)


We call on a critical mass of globally-diverse states and other vision-aligned entities to convene an open transnational constituent assembly to create, in a participatory, timely and expert manner, open federal intergovernmental organizations to manage AI and digital communications in the interest of all of humanity.

The acceleration of AI capabilities and investments, and their convergence with largely unregulated digital communications, pose immense and urgent risks to humanity in terms of safety and unaccountable concentration of power and wealth, as well as an unprecedented potential to usher in an era of unimagined prosperity, safety and well-being.

In just over fifteen months, awareness of the enormity and urgency of those risks has come to be shared by a majority of experts, business leaders, states and citizens.

Given the inherently global nature of the largest risks and opportunities, their governance needs to be globalized to some extent. Some governance functions need to accrue to an open federal intergovernmental organization, or a coherent architecture of organizations, in which regulation and enforcement sit at the lowest level possible, but not lower, and the innovation and oversight roles of private and state initiative are preserved and enhanced.

The functions of such an organization will necessarily include setting globally trusted AI safety standards, developing globally trusted governance-support systems, enforcing global bans on unsafe AI development and use, and building a shared, world-leading, public-private and partly-decentralized infrastructure and set of capabilities for safe AI and digital communications.

It is paramount that such an organization, and the constituent process leading up to it, maximize expertise, timeliness and agility as well as participation, democratic process, neutrality and inclusivity, and rely on "battle-tested" organizational models. This is crucial for such an organization to be sufficiently trusted to:

  • Ensure broad adoption and compliance with global bans and oversight on dangerous AI; 

  • Enhance AI safety through global diversity and transparency in safety standards; 

  • Achieve a fair distribution of the power, benefits, access and wealth of AI; and 

  • Effectively mitigate the risks of global military instability due to power competition.

Building such an organization poses significant risks of excessive concentration of power, military instability, capture by powerful entities and value lock-in; yet not building it significantly increases all of those risks. Success would likely provide a governance model for mastering other dangerous technologies and global challenges.

While early participation by superpowers would be ideal, a critical mass of globally-diverse pioneering states and NGOs can succeed in convening and jump-starting constituent processes and building such an organization, delivering a large part of the benefits to participant states and the world, and leading the way for superpowers to join later on.