Open Call for the Harnessing AI Risk Initiative (v.3)

by the emerging Coalition for the Harnessing AI Risk Initiative,
led by the Trustless Computing Association

(Published on April 23rd, 2024 and closed on July 1st, 2024)


The shocking acceleration of AI innovation, and its convergence with unregulated digital communications, calls on humanity to come together as never before to build an effective, expert, timely, democratic and federal global governance of Artificial Intelligence.

A wide majority of experts, states, and world citizens are by now aware of AI's immense risks to human safety and of its potential to concentrate power and wealth, while most also recognise AI's astounding opportunities for human well-being.

While the challenge is inherently global, international governance initiatives by superpowers and intergovernmental organizations are severely lacking in scope, mandate, global representation, democratic method and urgency, and rely on ineffective treaty-making based on unstructured summits and unanimous declarations.

The current default path is clearly leading towards one, two or a handful of states and their security agencies attempting to control AI risks and benefits, as they did for nuclear weapons and bioweapons. However, the far greater proliferation risks posed by AI render such an option unworkable even for safety, let alone for preventing an immense, unaccountable concentration of power.

As the UN Secretary-General reminded us last March when he called for a new global organization for AI, "only member states can create it, not the Secretariat of the United Nations." It is up to a critical mass of states to lead the way. Leaders of nearly all states understand by now that they stand completely powerless in the face of AI. What we are missing is not political will but rather a better treaty-making process.

Hence, we call on a critical mass of globally diverse states to design and jump-start an open, participatory, timely and expert treaty-making process for AI, taking inspiration from history's most successful and democratic intergovernmental treaty-making process: the one that enabled US states with widely opposing political and religious views to democratically establish the United States of America.

In 1786, two US states convened three more in the Annapolis Convention, setting out a treaty-making process that led to the ratification of the US Constitution by nine states and then all thirteen by 1790. We should replicate the same process - globally and for AI, and weighted for population size and GDP - to build an intergovernmental organization to jointly regulate and enjoy the most capable AIs and reliably ban unsafe ones, while maximizing the autonomy of states, firms and citizens.

The unprecedented scale of the risks posed by AI, as with nuclear technology after Hiroshima, presents a unique opportunity: to harness an immense global risk to build the federal, democratic global governance mechanisms we need to manage AI, and to create a model for managing other dangerous technologies and global challenges.


If you agree with the above text, you may be interested in taking 5 minutes to join as an Undersigner of this Open Call and as a Member of the Coalition for the Harnessing AI Risk Initiative:  

Terms: The text of the above Open Call distills the general aims and methods of the Harnessing AI Risk Initiative, led by the Trustless Computing Association (TCA), and its emerging Coalition for the Harnessing AI Risk Initiative. Becoming a "member" of the Coalition does not entail any legal obligation. Neither the Initiative nor the Coalition has legal status, while TCA is a Swiss non-profit. Members agree with the Open Call v.3 and pledge to promote the Coalition and the Initiative to the best of their abilities. Members will be publicly listed on TCA webpages only after their count has reached 30. Members will receive updates about the Initiative at least monthly and can withdraw their membership at any time. Once you have filled out the form below, you’ll receive a copy of this page, which you will have to confirm via email.

Undersigners and Members of the Coalition

Individuals:

  • Richard Falk, Prof Emeritus of Princeton University

  • Marco Landi, former President and COO of Apple Computer

  • Elisabetta Trenta, former Minister of Defense of Italy

  • Ansgar Koene, Global Ethics and Regulatory Lead at EY

  • Akash Wasil, senior AI analyst formerly at Control AI & Center for AI Safety

  • Bazlur Rahman, Chair of Internet Governance Forum Bangladesh

  • Jan Philipp Albrecht, President of the Heinrich Böll Foundation

  • Wendell Wallach, Carnegie Council for Ethics in International Affairs

  • Kay Firth-Butterfield, former Head of AI at the World Economic Forum

  • David Wood, Chair of London Futurists

  • Philipp Amann, former Head of Strategy at Europol Cybercrime Centre

  • Flynn Devine, researcher in AI Governance and democracy

  • Rasmus Tenbergen, President of the Institute for Leadership Development

  • Nell Watson, President of the European Responsible Artificial Intelligence Office

  • René Wadlow, President, Association of World Citizens

  • Rufo Guerreschi, President of the Trustless Computing Association

  • Xin Zhou, CEO of the International Finance Forum, former Editor of The Yuan

  • Hina Sarfaraz, Third Eye Legal

  • Marta Jastrzębska, Director of Partnerships at Trustless Computing Association

  • Callum Hinchcliffe

  • Yixin Sui

  • John Masika

  • Lucas Henrique Muniz da Conceicao

  • Reinhold Wochner, CISO at Austrian Post

  • Koen Maris

  • Tjerk Timan

  • Mohammad Mostafizur Rahaman

  • Peter Joyce

  • Sharon Gal Or

  • Jelle Donders

  • Roberto Savio

  • Alexandre Horvath

  • Declan Conway

  • Emily Darraj

  • Igor van Gemert

  • Christophe Zheng

  • Faizan Abbasi

  • Arjun Yadav

  • Hani Patel

  • Edward Madziwa

Organizations:

Beyond the Coalition, the Harnessing AI Risk Initiative itself is supported by the 32 advisors of the Trustless Computing Association, as well as by 39 individual and 13 organizational participants confirmed as speakers at our 1st Summit.