Harnessing AI Risk Summit

Transforming our Greatest Threat into Humanity's Triumph

April 17-18th, 2025
Hybrid in Geneva


As a shocking acceleration of AI capabilities and a reckless winner-take-all AI race among states and their firms unfold, we face, for the first time since 1946, a huge opportunity: to harness the immense risks brought by a new technology, and to come together like never before to realize unprecedented benefits for humanity.

Nearly all states, regional IGOs and their citizens are powerless in the face of AI developments in the years to come. On their own, they have no way to stave off AI's risks, nor to realize its astounding opportunities.

They have no way to prevent, or even meaningfully mitigate, the enormous human safety risks or an immense global, unaccountable concentration of power and wealth, nor to realize AI's astounding potential for their citizens.

By coming together boldly and openly, in a critical mass, they can jointly build, regulate and exploit the most capable, safe and sovereign AI, and at least attempt to bring the superpowers to lead in building the bold global safety safeguards and institutions we need for AI, as they nearly did in the late 1940s for nuclear technology via the Baruch and Gromyko Plans.

Unlike in 1946, the superpowers are not leading the way cooperatively. A critical mass of small and medium states could do so instead, while gaining economically and politically, by spearheading a wide and open coalition of NGOs and states, launching a constituent process based on the intergovernmental constituent assembly treaty-making model, and jump-starting a Global Public Benefit AI Lab to build, regulate and jointly exploit the most capable safe AIs.

We have a second chance to harness the sudden, immense and accelerating risks posed by a dangerous new technology, and to build a reliable, resilient, democratic and federal global governance model for current and future dangerous technologies, for our global public sphere, and for other global challenges.

(On November 20th, the date of the Summit was moved to next April, due to lack of funding and a revised strategy following radical developments in the panorama of the global governance of AI, as summarized in the new Executive Summary of the Coalition for a Baruch Plan for AI.)

Date, Time and Location

April 17-18th, 2025. From 9am to 5pm. Hybrid, in a TBD venue in Geneva and on Zoom.

Aims

Increase the number, diversity and engagement levels of states and regional IGOs interested in participating in the Coalition, the Harnessing AI Risk Initiative and its Global Public Benefit AI Lab.

Current team members, partners, advisors and members of the Working Groups of the Coalition for a Baruch Plan for AI - and of the Harnessing AI Risk Initiative by the Trustless Computing Association - will engage prospective partnering states, their missions to the UN in Geneva, regional IGOs, NGOs and interested experts, in order to:

  • illustrate the Coalition’s core goals and methods, and why they are likely to ensure AI safety, advance innovation in safe AI, and ensure a wide, global and democratic sharing of its control, power and benefits.

  • illustrate the massive economic and sovereignty advantages for states deriving from the envisioned public-private, decentralized and open Global Public Benefit AI Lab.

  • discuss ways to improve the Coalition’s roadmap and strategy.

The 1st Harnessing AI Risk Summit follows the hybrid 1st Pre-Summit, held alongside a Side Event of the G7 Summit in Italy last June, and the hybrid 2nd Pre-Summit.

It will be followed by future editions every 4 months, in Geneva or elsewhere, and by other pre-summits around the world, in-person, virtual or hybrid. As per the Coalition’s roadmap and strategy, the Harnessing AI Risk Summit series aims to incrementally bring together a critical mass of globally-diverse NGOs and states to design and jump-start a treaty-making constituent process for AI and digital communications that is sufficiently open, global, timely, democratic, participatory, effective and expert-led to create a new global intergovernmental organization that can reliably steer and manage AI and digital communications for the benefit of all.

Join Us as an Attendee or as a Speaker

You are invited to join our Summit as an in-person or remote attendee, to apply as an in-person or remote speaker, and to apply to join our Coalition in other capacities: as an NGO, expert or concerned citizen, as a prospective partnering state, or as a prospective donor partner.

Speaking Participants

Participant States and IGOs

  • Confirmed:

  • Engaged:

    • While the Coalition for a Baruch Plan for AI was only launched on September 10th, 2024, substantial interest had already been shown by several states and NGOs in the Harnessing AI Risk Initiative - an initiative by the Trustless Computing Association, convenor of the Coalition, that shares its main core goals and methods with the Coalition. Over the last 5 months:

      • We held several bilateral and multilateral meetings with interested states, especially from Africa and Europe, including a number of ambassadors to the UN in Geneva or their domain experts.

      • We received formal written interest from the mission of the largest regional intergovernmental organization to the UN.

      • We have met and are actively engaged with two large EU member states, at the foreign ministry and the office of the prime minister levels.

Individual Speakers

Participant NGOs

Participant AI Labs

Agenda

April 17th, 2025

Each session of day one will be live-streamed and posted on YouTube.

08.15: Doors Open and Coffee

08.45: Welcome and Introductions 

09.00: TBD Lightning Talk

09.10: Panel
AI Risks and Opportunities: Extreme and unaccountable concentration of power and wealth (democracy, inequality, civil rights, biases and minorities, unemployment and loss of agency). Human safety risks (loss of control, misuse, accidents, war, dangerous science). The risks’ comparative importance and timelines, shared mitigations, win-wins and synergies. Abundance, health, safety, peace, happiness. Can future AI not only bring amazing practical benefits, but also very significantly increase the happiness and wellbeing of the average human?

09.45: Q&A
10.00: TBD Lightning Talk

10.10:  Panel
Global Situational Awareness: Need for a timely, bold, democratic global governance of AI. Need for an open, effective, expert and participatory treaty-making process. Need of an open coalition of states and NGOs.

10.45: Q&A
11.00: TBD Lightning Talk

11.10:   Panel
Future AI Scenarios 2030+: (a) Mostly Business as Usual; (b) Global autocracy or oligarchy; (c) Human Safety Catastrophes or Extinction; (d) AI Takeover: Bad and Good Cases; (e) Humanity's Federal Control of Advanced AI. 

11.45: Q&A
12.00: TBD Lightning Talk

12.10:   Panel
Scope of the Mandate of an Intergovernmental Constituent Assembly for AI: An AI Safety Agency to set and enforce AI safety regulations worldwide? A Global AI Lab to jointly develop, control and benefit leading or co-leading capabilities in safe AI, and digital communications/cloud infrastructure, according to the subsidiarity principle? An IT Security Agency, to develop and certify trustworthy and widely trusted “governance-support” systems, for control, compliance and communications? Other? Federalism & Subsidiarity (global, nation and citizen levels). Checks and Balances. Complexity, Urgency, Expertise, and Acceleration. Transparency, participation, trustlessness and decentralization. Political, technical and future-proof feasibility of bans of unsafe AI. Win-wins for oversight, public safety, civil liberties and democracy. Democracy & monopoly of violence. Role of superpowers, firms and security agencies.

12.45: Q&A
13.00: Lunch Recess
14.00: TBD Lightning Talk

14.10: Panel
Treaty-making and Constituent Process: Participation. Expertise. Inclusiveness. Weighted voting. Global citizens’ assemblies. A Global Collective Constitutional AI? Scope and Rules for the Election of an Open Transnational Constituent Assembly. Interaction with other constituent initiatives.

14.45: Q&A
15.00: TBD Lightning Talk

15.10: Panel
Global Public Benefit AI Lab: Viability. Decentralization vs safety. Subsidiarity principle. Initial funding: project finance, spin-in or other model? Role of private firms. Business models. Safety accords with other leading state and private AI labs. The Superintelligence/AGI “option”.

15.45: Q&A
16.00: TBD Lightning Talk

16.10: Panel
Setting and Enforcing AI Safety Standards: Technical, socio-technical, ethical and governance standards for the most advanced AIs. Agile, measurable and enforceable methods to assess AI systems, services and components that are safe and compliant.
16.45: Q&A

17.00: Conclusions and Open Networking

19.30 - 22.00: Aperitif and Dinner for in-person participants

April 18th, 2025

The second day of the Summit will consist of non-recorded, informal (and in some cases confidential) 1-to-1 and multilateral meetings among the in-person speakers and audience participants from the previous day. This will entail:

  • To-be-determined closed-door and open workshops, working sessions and self-organized meetings, whereby states and other participants will engage in fostering consensus on key documents.

  • Several educational sessions on the technical and non-technical aspects of advanced AI safety, security, privacy and governance, mainly geared towards state representatives and run by leading expert NGO participants.

Expected Outcomes

  • 1) Deepen and expand the Coalition for a Baruch Plan for AI among NGOs and states, by agreeing on basic principles and a work schedule of joint analysis and of remote and in-person meetings and discussions - with a small but committed number of NGOs and states participating.

  • 2) Achieve a highly preliminary agreement - among an expanding and deepening open coalition of globally-diverse NGOs and states - on the design of a timely, expert-led, multilateral and participatory treaty-making process for the creation of an open global treaty organization based on the model of the open intergovernmental constituent assembly. More specifically, discuss and agree on a first barebones version of a Mandate and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications, based at least on these basic principles:

    • Its Rules will embody robust participation, inclusivity, expertise, and resilience principles to maximize the probability of an organization that will consistently and effectively promote the safety, welfare, and empowerment of all individuals for many generations to come.

    • Its Mandate will include the creation of an open organization that will collectively develop, regulate and exploit the most advanced safe AI technologies, and reliably ban unsafe ones - akin to the 1946 Baruch and Gromyko Plans for nuclear technology, victims of the UN Security Council veto.

    • Its Process, following the Summit, will draw inspiration from the successful and democratic intergovernmental treaty-making process started by two U.S. states convening three more at the Annapolis Convention, and culminating in the ratification of the U.S. Constitution by nine and eventually all thirteen U.S. states.

  • 3) Achieve preliminary agreement among states, AI labs, investors, funders and technical partners on their participation in a $15+ billion, democratically governed, partly decentralized, public-private Global Public Benefit AI Lab and ecosystem that will develop the most advanced “human-controllable” AI/AGI and advance strictly controlled research in superintelligence - open to be joined by all states and all leading AI labs. Set up working groups to deepen the technical, economic, supply-chain and geopolitical analysis of the AI Lab, and to ensure that it will not turn into one more large advanced-AI-capability initiative racing with the others to the brink.