Harnessing AI Risk Summit
Transforming our Greatest Threat into Humanity's Triumph
December 5-6th, 2024
Virtually on Zoom
As a shocking acceleration of AI capabilities and a reckless winner-take-all AI race among states and their firms unfolds, we face - for the first time since 1946 - a huge opportunity: to harness the immense risks brought by a new technology, coming together like never before to realize unprecedented benefits for humanity.
Nearly all states and regional IGOs, and their citizens, are powerless in the face of AI developments in the years to come. On their own, they have no way to prevent, or even meaningfully mitigate, the enormous human safety risks and the immense global, unaccountable concentration of power and wealth that AI entails, nor to realize its astounding potential for their citizens.
By coming together boldly and openly, in a critical mass, they can jointly build, regulate and exploit the most capable, safe and sovereign AI, and at least attempt to bring the superpowers to lead in building the bold global safety safeguards and institutions we need for AI, as they nearly did in the late 1940s for nuclear technology via the Baruch and Gromyko Plans.
Unlike in 1946, the superpowers are not leading the way cooperatively. A critical mass of small and medium states could do so, while gaining economically and politically, by spearheading a wide and open coalition of NGOs and states, a constituent process based on the intergovernmental constituent assembly treaty-making model, and by jumpstarting a Global Public Benefit AI Lab to build, regulate and jointly exploit the most capable safe AIs.
We have a second chance to harness the sudden, immense and accelerating risks posed by a new dangerous technology, and to build a reliable, resilient, democratic and federal global governance model for all current and future dangerous technologies, our global public sphere and other global challenges.
December 5-6th, 2024, from 9am to 5pm, via Zoom.
Increase the number, diversity and engagement of states and regional IGOs interested in participating in the Coalition, the Harnessing AI Risk Initiative and its Global Public Benefit AI Lab.
Current team members, partners, advisors and members of the Working Groups of the Coalition for a Baruch Plan for AI - and of the Harnessing AI Risk Initiative by the Trustless Computing Association - will engage prospective partnering states, their missions to the UN in Geneva, regional IGOs, NGOs and experts interested in joining, in order to:
illustrate the Coalition’s core goals and methods, and why they are likely to ensure AI safety, advance innovations in safe AI, and ensure a wide, global and democratic sharing of its control, power and benefits.
illustrate the massive economic and sovereignty advantages for states deriving from the envisioned public-private, decentralized and open Global Public Benefit AI Lab.
discuss ways to improve the Coalition’s roadmap and strategy.
The 1st Harnessing AI Risk Summit follows the hybrid 1st Pre-Summit, held as a side event of the G7 Summit in Italy last June, and the hybrid 2nd Pre-Summit.
It will be followed by future editions every 4 months, in Geneva or elsewhere, and by other pre-summits around the world, in-person, virtual or hybrid. As per the Coalition's roadmap and strategy, the Harnessing AI Risk Summit series aims to incrementally bring together a critical mass of globally-diverse NGOs and states to design and jump-start a treaty-making constituent process for AI and digital communications that is sufficiently open, global, timely, democratic, participatory, effective and expert-led to create a new global intergovernmental organization that can reliably steer and manage AI and digital communications for the benefit of all.
You are invited to join our Summit as an in-person or remote attendee, to apply as an in-person or remote speaker, and to apply to join our Coalition in other capacities: as an NGO, expert or concerned citizen, as a prospective partnering state, or as a prospective donor partner.
Confirmed:
Engaged:
While the Coalition for a Baruch Plan for AI was only launched on September 10th, 2024, substantial interest has been shown by several states and NGOs in the Harnessing AI Risk Initiative - an initiative by the Trustless Computing Association, convenor of the Coalition, which shares its core goals and methods with the Coalition. Over the last 5 months:
We held several bilateral and multilateral meetings with interested states, especially from Africa and Europe, including a number of ambassadors to the UN in Geneva or their domain experts.
We received formal written interest from the mission of the largest regional intergovernmental organization to the UN.
We have met and are actively engaged with two large EU member states, at the foreign ministry and the office of the prime minister levels.
Confirmed (in-person or remote)
Muhammadou M.O. Kah - Professor and Ambassador Extraordinary & Plenipotentiary of The Gambia to Switzerland & Permanent Representative to UN Organisations at Geneva, WTO & Other International Organisations in Switzerland, TCA Advisor
Tolga Bilge - initiator of AITreaty.org, an initiative to advance the development and ratification of a strong international AI treaty, signed by hundreds of top AI experts.
Wendell Wallach - Co-director of the Artificial Intelligence & Equality Initiative at the Carnegie Council for Ethics in International Affairs. Emeritus Chair of Technology and Ethics Studies at Yale University. Founder and co-lead of the 2018-2021 International Congress for the Governance of Artificial Intelligence.
Robert Whitfield - Chairman of the Transnational Working Group on AI of the World Federalist Movement (WFM). Since 1947, WFM has been one of the foremost NGOs in global federal democratization; it convened the 2,500-NGO Coalition for the International Criminal Court, whose work led to the treaty's signing by 125 states. WFM has produced outstanding reports, proposals and analyses on the global governance of AI.
Joep Meindertsma - CEO of PauseAI, an international non-profit promoting a proposal for a strong global treaty for AI, widely known in AI safety circles and supported by wide youth volunteer engagement around the world.
Rufo Guerreschi - Coordinator and Spokesperson of the Coalition for a Baruch Plan for AI, and Founder and Executive Director of the Trustless Computing Association (TCA)
Jan Camenisch - Chief Technology Officer of Dfinity, a blockchain-based internet computer; PhD researcher with 130 papers and 140 filed patents
Richard Falk - Professor emeritus of international law at Princeton University. Renowned global democratization expert, Chairman of the Trustees of the Euro-Mediterranean Human Rights Monitor.
Confirmed subject to date availability. The following leading experts had confirmed their in-person or remote attendance for an earlier date last spring. The new date - December 5-6th, 2024 - was recently finalized. We are awaiting confirmation that the new date does not conflict with their agendas.
Ansgar Koene - Global AI Ethics and Regulatory Leader at Ernst & Young, TCA Advisor
Robert Trager - Director, Oxford Martin AI Governance Initiative and International Governance Lead at the Centre for the Governance of AI
Kenneth Cukier - Deputy Executive Editor of The Economist, and host of its weekly tech podcast
Kay Firth-Butterfield - CEO of Good Tech Advisory. Former Head of AI and Member of the Exec Comm at World Economic Forum
Akash Wasil - AI Policy Researcher at Control AI, Former senior researcher at Center on Long-Term Risk and Center for AI Safety
Flynn Devine - researcher on participatory AI governance methods, including research with the Collective Intelligence Project and on 'The Recursive Public', Co-Initiator of the Global Assembly for COP26
Gordon Laforge - Senior Policy Analyst at New America Foundation, TCA Advisor.
Brando Benifei - Member of European Parliament and Co-Rapporteur of the European Parliament for the EU AI Act
Lisa Thiergart - Research Manager at Machine Intelligence Research Institute (MIRI), AI Alignment Researcher
David Wood - President of the London Futurists association
Mohamed Farahat - Member of UN High-Level Advisory Board on Artificial Intelligence, TCA advisor
Marco Landi - President of the EuropIA Institut, Former Group President and COO of Apple Computers in Cupertino, TCA steering advisor
Jan Philipp Albrecht - President of the Heinrich Böll Foundation. Former Greens MEP, Former Minister of Digitization of the German state of Schleswig-Holstein, TCA steering advisor
Paul Nemitz - Principal Advisor at the European Commission, Senior Privacy and AI policy expert, TCA advisor
Axel Voss - Member of European Parliament and member of the Committee on Civil Liberties, Justice and Home Affairs (LIBE), and the Committee on Artificial Intelligence in a Digital Age (AIDA)
Aicha Jeridi - Vice President of the North African School and Forum of Internet Governance, Member of the African Union Multi-Stakeholder Advisory Group on Internet Governance
Beatrice Erkers - Chief Operating Officer at the Foresight Institute
Allison Duettmann - Chief Executive Officer at the Foresight Institute
Chase Cunningham - Vice President of Security Market Research at G2. Former Chief Cryptologic Technician at the US National Security Agency, Pioneer of Zero Trust, TCA advisor
Darren McKee - Senior Advisor at Artificial Intelligence Governance & Safety Canada (AIGS), Author of “Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World”
Sebastian Hallensleben - Head of AI at VDE, Co-Chair of the OECD Expert Group on AI (AIGO), Chair, Joint Technical Committee 21 "Artificial Intelligence" at CEN and CENELEC
John Havens - Exec. Dir. IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
Philipp Amann - Group CISO at Austrian Post, Former Head of Strategy EUROPOL Cybercrime Centre
Ayisha Piotti - Director of AI Policy at the ETH Zurich Center for Law and Economics
Alexander Kriebitz - Research Associate at the Institute for Ethics in Artificial Intelligence
David Evan Harris - Chancellor's Public Scholar at UC Berkeley. Senior researcher at Centre for International Governance Innovation (CIGI), Brennan Center for Justice, International Computer Science Institute.
Peter Park - MIT AI Existential Safety Postdoctoral Fellow and Director of StakeOut.AI
Pavel Laskov - Head of the Hilti Chair of Data and Application Security University of Liechtenstein
Albert Efimov - Chair of Engineering Cybernetics at the Russian National University of Science and Technology, VP of Innovation and Research at Sberbank
Joe Buccino - AI policy and geopolitics expert. US Defense Ret. Colonel, TCA Advisor
Tjerk Timan - Researcher in trustworthy and fair AI, TCA Advisor
Roberto Savio - Communications expert, Founder and Director of Inter Press Service, TCA advisor
Confirmed
Confirmed:
none to date
Engaged:
Since December, we have been in extended talks with 3 of the 5 top AI labs about their interest in participating in the democratically-governed, public-private Global Public Benefit AI Lab envisioned by the Coalition and the Initiative. Read more here about their stated interest in a strong global governance of AI, and here about the support by some of them for the idea of a “Baruch Plan for AI”.
Each session of Day One will be live-streamed and posted on YouTube.
08.15: Doors Open and Coffee
08.45: Welcome and Introductions
09.00: TBD Lightning Talk
09.10: Panel
AI Risks and Opportunities: extreme and unaccountable concentration of power and wealth (democracy, inequality, civil rights, biases and minorities, unemployment and loss of agency); human safety risks (loss of control, misuse, accidents, war, dangerous science); the risks' comparative importance and timelines, shared mitigations, win-wins and synergies; abundance, health, safety, peace, happiness. Can future AI not only bring amazing practical benefits, but also very significantly increase the happiness and wellbeing of the average human?
09.45: Q&A
10.00: TBD Lightning Talk
10.10: Panel
Global Situational Awareness: Need for a timely, bold, democratic global governance of AI. Need for an open, effective, expert and participatory treaty-making process. Need of an open coalition of states and NGOs.
10.45: Q&A
11.00: TBD Lightning Talk
11.10: Panel
Future AI Scenarios 2030+: (a) Mostly Business as Usual; (b) Global autocracy or oligarchy; (c) Human Safety Catastrophes or Extinction; (d) AI Takeover: Bad and Good Cases; (e) Humanity's Federal Control of Advanced AI.
11.45: Q&A
12.00: TBD Lightning Talk
12.10: Panel
Scope of the Mandate of an Intergovernmental Constituent Assembly for AI: An AI Safety Agency to set and enforce AI safety regulations worldwide? A Global AI Lab to jointly develop, control and benefit from leading or co-leading capabilities in safe AI and digital communications/cloud infrastructure, according to the subsidiarity principle? An IT Security Agency to develop and certify trustworthy and widely trusted “governance-support” systems for control, compliance and communications? Other? Federalism & subsidiarity (global, national and citizen levels). Checks and balances. Complexity, urgency, expertise and acceleration. Transparency, participation, trustlessness and decentralization. Political, technical and future-proof feasibility of bans of unsafe AI. Win-wins for oversight, public safety, civil liberties and democracy. Democracy & the monopoly of violence. Role of superpowers, firms and security agencies.
12.45: Q&A
13.00: Lunch Recess
14.00: TBD Lightning Talk
14.10: Panel
Treaty-making and Constituent Process: Participation. Expertise. Inclusiveness. Weighted voting. Global citizens' assemblies. A Global Collective Constitutional AI? Scope and Rules for the Election of an Open Transnational Constituent Assembly. Interaction with other constituent initiatives.
14.45: Q&A
15.00: TBD Lightning Talk
15.10: Panel
Global Public Benefit AI Lab: Viability. Decentralization vs safety. Subsidiarity principle. Initial funding: project finance, spin-in or other model? Role of private firms. Business models. Safety accords with other leading state and private AI labs. The Superintelligence/AGI “option”.
15.45: Q&A
16.00: TBD Lightning Talk
16.10: Panel
Setting and Enforcing AI Safety Standards: Technical, socio-technical, ethical and governance standards for the most advanced AIs. Agile, measurable and enforceable methods to assess AI systems, services and components that are safe and compliant.
16.45: Q&A
17.00: Conclusions and Open Networking
19.30 - 22.00: Aperitif and Dinner for in-person participants
The second day of the Summit will consist of non-recorded, informal (and in some cases confidential) one-to-one and multilateral meetings among the in-person speakers and audience participants of the previous day. This will entail:
To-be-determined closed-door and open workshops, working sessions and self-organized meetings, in which states and other participants will work to foster consensus on key documents.
Several educational sessions on the technical and non-technical aspects of advanced AI safety, security, privacy and governance, mainly geared towards state representatives and run by leading expert NGO participants.
1) Deepen and expand the Coalition for a Baruch Plan for AI among NGOs and states, by agreeing on basic principles and a work schedule of joint analysis and of remote and in-person meetings and discussions - with a small but committed number of participating NGOs and states.
2) Achieve a highly preliminary agreement - among an expanding and deepening open coalition of globally-diverse NGOs and diverse states - on the design of a timely, expert-led, multilateral and participatory treaty-making process for the creation of an open global treaty organization based on the model of the open intergovernmental constituent assembly. More specifically, discuss and agree on a first barebone version of a Mandate and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications, based at least on these basic principles:
Its Rules will embody robust participation, inclusivity, expertise, and resilience principles to maximize the probability of an organization that will consistently and effectively promote the safety, welfare, and empowerment of all individuals for many generations to come.
Its Mandate will include the creation of an open organization that will collectively develop, regulate and exploit the most advanced safe AI technologies and reliably ban unsafe ones - akin to the 1946 Baruch and Gromyko Plans for nuclear technology, both victims of the UN Security Council veto.
Its Process, following the Summit, will draw inspiration from the successful and democratic intergovernmental treaty-making process started when a few U.S. states convened in the Annapolis Convention, culminating in the ratification of the U.S. Constitution by nine and eventually all thirteen U.S. states.
3) Achieve preliminary agreement among states, AI labs, investors, funders and technical partners on their participation in a $15+ billion, democratically governed, partly decentralized public-private Global Public Benefit AI Lab and ecosystem that will develop the most advanced “human-controllable” AI/AGI and advance strictly controlled research in Superintelligence - open to being joined by all states and all leading AI labs. Set up working groups to deepen the technical, economic, supply chain and geopolitical analysis of the AI Lab, and to ensure that it will not turn into one more large advanced-AI capability initiative racing the others to the brink.
As in early 1946 for atomic energy, we are witnessing today a skyrocketing rise in awareness of the immense and urgent risks to human safety, and of the unaccountable concentration of power, that derive from the unfettered proliferation, arms race and progress of a highly disruptive technology - this time, Artificial Intelligence.
While it took only 7 years after Hiroshima for both the US and Russia to test bombs 3,000 times more powerful, the race for AI is moving much faster: AI is increasingly used to build better AI software and hardware. States, afraid to lose the race for capability, are placing no effective safety controls on their private and state labs. Leading US and Chinese AI scientists recently warned that catastrophic risks of AI for human safety could materialize in a few years or even "at any time". A large and widening number of AI scientists believe unchecked progress will likely lead to loss of control and extinction.
On June 14th, 1946, as scientists loudly and publicly warned of the looming immense risks of a nuclear arms race and proliferation, the US proposed to the UN the Baruch Plan: the creation of a powerful, federal, democratic, global organization to strictly manage all dangerous nuclear weapons and energy research, development and arsenals, to be extended to all dangerous technologies.
After many months, and a Russian counterproposal, the five veto-holding members of the UN Security Council failed to agree, due to intersecting vetoes. We have a second chance with AI. Once again, we are challenged by the immense risks and opportunities of the accelerated emergence of a new disruptive and dangerous technology.
Once again, we have a rare and short window of opportunity to turn an immense risk into a triumph for humanity. As back then, the astounding beneficial potential of a new technology awaits to be realized once the risks are contained.
To avoid the failure of the Baruch Plan, and the weak, failed and late treaty-making we have seen for climate and nuclear, we need a much more effective, democratic and timely treaty-making method that is up to the challenge: the open intergovernmental constituent assembly model. It was pioneered with great success in 1786, when a few US states convened a few more in the Annapolis Convention, ultimately culminating in the drafting of the US federal Constitution in 1787 and its ratification.
It is impossible to overstate how important and momentous this historical moment is. Our actions and inactions over the next few years are likely to decisively condition our future for generations to come, and beyond.