Harnessing AI Risk Pre-Summit


Could the “open intergovernmental constituent assembly” be a better treaty-making model for AI?


Date and Time
The event was held on June 12th, 2024, from 4 pm to 6 pm.

Location
It was held in hybrid online/in-person mode at Masseria Cuturi in Manduria (TA), Italy, within the venue of the G7 Side Event of the Italian G7 Summit 2024.

Background

This event was a preparatory meeting in advance of the 1st Harnessing AI Risk Summit, to be held this November in Geneva. It is an integral part of the Harnessing AI Risk Initiative, of the Coalition emerging around its Open Call, and of its plans for a Global Public Interest AI Lab.

Transcripts

A nearly verbatim transcript of the opening keynote given by Rufo Guerreschi is available at this blog post, and also from minute 00:58 of the event's YouTube video.

Video Recording

Agenda

Global experts in the international governance of AI and other dangerous technologies, and advisors of the Trustless Computing Association, addressed the following question:

Could the “open intergovernmental constituent assembly” be a better treaty-making model for AI?

AI presents unprecedented opportunities to tackle critical global challenges, such as climate change, drug discovery, and economic productivity. Yet these opportunities can only be realized if we manage the immense safety risks and ensure that the power and wealth AI will generate are equitably shared.

At this year's G7 Summit in Italy, participation in the AI governance session will include non-G7 states and Pope Francis, who, along with the top AI scientist signatories of the aitreaty.org open call, has called for a strong AI treaty.

After attempts to create a strong nuclear treaty failed in 1946, the risks and opportunities of nuclear technology were managed through loose coordination among competing great powers and weak treaties. While major catastrophes have been averted so far, the risks are higher today than ever before.

AI is expected to be even more disruptive. Risks of catastrophic proliferation, accidents, loss of human control, and immense concentrations of power are widely foreseen to be much harder to prevent. Prevailing treaty-making methods, based on unstructured summits and unanimous declarations, may be far from fit for the task. While the adopted Council of Europe treaty on AI and other initiatives for international coordination, such as the AI Summit series, are praiseworthy, they are severely insufficient to tackle the main risks.

Perhaps we should instead rely on a treaty-making method that has proven to be much more effective and democratic: that of the intergovernmental constituent assembly. It could be designed and jump-started by an open coalition of even a few globally diverse states, akin to how two US states in 1786 convened three more to the Annapolis Convention and set out a process that culminated in the federal US Constitution.

Perhaps the mandate of such an open constituent assembly should find inspiration in the surprisingly bold Baruch Plan, proposed by the USA to the UN in 1946 to manage the then-emerging nuclear weapons and nuclear energy, and largely based on the work of Robert Oppenheimer. The plan prescribed the creation of a very powerful new treaty organization that would have sole, worldwide, and strict control of all capabilities, arsenals, research, and source materials for dangerous nuclear weapons and nuclear energy. Facilities would be located equitably worldwide and developed by public or private entities, but strictly licensed, controlled, and overseen by that organization. All of this was to be governed under the UN Security Council's governance model, but without the veto.

While the participation of AI superpowers would be required to achieve global AI safety, massive benefits would accrue to participating states in terms of industrial capability, sovereignty, and geopolitical leverage, via the creation of a public-private global public interest AI lab.

Program & Speakers

16.00 - 16.20 - Opening Keynote

  • Speaker: Rufo Guerreschi, Executive Director of the Trustless Computing Association.

16.20 - 17.10 - Panel 1: Could the intergovernmental constituent assembly be a better treaty-making model for AI? Should the Baruch Plan be the main inspiration for its mandate?

  • Moderator: David Wood, President of the London Futurists.

  • Speakers:

    • Wendell Wallach - Co-director of the Artificial Intelligence & Equality Initiative at the Carnegie Council for Ethics in International Affairs. Emeritus Chair of Technology and Ethics Studies at Yale University. Founder and Co-lead of the 2018-2021 International Congress for the Governance of Artificial Intelligence. (video link)

    • Marco Landi - President of the Institute EuropIA. Former President and Chief Operating Officer of Apple Computer, Cupertino. Member of the Advisory Board of the Trustless Computing Association.

    • Ambassador Prof. Muhammadou Kah - Chairman of the UN Commission on Science and Technology for Development. Permanent Representative of The Gambia to the UN in Geneva. Member of the Advisory Board of the Trustless Computing Association. (video link)

    • Robert Whitfield - Chair of the Transnational Working Group on AI at the World Federalist Movement, the convenor of the Coalition for the International Criminal Court. Chair of One World Trust. (video link)

    • *Kay Firth-Butterfield - Former Head of AI and Member of the Executive Committee at the World Economic Forum. CEO of Good Tech Advisory. (video link)

    • Sundeep Waslekar - President of Strategic Foresight Group, an international think tank that has worked with 65 countries on global risks and challenges. (video link)

    • Brando Benifei - Member of the European Parliament and Co-Rapporteur of the EU Artificial Intelligence Act. Member of the Special Committee on Artificial Intelligence in a Digital Age.

17.10 - 17.50 - Panel 2: Could the intergovernmental constituent assembly be a better treaty-making model for AI? Should the Baruch Plan be the main inspiration for its mandate?

  • Moderator: David Wood, President of the London Futurists. (video link)

  • Speakers:

    • Fola Adeleke - Executive Director of the Global Center for AI Governance and of the African Observatory on Responsible Artificial Intelligence. (video link)

    • *Ansgar Koene - Global AI Ethics and Regulatory Leader at Ernst & Young. Member of the Advisory Board of the Trustless Computing Association. (video link)

    • Nell Watson - President of the European Responsible AI Office. Chair of the IEEE Human or AI Interaction Transparency Working Group and the IEEE AI Ethics Certification Maestro. (video link)

    • Patrick S. Roberts - Senior Political Scientist at the RAND Corporation. Professor of Policy Analysis, Pardee RAND Graduate School. Former senior foreign policy advisor in the State Department's Bureau of International Security and Nonproliferation. (video link)

    • Jan Philipp Albrecht - President of the Heinrich Böll Foundation, Former MEP. Former Minister of Digitization of the German state of Schleswig-Holstein. Member of the Advisory Board of the Trustless Computing Association. (video link)

    • Jerome Glenn - Renowned futurist. CEO of the Millennium Project, publisher of the State of the Future reports. (video link)

17.50 - 18.00 - Closing Remarks

*Unable to attend due to last-minute problems.