Harnessing AI Risk Initiative
The convergence of accelerating AI innovation and unregulated digital communications has brought us to what may be the most critical juncture in human history.
We can still turn the dawn of this new era into the greatest opportunity for humanity, but only if we come together globally like never before to govern its immense risks and opportunities.
Fact Sheet
Mission:
Help enable Humanity to steer clear of the immense and urgent risks that AI poses to human safety and of unaccountable concentration of power and wealth - and to realize and equitably share its astounding opportunities - while affirming the subsidiarity principle.
Goals:
Facilitate an open, timely, democratic and efficient treaty-making process to build a global federal intergovernmental organization for AI - inspired by the 1946 Baruch Plan proposal for international control of nuclear weapons and energy.
Strategy:
Aggregate a critical mass of globally diverse NGOs and states to design and launch an open, expert and participatory constituent and treaty-making process.
Rely on the intergovernmental constituent assembly treaty-making model, inspired by the process that led to the US Constitution in 1787.
Aim to jointly attract both AI superpowers, while incentivizing smaller states via the massive economic advantages of an open Global Public Benefit AI Lab.
Current Status:
Announced on September 10th, 2024, by its six founding NGO partners.
Actively onboarding additional NGO partners and experts, states and donors.
Roadmap:
Expanding the coalition, engaging states and experts.
Hold the 1st Harnessing AI Risk Summit virtually on December 5-6th, 2024, preceded by Pre-Summits and followed by new hybrid editions in Geneva every 4 months.
Aim to progressively onboard 3-7 highly diverse states by December 2024 and 15-30 by November 2025.
Once onboarded states number over 50 and account for at least 50% of the world's population and GDP, set a date for, and finalize the Rules of Election and Mandate of, an Open Intergovernmental Constituent Assembly for AI.
Positioning:
Implementers of the calls to states and NGOs contained in the Global Digital Compact and the reports of the UN Advisory Body on AI, while recognizing that the risks to human safety and of unaccountable concentration of power are already large enough to warrant a much stronger and more timely global governance approach.
Positions itself as more inclusive and democratic than other AI governance initiatives.
Recognizes the need to engage AI superpowers while maintaining neutrality.
Opportunities and Challenges:
While the US appears set on a unilaterally-led AI alliance of "liberal democracies" pitted against illiberal states, and China is calling for globally inclusive AI governance without acting upon it, we count on fast-rising awareness of the risks to lead both, as happened in 1946 with the Baruch and Gromyko Plans, to embrace much stronger global coordination approaches.
Substantial funding will be needed to advance our goals.
Executive Summary
The Harnessing AI Risk Initiative is aggregating an open coalition of NGOs and a critical mass of globally diverse states to facilitate a democratic, expert-led treaty-making process to create a democratic, federal, and expert-led global intergovernmental organization - including a state-of-the-art AI lab - that can reliably stave off AI's immense risks to human safety and of unaccountable concentration of power, while realizing and equitably sharing its benefits and control. Since September 10th, 2024, the Initiative has been advanced primarily through an open and participatory Coalition for a Baruch Plan for AI, centered on advancing its core goals and methods.
Risks and Opportunities of AI
It took only eighteen months from the release of ChatGPT for a vast majority of experts, world leaders, and citizens to become cognizant of the immense risks of AI for human safety, for unaccountable global concentration of power, and for potential distortion of human nature. Many experts believe these risks could materialize unpredictably soon due to radical algorithmic innovations.
At the same time, the advantages of AI are becoming increasingly apparent, offering extraordinary opportunities to accelerate scientific advancement, generate abundance, eradicate poverty, and improve education and the quality of life itself. Reflecting the urgency and scale of this issue, hundreds of AI experts and influential figures have called via AItreaty.org for a robust and binding global treaty for AI, and so has the Pope. The idea is even shared by 77 percent of U.S. citizens.
Shortcomings of Current Global AI Governance Initiatives
Historically, the security agencies of a handful of countries have safeguarded humanity against the dangers and misuse of powerful technologies. Although we have averted major catastrophes, the risks associated with nuclear weapons and bioweapons are greater today than ever before. While the recent Council of Europe treaty on AI and other initiatives for international coordination, such as the AI Summit series, are praiseworthy, they fall massively short of addressing the principal risks to safety and of concentration of power, and they lack mechanisms for future improvement.
The currently dominant path toward treaty-making and global governance of AI centers on a tightly US-led coalition of "liberal democracies" and rich autocracies pitted against dozens of "undemocratic" states, including China.
This is inadequate because: (1) reliably preventing dangerous AI proliferation will require unprecedented global compliance; (2) if successful, it is likely to produce immense undemocratic concentrations of power and wealth in very few states, firms, and/or agencies; (3) the so-called "undemocratic" states represent a rich variety of human political and cultural models that are entrenched, legitimate, and often comparable in their levels of democracy to those of the "West".
Hence, we may need to consider a much more inclusive and globally democratic approach that faces head-on the unprecedented levels of global coordination that are needed.
Aims of the Initiative and Coalition
As detailed on the Coalition's home page, "Amidst an accelerating, reckless, winner-take-all race among states and their firms for ever more capable AGI and ASI forms of AI, there may still be time for humanity to properly come together to (A) ensure all AIs will be human-controllable and largely controlled and shaped by humanity as a whole, (B) prevent catastrophic AI misuses and runaway ASIs, and maximize the chances that, if ASI emerges, it will result in a beneficial outcome for humanity, democratically imbued with the most widely-shared positive human values.
We are leading an early open coalition of NGOs and states to advance a much more timely, effective, expert, and democratic treaty-making process for AI based on the open intergovernmental constituent assembly model - akin to that which led to the US federal constitution.
This new constituent assembly welcomes all states and citizens' representative entities, irrespective of their political orientation. Over a time frame as short as realistically possible - yet with extreme care - the coalition aims to build a strong, expert-led, federal, and democratic governance of AI, as was attempted in 1946 for nuclear technologies via the Baruch Plan, while avoiding its failure due to vetoes".
Experts' Calls for a Baruch Plan for AI
While calls by top AI scientists and CEOs for strong global and democratic governance of AI abound, several notable figures have referred to a Baruch Plan for AI directly and literally. Even more than the shock of Hiroshima, it was the fast emergence of a broad consensus among top scientists and political leaders on the immense challenge of international control of atomic energy - coalescing in the Acheson-Lilienthal Report - that led the US to propose the Baruch Plan to the UN. Such a consensus is already emerging for AI.
In 2014, Nick Bostrom referred to the Baruch Plan in his foundational book Superintelligence as a (positive) future scenario for the governance of AI. In 2018, Ian Hogarth, Chair of the UK AI Safety Institute, called for a Baruch Plan as a solution to AI governance. In 2021, the application of the Baruch Plan to AI was explored in depth and endorsed as an inspiration in a paper titled "International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons" by Waqar Zaidi and Allan Dafoe, president of the Centre for the Governance of AI at Oxford and current Head of Long-Term AI Strategy and Governance at DeepMind. In March 2023, the Trustless Computing Association extensively referred to the Baruch Plan as a critical inspiration in a LinkedIn post announcing its Harnessing AI Risk Initiative.
In May 2023, The Economist reported that Jack Clark, co-founder and global head of policy at Anthropic, suggested a Baruch Plan for AI. In December 2023, Jaan Tallinn suggested the Baruch Plan in an interview as a potential solution to the global governance of AI. On July 7th, 2024, Yoshua Bengio mentioned the Baruch Plan as an "important avenue to explore to avoid a globally catastrophic outcome."
Also, in March 2023, Sam Altman suggested that the intergovernmental constituent assembly process that led to the US Constitution in 1787 should be the "platonic ideal" of the treaty-making process we need to build a proper global governance for AI.
Strategy
Our strategy centers on expanding and deepening the Coalition by actively engaging states, NGOs, industry experts, and companies: first by widening participation in its guidance via a Secretariat of mission-aligned NGOs, and then, in parallel, by building an open coalition of states and AI labs.
This will be achieved through a series of events and summits in Geneva and other locations, beginning with the 1st Harnessing AI Risk Summit on December 5-6th, 2024. The first seven states and private AI labs to join the initiative will enjoy substantial yet temporary economic and political advantages and will agree on an initial version of the rules for the constituent assembly.
Considering the vast disparity in power between states, particularly in AI, and recognizing that three billion people are illiterate or lack internet access, we foresee that voting power in such an assembly will initially be weighted by population and GDP, as illustrated in the sketch below.
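As a purely illustrative sketch of such weighting, the snippet below computes per-state voting weights as a blend of each state's share of total population and of total GDP. The 50/50 blend and the sample figures are our own assumptions for illustration; the Initiative's documents do not specify an exact formula.

# Minimal illustrative sketch: voting power weighted by population and GDP.
# The 50/50 blend (pop_share) and the figures below are hypothetical; the
# actual weighting would be set by the constituent assembly's rules.

def voting_weights(states, pop_share=0.5):
    """Map each state to a voting weight in [0, 1]; weights sum to 1.

    states: dict of name -> (population, gdp).
    pop_share: fraction of the weight derived from population shares;
               the remainder comes from GDP shares.
    """
    total_pop = sum(pop for pop, _ in states.values())
    total_gdp = sum(gdp for _, gdp in states.values())
    return {
        name: pop_share * (pop / total_pop) + (1 - pop_share) * (gdp / total_gdp)
        for name, (pop, gdp) in states.items()
    }

# Hypothetical figures: population in millions, GDP in billions of dollars.
example = {"State A": (1400, 18000), "State B": (60, 2000), "State C": (5, 400)}
for state, weight in voting_weights(example).items():
    print(f"{state}: {weight:.1%}")  # State A: 91.9%, State B: 6.9%, State C: 1.2%

As the example shows, any such blend heavily favors large states; whether to add caps or an equal per-state component, as federal systems typically do, would be a design choice for the assembly's rules to settle.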
AI Superpowers' Veto?
While the participation of the US and China is crucial for achieving the initiative's AI safety goals, other states will have very strong incentives to join even beforehand.
By incorporating in the assembly's mandate the creation of a state-of-the-art public-private $15+ billion Global Public Benefit AI Lab - and a "mutually dependent" and eventually autonomous supply chain - participant states and AI labs will secure vast economic and political benefits.
They will gain cutting-edge industrial AI capabilities, digital sovereignty, political leadership, and an enhanced negotiating power vis-a-vis other less inclusive global governance initiatives. Participation will remain open to all states at all stages of the process, including during the mandatory, periodic statutory reviews of the treaty's charter.
Reconciling Freedom, Democracy, and Safety in Global Governance of AI
We believe that, rather than settling for a balance or trade-off among these values in a "zero-sum game", our initiative for a global constituent process for AI - and the resulting global organization - can maximize each of them concurrently, in a whole larger than the sum of its parts.
It can do so via: a) careful engineering of the constituent process, centered on the principles of federalism and subsidiarity; b) sophisticated, time-proven constitutional and constituent-process science; and c) battle-tested, democratic, and trustless computing paradigms for society-critical IT and socio-technical systems. Just as freedom, fairness, and safety were all concurrently increased when "Wild West" US states were turned in 1787 into liberal democratic states under the US federal constitution, the same can be achieved in the process of building federal global governance of AI in the "Wild West" that is today's world.
The Trustless Computing Association
The Trustless Computing Association (TCA), established in 2015 and headquartered in Geneva, is a non-profit organization that promotes the development of international standards and institutions for the secure and democratic governance of digital communications and Artificial Intelligence (AI). Our efforts span leading research, producing scholarly publications, organizing influential conference series, engaging with nation-state officials, and conducting focused campaigns.
Traction with NGOs and Experts: The Coalition
Since 2015, we have built and continue to extend a network of over 22 world-class, long-time R&D partners and 28 distinguished advisors. In April, we launched a Coalition for the Harnessing AI Risk around an Open Call joined by 7 NGOs and over 45 individuals (since converged into a new Open Call for a Baruch Plan for AI). In July, that coalition was merged into a new, more participatory Coalition for a Baruch Plan for AI, launched with 5 leading NGOs and centered on the main goals and methods of the Initiative. A wide and expanding number of world-renowned experts, distinguished individuals, diplomats, and NGOs have participated and will participate in its events, especially our virtual 1st Harnessing AI Risk Summit to be held on December 5-6th, 2024.
Traction with States and AI Firms
Over the last four months, we have held bilateral and multilateral meetings with several interested states, especially from Africa and Europe, including a number of ambassadors to the UN in Geneva. We have received formal interest from the mission of the largest regional intergovernmental organization to the UN. Since December, we have had ongoing highly preliminary discussions with three of the top five AI labs regarding their participation in the Global Public Benefit AI Lab, as many of them have called for something similar.
Goals and Impact
Within the next three years, through the Initiative and its Coalition, we aim to have contributed significantly to an open, efficient, expert, and democratic constituent process that will have led to the establishment of an open intergovernmental organization for AI, capable of avoiding the immense risks while realizing the extraordinary opportunities. By then, dozens of diverse states will have joined the constituent process and the resulting organization, and the AI superpowers will have joined or at least be actively and collaboratively engaged. The initiative's success would serve as a model for the global governance of other dangerous technologies.
Funding
While up until last year our association benefited from modest funding in various forms, including from its spin-off and from EU public agencies, our recent efforts have been dedicated to shaping the new initiative on a purely voluntary basis. Currently, our work is sustained by two committed full-time volunteer staff members, Rufo Guerreschi and Marta Jastrzębska, supported by contributions from 28 advisors and partners. To meet our strategic goals over the next three years, we require funding of $1.5 million, though we could effectively use up to $5 million to leverage and fully empower our Coalition and, directly, its members. Given our proven track record of efficiency and cost-effectiveness, securing $80,000 would allow us to cover essential operations, engage with key stakeholders and partners, secure the logistics for the 1st Harnessing AI Risk Summit on December 5-6th, 2024, and accomplish significant goals for 2024.
TCA Track Record (2015-2023)
Between 2015 and the launch of the Harnessing AI Risk Initiative in March 2023, TCA successfully advanced the Trustless Computing Certification Body (TCCB) and SeevikNet Initiative to create an open intergovernmental organization aimed at radically increasing the actual and globally perceived trustworthiness of the most critical IT and socio-technical systems - for digital diplomatic communications and for the control of critical infrastructure such as frontier AIs - based on the Trustless Computing Paradigms. We engaged extensively with over 15 states in bilateral and multilateral meetings and in conferences we organized; led 4 research initiatives with 32 industry experts and R&D partners; spun off TRUSTLESS.AI to build open technologies compliant with such certifications; and advanced the effort via 11 Free and Safe in Cyberspace conferences, held on three continents and three times in Geneva, with over 100 distinguished speakers and advisors participating. The TCCB, under the name of IT Security Agency, is one of the three agencies of the IGO foreseen by the Initiative, ensuring the trustworthiness of its "governance-support systems". Our work has been covered in Geneva's Le Temps and The Yuan.
Philanthropic Opportunities
We are seeking long-term philanthropic partners interested in accompanying us and the Coalition we are convening in this ambitious undertaking - by providing financial support, contributing strategic advice, or leveraging their networks.
Conclusion
Our Initiative and Coalition lead a strategic, timely, and expert effort to establish a democratic global governance model for AI and digital communications. Although ambitious, it directly confronts significant risks and aims to unlock substantial benefits for humanity, offering the potential for a profound positive impact for all.
Contact details:
Rufo Guerreschi, Founder and Executive Director
Email: rufo@trustlesscomputing.org — Tel: +393289376075
Marta Jastrzębska, Director of Fundraising and Partnerships
Email: marta@trustlesscomputing.org
Website: www.trustlesscomputing.org
More information
For full details and references, see the 80-page Harnessing AI Risk Proposal (v.4), published on ResearchGate on September 19th, 2024.
For the most up-to-date information, refer to the weekly-updated, live, unpublished version of that Proposal - which includes the above Fact Sheet and Executive Summary - by downloading the 90+ page Harnessing AI Risk Proposal Live Version PDF here.
Call to Action
If you agree with our Initiative, please review the opportunities to join the Coalition for a Baruch Plan for AI - the main vehicle for advancing the Harnessing AI Risk Initiative since September 10th, 2024 - as an NGO, expert or volunteer, state, or donor.