Keynote of Rufo Guerreschi at the Harnessing AI Risk Pre-Summit

Find below the nearly verbatim transcript of the opening keynote given by Rufo Guerreschi at the Harnessing AI Risk Pre-Summit, held on June 12th, 2024, and also available as a video recording on YouTube:

——

Ladies and gentlemen,

Thank you for joining us today.

In this talk, I will make a case for the creation of a powerful new federal and democratic global organization to manage AI risks and opportunities, inspired by the one formally proposed by the United States to the United Nations for nuclear energy in 1946: the Baruch Plan.

I will then argue that we should rely on the treaty-making model of the open intergovernmental constituent assembly, to avoid repeating the failure of the Baruch Plan.

It has taken only 18 months since the launch of ChatGPT for a large majority of AI scientists, world leaders and citizens to become aware of AI's immense risks for human safety and for the concentration of power and wealth.

What's even more alarming is that a hyper-acceleration of AI capabilities and investments has shortened to just a few years the likely timelines for the materialization of those risks.

On the other hand, awareness of the potential upsides of AI - in terms of abundance, health, eradication of poverty - has also grown. But the potential of AI to radically increase happiness and reduce suffering is yet to be fully appreciated.

The stakes are incredibly high, and the urgency mounts with each passing week and month. We stand at a critical juncture, where our actions today will shape the future of humanity and AI. Only by coming together like never before do we stand a chance: by creating unprecedented global democratic coordination mechanisms.

Learning from history

The trajectory we are on is very similar to that of the emergence of atomic energy in 1945. It took less than one year from the Hiroshima bomb for the unthinkable to become formal diplomatic US policy.

The US proposed a bold solution to the UN: a new treaty organization with a global monopoly on nuclear energy and on all dangerous technologies and unconventional weapons. It prescribed that all dangerous capabilities, arsenals, research, source materials, and facilities worldwide should fall under the strict control of a new International Atomic Development Authority.

Facilities would be located equitably around the world and built by national programs, public or private, but strictly licensed, controlled and overseen by the Authority, whose remit would eventually extend to all existing and future weapons, including biological and chemical. It would prevent any further development of more destructive and dangerous nuclear weapons.

Its governance would mirror that of the UN Security Council - consisting of 11 states, including the five permanent UN veto-holders and six non-permanent members elected every two years by the General Assembly - but, crucially, this body would not be subject to the veto power of any state.

The Baruch Plan would have amounted to nothing less than a federal and democratic world government. Negotiations went on for one to two years, but the plan failed to survive the veto power of the five UN veto-holding members. Consequently, national security agencies were brought in to fill the void.

We have to be immensely thankful to those national security agencies that we are still alive and that nuclear conflict has, so far, been avoided. But there were many near misses, and the risks are higher today than they ever were. Not to mention the Cold War and the entrenchment of global inequalities.

Many think that unless there is some kind of AI catastrophe, world leaders will not take the needed action. But is that really the case?

The Baruch and Gromyko Plans came about not so much because of Hiroshima but because nuclear scientists, led in the US by Oppenheimer, foresaw a huge proliferation and increase in capabilities. And they were right. By 1951, both Russia and the US were already making bombs 3,000 times more destructive than the Hiroshima bomb.

The exact same scenario is playing out for AI: the same fast-emerging risks and the same fast-emerging awareness of those risks. But no one has yet convincingly made the case for a “Baruch Plan for AI”, nor suggested a treaty-making method that could succeed where the Baruch Plan failed.

China and US initiatives for AI governance

What the US and Russia were back then, the US and China are today: locked in an accelerating arms race for military and economic dominance, this time around AI.

To date, their relations are still dominated by increasing confrontation, mistrust and disrespect, rather than by cooperation in the face of an enormous shared threat. The US president called the Chinese president a "dictator" in a press conference just after having spoken with him for six hours. A reckless winner-take-all AI race is under way, pursued via aggressive export-control laws and industrial decoupling in the deep-tech and AI sectors. This is raising tensions and a propaganda war around Taiwan, where 80-90% of AI chips are made today and will continue to be made in the coming years.

The Council of Europe Treaty on AI adopted by 57 states, the AI Summits in the UK and Korea, and the Guidelines for Secure AI Development by the NSA and GCHQ amount to direly needed technical dialogue on understanding and interoperability among US allies. Yet, overall, they point to a clear strategy by the US to entrench its dominance while co-opting "liberal democracies" and a few oil-rich autocracies into a highly secondary role, with a special role for a few allied states like the UK, France and Israel.

Meanwhile, China stated in its Global AI Governance Initiative - presented to the 150 member states of its Belt and Road Initiative - that it wants to create “a new international UN institution to govern AI”. It states it will “ensure equal rights, equal opportunities, and equal rules for all countries in AI development and governance.”

Such rhetoric is very encouraging - similar to that of some leading US AI labs. But with every month that goes by without actions following those words, the doubt grows that the intentions may be merely hegemonic.

The dominant trajectory of treaty-making and global governance initiatives for AI amounts to the US and China pitted in a confrontation over AI dominance and hegemony through co-optation, paying lip service to truly international cooperation, coordination and equality.

The Moral High Ground and AI Proliferation

Both China and the US claim the moral high ground. The US is willing to turn the race for AI into a crusade for democracy and human rights, against what it considers the new evil empire of China. 

This approach cannot work for two main reasons.  

First, the stark moral and democratic superiority the US claims for itself is not shared by most citizens of the world. According to surveys, while most world citizens have favorable views of both the US and China, they believe the two have comparable weaknesses and merits as political models and in their foreign policy. Too many in the US equate a one-party democracy with dictatorship, while underestimating the huge internal and external democratic deficiencies of the US.

Half of US citizens believe the last presidential election was stolen, while the other half believes the former president attempted a coup. Inequality is sky-high, and relative poverty in the US is higher than in most other developed countries. Big tech and the security apparatus have undue influence over democratic institutions.

While China's digital surveillance of its population is often rightly highlighted as a risk of a dystopian future, Western mainstream media rarely remind us that the US has a similar undemocratic surveillance apparatus aimed at foreign citizens, including those of allied states.

Second, even if the US were starkly morally superior, the unprecedented risks of the unregulated proliferation of dangerous AI demand that each state compromise on its ethical principles and preferences. The proliferation of dangerous AI is widely expected to be much more difficult to prevent than that of nuclear weapons.

Unlike nuclear technology, AI is multiplying the ability of its architects, and of other AIs, to develop ever more powerful AIs, with a wide and fast-rising acknowledgement of a significant risk of losing any human control.

Just as nuclear weapons became 3,000 times more destructive in the six years after Hiroshima, AI capabilities and investments have consistently grown five to ten times per year over the last seven years, with no slowdown in sight.
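To put that growth claim in perspective - the arithmetic below is my illustration, not a figure from the talk - a factor of five to ten per year, compounded over seven years, implies a cumulative factor of

\[
5^{7} \approx 78{,}000 \quad\text{to}\quad 10^{7} = 10{,}000{,}000,
\]

far exceeding even the roughly 3,000-fold increase in nuclear destructiveness cited above.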

Avoiding the failure of the Baruch Plan: a better treaty-making model for AI

To avoid the fate of the Baruch Plan, a treaty-making process for an International AI Development Authority could rely on a more effective and inclusive treaty-making model - that of the open, intergovernmental constituent assembly - to avoid vetoes and better distill the democratic will of states and peoples.

Instead of relying on unstructured summits, unanimous declarations and vetoes, we should rely on the most successful and democratic treaty-making process in history.

That's the one that led to the US federal constitution. It was pioneered and led by two US states, which convened three more at the Annapolis Convention in 1786, setting off a constituent process that culminated in the ratification of a new US federal constitution by nine and then all thirteen US states, achieved by simple majority after constituent assembly deliberations of over two months.

We could and should do the same, globally and for AI.  

Surely, participation by all five UN veto-holding members would be very important, even essential. But the approval of each of them should be the end goal, not the starting point, of the process. Making it the starting point would doom any attempt, as happened to the Baruch Plan and to every UN reform proposal since 1945, all likewise subject to the veto.

Compared to 1946, the Security Council today has, unfortunately, much-reduced importance and authority, as many of its members have violated the UN Charter over the decades. For this reason, working towards global safety and security in AI initially outside of its framework could be more workable today than it was for nuclear energy in 1946.

In the case of AI, such a treaty-making model would need to be adapted to the huge disparities in power and AI capability among states, and to take into consideration that 3 billion citizens are illiterate or lack an internet connection.

Therefore, such an assembly would need to give more voting weight to richer, more populous and more powerful states until the literacy and connectivity gap is bridged, within a fixed number of years. This would produce a power balance between more and less powerful states resembling that foreseen by the Baruch Plan.
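As a purely illustrative sketch of how such transitional weighting could work - the blend of population, wealth and AI-capability shares, the equal-thirds mix, and the linear fifteen-year phase-out below are my assumptions, not part of any proposal in the talk - a state's voting weight might be computed as follows:

```python
from dataclasses import dataclass

@dataclass
class State:
    name: str
    population_share: float  # share of world population (0..1)
    gdp_share: float         # share of world GDP (0..1)
    ai_share: float          # share of world AI capability (0..1), however measured

def voting_weight(state: State, n_states: int, years_elapsed: float,
                  transition_years: float = 15.0) -> float:
    """Hypothetical transitional voting weight.

    Starts as an equal blend of population, wealth and AI-capability
    shares, then converges linearly to an equal per-state weight
    (1 / n_states) over `transition_years`, mirroring the idea that
    extra weight for powerful states phases out within a fixed number
    of years.
    """
    power_based = (state.population_share + state.gdp_share + state.ai_share) / 3
    equal = 1.0 / n_states
    t = min(years_elapsed / transition_years, 1.0)  # transition progress, capped at 1
    return (1 - t) * power_based + t * equal

# Example: a large AI power vs. a small state, among 100 member states.
big = State("big_ai_power", population_share=0.18, gdp_share=0.25, ai_share=0.40)
small = State("small_state", population_share=0.001, gdp_share=0.001, ai_share=0.0)
for year in (0, 7, 15):
    print(year, round(voting_weight(big, 100, year), 4),
          round(voting_weight(small, 100, year), 4))
```

Any real scheme would of course be negotiated among the participating states; the point is only that transitional weighting can be made explicit, bounded, and time-limited.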

The Harnessing AI Risk Initiative

As the Trustless Computing Association, we are facilitating such a scenario via the Harnessing AI Risk Initiative, by expanding a coalition - initially of NGOs and experts, and then of a critical mass of diverse states - to design and jump-start such a treaty-making process.

Through a series of summits and meetings in Geneva, we aim to soon arrive at an initial agreement among even as few as 7 globally-diverse states on the Mandate and Rules for the Election of an Open Transnational Constituent Assembly for AI and Digital Communications. All other states would then be invited to participate, with China and the US allowed to join only together.

While the AI safety goals require the participation of all or nearly all of the most powerful nations, massive economic and sovereignty benefits will be reserved for states and leading AI labs that join early on. In fact, alongside an international AI Safety Institute, the mandate of such an Assembly will include the creation of a democratically-governed public-private consortium for a Global Public Interest AI Lab, to develop and jointly exploit globally-leading AI capabilities.

In the long term, the costs of the treaty organization and its Global Lab are foreseen to exceed 15 billion dollars. The Lab will be financed via the project-finance model, by sovereign funds and private capital, buttressed by pre-licensing and pre-commercial procurement by participating states.

In the short term, funding will come from donations to the Coalition, early pre-seed investment in the Lab, and membership fees from participating states.

As proof that impactful treaties can be advanced successfully by a coalition of NGOs and smaller states, consider that the Coalition for the International Criminal Court was started by the World Federalist Movement and a small state, Trinidad and Tobago, setting off a process that gathered 1,600 NGOs and led to 124 signatory states.

In conclusion, the unprecedented risks and opportunities posed by AI require a skillful, urgent and coordinated global response.

By learning from historical examples and adapting them to the current context, we can create a framework for AI governance that ensures safety, fairness, and prosperity for all. 

I invite all stakeholders to join us in this crucial endeavor. Together, we can shape a future where AI serves humanity's greatest interests.

Thank you for your attention.

Rufo Guerreschi