Trustless Computing Association

An International AI Development Authority in Geneva?

The immense risks and opportunities of AI may require us to pursue the creation of a very powerful global organization to manage them. We nearly created one for nuclear technology after Hiroshima, with the Baruch and Gromyko Plans, but the UN Security Council veto stood in the way. Perhaps a treaty-making approach for AI based on an intergovernmental constituent assembly could enable us to pursue such a plan while avoiding our failures of the nuclear era.


Back in 1946, as the world grappled with the looming threat of nuclear weapons, the United States proposed a bold solution to the United Nations: a new treaty organization that would have amounted to nothing less than a federal and democratic world government. 

In six dense pages, the Baruch Plan recommended that all potentially dangerous capabilities, arsenals, research, source materials, and facilities of nuclear weapons and energy anywhere on the globe fall under the strict control of a single entity: the International Atomic Development Authority. New nuclear energy plants would be located equitably around the world, built by national programs, public or private, but strictly licensed, controlled and overseen by it.

The authority would prevent any further development of more destructive and dangerous nuclear weapons, and would not pursue it. The authority's control would eventually extend to all existing and future weapons, including biological and chemical agents.

Modeled after the UN Security Council, the plan prescribed a governing body consisting of 11 states, including the five permanent UN veto-holders and six non-permanent members elected biennially by the General Assembly. Crucially, this body would operate without the power of veto.

Due to its virtual global "monopoly on violence" - the defining essence of any state - in addition to its monopoly on future core sources of energy, the plan would have created, in essence, a world federal government. While it lacked checks and balances and direct elections of a world parliament, and it mandated an outsized voting weight for a few states, its governance model was nonetheless largely democratic in global terms, albeit in a crude form by some national standards. Given that all nuclear energy activities and their control were decentralized to states and their private enterprises, it was also clearly federal in form.

Failure of the Baruch Plan and its Consequences

In response to the Baruch Plan, the Soviet Union presented a counterproposal, the Gromyko Plan, which advocated for the destruction of all nuclear weapons. However, securing approval from the newly established UN Security Council, which required unanimity among its five veto-wielding members, proved insurmountable. The failure to reach an agreement left national security agencies to fill the gap in one way or another via informal coordination, only partly complemented by the International Atomic Energy Agency from 1957 onward.

While we must express deep gratitude to those security agencies for their instrumental role in preventing a major nuclear catastrophe and preserving our existence, this international political failure had far-reaching negative consequences. It led to a nuclear arms race characterized by increasingly destructive, rapidly deployable and fragile weapons systems. It marked the beginning of the Cold War, which entailed numerous proxy conflicts and exacerbated global inequality. There have been numerous near misses of nuclear catastrophe, and according to nuclear scientists, the risks are higher today than they have ever been.

Regrettably, the failure of the Baruch Plan remains humanity's greatest missed opportunity to establish a world founded on peace, security, safety, justice, and democratic principles in international relations.

An International AI Development Authority?

Should we try to replicate the same model for AI by pursuing an "International AI Development Authority"? If so, what adjustments would be required? How would we avoid the failure of the Baruch Plan and secure a more democratic and resilient governance?

Advanced AI is extremely similar to nuclear technology in the nature and scale of its inherently global safety risks, as well as its economic benefits. Just as nuclear weapons became 3,000 times more destructive within six years of Hiroshima, AI capabilities and investments have been increasing consistently, five to ten times per year over the last seven years, with no slowdown in sight.

Unlike nuclear technology, AI is multiplying the ability of its architects and of other AIs to develop ever more powerful AIs, with a wide and fast-rising acknowledgement of a significant risk of losing human control altogether. It is widely expected that the proliferation of dangerous AI will be much more difficult to prevent than that of nuclear weapons.

To avoid the fate of the Baruch Plan, a treaty-making process for an International AI Development Authority could perhaps adopt a more effective and inclusive treaty-making model - that of the intergovernmental constituent assembly - which avoids giving any state a veto and better distills the will of a wide majority of states.

Learning from the US Constitutional Convention of 1787

The most successful historical example of such a treaty-making model is the one that led to the US federal constitution, whereby two US states convened three more at the Annapolis Convention in 1786. This constituent process culminated in the ratification of a new US federal constitution, first by nine and then by all thirteen US states, achieved by a simple majority after constituent assembly deliberations lasting over two months.

Surely, participation or approval by all five veto-holding members would be very important, even essential. But the approval of each of them should be the end goal, not the starting point, of the process. If it were the starting point, any attempt would be impossible, as happened to the Baruch Plan, the Russian Gromyko Plan counterproposal, and all UN reform proposals since 1945, which are likewise subject to the veto. Compared to 1946, the Security Council has, unfortunately, much-reduced importance and authority, as many of its members have violated the UN Charter over the decades. For this reason, working towards global safety and security in AI initially outside of its framework - in a way that is more inclusive of all states, while acknowledging a superior role for superpowers and veto-holding UN member states - could be more workable today for AI than it was in 1946 for nuclear technology.

In the case of AI, such a treaty-making model would need to be adapted to the huge disparities in power and AI capabilities among states, and take into consideration that some 3 billion people are illiterate or lack an internet connection. Hence, such an assembly would need to give more voting weight to richer, more populous, and more powerful states until the literacy and connectivity gaps are bridged within a fixed number of years. This would produce a power balance between more and less powerful states resembling that foreseen by the Baruch Plan.

With the UN missions of nearly all member states, numerous technical UN agencies, and a tradition as neutral ground for geopolitical negotiations, Geneva stands to benefit enormously from such a development, as hosting the new international agencies for AI would provide thousands of jobs.

Such is the plan and strategy of the Harnessing AI Risk Initiative and its emerging coalition, launched by the Geneva-based Trustless Computing Association, which the author of this text is honored to lead, and of the 1st Harnessing AI Risk Summit this November in Geneva.