Trustless Computing Association

OpenAI's disbandment of its AI alignment team and prospects of human-controllable and humanity-controlled AI

Recent news of resignations from OpenAI's AGI alignment and "superalignment" team and its subsequent disbanding - along with Altman's failure to follow up on his pledges to fully democratize the control of OpenAI and of AI in general - should be immensely concerning for all of us.

It is a clear sign that a reckless race to superintelligence may be reaching a point of no return. Time is running short to establish the strong and democratic global governance that alone has a chance of turning AI into humanity's best invention rather than its worst.

These recent developments at OpenAI, the current leader in the (private) AI race, are a glaring sign that we may well be entering an unstoppable phase of the winner-take-all race towards AGI and superintelligence among a few states and labs - a race that began with the release of ChatGPT in November 2022.

Assuming Altman and OpenAI were sincere about their pledges, all signs suggest that they have grown deeply skeptical of solving either the technical half of the AI/AGI/superintelligence problem - ensuring we can make an AI that is both very powerful and human-controllable - or its governance half - ensuring that all AIs will be both human-controllable and humanity-controlled.

While Altman does not rule out that they may be "close" to a level of capability that could lead to loss of control and AI takeover, he and OpenAI seem overwhelmingly focused on charging head-on towards achieving superintelligence before others do, in an attempt to resiliently instill their own values, visions and safeguards - which they understandably perceive as superior - in the AI that will come to dominate.

While the heads of all top US AI labs except Meta called strongly last year for robust global governance to rein in the winner-take-all race to the bottom and ensure safety safeguards, there are many signs that they are increasingly embracing the same skepticism and frame of mind. This is very understandable, given the very weak response that states and superpowers have given to those calls so far. It is furthermore concerning that OpenAI had clauses in its employees' contracts - evidently legal in California and likely used by other AI labs - under which departing employees would lose their earned equity if they ever said anything negative about the company afterward.

At this incredibly consequential juncture in human history, you would imagine that the national security agencies of the superpowers and AI superpowers would step into this immense gamble, as they did for nuclear technology in 1946, but they have not yet. Nor have their governments engaged in global coordination negotiations of the scope that the US and Soviet Union undertook when nuclear energy posed similar prospects, decisively and formally tabling proposals like the Baruch Plan, which was largely based on the work of Robert Oppenheimer.

Considering that US and Chinese security agencies have access to systems at least as advanced as the private sector's, there are three possible explanations. Either they think they can surreptitiously control the current global anarchy in AI development and use - as they did, with mixed success, for nuclear weapons, bioweapons and encryption - likely relying on their own superior AI systems and their immense global surveillance apparatus. Or they are in the same frame of mind as Altman. Or they are advising their political leaders to act, but their requests have so far gone unheard. Or some mix of those three.

We have very little time to try to enact a sort of "Baruch Plan for AI." The original plan, tabled in 1946, prescribed the creation of a very powerful new treaty organization with sole, worldwide and strict control over all potentially dangerous capabilities, arsenals, research and source materials for nuclear weapons and nuclear energy. Facilities would be located equitably worldwide and developed by public or private programs, but strictly licensed, controlled and overseen by that organization. All of this was to be governed by the UN Security Council governance model, but without the veto. Yet the plan, and the Soviet Union's counter-proposal, failed to produce an agreement that could pass the veto of all five permanent members of the UN Security Council.

We have a second chance to get it right for AI and, if we do, to extend the resulting governance framework to other dangerous technologies and global challenges. If successful, such global federal institutions could be immensely positive for humanity for generations to come.

Perhaps a key to making this happen is to attempt a treaty-making process for AI that sidesteps the veto of the five veto holders of the UN Security Council - especially China, the US and Russia. Such a process would be timely and expert-led, but also truly multilateral, democratic and equitable, so that a veto by one or a few powerful states is confronted with much stronger soft and hard incentives to align with a wide and fair global solution for AI.

This is the vision of the Harnessing AI Risk Initiative, led by the author of this text, with its upcoming 1st Summit in Geneva this November, its Pre-Summit at the G7 in June, its emerging Coalition and its Open Call. If you believe we may be on to something, there are opportunities to get involved for NGOs, states, IGOs, AI labs, funding entities and donors. Join our movement!