General Call to Action
Are you, like us, deeply concerned that AI could lead to an immense concentration of power, or even loss of control? Are you, like us, excited about what AI could bring if we manage the risks?
By now, a wide majority of AI scientists and world citizens are as well, and increasingly so by the week, while mainstream media underestimate the risks and sow doubt about their timeline.
The challenge is monumental and the time to act is short. Many think that nothing can be done, while most others have no idea what can be done or how. Superpowers are locked in a competition for supremacy instead of working together to steer AI on a safe and beneficial course.
Hope comes from a decisive attempt in 1946 to tame another extremely destructive and productive technology: atomic energy. Grappling with the immensity of the proliferation risks, both the US and the Soviet Union advanced formal proposals to the UN for the creation of a new global democratic organization to strictly manage its risks and share its benefits, to be extended to all other and future dangerous technologies.
They failed to agree because five states held a veto in the UN Security Council. We have somehow managed to get by without catastrophe until today, but the nuclear risk is higher now than it has ever been.
We have a second chance. In the face of AI's immense risks and opportunities, even superpowers could once again be brought to consider highly ambitious global governance to manage it. But how do we avoid the same failure?
By pursuing a more effective and democratic treaty-making process, without any state holding a veto, we could bring a wide majority of states, and eventually the superpowers, to agree on such a treaty. A blueprint for such a process began with two US states convening the Annapolis Convention in 1786, which culminated in nine, and eventually all thirteen, states ratifying the US federal constitution. We should do the same globally, and for AI.
We are expanding a coalition of NGOs, leading experts, and states to foster such a prospect.
Join us or support us in the Harnessing AI Risk Initiative!
How You Can Help
Join our Coalition: Read and Sign our Open Call for the Harnessing AI Risk Initiative.
Sponsor: Sponsor our 1st Harnessing AI Risk Summit, this November in Geneva.
Donate: Joining as a donor partner, at any size, will directly support our efforts to build a safer future with AI. Even a small donation matters, enabling us to confirm and fix the date of our 1st Summit in Geneva this November.
Partner: Become a partner as a state, NGO, regional IGO, leading AI lab, or early pre-seed investor in the Lab.
Introduce Us: Personal recommendations to prospective donors or partners can open doors to valuable partnerships.
Speaker or Advisor: Apply to join our current and future boards, or to speak at our events, if you have fitting expertise.
Spread the Word: Share our mission and links with your network on LinkedIn, or sign up for our newsletter.
Learn More: We suggest you start from our 3-page Fact Sheet PDF or our Harnessing AI Risk Initiative webpage.