Global Public Benefit AI Lab

The Global Public Benefit AI Lab is a planned open, public-private, democratically governed consortium among a critical mass of diverse states and AI labs, aimed at achieving and sustaining global leadership or co-leadership in human-controllable AI capabilities, technical alignment research, and AI safety measures.

At A Glance

  • The Lab will pool the capabilities and resources of member states and participating firms, redistributing dividends and control both to member states and directly to citizens, while stimulating and safeguarding the initiative of public and private firms in innovation and oversight.

  • The Lab is one of three agencies of a new intergovernmental organization (IGO) being built via the Harnessing AI Risk Initiative. The Initiative aims to catalyze a critical mass of globally diverse states in a global constituent process to create a new democratic IGO and a consortium that will jointly build the most capable safe AI technologies and reliably ban unsafe ones. Participation is open to all states and firms on an equal footing.

  • The Lab will aim to achieve and sustain a resilient “mutual dependency” within its broader supply chain relative to superpowers and future public-private consortia. This will be accomplished through joint investments, diplomacy, trade relations and the strategic industrial assets of participant states. Additionally, the Lab will remain open to merging with these entities into a unified global organization and a unified global AI lab, to ensure that AI remains under human and humanity's control.

  • The Lab will require an investment of at least $15 billion, primarily funded via project financing, buttressed by pre-licensing and pre-commercial procurement agreements from participating states and client firms.

Fact Sheet

Mission: To advance and maintain cutting-edge AI capabilities that are safe, controlled by humans, and aligned with the global public interest.

Governance: A democratically governed joint venture among member states and citizens, ensuring transparency, safety and democratic accountability.

Funding Requirement: Initial endowment of $15+ billion, primarily secured through project finance, with contributions from participating states and firms.

Revenue Sources: Revenue will be generated through licensing of backend services and intellectual property (IP), leasing infrastructure, direct services, and compliance certifications.

Inspiration and Models: The International Thermonuclear Experimental Reactor (ITER), CERN, the 1946 Baruch Plan, and other large multinational infrastructure projects.

Public-Private Model: An open, public-private partnership model involving states, AI labs, and other stakeholders.

Role of Participant AI Labs: AI labs contributing skills, workforce, and IP will be compensated through revenue shares and long-term procurement agreements. Participant labs will gain access to cutting-edge AI/AGI capabilities, infrastructure, and services developed by the Lab, promoting continuous innovation and market leadership.

Supply Chain Resilience: Securing critical resources through strategic investments and joint diplomatic efforts, ensuring a resilient supply chain for AI development.

Funding Strategy: The Lab's initial funding will follow a project finance model, leveraging sovereign and pension funds, sovereign private equity, and private international finance.

Investment Safeguards: Mechanisms such as non-voting shares will limit undue influence from private funding sources, ensuring that the Lab's governance remains transparent and accountable.

Revenue Model: The Lab will generate revenue through a diversified approach, including licensing, infrastructure leasing, direct services, and compliance certifications, ensuring financial sustainability and growth.

Early Participation Advantages: The first seven AI labs and states to join will enjoy significant economic and decision-making benefits, providing a strong incentive for early participation.

Talent Attraction: To attract and retain top AI talent, the Lab will offer competitive compensation, highlight the social importance of the mission, and ensure high security and confidentiality.

Open Source and Safety: A balanced approach to open-source software will be implemented, with stringent security reviews and a "translucent" oversight process, akin to that of parliamentary oversight committees for national intelligence, to manage the safety of critical AI components.

Current Status: The project is in its early stages, with ongoing engagements with states, AI labs, and high-level consultations with UN missions. Support has been secured from 32 world-class experts and advisors.

Upcoming Events: 1st Harnessing AI Risk Summit, scheduled for November in Geneva. Pre-Summit on June 12th, alongside the G7 Summit in Italy.

Investment Range: Seeking early pre-seed investments between $50,000 and $300,000. Seeking memoranda of understanding and pre-engagements for investments and funding of up to $2 billion.

Investor Benefits: Early investors will gain significant economic advantages and decision-making influence, participating in a globally impactful initiative with high potential for returns.

Precedents and Models

The initiative draws inspiration from the $20 billion International Thermonuclear Experimental Reactor (ITER), a multinational consortium focused on nuclear fusion energy. Unlike ITER, the Lab will work on current state-of-the-art generative AI technologies, which are already known, and expected, to yield massive exploitable capabilities and substantial economic benefits within a predictable timeframe.

Our initiative is also inspired by CERN, a joint venture established in 1954 by European states to advance nuclear research, which later welcomed participation from non-European countries. With an annual budget of roughly $1.2 billion, CERN serves as a model, along with other multinational infrastructure projects ranging from dam constructions to the International Space Station (ISS).

Yet, the most fitting inspiration for the Lab, and for the Initiative more broadly, is the 1946 Baruch Plan. This proposal by the United States to the United Nations called for a global multinational consortium and organization to centralize the research and development of nuclear weapons and nuclear energy generation, and it came close to achieving its ambitious goals.

Financial Viability and the Project Finance Model

The Lab will generate revenue from governments, firms and citizens via licensing of enabling back-end services and IP, leasing of infrastructure, direct services, and issuance of compliance certifications. 

Considering the proven scalability, added value and profit potential of current open-source LLM technologies, coupled with the potential for pre-commercial procurement contracts with states to support its financial viability, the initial funding for this project could predominantly adopt the project finance model. This funding could be sourced through sovereign and pension funds, intergovernmental sovereign funds, sovereign private equity and private international finance.

Undue influence on governance by private funding sources will be limited via various mechanisms, including non-voting shares.

A Public-Private Partnership Model

Participant AI labs will contribute their expertise, workforce and a portion of their intellectual property in a manner that advances their dual objectives: benefiting humanity and enhancing their stock valuations. Additionally, they will retain their ability to innovate at both the foundational and application levels, within established safety parameters.

Participant AI labs would join as innovation and go-to-market partners, within a consortium democratically managed by the participant states. This arrangement is open to all labs and states. Early participants will be granted considerable, but temporary, economic and decision-making advantages:

  • As innovation partners and IP providers, they will be compensated through revenue shares, secured by long-term pre-licensing and pre-commercial procurement agreements with participating states and firms.

  • As go-to-market partners, they will gain permanent access to the core AI/AGI capabilities, infrastructure, services and IP developed by the Lab. 

    • These capabilities and IP are intended to far outcompete all others in capability and safety, and to be unique in the actual and perceived trustworthiness of their safety, security and democratic accountability.

    • Participant AI Labs will maintain the freedom to innovate at both the base and application levels, and retain their ability to offer services to states, firms and consumers, within some limits.

The partnership terms for AI labs will be strategically designed to maximize the potential for consistent growth in their market valuations. This approach aims to attract the involvement of AI labs, including major technology firms, which are typically structured as US for-profit entities whose CEOs are mandated to prioritize maximizing shareholder value. The proposed structure ensures alignment with their governance models and incentivizes their participation.

This framework will enable such labs to continue to innovate in both capabilities and safety across the foundational and application layers. It is designed to steer clear of race-to-the-bottom scenarios among states and labs, thereby advancing both their mission and their market valuation in a structured and responsible manner.

Size of the Initial Funding

Given that the cost of state-of-the-art LLM "training runs" is expected to grow by 500-1000% annually, and that several leading US AI labs have announced billion-dollar LLM training runs for this year, with ten-billion-dollar runs likely next year, the Lab would require an initial investment of at least $15 billion. This substantial funding is essential for the Lab to effectively meet its capacity and safety goals, and to achieve financial independence within 3-4 years. To maintain its position at the forefront of technology, this amount will need to increase by 5-10 times annually, unless significantly more efficient and advanced AI architectures become available.
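Purely as an illustrative reading of these figures (assuming the stated 5-10x annual multiplier holds and no offsetting efficiency gains intervene, both of which are uncertain), the required capital would compound roughly as:

$$ C_t = C_0 \cdot g^{\,t}, \qquad C_0 = \$15\text{B}, \quad g \in [5,\,10] $$

so that the second-year requirement would already be on the order of $75-150 billion, underscoring why efficiency gains or a broad consortium of funders are central to the Lab's financial planning.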

Supply-Chain Viability and Control

Acquiring and sustaining access to the specialized AI chips essential for running leading-edge large language model (LLM) training runs is expected to be challenging, due to the anticipated surge in global demand and the implementation of export controls.

This risk can likely be mitigated through joint diplomatic dialogue that emphasizes the open and democratic principles of the initiative. Additionally, the initiative can further secure its position by engaging states that host firms developing advanced AI chip designs or rare or unique AI chip fabrication equipment, or by pursuing its own AI chip design and manufacturing capabilities. Investing in newer, safer and more powerful AI software and hardware architectures, beyond large language models, will also strengthen the initiative's technological foundation and resilience.

Ensuring that member states have access to adequate energy sources, appropriate data centers and resilient network architecture will require swift and coordinated action in the short term and careful planning for the long term.

Consequently, the Lab will seek to achieve and sustain a resilient “mutual dependency” in its wider supply chain vis-à-vis superpowers and future public-private consortia. This will be pursued through joint investments, diplomatic engagements, trade relations and the strategic industrial assets of participant states. Additionally, the Lab remains open to merging with these entities into a unified global organization and a single global AI lab dedicated to keeping AI under human and humanity's control, as detailed in our recent article on The Yuan.

Talent Attraction Feasibility

Achieving and maintaining a decisive advantage in advanced AI capability and safety relies on attracting and retaining top AI talent and experts. This is particularly crucial if, or until, AI superpowers and their firms become involved. Success in this area entails not only engaging individuals, but also securing the participation of leading AI labs.

Talent attraction in leading-edge AI is driven by compensation, social recognition and mission alignment, and it requires very high security and confidentiality.

Staff will be paid at their current global market rate, and the social importance of their work will be highlighted through proper communications. Member states will be mandated to support top-level recruitment and to enact laws that ensure that knowledge gained is not leaked. Additionally, staff selection and oversight procedures will surpass the rigor found in the most critical nuclear and bio-lab facilities.

The unique mission and democratic nature of the Lab would likely be perceived by most top global AI researchers, even in non-member states, as ethically superior to other projects. This perception mirrors how OpenAI originally, and Meta more recently, successfully attracted top talent through their commitment to an "open-source" ethos. This advantage is particularly significant given existing concerns regarding the trustworthiness of the leadership and governance structures of leading US AI labs.

Just as OpenAI attracted top talent from DeepMind thanks to a mission and approach perceived as superior, and top talent from OpenAI went on to create Anthropic for similar reasons, the Lab should be able to attract top talent as the next "most ethical" AI project. Substantial risks of authoritarian political shifts in some AI superpowers, as warned by Yoshua Bengio (1.5-minute video clip), could lead top talent to join the Global AI Lab to avoid their work becoming instrumental to a future national or global authoritarian regime.

Open Source, "Translucency" and Public Safety

The new organization will need to define its approach to the public availability of the source designs of critical AI technologies. Such availability can bring huge advantages and immense risks, depending on the circumstances, and needs to be carefully regulated; yet it is currently framed in the public debate, quite simplistically, as a binary “all open” or “all closed” choice.

A sensible approach to open source, we believe, is to require it for nearly all critical software and hardware stacks of an AI system or service, as a complement to extremely trustworthy and transparent socio-technical systems and procedures around them.

Yet open source alone is insufficient to ensure that the code is both sufficiently trustworthy and widely trusted by users, states and citizens. Therefore, all source code of critical components will also be required to undergo an extreme level of security review relative to its complexity, performed by a diverse set of incentive-aligned experts.

Also, none of the current open-source licenses (not even the GNU Affero GPLv3) requires those running open-source code on server infrastructure, such as an AI lab providing its service via apps, a web interface or an API, to publicly provide a (sufficiently trustworthy) proof that the code actually running on their servers matches the copy downloadable by end-users. This needs to be ensured.
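As a minimal illustrative sketch of the kind of check such a requirement would enable (the attestation endpoint, its JSON format and the file name below are hypothetical assumptions, not part of any existing license or standard), a client could compare the cryptographic digest of the source archive it downloaded against a digest the operator attests to be running:

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint where a service operator would publish the digest of the
# source code it claims to be running. In practice this self-reported value would
# need to be bound to reproducible builds and hardware-rooted remote attestation
# to be sufficiently trustworthy.
ATTESTATION_URL = "https://lab.example.org/.well-known/source-attestation"


def digest_of_archive(path: str) -> str:
    """Compute the SHA-256 digest of the source archive the end-user downloaded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def attested_digest(url: str = ATTESTATION_URL) -> str:
    """Fetch the digest the operator claims corresponds to the running service."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["source_sha256"]


if __name__ == "__main__":
    local = digest_of_archive("lab-backend-src.tar.gz")
    remote = attested_digest()
    print("MATCH" if local == remote else "MISMATCH")
```

Reproducible builds would also be needed so that the published source maps deterministically to the binaries actually deployed, otherwise a matching digest proves little.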

That said, there will be exceptions for components whose proliferation could cause very substantial safety risks, such as dangerously powerful LLM weights, which could proliferate not only through publication but also through hacking or leaks.

The trustworthiness of such components, as well as the safety of their public availability, should be managed via a very carefully designed "translucent oversight process", similar to national legislative committees tasked with reviewing highly classified information, but operating in an intergovernmental fashion and with much more resilient safeguards for procedural transparency, democratic accountability and abuse prevention.

This translucent oversight process will aim to enable effective and independent review of the source code by a selected and vetted set of expert state delegates and independent ethical researchers, in order to maximize actual and perceived trustworthiness among states and citizens.

These and other requirements are described in the Trustless Computing Paradigms.

Tackling the Superintelligence Option

The design of the governance of the Lab, and of the IGO it will be part of, should account for the fact that they may be forced to make choices that will be incredibly impactful for the future of humanity.

Consider that three leading US AI labs, OpenAI, Google DeepMind and Anthropic, have all repeatedly declared that they aim to build AI that surpasses human-level intelligence in all human tasks, without limits, realizing so-called AGI or superintelligence.

While recognizing the immense safety risks, these firms are pressing ahead in this pursuit anyway, because the ongoing race dynamics may be unstoppable, and because each claims that its specific technical approach is more likely than others' to ensure the technical alignment needed to prevent a "loss of control", or to produce an outcome more beneficial for humanity than similar initiatives.

While most of them publicly agree that the most positive scenarios are those retaining wide human control over AI, many AI scientists privately consider it possible or likely that a loss of human control over AI, a so-called AI takeover, could under some conditions be highly beneficial for humanity overall.

While this state of affairs is extremely unsettling, it must be assumed that it also reflects the intention of the US government, which has neither stopped nor even questioned such publicly declared plans. This is likely due to a perceived AI arms race with China, confidence in the behind-the-scenes safety guardianship of its national security agencies, or other motives.

Hence, in possible future scenarios, advances in frontier AI safety and alignment, or an increased risk of other entities releasing more dangerous superintelligences, may lead such an IGO to decide, after wide participation and deliberation, that it is overall most beneficial for humanity to substantially relax its "loss of control" safety requirements, or even to intentionally unleash a superintelligence.

Milestones and Achievements So Far

The Lab project is currently in its early stages of development, serving as a key component of the Harnessing AI Risk Initiative. States and leading AI labs are being engaged to become partners of the Lab, as well as of the broader Initiative, which also encompasses an international AI Safety Agency and an IT Security Agency.

Through our collaborative efforts, we have onboarded 32 world-class advisors to the Association and the Initiative. Seven of these experts, along with other leading experts whose contributions remain confidential, played a pivotal role in the initial formal development of the Lab, as part of the Harnessing AI Risk Proposal v.3, published in January 2024. In April 2024, we launched the Coalition with an Open Call for the Harnessing AI Risk Initiative.

In recent months, we conducted high-level consultations with the United Nations missions in Geneva of four states. These discussions included three heads of mission (ambassadors) and specialists in artificial intelligence and digital technologies. The participating states, located in Africa and South America, collectively represent a population of 120 million, have a gross domestic product (GDP) of $1.4 trillion, and manage sovereign wealth funds amounting to $130 billion. We are currently in the process of engaging with three additional delegations.

In April 2024, we received formal correspondence expressing interest from the Ambassador to the United Nations in Geneva representing one of the largest regional intergovernmental organizations, which includes dozens of member states. More recently, we garnered preliminary interest from two EU member states.

Since December 2023, we have initiated discussions with three of the top five AI labs regarding their participation in the Global Public Benefit AI Lab, along with an association of large Chinese cloud and AI providers.

Opportunities

We offer diverse opportunities related to the Global Public Benefit AI Lab and the Initiative for various stakeholders, including states, IGOs, donors, NGOs, experts, leading AI labs, and prospective early pre-seed investors and funders of the Lab.

The first seven AI labs and states to join as participants will receive significant, though temporary, economic advantages and decision-making privileges within the Initiative and the Global Public Benefit AI Lab, compared to those that join at a later stage.

Road Ahead

We plan to continue our engagement with states, leading AI labs, investors and funders, and to scale up our operations. Given the current phase of development, we are entering an early pre-seed stage of engagement with investors and funders.

A central component of this engagement is the 1st Harnessing AI Risk Summit, scheduled for this November in Geneva. The Summit has already attracted distinguished speakers, including prominent AI and global governance experts and diplomats. Preceding this event, a hybrid Harnessing AI Risk Pre-Summit will take place this June 12th, alongside the G7 Summit in Italy, featuring the participation of esteemed advisors and experts.

More Information

This text is also available as an Executive Summary of the Global Public Benefit AI Lab PDF, preceded by a 2-page Fact Sheet of the Lab.

For more information, please review: