Opportunities for Leading AI Labs

We offer opportunities to leading AI labs to join as innovation and go-to-market partners of the Global Public Benefit AI Lab and the related Harnessing AI Risk Initiative.

The Problem

It is increasingly challenging for even the most advanced and innovative AI labs to reconcile three essential goals:

  • (1) stay at the cutting edge of capability so as to remain relevant in the ongoing race for AGI

  • (2) prevent their technology (and that of others) from causing accidents, misuse, or loss of control

  • (3) ensure that control over the most powerful AGIs will be democratized.

It is increasingly challenging for them to remain independent and compete with a handful of US and Chinese Big Tech firms, to secure even a few market niches, to resist undue geopolitical pressure from their host countries, or to steer the reckless winner-take-all race for AGI onto a safe and beneficial course for humanity.

They can't solve this crucial conundrum alone. 

But they can join and co-lead an open coalition of states and leading AI firms in an initiative to create a Global Public Benefit AI Lab, along with an international version of the AI Safety Institutes being developed by various countries and their security agencies.

The Lab At A Glance

The Global Public Benefit AI Lab (or "Lab") is a planned open, public-private, democratically governed joint venture among an open consortium of a critical mass of diverse states and AI labs, aimed at achieving and sustaining solid global leadership or co-leadership in human-controllable AI capability, technical alignment research, and AI safety measures.

  • The Lab will pool the capabilities and resources of member states and firms, and distribute dividends and control to member states and directly to their citizens, while stimulating and safeguarding the initiative of state and private firms for innovation and oversight.

  • The Lab is one of three agencies of a new intergovernmental organization being built via the Harnessing AI Risk Initiative, which is gathering in Geneva a critical mass of globally diverse states to design and jump-start an open global constituent assembly and joint venture to build the most capable safe AI and reliably ban unsafe ones - open to all states and firms to join on equal terms.

  • The Lab will seek to achieve and sustain a resilient “mutual dependency” in its wider supply chain vis-a-vis superpowers and future public-private consortia - via joint investments, diplomacy, trade relations, and strategic industrial assets of participant states - while remaining open to merging with them into a single global organization and a single global AI lab to achieve humanity-controlled AI.

  • The Lab will cost $15+ billion and be primarily funded via project finance, buttressed by pre-licensing and pre-commercial procurement from participating states and client firms.

For more detail, see the Global Public Benefit AI Lab webpage (also available as a PDF), which includes information on:

  • Precedents and Models

  • Financial Viability and the Project Finance model

  • Public-Private Partnership Model

  • Size of the Initial Funding

  • Supply-Chain Viability and Control

  • Talent Attraction Feasibility

  • The Superintelligence Option

  • Milestones and Traction so Far

  • Road Ahead

Participant AI labs will contribute their skills, workforce, and part of their IP in such a way as to advance both their mission to benefit humanity and their stock valuations, while retaining their agency to innovate at the base and application levels, within safety bounds.

Participant AI labs will join as innovation and go-to-market partners in a joint venture or consortium controlled democratically by the participant states, and open to all labs and states to join on an equal basis - while reserving considerable temporary economic and decision-making advantages for early participants:

  • As innovation partners and IP providers, they will be compensated via revenue share, secured via long-term pre-licensing and pre-commercial procurement agreements from participating states and firms.

  • As go-to-market partners, they will gain permanent access to the core AI/AGI capabilities, infrastructure, services and IP developed by the Lab. 

    • These will almost certainly far outcompete all others in capabilities and safety, and be unique in the actual and perceived trustworthiness of their safety and accountability.

    • They will maintain the freedom to innovate at both the base and application layers, and retain their ability to offer their services to states, firms and consumers, within some limits.

Participant AI labs' partnership terms will be designed to maximize the chances of a steady increase in their market valuation, in order to attract the participation of AI labs - such as so-called Big Tech firms - that are governed by conventional US for-profit vehicles legally mandating their CEOs to maximize shareholder value.

This setup will enable such labs to continue to innovate in capabilities and safety at the base and application layers, but outside a “Wild West” race to the bottom among states and labs, advancing both their mission and their market valuation.

Early Bird Advantages

The first seven AI labs to join as participants will enjoy substantial economic advantages within the Initiative and the Global Public Benefit AI Lab relative to labs that join later. More specifically, with regard to all revenue shares, IP compensation, decision-making rights, fees, and co-investments that the Initiative and the Lab will require of comparable AI labs in the future (an illustrative example follows the list below):

  • The 1st and 2nd lab participants will receive a 45% premium

  • The 3rd and 4th lab participants will receive a 30% premium

  • The 5th and 6th lab participants will receive a 15% premium
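
For illustration only, assuming these premiums multiply whatever baseline terms later entrants receive (the baseline itself is not specified here): if later lab participants were entitled to, say, a 2% revenue share, the first two labs would receive 2% × 1.45 = 2.9%, the third and fourth 2% × 1.30 = 2.6%, and the fifth and sixth 2% × 1.15 = 2.3%.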

The Deal

We invite leading AI labs to:

  • Join the Initiative as an AI Lab Partner by executing a straightforward, non-binding Letter of Interest and contributing a small fee in proportion to their revenue to reserve their position as one of no more than seven partner labs.

  • If interested but not ready to commit, contact us to discuss possible collaboration:

partnerships@trustlesscomputing.org