GLOBAL AI BILL OF RIGHTS
Foundational Rights and System Requirements for Sovereign and Public-Interest AI
Positioning Statement
A framework for rights-aligned AI grounded in system design and institutional responsibility.
The Global AI Bill of Rights (GABR) defines the rights, design principles, and system requirements that sovereign and public-interest AI platforms must satisfy to serve people, institutions, and democratic societies responsibly.
This work is presented as an open, research-driven framework intended to inform policy, institutional design, and future governance efforts, rather than as an adopted standard or regulatory instrument.
It establishes what rights AI systems must uphold and what technical, institutional, and physical structures must exist to make those rights enforceable in practice. The long-term funding and durability of these structures are addressed separately through Sovereign AI Finance.
Why This Work Exists
Artificial intelligence increasingly mediates access to justice, public services, economic opportunity, security, and political power. Yet many AI systems are developed and deployed without the institutional, technical, and infrastructural foundations required to uphold rights at scale within different national contexts.
As a result, harms are often structural rather than intentional—arising from opacity, non-representative data, weak accountability, misaligned incentives, and infrastructure decisions made without public oversight. These harms disproportionately affect nations, regions, populations, and communities that are already marginalized or underrepresented.
The Global AI Bill of Rights was created to address this gap by translating rights-based AI governance into clear system-level requirements that can guide the design of sovereign AI platforms, public AI infrastructure, and high-impact institutional deployments.
What This Site Does — and Does Not Do
This framework is intentionally iterative and expected to evolve through research, policy engagement, and institutional learning over time.
This site does:
- Define foundational rights for national AI systems
- Specify the system requirements needed to uphold those rights
- Establish a normative and structural baseline for sovereign and public-interest AI
This site does not:
- Propose regulation or enforcement mechanisms
- Define financing instruments or capital vehicles
- Prescribe implementation or deployment strategies
Those questions are addressed elsewhere by design.
From Rights to Capacity
Rights are meaningful only when institutions are capable of sustaining them over time.
The capital formation, governance continuity, and long-term durability required to operationalize these principles are addressed through Sovereign AI Finance.

