Preamble: What Just Happened
On 27 February 2026, the President of the United States ordered all federal agencies to cease doing business with Anthropic — one of the world's leading artificial intelligence companies. The Pentagon simultaneously designated Anthropic a "supply chain risk," a classification normally reserved for suspected agents of foreign adversaries.
What triggered the standoff? Anthropic refused to remove two safety guardrails from its AI model, Claude: a prohibition on powering fully autonomous weapons systems — machines that can select and engage targets without meaningful human control — and a prohibition on enabling mass domestic surveillance of citizens.
These are not fringe positions. They reflect a widely held view among AI safety and security experts: today’s systems are powerful, but still fallible, and must not be treated as reliable decision-makers in lethal or rights-eroding contexts. In many democratic jurisdictions, the underlying principles — oversight, accountability, proportionality, and civil-rights protections — are already embedded in law and procurement rules.
The Facts of the Standoff
- Anthropic held a Pentagon contract worth up to $200 million for Claude's deployment on classified military networks
- The Pentagon demanded Claude be available for "all lawful purposes" with no company-set restrictions
- Anthropic refused, citing the unreliability of today's AI for autonomous targeting, and the civil rights implications of mass surveillance
- The U.S. administration ordered federal agencies to cut ties; the Pentagon added a "supply chain risk" blacklist designation
- Anthropic announced it would challenge any supply chain risk designation in court, calling it "legally unsound" and a dangerous precedent
- OpenAI subsequently struck a deal with the Pentagon that reportedly included safeguards and limitations on certain high-risk uses
This is not a story about one company's contract. This is a story about whether democratic societies will allow principled limits to be placed on the most powerful technology ever built — or whether those who control the largest military budgets will decide those limits alone.
I. The Principle at Stake
For decades, democratic societies have accepted that weapons manufacturers, chemical companies, and pharmaceutical firms operate under legally enforceable ethical constraints. A pharmaceutical company cannot sell a drug it knows to be unsafe. A weapons manufacturer must comply with international humanitarian law. These are not ideological impositions; they are the civilisational achievement of taming power with principle.
Anthropic's position was precise and reasonable: today's AI models are not reliable enough to make lethal targeting decisions without meaningful human oversight. This is a widely held view among AI safety and security researchers. It is not pacifism. It is engineering honesty. And it is the kind of honesty that should be protected, not punished.
By blacklisting a company for maintaining safety standards, the U.S. administration has sent a message to every AI lab in the world: safety principles can become a commercial liability. The market will hear this message. The consequences will be felt in every laboratory, every boardroom, every corner of the technology industry where someone is deciding whether to raise a safety concern or stay quiet.
II. Why This Is Europe's Moment
The European Union spent years developing the AI Act, the world's first comprehensive legal framework for artificial intelligence. It prohibits applications that pose unacceptable risk. It requires human oversight of consequential automated systems. It restricts mass biometric surveillance of citizens. It is, in its essentials, a legislative expression of exactly what Anthropic was punished for defending.
This is not a coincidence. It is an alignment of values.
The European Union has long spoken of "technological sovereignty" — the desire to develop and host world-leading AI capacity on its own soil, under its own democratic norms. For years this aspiration has struggled against the gravitational pull of Silicon Valley capital and American regulatory permissiveness.
That gravitational field just shifted. A world-leading AI safety laboratory — one with deep research talent, an established global customer base, and a demonstrated commitment to the principles Europe has encoded in law — has been effectively expelled from the US government ecosystem. Its future is suddenly, genuinely open.
Europe should make an offer. Not merely a financial incentive package — though that matters — but a civilisational offer: a guarantee that a company which holds the line on autonomous weapons and mass surveillance will be welcomed, protected, and supported as a strategic asset, not punished as an inconvenient contractor.
III. What We Are Calling For
A formal European invitation
The European Commission and member state governments should issue a public, coordinated invitation to Anthropic — and to any AI company committed to equivalent safety standards — to establish primary European operations. This invitation should be backed by clear regulatory certainty, not bureaucratic ambiguity.
A dual-home structure with EU-anchored governance
Anthropic should be able to keep global operations while placing model release control and safety governance inside a European entity: release gating for model versions, enforceable restrictions for high-risk deployments, and a Safety & Democracy Committee with binding veto power over mass surveillance and fully autonomous lethal use.
AI safety as a European competitive advantage
Europe must reframe safety not as a regulatory burden but as the source of global competitive trust. Governments and enterprises worldwide will pay a premium for AI they can trust. Europe can become the jurisdiction where that trust is manufactured and certified.
Strategic investment and procurement
European public institutions — defence, health, justice, and public administration — should accelerate procurement of safety-certified AI systems. This creates a sovereign market that rewards rather than punishes principled design choices, replacing the lost US government contracts with European ones.
What Europe can procure immediately (high-trust, non-lethal)
- Cyber defence: security operations centre (SOC) triage, incident-response copilots, phishing and malware analysis support
- Critical infrastructure resilience: emergency planning, dependency mapping, scenario simulation support
- Public service productivity: secure drafting, translation, casework support with audit logs and access controls
- Regulated compliance: policy analysis support, procurement and legal drafting assistance, controlled document workflows
Procurement should be tied to governance: EU-based release control, binding veto over red-line deployments, independent oversight, and published safety evaluation summaries.
An international coalition of the principled
Together with partners in Canada, the United Kingdom, Japan, Australia, and beyond, Europe should convene a coalition to negotiate a binding multilateral framework: AI systems used in weapons or surveillance must maintain human-in-the-loop accountability. This cannot be left to individual companies to fight alone.
A fast and visible welcome for talent
European immigration policy should be immediately and visibly reformed to offer fast-track residency and research visa pathways to AI safety researchers, engineers, and ethicists — wherever they currently work, wherever they currently live.
IV. The Longer Stakes
We are at the beginning of the AI age, not the middle of it. The systems that exist today are primitive compared to what will exist in ten years. The habits, norms, laws, and institutional arrangements we establish now — about who controls AI, under what constraints, accountable to whom — will shape the character of that more powerful future.
If we establish now that safety principles are commercial liabilities, we will build a future in which AI systems have no meaningful ethical constraints. Not because engineers don't care. Not because companies don't care. But because the market, the political pressure, and the regulatory vacuum will have made caring too expensive.
If we establish now that democratic jurisdictions will protect and reward those who hold the line, we create a different future: one in which safety is built into the competitive logic of the industry, not constantly fighting against it.
Anthropic chose to refuse. Europe should choose to answer.
Sources: Reporting and primary statements
This manifesto summarizes publicly reported information and primary statements. Links are provided for reader verification; the argument of the manifesto stands independently of any single outlet.
Conclusion: A Question of Character
This manifesto is not an anti-American document. It does not celebrate what has happened to Anthropic. It is addressed to those in Europe — in governments, in institutions, in civil society — who believe that the values encoded in the EU's founding treaties are not merely words: human dignity, democratic accountability, the rule of law over the exercise of power.
Those values are now in direct competition with a different vision: that the most powerful technology ever built should be available to the most powerful military on earth, with no limits other than those it sets for itself.
Europe has a choice. It can watch this contest from the sidelines, offering commentary and concern. Or it can act — extending a genuine, concrete, well-resourced welcome to the people and institutions trying to build AI that is worthy of human trust.
The safe harbour exists, if Europe chooses to build it. The moment to begin is now.
Sign This Manifesto
If you believe responsible AI deserves a home in Europe, add your name and help carry this manifesto into the public debate.
Not affiliated with Anthropic or any political party.