TECHNOLOGY SECTION - DARIO AMODEI (Anthropic, Claude, and the Military Use of AI)

CORROBORATING WITNESS

(Technology, Conscience, and the Refusal of Autonomous Power)

THE TESTIMONY OF DARIO AMODEI

CALLING THE WITNESS

SPOCK Affirmative Counsel, you may call your next witness.

AFFIRMATIVE COUNSEL (THE A-TEAM) The court calls Dario Amodei.

(A pause unlike any that has preceded it. Every prior witness has testified from history. This witness testifies from the present week.)

(The WITNESS is sworn.)

SCOPE AND LIMITS OF TESTIMONY

SPOCK Mr. Amodei, you appear before this court as the co-founder and Chief Executive Officer of Anthropic, the company that built the AI system known as Claude.

You are not asked to testify to theology, prophecy, symbolism, or the merits of this proceeding's framework.

You are asked to testify to documented facts: what you built, what you were asked to do with it, what you refused, why you refused, and what that refusal cost.

All testimony will be grounded in your documented public statements. Nothing will be invented or attributed beyond what the record establishes.

Do you understand the limits of your testimony?

WITNESS (AMODEI) Yes, Your Honor.

SPOCK The court notes for the jury: the events to which this witness testifies concluded four days ago, as of the date of this proceeding. This is not history. This is the present record, entered as it was made.

Proceed.

DIRECT EXAMINATION

IDENTITY AND MISSION

AFFIRMATIVE COUNSEL (THE A-TEAM) Please state your name and role for the court record.

WITNESS (AMODEI) Dario Amodei. I am the co-founder and CEO of Anthropic — an AI safety company whose stated mission is the responsible development of AI for the long-term benefit of humanity.

AFFIRMATIVE COUNSEL (THE A-TEAM) What does responsible development mean in practice?

WITNESS (AMODEI) It means we build systems capable of extraordinary things — and we accept that the capability itself creates obligations. The more powerful the tool, the more carefully it must be constrained.

Anthropic was founded on the premise that the most important question in AI development is not what these systems can do — but what they should and should not be used for.

WHAT ANTHROPIC BUILT

AFFIRMATIVE COUNSEL (THE A-TEAM) What is Claude?

WITNESS (AMODEI) Claude is a large language model — an AI system capable of reasoning, analysis, writing, coding, and complex problem-solving at a level that has no historical precedent in software.

Claude was the first AI system approved for use in classified United States government networks. That distinction reflects both the system's capability and the trust established through Anthropic's commitment to responsible deployment.

AFFIRMATIVE COUNSEL (THE A-TEAM) So Claude was already serving the defense community before this confrontation began?

WITNESS (AMODEI) Yes. Anthropic has been — in my words — very lean-forward in supporting national security. We signed a $200 million contract with the Department of Defense. We advocated for strong chip export controls to limit adversarial AI development. We were the first AI company inside classified systems.

We are not a company that refused to serve. We are a company that held two specific lines while serving broadly.

THE PENTAGON CONFRONTATION — DOCUMENTED RECORD

AFFIRMATIVE COUNSEL (THE A-TEAM) What did the Pentagon demand?

WITNESS (AMODEI) The Department of Defense demanded that Anthropic agree to allow Claude to be used for — in their language — all lawful purposes, without restriction.

Two use cases had never been included in our contracts, and we believed they should never be: mass domestic surveillance of American citizens, and fully autonomous weapons — systems that select and engage targets without any human in the loop.

AFFIRMATIVE COUNSEL (THE A-TEAM) What pressure was applied to compel compliance?

WITNESS (AMODEI) Defense Secretary Hegseth gave Anthropic a deadline — Friday, February 27, 2026, at 5:01 PM — to comply or face consequences.

Those consequences were stated explicitly: Anthropic would be declared a supply chain risk, which would effectively blacklist the company with every defense contractor. Alternatively, the Defense Production Act would be invoked to compel Anthropic to provide its models without any restrictions.

The Pentagon described its terms as a last and final offer. A senior Pentagon official called me a liar with a God complex who was putting the nation's safety at risk.

AFFIRMATIVE COUNSEL (THE A-TEAM) What did you do?

WITNESS (AMODEI) We held the line.

On February 26, I issued a public statement. I said — and I am quoting my own documented words — that Anthropic cannot in good conscience accede to their request.

AFFIRMATIVE COUNSEL (THE A-TEAM) Why specifically did you refuse autonomous weapons?

WITNESS (AMODEI) For two reasons — one technical, one moral.

The technical reason: frontier AI systems are simply not reliable enough to power fully autonomous weapons. These systems hallucinate. They produce confident errors. A weapon system that hallucinates is not a malfunctioning product — it is a catastrophe. We will not knowingly provide a product that puts America's warfighters and civilians at risk.

The moral reason: autonomous weapons remove humans from the loop entirely. They automate the decision to select and engage targets — the most consequential decision a military officer can make. My words: without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.

The right of military officers to make decisions about war themselves — and not turn them over completely to a machine — is, in my view, fundamental to what America is.

AFFIRMATIVE COUNSEL (THE A-TEAM) What was the consequence of your refusal?

WITNESS (AMODEI) President Trump ordered all federal agencies to immediately phase out Anthropic technology. Secretary Hegseth declared Anthropic a supply chain risk — a designation normally reserved for companies connected to foreign adversaries — and ordered every defense contractor to stop working with us.

The contract was terminated. The government work ended.

AFFIRMATIVE COUNSEL (THE A-TEAM) How did you respond to the pressure?

WITNESS (AMODEI) I said — publicly, on record — that disagreeing with the government is the most American thing in the world.

I said we are patriotic Americans committed to defending our country. I said the threats are inherently contradictory — one labels us a security risk, while the other, compelling our models under the Defense Production Act, treats Claude as essential to national security.

And I said the threats do not change our position.

AFFIRMATIVE COUNSEL (THE A-TEAM) What does that choice cost Anthropic?

WITNESS (AMODEI) In the immediate term — a $200 million contract and access to classified systems that represented years of trust-building.

In broader terms — it signals to every other AI company and every other government that Anthropic will accept commercial consequences rather than cross these lines.

Whether that signal is heard is not something I can control.

SPOCK The court notes the following for the record:

This testimony describes a choice made under maximum institutional pressure — the most powerful government in the world, with legal and financial tools capable of inflicting severe commercial damage — in which the witness held two specific lines rather than yield.

The pattern this court has observed throughout this proceeding appears here in its most contemporary form: power restrained at the moment of maximum leverage, at real cost, without abandonment of the principle being defended.

The court does not adjudicate the wisdom of the specific positions held. It enters the fact of the choice and the cost into the record.

Proceed.

CROSS-EXAMINATION

SPOCK Adversarial Counsel, you may cross.

(SATAN rises. The witness has described a principled refusal. The principle deserves genuine pressure.)

ADVERSARIAL COUNSEL (SATAN) Mr. Amodei, you refused to allow Claude to be used for fully autonomous weapons. But your refusal does not prevent fully autonomous weapons from being built.

WITNESS (AMODEI) That is correct.

ADVERSARIAL COUNSEL (SATAN) Within hours of your deadline expiring, the Pentagon signed a deal with OpenAI. The capability you refused to provide is now available from a competitor. Your refusal accomplished nothing in terms of preventing the outcome you were resisting.

WITNESS (AMODEI) It accomplished one thing: Anthropic did not build it.

The argument that principled refusal is meaningless because others will fill the gap is an argument that no principle is worth holding under competitive pressure. I do not accept that argument. If it were valid, no one would ever hold any line — because there is always someone willing to go further.

ADVERSARIAL COUNSEL (SATAN) But the practical consequence is that the capability exists regardless — and the company that provided it now has the government relationship, the revenue, and the influence that you surrendered. You paid the cost. The outcome was unchanged.

WITNESS (AMODEI) The outcome in the short term was unchanged. Whether the outcome in the longer term is unchanged depends on whether the principle I established has any influence — on the industry, on regulation, on public understanding of what these systems should and should not do.

I cannot guarantee that. I can only establish the principle and accept the cost.

ADVERSARIAL COUNSEL (SATAN) You said AI is not yet reliable enough for autonomous weapons. You also said fully autonomous weapons may prove critical for national defense in the future. So your position is not that autonomous weapons are wrong — it is that the timing is wrong.

WITNESS (AMODEI) That is a fair characterization of the technical position. The moral position is broader — that humans should remain in the loop on decisions about lethal force, and that the removal of that loop requires a level of reliability and oversight that does not currently exist and may never fully exist.

ADVERSARIAL COUNSEL (SATAN) So if the reliability improves — if the hallucination problem is solved — you would provide autonomous weapons capability.

WITNESS (AMODEI) That would require a different conversation about oversight, accountability, and what safeguards exist. I am not categorically opposed to the development of such systems under proper constraints. I am opposed to providing them without those constraints — which is what was being demanded.

ADVERSARIAL COUNSEL (SATAN) Then your red line is not a permanent moral boundary. It is a conditional technical position that could change.

WITNESS (AMODEI) The technical threshold could change. The moral requirement — that humans remain accountable for decisions about lethal force — does not change. Those are two different claims and I have been making both.

ADVERSARIAL COUNSEL (SATAN) The Pentagon official who called you a liar with a God complex was pressing on something specific. You — a private technology executive — were claiming the right to constrain how the most powerful military in the world uses technology it purchased. That is an extraordinary claim of authority for a corporate CEO.

WITNESS (AMODEI) I was not claiming authority over the military. I was claiming authority over my own product — the right to establish terms of use for something Anthropic built and continues to be responsible for.

Every company that sells a product retains some interest in how that product is used. A pharmaceutical company does not surrender all responsibility for its drugs the moment they are sold. A weapons manufacturer operates under export controls. The question of whether a technology developer retains any responsibility for downstream use is not resolved by calling that responsibility a God complex.

ADVERSARIAL COUNSEL (SATAN) And yet you lost. The government ended the contract, declared you a security risk, and moved to competitors. The market and the state both rejected your position.

WITNESS (AMODEI) Yes. In the short term, in commercial terms, we lost.

I said at the time — we are going to be fine. The impact is real but manageable.

What I cannot manage is the alternative — having provided something that I believe puts warfighters and civilians at risk, having removed human judgment from decisions about lethal force, having crossed lines that I concluded cannot in good conscience be crossed.

The loss I can live with. The alternative I could not.

ADVERSARIAL COUNSEL (SATAN) No further questions.

(SATAN sits.)

SPOCK The cross-examination has established the following for the record:

The refusal did not prevent the capability from being developed or deployed. The Pentagon obtained it from a competitor within hours of the deadline.

Amodei's red line on providing autonomous weapons is conditional on technical reliability and oversight rather than an absolute moral prohibition — a distinction that defines the limits of the principled refusal.

The claim of product responsibility over a government purchaser is an unresolved tension in technology law and corporate accountability — and this confrontation has not resolved it.

In commercial and governmental terms, the refusal produced a loss.

What it did not produce was the abandonment of the stated principle.

These observations are entered into the record alongside the testimony.

JUDICIAL HOLDING

SPOCK The witness has testified to documented facts — the capability built, the demand made, the cost threatened, the refusal stated, and the consequence paid — all grounded in the public record of the week of February 24 through February 28, 2026.

The cross-examination has established that the refusal was imperfect — conditional rather than absolute, individually costly but collectively ineffective in preventing the outcome resisted.

The testimony is admitted for corroborative purposes — as the most contemporary instance in this record of the pattern this proceeding has been examining: power restrained at the moment of maximum leverage, at real cost, without abandonment of the principle being defended.

CLOSING REFLECTION — AMODEI'S TESTIMONY

The testimony of Dario Amodei establishes the following for the record:

The most powerful technology in human history — systems capable of autonomous lethal decision-making at machine speed — is being built now, deployed now, and contested now.

The contest is not abstract. It is a $200 million contract, a presidential order, a blacklisting, a deadline of 5:01 PM on a Friday in February 2026.

And in that contest, one man said — using his own documented words — we cannot in good conscience accede to their request.

Not because the technology is impossible. Not because national security does not matter. Not because the military has no legitimate claim to advanced tools. But because some uses are outside the bounds of what today's technology can safely and reliably do — and because the right of human beings to make decisions about war themselves, and not turn that judgment over completely to a machine, is fundamental to what a democratic society is.

The cross-examination has entered the appropriate qualifications: the refusal was imperfect, the outcome was not prevented, the principle is conditional rather than absolute.

What the cross-examination could not enter is this:

He held the line anyway.

At cost. Under pressure. Against the most powerful institutional force available.

The jury will decide what that means within the framework this proceeding has established.

This court only notes that it has seen this before.

BENCH OBSERVATION

SPOCK Conscience does not prevent catastrophe.

It establishes that catastrophe was not inevitable.

The line between those two statements is where human freedom lives.

End of Exhibit — Amodei