When OpenAI quietly signed a classified deal with the U.S. Department of Defense, it wasn’t just another tech–government partnership. It was a live‑fire test of whether a company that has built its brand on AI safety and public‑interest rhetoric could navigate real power without tearing that story apart. In a single week, we saw the full cycle: an agreement announced, a backlash over surveillance and red lines, and Sam Altman forced to admit the deal had been “rushed” as OpenAI scrambled to tighten the terms. If you want to understand what AI‑era authority actually looks like, and how quickly it can be repriced, this is the boardroom version of a case study.
What happened (briefly)
- OpenAI signed a contract with the U.S. Department of Defense to deploy its models on a classified network, after Anthropic’s negotiations with the Pentagon broke down over red lines on domestic surveillance and autonomous weapons.
- OpenAI publicly framed the deal as containing strong safeguards: prohibitions on domestic mass surveillance, bans on using its tech to direct autonomous weapons, and requirements for human responsibility for use of force.
- Critics, including civil liberties groups, some employees and parts of the AI‑safety community, argued that the original contract language still left loopholes, especially around “commercially acquired” data and intelligence uses.
- After an intense weekend of backlash, Sam Altman posted an internal memo publicly and said OpenAI “shouldn’t have rushed” the deal, acknowledging it looked “opportunistic and careless.” He announced amendments: explicit language that OpenAI’s systems cannot be intentionally used for domestic surveillance of U.S. persons, including via commercially obtained personal data, and an assurance that the Pentagon will not use OpenAI tech for defense intelligence (e.g. the NSA) without a further contract change.
In short, OpenAI took a sensitive, high‑risk government contract, underestimated the trust implications, was hit by a legitimacy storm, then tightened the safeguards and admitted a tactical error.
How Altman handled it, through an “authority under AI” lens
Altman’s response is best understood as a live test of “authority under AI”: what a CEO does when the story, the deal and the scrutiny all collide.
The core questions for this moment are not abstract; they are brutally practical. Did he act like a leader whose judgement is predictable under pressure, or like part of a team that grabbed an opportunity and cleaned it up later? Did his handling of the backlash strengthen or weaken his standing as a CEO who actually owns the AI and governance questions, rather than outsourcing them to comms? And, for the governments, enterprises and partners now treating trust as a form of risk capital, did this episode make OpenAI look like a safer institution to be vulnerable to, or a more volatile one? All of this can be condensed into three key questions that assess the impact of this case on OpenAI’s authority:
- Did he behave predictably under stress, or did the system look opportunistic?
- Did his response increase or decrease his embodied authority as a subject‑matter visible CEO?
- What does this do to OpenAI’s position in a “trust recession” where high‑trust buyers are outsourcing risk?
Positive authority signals
Altman did a few things that, within my authority‑building trust framework, would count as authority‑enhancing:
- Rapid, public acknowledgement of error. He explicitly said OpenAI “shouldn’t have rushed” the agreement and that the initial move appeared “opportunistic and careless.” In high‑trust sectors, leaders almost never admit “we rushed this” in plain language; doing so is a visible act of intellectual humility. That aligns with the argument that authority requires detectable humility and a willingness to surface blind spots.
- Substantive contract changes, not just messaging. The amendments add concrete prohibitions: no intentional domestic surveillance of U.S. persons, explicit inclusion of commercially acquired personal data (location, browsing, financial data) in that ban, and a stipulation that defense intelligence agencies such as the NSA cannot use OpenAI’s tech under this deal without a new contract modification. That is classic “authority is alignment”: tightening the infrastructure to match the prior narrative about red lines.
- Making the safety logic legible. OpenAI’s posts and the memo describe a layered safety approach (technical constraints, contract clauses, cloud‑only deployment, OpenAI staff in the loop) and restate “no domestic mass surveillance” and “human responsibility for use of force” as non‑negotiable principles. That is exactly the kind of epistemic transparency I have argued is necessary in AI‑heavy, high‑risk contexts.
On those dimensions, he behaved like a subject‑matter visible CEO rather than a brand avatar: owning the decision, explaining constraints, and changing the underlying deal.
Negative authority signals
But there are also real authority costs baked into how this unfolded:
- Initial misalignment between narrative and action. For months, OpenAI had publicly stressed “red lines” very similar to Anthropic’s – especially around domestic mass surveillance and autonomous weapons. The first version of the Pentagon contract, by multiple independent reads, left enough ambiguity that critics saw a gap between rhetoric and legal reality. That’s a hit to structural coherence: reputation outpacing internal governance.
- Perception of opportunism versus principle. The optics are rough: Anthropic holds a hard line and gets threatened with being treated as a “supply‑chain risk”; OpenAI steps in, accepts “all lawful purposes” with some safeguards, and secures the deal just as Trump orders agencies to drop Anthropic tools. Only after public pressure do they harden the surveillance language. Even with the later correction, that sequencing makes OpenAI look more flexible on ethics under competitive pressure than their branding suggests.
- Rushed decision in a high‑trust domain. Altman’s own admission that they moved quickly “to de‑escalate” but ended up looking careless is telling. High‑trust buyers care less about the speed of announcement and more about the quality of the ex ante risk calculus. Moving fast on a classified military deal, then tightening fundamentals post‑hoc, feels like the inverse of “proportional response under stress.”
These factors feed directly into the heuristics trust‑sensitive buyers apply: consistency over time, institutional alignment, and whether authority is marketing‑led or governance‑led.

[Photo caption: Sam Altman, chief executive officer of OpenAI Inc., during a media tour of a Stargate AI data centre in Abilene, Texas, U.S., on Sept. 23, 2025. Bloomberg | Getty Images]
Net effect: Does this increase or decrease his authority?
In the short term, Altman’s authority takes a hit among the most trust‑sensitive audiences (AI‑safety community, civil liberties advocates, some employees, parts of the EU policy crowd). The key “trust recession” metrics here are consistency and alignment, and the story for those groups is: OpenAI said the same red lines as Anthropic, took the deal Anthropic walked away from, then fixed it only once criticised.
However, two features of his response also strengthen his authority profile, especially with institutional and governmental stakeholders:
- He demonstrated visible ownership under fire – naming his own misjudgement, publishing an internal memo externally, and renegotiating terms instead of stonewalling. That is what “subject‑matter visible leadership” should look like in an AI‑politics crossfire.
- He used the moment to clarify OpenAI’s safety governance in a concrete way, creating a more auditable set of commitments around surveillance and intel use than most peers currently have in writing. For regulators and high‑trust B2B buyers, that sort of legible constraint is often more persuasive than abstract virtue language.
So, his perceived authority level is likely to depend on who you ask.
- Among principled‑sceptic communities, his authority likely decreases in the near term, because the episode validates a heuristic that OpenAI will bend more than it claims when power is on the line.
- Among pragmatic state and enterprise actors, his authority likely increases, because he has shown he can land a complex deal, respond to criticism with substantive adjustments, and still keep the relationship with the Pentagon intact. That maps closely to the idea of authority as “being predictable under stress” rather than never erring.
From my perspective, he failed the “don’t let positioning outrun structure” test in the first move, then passed the “coherent, evidence‑led correction under pressure” test in the second. His long‑term authority will depend on whether this becomes a one‑off wobble followed by consistent behaviour, or the first in a pattern where OpenAI repeatedly nudges its own red lines and backfills the ethics after the fact.
In a trust recession, authority is cumulative and path‑dependent. This episode dents the purity of Altman’s “safety‑first” image, but the way he handled the amendment, if followed by tighter governance and fewer rushed deals, can still strengthen his standing as a CEO who is both subject‑matter visible and willing to let his decisions be updated in public.
References
- OpenAI. (2026, February 27). Our agreement with the Department of War.
- Reuters. (2026, March 2). OpenAI amending deal with Pentagon, CEO Altman says.
- Reuters. (2026, February 28). OpenAI details layered protections in US defense department pact.
- Axios. (2026, March 2). OpenAI, Pentagon add more surveillance protections to AI deal.
- BBC News. (2026, March 3). OpenAI changes deal with US military after backlash.
- CNBC. (2026, March 2). OpenAI’s Sam Altman admits ‘rushed’ deal with Defense Department after backlash.
- NBC News. (2026, March 3). OpenAI alters deal with Pentagon as critics sound alarm over surveillance.
- Fortune. (2026, March 3). Sam Altman says OpenAI renegotiating ‘opportunistic and careless’ Pentagon deal.
- Politico. (2026, February 28). OpenAI announces new deal with Pentagon — including ethical safeguards.
- OPB. (2026, February 26). OpenAI says it shares Anthropic’s ‘red lines’ over military AI use.
- Business Insider. (2026, February 26). Sam Altman navigates Anthropic’s Pentagon fight as OpenAI courts the Pentagon.
- Forbes. (2026, March 3). OpenAI blurs its mass surveillance red line with new Pentagon deal.

