March 2, 2026 · 6 min read

The CEO of the Company Behind Claude Just Told the Pentagon “No.”


Dario Amodei is the CEO of Anthropic, the company behind Claude, the AI model I use every single day to build the tools and analysis that power this newsletter.

Last night, he appeared on 60 Minutes, warned about AI-driven job displacement, and stood his ground against the federal government.

For investors, the lessons are not about politics. They are about positioning. Let me explain.


Last Friday, hours after Defense Secretary Pete Hegseth declared Anthropic a supply chain risk to national security, Amodei sat down with CBS News and did something unusual for a tech CEO in a crisis: he explained his reasoning clearly, without hedging, and without backing down.

The dispute centers on two restrictions Anthropic wants written into its military contracts.

  1. No domestic mass surveillance of American citizens.

  2. No fully autonomous weapons that fire without human oversight.

The Pentagon wanted unrestricted access for all lawful purposes. Anthropic said no. The White House responded by ordering every federal agency to stop using Anthropic’s technology.

Within hours, OpenAI stepped in and signed a deal with the Pentagon that its CEO Sam Altman said included the same guardrails Anthropic had been asking for. Anthropic now faces a six-month wind-down from classified government networks where it had been the first AI lab deployed.

There is a lot to unpack here. But for readers of this newsletter, the question is not who was right or wrong in a political standoff.

The question is: what does this tell us about where AI is heading, and how should we be positioning around it?


What the Interview Actually Revealed

Beyond the Pentagon drama, Amodei made several statements during the broader 60 Minutes segment that deserve attention from anyone allocating capital in or around technology.

  1. He said AI could displace up to 50% of entry-level white-collar jobs within five years. Not manufacturing jobs. Not blue-collar work. Consulting, law, and financial services. The analysts who build models. The associates who compile reports. The junior PMs who run screens and prepare memos. If you work in finance or invest alongside people who do, this is not abstract.

  2. He acknowledged that Anthropic’s own internal operations reflect this trajectory. The company’s AI writes approximately 90% of its code. The tool Amodei is warning you about is the same one building his company.

  3. He framed the timeline as something people can still prepare for (though the coverage largely glossed over this point).

His phrasing was telling: “You can steer the train ten degrees in a different direction.” Not stop it. Steer it.

That framing matters. Because the difference between displacement and opportunity is almost always a question of timing and preparation, not talent.



The Governance Signal Investors Should Not Ignore

Let me set the politics aside and focus on what this standoff reveals structurally.

For the first time, a major AI company drew a line with the federal government over how its technology would be used. Whether you agree with Anthropic’s position or not, the fact that this confrontation happened tells you something important about the next decade of AI deployment: governance is becoming the competitive battleground.

This is not new territory for anyone who has studied how industries mature. Early internet companies fought similar battles over encryption export controls in the 1990s. Financial institutions went through decades of regulatory evolution after 1929, 1987, and 2008. The pattern is consistent: when a technology becomes powerful enough to matter, the question shifts from “can we build it?” to “who decides how it gets used?”

AI has now reached that inflection point. And the investors who recognize this shift will be better positioned than those still focused exclusively on capability benchmarks.

Here is what the Anthropic standoff tells us concretely:

1. AI governance risk is now a portfolio-level consideration

Any company building AI products for government or enterprise clients faces a new category of risk: policy misalignment. Anthropic just demonstrated that a single contract dispute can cascade into a presidential directive, a supply chain designation, and a competitor immediately filling the vacuum. This is not theoretical risk. It happened in 72 hours.

For investors evaluating AI companies, whether public equities tied to the ecosystem or digital assets building AI infrastructure, governance posture is now a material factor.

2. The competitive landscape just shifted

OpenAI signed a Pentagon deal within hours of Anthropic being blacklisted. xAI already had classified access. Google has its own evolving contract. The AI provider landscape for government and enterprise is consolidating fast, and the deciding factor is not just technical capability. It is willingness to negotiate on terms that powerful buyers demand.

This creates a two-track market.

On the one hand, companies that accommodate government requirements will capture a specific revenue stream. On the other hand, companies that prioritize independence will need to build commercial moats deep enough to absorb the loss.

Both strategies can work.
But the bifurcation itself is a signal that this market is maturing.

3. Congress will eventually act, and that creates opportunity windows

Amodei made the point explicitly: Congress needs to catch up with the technology. Laws around domestic surveillance, autonomous weapons, and AI-driven decision-making were written before these capabilities existed. Until new frameworks emerge, the gap between what is technically possible and what is legally governed will continue to create uncertainty for some companies and opportunity for others.

If you are watching this space from an investment perspective, the regulatory pipeline is worth tracking closely. Every major tech regulatory moment in history, from the Telecommunications Act of 1996 to Dodd-Frank, reshuffled winners and losers. AI governance legislation will do the same.


The Positive Framing (Because This Is Not All Doom!)

It would be easy to read this story and feel pessimistic. An AI CEO warning about mass job displacement while simultaneously battling the government over how his technology gets used. That sounds like the opening of a dystopian novel.

But here is the framing I want you to take away from this, and it is the same principle that guides everything I am building at BitFinance:

AI does not replace judgment. It replaces the tedious work that sits underneath judgment.

Scoring 50 protocols on five fundamental pillars? That is tedious. That is 15 hours a week of research time. AI does it in minutes. But deciding what those scores mean for your portfolio? Interpreting a regime shift? Weighing a whale accumulation signal against a sentiment extreme? Deciding how much risk to take given your personal circumstances?

That is judgment. AI can’t do it. But you can.
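To make the split concrete: the tedious half of that workflow is purely mechanical. As a toy sketch (the pillar names, weights, and scores here are hypothetical illustrations, not the actual BitFinance methodology), the scoring step is only a few lines of code:

```python
# Toy sketch: composite score for a protocol across five fundamental pillars.
# Pillar names, weights, and scores are made up for illustration.
PILLARS = ["team", "tokenomics", "adoption", "security", "liquidity"]
WEIGHTS = {"team": 0.20, "tokenomics": 0.20, "adoption": 0.25,
           "security": 0.25, "liquidity": 0.10}

def composite_score(pillar_scores: dict) -> float:
    """Weighted average of 0-10 pillar scores. This is the tedious part
    an AI can churn through for 50 protocols; deciding what the number
    means for a portfolio is the judgment part."""
    return round(sum(pillar_scores[p] * WEIGHTS[p] for p in PILLARS), 2)

example = {"team": 8, "tokenomics": 6, "adoption": 7,
           "security": 9, "liquidity": 5}
print(composite_score(example))  # prints 7.3
```

The point is not this particular formula. It is that everything above the `print` line is automatable, and nothing about what to do with the 7.3 is.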

The future of finance is not “AI replaces humans.” It is “humans with AI replace humans without AI.” The analyst who can direct an AI to run 50 screens in 10 minutes and then apply experienced judgment to the results will outperform the analyst still building those screens manually; by the time the second analyst finishes, the first has already made her trades.

Amodei himself made a version of this point. The people who learn to work with AI will have a structural advantage. The people who ignore it will be working against it. That is not doom. That is a call to learn new tools.


3 Things You Can Do This Week

I promised actionable takeaways, not just analysis. Here they are.


1. Start using AI in your research process, even imperfectly

You do not need to build a trading bot or deploy a model on classified networks. Start with something simple. Use an AI tool to summarize an earnings call. Ask it to compare two company filings side by side. Have it generate a preliminary screen of assets that meet your criteria, then apply your own judgment to the results.
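A “preliminary screen” of that kind is, mechanically, just a filter over data. Here is a minimal sketch (tickers, metrics, and thresholds are invented for illustration; plug in your own data source and criteria):

```python
# Toy screen: filter a list of assets by made-up criteria.
# The output is a starting list for human judgment, not a verdict.
assets = [
    {"ticker": "AAA", "mcap_b": 12.0, "dev_activity": 85},
    {"ticker": "BBB", "mcap_b": 0.4,  "dev_activity": 12},
    {"ticker": "CCC", "mcap_b": 5.5,  "dev_activity": 64},
]

def screen(rows, min_mcap_b=1.0, min_dev=50):
    """Keep assets above a market-cap floor (in $B) with active development."""
    return [r["ticker"] for r in rows
            if r["mcap_b"] >= min_mcap_b and r["dev_activity"] >= min_dev]

print(screen(assets))  # prints ['AAA', 'CCC']
```

Whether you write this yourself or have an AI generate it, the habit is the same: let the machine produce the shortlist, then spend your time on why each name is or is not worth owning.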

The goal is not perfection. The goal is fluency.

The people who will thrive in an AI-augmented finance landscape are not the ones who waited until the tools were flawless. They are the ones who built muscle memory early.

2. Evaluate your exposure to AI governance risk

If you hold positions in companies that sell to government clients, or in AI infrastructure plays, ask yourself: what happens to this position if a policy dispute causes a contract cancellation overnight? The Anthropic situation is a case study you can use right now.

Look at the companies in your portfolio and consider their governance exposure the same way you would consider their balance sheet or competitive position.

3. Track the regulatory pipeline

Congressional hearings on AI oversight, executive orders, and agency-level guidance documents are producing a steady stream of signals about where regulation is heading.

You do not need to become a policy expert. You need to know enough to recognize when a regulatory shift is about to create winners and losers in sectors you are invested in.

The Kalshi prediction markets piece I published recently makes a related point: the data streams that matter for forward-looking investment decisions are expanding beyond traditional sources. Policy is becoming a data stream that sophisticated investors need to monitor.


— Matthew
X: @bit_finance_


Oh, one last thing: if you want to dive deeper into how Buffett’s investing principles apply to digital assets, check out my book.

Warren Buffett in a Web3 World: Applying 60 Years of Sage Advice to Cryptocurrency, NFTs, Blockchains and More

We took 1,300+ pages of wisdom from the Oracle of Omaha and condensed it into a snackable, easy-to-read guide for digital asset investors. Pick up your copy today!


Matthew Snider is the founder of Block3 Strategy Group, author of “Warren Buffett in a Web3 World,” and publisher of the BitFinance newsletter. He holds a Series 65 and MBA, and has been an active participant in digital asset markets since 2015. This article is for educational purposes only and should not be considered financial advice. Always consult with a qualified professional before making investment decisions.

Thanks for reading BitFinance with Matthew Snider! Subscribe for free to receive new posts and support my work.