Anthropic got summoned to the Pentagon, accused Chinese AI labs of industrial-scale theft, and accidentally tanked cybersecurity stocks—all in the same week. Being America's most consequential AI safety company apparently comes with consequences.

THIS WEEK'S MOVES

The Pentagon wants Anthropic to play ball. Defense Secretary Pete Hegseth summoned CEO Dario Amodei to a meeting, threatening to label Anthropic a national supply chain risk if the company doesn't strip restrictions on military use of Claude. The message: stop letting your AI safety policies get in the way of what the DoD wants to do. Amodei now has to choose between his stated mission and the government contracts his investors increasingly need. The administration isn't asking Anthropic to become a defense contractor—it's telling them to stop acting like an AI safety company when the Pentagon comes calling. Those are very different things, and the distinction just got much harder to maintain. Read More…

OpenAI built itself a global enterprise sales force overnight. The company formalized multi-year "Frontier Alliances" with McKinsey, BCG, Accenture, and Capgemini—four firms with more C-suite relationships than any sales team OpenAI could hire in a decade. The consultants help clients define AI strategy and accelerate agent deployments; OpenAI gets distribution into every major enterprise on the planet without building the infrastructure to support it. The real subtext: the consulting giants have been quietly terrified that AI would obliterate their core business model. Aligning with OpenAI is how they co-opt the threat instead of getting disintermediated by it. Both sides are buying insurance. Read More…

Anthropic went public with what everyone in the industry already suspected. The company accused DeepSeek, Moonshot AI, and MiniMax of running industrial-scale scraping operations—24,000 fake accounts, 16 million hits to Claude—to harvest model capabilities through distillation. Going public escalates what had been a quiet IP border war into an explicit geopolitical argument: Chinese labs are free-riding on American frontier AI investment. The timing is deliberate. With Congress debating chip export restrictions and the Pentagon at Anthropic's door, this accusation lands as a legal argument, competitive weapon, and political positioning simultaneously. Read More…

FEATURE: Claude Code Security and the Cybersecurity Reckoning

When Anthropic launched Claude Code Security on Friday—a feature that scans codebases for vulnerabilities and suggests targeted patches—the market reacted immediately. CrowdStrike, Okta, and Cloudflare fell 5–10% in a single session. The selloff was fast, decisive, and almost certainly wrong about the specific threat. But it wasn't wrong about the direction.

What the product actually does. Claude Code Security is a code scanning tool—technically, static application security testing (SAST) with an AI layer. It lives inside Claude Code, identifies vulnerabilities in your codebase, and suggests patches for human review. It's available to Enterprise and Team customers in limited preview. Anthropic also launched "app previews," a feature that lets Claude Code review live applications and scan for errors in production. Neither product competes with CrowdStrike's endpoint protection, Okta's identity management, or Cloudflare's network security. The market that sold those stocks off was looking at the wrong battlefield.
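For readers unfamiliar with what SAST actually means in practice, here is a deliberately toy sketch of the category: rule-based scanning of source text for vulnerability patterns. (This is an illustration of the technique, not Anthropic's implementation—real SAST tools parse the code into an AST and track data flow, and the AI layer in a product like Claude Code Security goes well beyond pattern matching.)

```python
import re

# Toy SAST illustration: flag injection-prone SQL built with f-strings,
# and hardcoded credentials. Real scanners do AST + data-flow analysis.
RULES = {
    "sql-injection": re.compile(r"execute\(\s*f?[\"'].*\{.*\}"),
    "hardcoded-secret": re.compile(r"(password|api_key)\s*=\s*[\"'][^\"']+[\"']", re.I),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for each finding."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

snippet = '''
api_key = "sk-live-123"
cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")
'''
print(scan(snippet))  # flags the secret on line 2 and the SQL on line 3
```

The point of the toy: the scanning half of this problem has been commoditized for years (Snyk, Semgrep). The AI layer's value is in the other half—judging which findings matter and drafting the patch.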

Why the reaction wasn't entirely irrational. Wall Street wasn't really pricing in Claude Code Security as a direct threat to those specific products. It was pricing in what this product represents: proof that AI can automate security functions that previously required specialized human expertise and expensive tooling. The real direct competitors here are Snyk, Semgrep, Veracode—the code security and vulnerability scanning market. Those companies should be having uncomfortable conversations this week. The broader cybersecurity incumbents shouldn't be panicking yet—but they should be watching.

The harder question is what comes next. If AI can scan for code vulnerabilities today, what does it do to penetration testing tomorrow? To red-teaming? To the security operations center analyst reviewing alerts at 2 a.m.? The commoditization of lower-tier security functions is coming; the debate is just about timing. Traditional security vendors have been bolting "AI-powered" onto their marketing for two years. The Anthropic move is a reminder that AI-native entrants—starting with foundational models that already understand code—have a structurally different starting point.

The investment read. The cybersecurity VC market bifurcated cleanly this week, even if the public market didn't. On one side: companies building AI-native security products from scratch, where the AI is the product, not the feature. On the other: incumbents racing to integrate LLMs into legacy architectures before the window closes. The former have a durable advantage in code security and threat detection; the latter have distribution, compliance certifications, and enterprise relationships that won't disappear overnight. Both will raise money. Only one side has the right business model for the next five years.

The Anthropic angle that nobody is saying out loud. Claude Code Security isn't just a product launch—it's a land-grab in developer infrastructure, the one surface area where Anthropic has a credible wedge against OpenAI. Developers are the highest-leverage distribution channel in enterprise software. If Anthropic can own the security workflow inside Claude Code, it starts pulling more and more of the software development lifecycle into its platform. That's the real ambition. Cybersecurity is the entry point, not the destination.

MEGA ROUNDS

World Labs raised $1 billion in new funding, with Autodesk leading a $200 million anchor investment and AMD, a16z, Fidelity, Nvidia, and Emerson Collective rounding out the syndicate. Founded by AI pioneer Fei-Fei Li, World Labs is building "world models"—AI systems that perceive, generate, and reason about 3D environments rather than text or flat images. Bloomberg previously reported the company was targeting a $5 billion valuation; World Labs declined to disclose what it closed at. The Autodesk anchor isn't charity—design software lives and dies by spatial intelligence, and Autodesk is buying its way into the next generation of its own product category before a startup does it for them. Read More…

Temporal Technologies closed a $300 million Series D led by a16z at a $5 billion valuation. The Bellevue-based company makes developer infrastructure for fault-tolerant workflows—essentially, software that ensures complex multi-step processes don't fail silently. Boring description, critical function. As AI agents become the default way enterprises run workflows, the infrastructure that makes those agents reliable becomes the hidden layer everything depends on. At $5 billion, a16z is betting Temporal owns that layer. Read More…
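To make "don't fail silently" concrete: durable execution means checkpointing each step's result so a crashed workflow resumes where it left off instead of re-running side effects. A minimal generic sketch of the idea (this is not Temporal's actual SDK, whose API is considerably richer):

```python
# Generic durable-execution sketch (illustrative, not Temporal's API):
# each step's result is persisted before moving on, so a restarted
# workflow replays completed steps instead of redoing side effects.
class DurableWorkflow:
    def __init__(self, store: dict):
        self.store = store  # stands in for a durable database

    def step(self, name: str, fn):
        if name in self.store:        # already completed: replay the result
            return self.store[name]
        result = fn()                 # run the step exactly once
        self.store[name] = result     # checkpoint before continuing
        return result

store = {}
wf = DurableWorkflow(store)
wf.step("charge", lambda: "charged $10")

# Simulate a crash and restart: a fresh instance sharing the same store
wf2 = DurableWorkflow(store)
result = wf2.step("charge", lambda: "charged $10 AGAIN")
print(result)  # prints "charged $10" — the charge is not repeated
```

That replay guarantee is the "hidden layer" the writeup refers to: when an AI agent orchestrates a ten-step process, this is what keeps step seven from silently running twice.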

Heron Power secured $140 million backed by a16z and Breakthrough Energy Ventures to scale hardware that routes electricity from renewable sources into the grid and data centers. The two firms backing this round tell you everything about why it matters: a16z is running the same energy-meets-AI infrastructure thesis across multiple bets; Breakthrough Energy is betting that data center power demand creates a new path to clean energy economics that wasn't viable two years ago. Both theses converge on the same hardware. Read More…

NOTABLE RAISES

Pepper raised $50 million in a Series C led by Lead Edge Capital, with ICONIQ, Index Ventures, and Greylock participating. The company builds end-to-end software for independent food distributors—a market that runs on spreadsheets, phone calls, and relationships. Pepper already serves 500-plus distributors accounting for roughly $30 billion in annual volume. The AI play here is replacing manual ordering, sales, and financial workflows for an industry that has resisted digitization for decades. Unglamorous market, large TAM, limited competition. Lead Edge knows that trade.

Selector closed a $32 million Series C for its AI observability platform. The timing is not random: every enterprise AI deployment creates new failure modes that legacy monitoring tools weren't built to catch. Selector's pitch is visibility into AI-native infrastructure—who's using it, what's breaking, and why. AI observability is exactly the kind of picks-and-shovels play that's attracting capital right now, and Selector is building in the right part of the stack.

Stacks raised a $23 million Series A led by Lightspeed Venture Partners, with General Catalyst and EQT Ventures participating. The London-based company automates accounting and financial close processes for mid-to-large enterprises—reconciliations, journal entries, variance analysis, all the CFO office work that no one wants to do manually. Early customers report cutting close time roughly in half. Lightspeed leading a fintech infrastructure round is a credibility marker; they're not chasing AI narratives here, they're buying into durable workflow automation with real switching costs.

NEXT WEEK'S WATCH

Dario Amodei's Pentagon outcome. The meeting happened Wednesday (Feb 25). Sources suggest Hegseth pushed for removal of any usage restrictions on military applications, while Anthropic's team argued that blanket removal would create liability and mission conflicts. No resolution was announced. Expect a formal response from Anthropic—either a policy modification framed as "responsible military use guidelines" or a standoff that escalates to the Hill. Either outcome moves markets.

The Chinese scraping lawsuit. Anthropic's public accusation against DeepSeek, Moonshot, and MiniMax was not accompanied by any legal filing. Whether the company converts this into a lawsuit—or uses it purely as political positioning for export control debates—will determine how seriously the IP claim lands. Sources close to the situation suggest litigation is being weighed, though jurisdictional challenges make enforcement complicated.

Cybersecurity sector recovery (or not). CrowdStrike, Okta, and Cloudflare bounced modestly after Friday's selloff, but the sector closed the week down meaningfully. Watch whether sell-side analysts publish notes clarifying the threat model—if the research consensus lands on "Claude Code Security is not a direct threat to endpoint/identity/network players," you'll see a snapback. If the framing stays "AI is eating security," the pressure continues.

More Frontier Alliances. OpenAI's announcement named four consulting giants but notably excluded Deloitte, PwC, EY, and Kearney. Whether those firms are negotiating their own versions—or have been deliberately excluded—will shape how this distribution strategy plays out. Expect at least one more partnership announcement before the end of Q1.

KEEP READING