Business | Pentageddon
Silicon Valley will be caught in the fallout
Feb 28th 2026
SINCE PRESIDENT DONALD TRUMP returned to power in 2025, he has courted a cadre of Silicon Valley billionaires, with one big exception. The White House has long had Dario Amodei, co-founder of Anthropic, an artificial-intelligence (AI) lab, in its crosshairs. On February 27th Mr Trump unleashed the full force of his government against Anthropic in a sharply escalating row over the use of AI in war. The implications could seriously damage one of America’s leading AI pioneers. But the fallout may spread beyond Anthropic to Silicon Valley at large.
In an unprecedented one-two punch against a big American firm, Mr Trump said he would immediately ban all American federal agencies from using Anthropic’s technology. Pete Hegseth, the secretary of war, swiftly followed by designating Anthropic a “supply-chain risk”. He banned any company doing business with America’s armed forces from conducting commercial activity with Anthropic. They both attacked Anthropic on ideological grounds. On Truth Social, Mr Trump accused it of being an “out-of-control, Radical Left AI company”.
The assault on the firm, recently valued at $380bn, has echoes of the way previous American governments have blacklisted Chinese firms, such as Huawei, a telecoms-equipment maker. It stems from Mr Amodei’s insistence in negotiations with the Pentagon that Anthropic retain two guardrails when its models are used in defence contracts: that they be used neither to make autonomous weapons nor for mass domestic surveillance. The firm said that frontier AI models are not reliable enough to be used for the former, and that the latter is a violation of fundamental rights.
Mr Hegseth has demanded carte blanche for military planners to do what they want with large language models, within the bounds of the law. He had given Anthropic a Friday ultimatum: agree to these conditions or face severe penalties. He casts Anthropic’s red lines as an attempt to “strong-arm” the armed forces. Mr Amodei has countered that violating them threatens to undermine democracy.
Among America’s leading AI model-makers, Anthropic has been the most outspoken about the importance of keeping restrictions on the use of its models. But it is not alone. Before Friday’s ultimatum, Sam Altman, boss of OpenAI, a rival to Anthropic, said his firm shared some of the latter’s red lines. Though both companies acknowledge that it is up to the Pentagon, not private companies, to make military decisions, they think AI may require special guardrails. Their position partly reflects concerns that AI researchers will protest if they feel their products are being misused in harmful ways. Such boffins are the scarcest and most highly valued resource in Silicon Valley today. That makes their employers particularly sensitive to their opinions.
The Trump administration provided a possible hedge. Anthropic would be subject to a six-month phase-out period with the Department of War. If Anthropic did not “get their act together, and be helpful during the phase-out period”, Mr Trump said he would force it to comply in unspecified ways. That may refer to Mr Hegseth’s previous threat to invoke the Defence Production Act against Anthropic, giving the president authority to oblige companies to do national-security work. In a statement, Anthropic denied that Mr Hegseth had the right to restrict anyone doing business with the military from using Anthropic’s models. His ruling, it said, can extend only to the use of its models in Department of War contracts. It told many of its clients that they would be “completely unaffected” by Mr Hegseth’s ruling.
If things do not go Anthropic’s way, the toll could be high. It would lose a $200m contract with the Pentagon. More seriously, losing access to firms in the defence supply chain would affect its relationship with numerous clients. Rivals may overcome their scruples to muscle in on its turf. For instance, Elon Musk, once an outspoken opponent of autonomous weapons, has changed his tune. xAI, his AI lab, has won approval from the Pentagon to do classified work and appears to have shelved any restrictions on the use of its models.
But former defence officials say it would be foolhardy for the armed forces to rely exclusively on xAI’s buggy Grok models for sensitive military work. Moreover, Anthropic is no peacenik. It was the first American AI model-maker to allow its models to be used for classified government work. Claude, its family of models, provides some of America’s leading AI tools and is particularly popular within corporate America. Crippling it would have profound implications beyond the Pentagon. ■