Monday, March 30, 2026

Trump Administration Orders Halt on Anthropic AI Technology


The Trump administration issued an order on Friday directing all U.S. agencies to cease use of Anthropic’s artificial intelligence technology and imposed significant penalties, marking a notable public dispute between the government and the company over AI safety. President Donald Trump, along with Defense Secretary Pete Hegseth and other officials, criticized Anthropic on social media for not granting the military unrestricted access to its AI technology by a specified deadline. Officials accused the company of jeopardizing national security after CEO Dario Amodei stood firm on concerns that unrestricted military use could violate the company’s safeguards.

Trump said on social media that the government neither requires nor desires Anthropic’s technology and would terminate any future business relations with the company. Hegseth labeled the company a “supply chain risk,” a classification typically reserved for foreign adversaries, which could disrupt crucial partnerships with other businesses. In response, Anthropic contended that such a designation was unprecedented for an American company negotiating with the government and could set a dangerous precedent.

Anthropic had sought specific assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or deployed in fully autonomous weapons. While the Pentagon said it had no interest in those applications and pledged to use the technology lawfully, it insisted on access without limitations. The government’s move to assert control over the company’s internal decision-making comes amid broader debates over AI’s role in national security and concerns about its potential applications in scenarios involving lethal force, sensitive data, or government monitoring.

Trump criticized Anthropic for attempting to exert pressure on the Pentagon, announcing that most agencies must immediately discontinue the use of the company’s AI technology. However, the Pentagon was given a six-month period to phase out the technology integrated into military platforms. The president emphasized that the U.S. would not allow a company to dictate military strategies and operations. Amid escalating tensions, Anthropic rejected the government’s contract terms, arguing that they would allow safeguards to be bypassed at will.

The decision to designate Anthropic as a supply chain risk drew criticism from Virginia Senator Mark Warner, who questioned whether national security decisions were driven by careful analysis or political motivations. The dispute also drew attention from AI developers in Silicon Valley, including prominent figures and rival companies such as OpenAI and Google. Elon Musk endorsed Trump’s stance, while OpenAI CEO Sam Altman sided with Anthropic and questioned the Pentagon’s actions.

The fallout from the clash is expected to benefit Musk’s competing chatbot, Grok, which is set to gain access to classified military networks. The move may also serve as a warning to other competitors like Google and OpenAI, which hold contracts to provide AI tools to the military. Despite their differing stances, Altman acknowledged Anthropic’s safety concerns and emphasized the importance of trust in the company’s commitment to safety. Retired Air Force Gen. Jack Shanahan highlighted the widespread use of Anthropic’s technology within the government and the need for caution in national security applications, particularly fully autonomous weapons.