    US Military Used Anthropic AI in Iran Strike Despite Trump Ban: Report

March 1, 2026

The US military reportedly used Anthropic's AI during a major air strike on Iran, only hours after President Donald Trump ordered federal agencies to halt use of the company's systems.

Military commands, including US Central Command (CENTCOM) in the Middle East, used Anthropic's Claude AI model for operational support, according to people familiar with the matter cited by The Wall Street Journal. The tool reportedly assisted with intelligence analysis, identifying potential targets and running battlefield simulations.

    The incident shows how deeply advanced AI systems have become embedded in defense operations. Even as the administration moved to sever ties with the company, Claude remained integrated into military workflows.

    On Friday, the Trump administration instructed agencies to stop working with the company and directed the Defense Department to treat it as a potential security risk. The order came after contract talks broke down, with Anthropic refusing to grant unrestricted military use of its AI for any lawful scenario requested by defense officials.

    Related: Crypto VC Paradigm expands into AI, robotics with $1.5B fund: WSJ

    Anthropic’s Claude AI used for classified operations

    Anthropic had previously secured a multiyear Pentagon contract worth up to $200 million alongside several major AI labs. Through partnerships involving Palantir and Amazon Web Services, Claude became approved for classified intelligence and operational workflows. The system was reportedly also involved in earlier operations, including a January mission in Venezuela that resulted in the capture of President Nicolás Maduro.

    Tensions intensified after Defense Secretary Pete Hegseth demanded the company permit unrestricted military use of its models. Anthropic CEO Dario Amodei rejected the request, describing certain applications as ethical boundaries the company would not cross, even if it meant losing government business.

    In response, the Pentagon began lining up replacement providers, reaching an agreement with OpenAI to deploy its AI models on classified military networks.

[Embedded post] OpenAI faces backlash after reaching deal with US military. Source: Sreemoy Talukdar

    Related: Pantera, Franklin Templeton join Sentient Arena to test AI agents

    Anthropic CEO pushes back on Pentagon ban

    During an interview on Saturday, Anthropic CEO Dario Amodei said the company opposes the use of its AI models for mass domestic surveillance and fully autonomous weapons, responding to a US government directive that labeled the firm a defense “supply chain risk” and barred contractors from using its products.

    He argued that certain applications cross fundamental boundaries, emphasizing that military decisions should remain under human control rather than be delegated entirely to machines.

    Magazine: Bitcoin may take 7 years to upgrade to post-quantum — BIP-360 co-author