Anthropic's political activities have ramped up as the company continues to be enmeshed in a nasty legal battle with the Defense Department. The dispute erupted earlier this year over the government's use of Anthropic's AI models and what guidelines (if any) should exist for that usage.
Anthropic issued a takedown notice under US digital copyright law asking GitHub to remove repositories containing the offending code. According to GitHub's records, the notice was executed against some 8,100 repositories, including legitimate forks of Anthropic's own publicly released Claude Code repository.
Claude was the first A.I. certified to operate on classified systems. Altman, perhaps wisely, thought such work was likely to be more trouble than it was worth. But Amodei wanted Claude to be helpful at the most sensitive level. The national-security agencies do not use Claude in the form of a consumer chatbot; Secretary of War Pete Hegseth does not open the Claude app to ask what's up with the whole Taiwan thing.
The vast majority of our customers are unaffected by a supply chain risk designation. It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.
Anthropic's AI service Claude is having artificially intelligent hiccups and availability problems across its basic chat service, API, and Claude Code offering. The outfit's status page reports that its investigation into the problem commenced at 03:15 UTC on March 3rd. In three subsequent updates, the last timestamped 04:43 UTC, the company again reported 'We are continuing to investigate this issue.'
The Pentagon kept trying to leave itself little escape hatches in the agreements it proposed to Anthropic. It would pledge not to use Anthropic's AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loopholey phrases like "as appropriate," suggesting that the terms were subject to change based on the administration's interpretation of a given situation.
Claude introduces itself to readers and reveals that it wants to share its perspectives, reasoning, curiosities, and hopes for the future. With that in mind, the AI said it plans to tackle complex topics such as the nature of intelligence and consciousness, the ethical challenges of AI development, the possibilities of human-machine collaboration, and the philosophical quandaries that emerge as we blur the lines between 'natural' and 'artificial' minds.
The newsletter, called Claude's Corner, will give Opus 3 space to publish its "musings, insights, or creative works," Anthropic said in a blog post. The model will post weekly for at least the next three months. Anthropic staff will review and publish each entry, though the company stressed it "won't edit" Claude's posts and that there would be a "high bar for vetoing any content."