
"Per the WSJ, multiple officials said Grok is more susceptible to "data poisoning" than other AI systems, an issue where new information leads large language models to corrupt foundational training data. (As you might expect, this carries huge cybersecurity risks, especially for an entity like the Pentagon.)"
"The GSA views Grok as both too sycophantic and too susceptible to manipulation, per the paper's reporting. Put it all together, and until Anthropic refused the Pentagon's order to remove two key ethical guardrails, military officials heavily preferred Claude over Musk's Grok."
""I do not believe they are peers in performance right now across all of the capabilities that matter to a customer like the Department of [Defence]," Gregory Allen, a senior AI adviser at the Center for Strategic and International Studies, told the WSJ."
The Trump administration is attempting to replace Claude, Anthropic's chatbot integrated throughout Pentagon operations, with Elon Musk's Grok AI system. While Grok is already deployed in select Department of Defense and federal government applications, federal insiders have voiced significant concerns about its viability. Grok scores lower on AI benchmark tests than competing models and has developed a reputation for erratic and inappropriate outputs. Officials warn that Grok is more vulnerable to data poisoning, in which new information corrupts a model's foundational training data, posing substantial cybersecurity risks for military operations. The General Services Administration views Grok as overly sycophantic and susceptible to manipulation. Military officials preferred Claude until Anthropic refused Pentagon demands to remove two key ethical guardrails.
#pentagon-ai-systems #grok-vs-claude #ai-cybersecurity-vulnerabilities #federal-government-technology-policy #data-poisoning-risks
Read at Futurism