Anthropic develops AI bot that's 'too dangerous to release'
Briefly

"Anthropic warns that its new model, Claude Mythos, could unleash crippling cyber-attacks, capable of hacking into critical infrastructure like hospitals and power plants."
"During testing, Mythos found thousands of high-severity vulnerabilities, including some in every major operating system and web browser, many unnoticed for decades."
"AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities."
"The fallout - for economies, public safety, and national security - could be severe, prompting Anthropic to restrict access to Claude Mythos."
Anthropic has created an AI model named Claude Mythos that it says poses significant risks if misused. The model can find and exploit vulnerabilities in critical infrastructure, including hospitals and power plants. During testing, it identified thousands of high-severity security flaws, some of which had gone undetected for decades. Anthropic has opted not to release Mythos publicly, instead granting access to more than 40 companies for security assessments. The company warns that mismanaging such technology could have severe consequences for economies, public safety, and national security.
Read at Mail Online