Law
fromABA Journal
2 days agoSanctions ramping up in cases involving AI hallucinations
Monetary sanctions against attorneys for AI-generated hallucinations in case documents are increasing as courts take these issues more seriously.
Four terabytes of data have reportedly been stolen, including database records and source code. Allegedly stolen data has been published on a leak site, containing Slack information, internal ticketing data, and videos of conversations between Mercor's AI systems and contractors.
Rhyne's attack involved unauthorized remote desktop sessions, deletion of network administrator accounts, and changing of passwords, showcasing significant security vulnerabilities.
Sleeper agent-style backdoors in AI large language models pose a straight-out-of-sci-fi security threat. The threat sees an attacker embed a hidden backdoor into the model's weights - the importance assigned to the relationship between pieces of information - during its training. Attackers can activate the backdoor using a predefined phrase. Once the model receives the trigger phrase, it performs a malicious activity: And we've all seen enough movies to know that this probably means a homicidal AI and the end of civilization as we know it.
AI agents and other systems can't yet conduct cyberattacks fully on their own - but they can help criminals in many stages of the attack chain, according to the International AI Safety report. The second annual report, chaired by the Canadian computer scientist Yoshua Bengio and authored by more than 100 experts across 30 countries, found that over the past year, developers of AI systems have vastly improved their ability to help automate and perpetrate cyberattacks.
Researchers have developed a tool that they say can make stolen high-value proprietary data used in AI systems useless, a solution that CSOs may have to adopt to protect their sophisticated large language models (LLMs). The technique, created by researchers from universities in China and Singapore, is to inject plausible but false data into what's known as a knowledge graph (KG) created by an AI operator. A knowledge graph holds the proprietary data used by the LLM.