Artificial intelligence

From The Register: Who is liable when AI agents go wrong in business?
AI agents in business decision-making raise questions about accountability and risk distribution among vendors and users.
"Any exposure of source code or system-level logic is significant, because it shows how controls are implemented. In AI systems, that layer is especially critical. The orchestration, prompts, and workflows effectively define how the system operates. If those are exposed, it can make it easier to identify weaknesses or manipulate outcomes."
Four terabytes of data have reportedly been stolen, including database records and source code. The allegedly stolen data has been published on a leak site and includes Slack data, internal ticketing data, and videos of conversations between Mercor's AI systems and contractors.
AI chatbots, at least the kind that use large language models (LLMs) to communicate in natural-sounding words, have been with us for three years and one month. Norms are already emerging in some professions for users to disclose how they use AI. For example, organizations such as the International Committee of Medical Journal Editors have created policies for disclosing AI use in scientific manuscripts.
The breakneck pace of AI deployment across enterprises is creating a monumental challenge for executives and company boards. Unlike traditional IT systems, AI ecosystems, which encompass everything from LLMs and training data to custom prompts, have emerged as valuable intellectual property, often representing millions of dollars in investment and months or even years of engineering effort.