Artificial intelligence
from The Register
14 hours ago
Who is liable when AI agents go wrong in business?
AI agents in business decision-making raise questions about accountability and risk distribution among vendors and users.
The ruling upheld a lower court's preliminary injunction, the latest rebuke to a major shift that advocates warn would push 170,000 people in federally subsidized housing back into homelessness.
Doing so has failed to prioritize agency internal control processes to adequately protect American taxpayer dollars, leading to documented examples of widespread abuse. Prior versions of OMB's guidance have overly deferred to the direction and priorities of external entities whose views are not binding on the Executive Branch, such as the Government Accountability Office.
Research finds that relying on regulations to determine your policies and procedures can result in ethical blind spots: situations where people assume that if there is no rule against something, it must be permissible. After years of shifting toward values- and culture-based compliance, leadership might be heading in the opposite direction.
Defense Secretary Pete Hegseth took the unprecedented step of designating a U.S. firm, Anthropic, as a supply chain risk. Anthropic's crime? It refused to violate industry-wide protocols against using AI for mass surveillance or autonomous weapons. Hegseth's designation, which has until now been reserved for foreign firms, bars U.S. military contractors from doing business with the company.
For patents to be born strong, and the public to have confidence that they are, we must ensure strict adherence to USPTO's ethical standards and avoid (real or apparent) conflicts of interest.
Rather than stolen data making headlines, it was business stoppage that triggered attention. Moving into 2026, the board's focus should be on ensuring business continuity and building resilience in the face of emerging risks generated by AI usage and attack vectors, quantum computing and geopolitics.
Since the release of ChatGPT in late 2022, the frequency of court submissions riddled with AI-hallucinated gibberish has increased exponentially. Now, more than three years later, it seems that not a week goes by without a headline about yet another lawyer who has submitted such briefs to the court.
As audit committees confront a rapidly expanding risk landscape, their role in corporate governance is being reshaped. Boards have often turned to current and former CFOs as independent directors, particularly for audit committees, because of their ability to translate complex operational and financial realities into effective oversight. For example, this month, J. Michael Hansen, former EVP and CFO of Cintas Corporation, was appointed to the audit committee at Paychex.
Because the parties did not dispute this parking system was indistinguishable from the method claimed in the '956 patent, excluding the offer for sale was clear legal error and an abuse of discretion, said the CAFC.
"While not the basis of today's decision, I note that inter partes review may be discretionarily denied on the basis that a petitioner is a sovereign." - USPTO Director John Squires
U.S. Patent and Trademark Office (USPTO) Director John Squires on January 15 issued a Director Review decision, which he then designated as informative on January 16, in favor of Micron Technologies, vacating two Patent Trial and Appeal Board (PTAB) decisions granting institution of inter partes review (IPR) for Yangtze Memory Technologies.
Businesses are acting fast to adopt agentic AI, artificial intelligence systems that work without human guidance, but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it's also a business opportunity. I'm a professor of management information systems at Drexel University's LeBow College of Business.
AI agents are accelerating how work gets done. They schedule meetings, access data, trigger workflows, write code, and take action in real time, pushing productivity beyond human speed across the enterprise. Then comes the moment every security team eventually hits: "Wait... who approved this?" Unlike users or applications, AI agents are often deployed quickly, shared broadly, and granted wide access permissions, making ownership, approval, and accountability difficult to trace. What was once a straightforward question is now surprisingly hard to answer.
It's not only law firms and legal departments that are adopting GenAI systems without fully understanding what they can and cannot do - court systems may also be tempted to adopt these tools to short-circuit workloads in the face of limited resources. And that poses risks to the rule of law, a notion that hinges on accuracy, fairness, and public perception.
Only 37% of legal leaders trust the use of generative artificial intelligence in high-stakes decisions, showing limited confidence in its ability to interpret complex issues, according to a new study of 500 legal and business leaders. The study by Paragon Legal, a legal services company that advises businesses and corporate legal departments, also reveals that:
* 39% say their organizations are adopting AI too quickly.
* 36% have used AI-generated insights that they do not fully trust.
* 37% have restricted or disabled AI tools because of concerns over compliance.