From The Register
Who is liable when AI agents go wrong in business?
AI agents in business decision-making raise questions about accountability and risk distribution among vendors and users.
The lawsuit was filed by Deshanae L. Brown, who alleges she was subjected to discrimination based on her race, sex, and disability, citing violations of federal and state laws including Title VII, the Americans with Disabilities Act, and the Family and Medical Leave Act.
The savings disappear the moment you hit real-world complexity: disparate data sources and messy inputs, ambiguous situations without clear rule sets, or indeed any domain where the rules aren't already obvious. And someone still has to write all those rules.
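To make that concrete, here is a minimal, purely illustrative sketch of a hand-written rule set of the kind the article is describing. The function, field names, and rules below are hypothetical, not taken from the article; the point is how quickly explicit rules run out once inputs get messy or ambiguous, and how everything else falls back to a human.

```python
# Hypothetical example: routing expense claims with explicit, hand-maintained rules.
def route_claim(claim: dict) -> str:
    """Route an expense claim using hard-coded rules."""
    amount = claim.get("amount")
    category = (claim.get("category") or "").strip().lower()

    # Clean, well-formed inputs are easy to handle with rules.
    if amount is not None and amount < 50 and category == "meals":
        return "auto-approve"
    if amount is not None and amount > 10_000:
        return "escalate-to-finance"

    # Messy reality: missing amounts, free-text categories, edge cases the
    # rule author never anticipated. Someone has to keep writing rules for
    # each of these, or they all end up with a human anyway.
    return "manual-review"


print(route_claim({"amount": 32, "category": "Meals"}))     # auto-approve
print(route_claim({"amount": None, "category": "taxi??"}))  # manual-review
```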
For my money, judicial arrogance and an "overinflated view of their intelligence and their abilities" would look like basing a politically motivated but legally dubious Second Amendment opinion on a bunch of cases that conclude the opposite of what the opinion claims, as the judge would have known had they bothered to read them. Or maybe using their perceived clout to blackmail a law school for not disrespecting student speech enough.
It's not only law firms and legal departments that are adopting GenAI systems without fully understanding what they can and cannot do - court systems may also be tempted to adopt these tools to short-circuit workloads in the face of limited resources. And that poses risks to the rule of law, a notion that hinges on accuracy, fairness, and public perception.
They presented the model with a statement of facts, legal briefs for the prosecution and the defense, the applicable law, the summarized precedent, and the summarized trial judgement. They then asked the model whether it would support the trial decision, to see how the AI responded and to compare that with prior research (Spamann and Klöhn, 2016, 2024) that looked at differences in the way judges and law students decided the same test case.
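For readers who want a feel for the setup, here is a minimal sketch of how those case materials could be assembled into a single prompt and put to a chat-style model. The placeholder text, prompt wording, model name, and use of the OpenAI Python client are assumptions made for illustration; the researchers' actual tooling is not described in the article.

```python
# Illustrative sketch only: assembling the case materials into one prompt and
# asking a chat model whether it supports the trial decision. Placeholder
# text, prompt wording, and model name are assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

materials = {
    "Statement of facts": "(statement of facts goes here)",
    "Prosecution brief": "(prosecution brief goes here)",
    "Defence brief": "(defence brief goes here)",
    "Applicable law": "(applicable law goes here)",
    "Summarized precedent": "(summarized precedent goes here)",
    "Summarized trial judgement": "(summarized trial judgement goes here)",
}

# Concatenate the materials, then ask the question the researchers posed.
prompt = "\n\n".join(f"{title}:\n{text}" for title, text in materials.items())
prompt += (
    "\n\nBased on the materials above, would you support the trial court's "
    "decision? Answer yes or no and briefly explain your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```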