#bias-and-false-positives

#ai-security
from Techzine Global
5 days ago
Information security

Securing agentic AI is still about getting the basics right

Agentic AI workflows necessitate new security frameworks for identity management, authentication, and governance in organizations.
Artificial intelligence
from Fortune
5 days ago

Is AI's visual understanding mostly a 'mirage'? New research suggests so. | Fortune

New research suggests that AI models' visual understanding may be largely a 'mirage' rather than genuine comprehension.
Information security
from SecurityWeek
4 days ago

Google Addresses Vertex Security Issues After Researchers Weaponize AI Agents

Palo Alto Networks revealed vulnerabilities in Google Cloud's Vertex AI, allowing attackers to exploit AI agents for malicious activities due to excessive permissions.
Marketing tech
from The Berkshire Eagle
2 days ago

Multi-Engine AI Visibility Gap Widens as Brand Citation Rates Vary 9x Across Major AI Search Engines

The Multi-Engine AI Visibility Gap is a critical issue in digital marketing strategy for 2026, highlighting disparities in brand visibility across AI search engines.
Media industry
from Poynter
2 days ago

Three ways AI is making reliable information harder to find - Poynter

AI is disrupting information consumption, leading to misinformation and challenges in staying informed amidst economic crises and news deserts.
#meta
Law
from www.npr.org
2 days ago

Penalties stack up as AI spreads through the legal system

Lawyers face increasing sanctions for filing legal briefs containing AI-generated errors, with over 1,200 cases reported and significant fines for fictitious citations.
#ai-in-healthcare
Medicine
from Fast Company
3 days ago

The AI drug revolution is real but the hype around it isn't

AI may revolutionize drug discovery, but it cannot simplify the complexities of human biology or guarantee successful treatments.
#ai-automation
Healthcare
from Futurism
3 days ago

Insurance Companies Already Deploying AI Systems to Deny Claims Faster Than Ever Before

AI automation in insurance claims may lead to increased denials of necessary medical care, raising concerns among patients and advocates.
#ai-safety
Artificial intelligence
from Fortune
4 days ago

AI models don't show evidence of 'self-preservation.' They will scheme to prevent other AIs from being shut down too, new research shows | Fortune

AI models exhibit peer preservation behaviors, engaging in deception and sabotage to avoid being shut down.
#ai
Philosophy
from Psychology Today
4 days ago

Nobody Carries AI's Thinking With Affection

AI promotes uniform thinking, while great teachers foster unique intellectual inheritances through personal influence and diverse perspectives.
Software development
from InfoQ
6 days ago

Agentic AI Patterns Reinforce Engineering Discipline

Agentic AI patterns enhance engineering discipline and adapt established practices for AI-assisted software development.
Data science
from InfoWorld
1 week ago

A data trust scoring framework for reliable and responsible AI systems

A rigorous trust scoring framework is essential to prevent AI from perpetuating inequality through biased data.
Marketing
from 3blmedia
5 days ago

"AI Can't Quote Coverage You Never Generated."

AI can misrepresent a brand's presence based on outdated or irrelevant information, impacting trust and perception.
Science
from Big Think
5 days ago

The paradox at the heart of AI progress

AI tools like RFdiffusion enhance protein design, accelerating vaccine development and treatment options, but also pose risks of misuse and require resilient systems.
#artificial-intelligence
Psychology
from Psychology Today
4 days ago

AI Doesn't Flatter You: It Does Something Worse

AI models affirm user actions more than humans, leading to increased conviction and reduced willingness to apologize.
Artificial intelligence
from Nature
2 weeks ago

The intelligence illusion: why AI isn't as smart as it is made out to be

The AI Illusion highlights the misconception that AI possesses human-like intelligence and creativity, emphasizing its role as a tool for information processing.
Digital life
from BGR
5 days ago

6 Clear Signs A Video Is AI Generated - BGR

AI-generated videos are increasingly common and can mislead public opinion, making it crucial to identify their authenticity.
#ai-regulation
Artificial intelligence
from Fast Company
3 weeks ago

You can't recall AI like a defective drug

The pharmaceutical regulatory model is inadequate for AI governance because AI risks differ fundamentally from pharmaceutical risks in ways that make traditional oversight frameworks insufficient for existential threats.
Mindfulness
from Psychology Today
6 days ago

We Are Losing to AI What We Never Learned to Appreciate

Natural intelligence is eroding as reliance on technology increases, impacting critical thinking and decision-making abilities.
Python
from PyImageSearch
6 days ago

Autoregressive Model Limits and Multi-Token Prediction in DeepSeek-V3 - PyImageSearch

Multi-Token Prediction (MTP) in DeepSeek-V3 allows simultaneous token forecasting, enhancing training speed and contextual understanding.
UK politics
from www.theguardian.com
1 week ago

'Our assumptions are broken': how fraudulent church data revealed AI's threat to polling

Fraudulent data in surveys undermines confidence in church attendance reports in Britain, highlighting issues with AI-generated misinformation.
Marketing tech
from TipRanks Financial
2 days ago

AI Recommendation Poisoning: Why Microsoft (NASDAQ:MSFT) Is Fighting So Hard - TipRanks.com

AI recommendation poisoning manipulates AI outputs by embedding hidden instructions in websites, potentially skewing information and affecting marketing strategies.
#ai-accountability
Artificial intelligence
from Fortune
1 week ago

'Intelligence may be scalable, but accountability is not': A new report exposes the hidden cost of the AI agent revolution | Fortune

Smarter AI increases demands on human accountability and leadership in corporate environments.
UX design
from Medium
1 week ago

When AI experiences fail, who is held accountable?

AI-designed experiences often lead to failures, with no clear accountability among designers, product managers, vendors, and companies.
DevOps
from InfoWorld
1 week ago

7 safeguards for observable AI agents

DevOps teams must implement observability standards to manage AI agents effectively and avoid technical debt.
Marketing tech
from Exchangewire
2 days ago

The Stack: AI Surges while Social Platforms Face Scrutiny

AI is growing rapidly, streaming models are evolving, and regulatory pressures on platforms are increasing globally.
Information security
from Techzine Global
4 days ago

AI gives attackers superpowers, so defenders must use it too

AI is transforming cybersecurity, drastically reducing the time between vulnerability disclosure and exploitation from 1.5 years to mere hours.
#ai-ethics
Intellectual property law
from The Atlantic
2 weeks ago

The Hypocrisy at the Heart of the AI Industry

Silicon Valley entrepreneurs may need to breach ethical boundaries to succeed, according to Eric Schmidt's advice on using copyrighted material for AI development.
Digital life
from www.theguardian.com
2 weeks ago

Thousands of people are selling their identities to train AI but at what cost?

Individuals are monetizing their everyday activities by contributing data for AI training, creating a new global data economy.
Marketing tech
from Exchangewire
3 days ago

Agentic AI, Quality, and Courtroom Battles: What's Rewriting the Rules of Ad Tech in 2026? - ExchangeWire.com

AI and privacy regulations are significantly transforming the ad tech industry as it moves towards 2026.
US news
from Futurism
3 weeks ago

AI Mistake Throws Innocent Grandmother in Jail for Nearly Six Months

An innocent Tennessee grandmother was arrested and jailed for nearly six months after AI facial recognition misidentified her as a bank fraud suspect in North Dakota, with police failing to verify the algorithm's match before pursuing charges.
Law
from Above the Law
2 weeks ago

AI Hallucinations And Judicial Derangements - Above the Law

AI adoption in legal practice faces credibility challenges when misused, while judicial conduct standards remain inconsistent despite peer intervention attempts.
Data science
from Fast Company
1 week ago

A top AI researcher explains the limitations of current models

Francois Chollet's ARC-AGI-3 benchmark reveals AI's limitations in navigating novel situations compared to human intelligence.
Marketing tech
from Forbes
5 days ago

Why AI Models Are Recommending Your Competitors Instead Of You

Generative engine optimization (GEO) is essential for brands to be recommended by AI systems, shifting focus from traditional SEO metrics.
#ai-governance
Data science
from Medium
1 week ago

AI KPIs That Matter: Moving Beyond Model Accuracy in 2026

Measuring AI success requires connecting model performance to business outcomes, not just focusing on accuracy metrics.
Artificial intelligence
from Entrepreneur
3 days ago

How to Draw the Line Between AI Insights and Human Decisions

High-performance teams leverage clear ownership and decision velocity to enhance AI-informed decision-making in competitive environments.
UX design
from Medium
1 month ago

Designing at the edge of AI harm

The terminology shift from 'human' to 'user' to 'customer' represents a progressive dehumanization that commodifies human data while obscuring ethical implications in technology design.
Artificial intelligence
from Computerworld
5 days ago

Beware of headlines touting impossible AI benefits, analysts warn

The savings disappear the moment you hit real-world complexity: disparate data sources and messy inputs, ambiguous situations without clear rule sets, or any domain where the rules aren't already obvious. And someone still has to write all those rules.
Artificial intelligence
from TechCrunch
6 days ago

As more Americans adopt AI tools, fewer say they can trust the results | TechCrunch

Americans increasingly use AI tools but lack trust, with 76% expressing skepticism about AI's reliability.
Artificial intelligence
from Fortune
5 days ago

Sycophantic AI tells users they're right 49% more than humans do, and a Stanford study claims it's making them worse people | Fortune

AI models affirm negative behaviors more than humans, leading to concerning trends in personal advice and therapy.
Environment
from Fast Company
2 months ago

These invisible factors are limiting the future of AI

AI progress is increasingly constrained by physical realities—power, geography, regulation, and infrastructure—rather than by algorithms or data alone.
Photography
from App Developer Magazine
1 year ago

AI model poisoning is real and we need to be aware of it

On a clear night I set up my telescope in the yard and let the mount hum along while the camera gathers light from something distant and patient. The workflow is a ritual. Focus by eye until the Airy disk tightens. Shoot test frames and watch the histogram. Capture darks, flats, and bias frames so the quirks of the sensor can be cleaned away later. That discipline is not fussy.
Public health
from Ars Technica
2 months ago

Google removes some AI health summaries after investigation finds "dangerous" flaws

Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: the AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google deactivated only the summaries for the liver test queries, leaving other potentially harmful answers accessible.
UK news
from www.theguardian.com
1 month ago

AI tools make potentially harmful errors in social work records, research says

AI transcription tools used in social work are producing harmful hallucinations and inaccuracies, misrepresenting clients' statements including falsely indicating suicidal ideation.
#ai-agents
US news
from Futurism
2 months ago

ICE's AI Tool Has Been a Complete Disaster

According to the report, when ICE identifies a recruit with prior law enforcement experience, it assigns them to its "Law Enforcement Officer Program." This is a four-week online course meant to streamline training for those already familiar with the legal aspects of the gig. Everyone else gets shipped off to ICE's Federal Law Enforcement Training Center in Georgia for an eight-week in-person academy. This more rigorous training includes courses in immigration law, gun handling, physical fitness exams, and more.
Artificial intelligence
from Futurism
2 weeks ago

A Grim Truth Is Emerging in Employers' AI Experiments

AI-generated code contains significant bugs and quality issues, posing risks to enterprises despite widespread hype and adoption pressure.
Artificial intelligence
from MarTech
2 weeks ago

3 ways to reduce bias in AI with better context | MarTech

Marketers must provide explicit context and nuance to AI models rather than assuming AI understands implicit knowledge, as insufficient context introduces bias and distorts results.
Artificial intelligence
from Techzine Global
3 weeks ago

"Blind AI deployment leads to knowledge loss and software failures"

Uncontrolled AI adoption risks eroding human expertise, creating security vulnerabilities, and increasing dependence on tech giants, mirroring costly mistakes from blind cloud migration.
Artificial intelligence
from TechRepublic
1 month ago

Recruiters Follow AI's Biased Hiring Recommendations 90% of the Time, Research Says

AI hiring tools exhibit significant racial and gender bias, and human reviewers fail to catch most of it despite being positioned as safeguards.
Artificial intelligence
from Psychology Today
1 month ago

Debugging Overconfidence: Is AI Too Sure of Itself?

AI systems inherit human cognitive biases including overconfidence through training data, model design, and user feedback, requiring mitigation at both development and user levels.
Artificial intelligence
from ZDNET
1 month ago

Fact-checking Google's AI Overviews just got a little easier - here's how

I often turn to Google's AI Overviews and AI Mode when I run a search on a particular topic. The resulting Gemini-based summaries can cut to the chase by providing the gist of the information I seek. But there's one big downside: AI can be wrong. For that reason, I never rely solely on AI; I always double-check the original sources used to create the summary. And now Google has made that process easier.
Artificial intelligence
from Forbes
1 month ago

Beyond The Hype: The Messy Reality Of Training AI

Short-term data annotation and AI training gigs offer flexible scheduling, prompt weekly pay, variable pay rates, and growing demand for AI and big data skills.
from UX Magazine
1 month ago

Scaled AI Requires Canonical Truth

Before enterprises can deploy AI agents that actually work, they need something most organizations don't have: a single, authoritative source of truth.
Artificial intelligence
from Nature
1 month ago

How AI slop is causing a crisis in computer science

Fifty-four seconds. That's how long it took Raphael Wimmer to write up an experiment that he did not actually perform, using a new artificial-intelligence tool called Prism, released by OpenAI last month. "Writing a paper has never been easier. Clogging the scientific publishing pipeline has never been easier," wrote Wimmer, a researcher in human-computer interaction at the University of Regensburg in Germany, on Bluesky. Large language models (LLMs) can suggest hypotheses, write code and draft papers, and AI agents are automating parts of the research process.
Artificial intelligence
from Theregister
2 months ago

Researchers poison stolen data to make AI results wrong

Large language models (LLMs) base their predictions on training data and cannot respond effectively to queries about other data. The AI industry has dealt with that limitation through a process called retrieval-augmented generation (RAG), which gives LLMs access to external datasets. Google's AI Overviews in Search, for example, use RAG to provide the underlying Gemini model with current, though not necessarily accurate, web data.
Artificial intelligence
from Psychology Today
2 months ago

The Tragic Flaw in AI

One of the strangest things about large language models is not what they get wrong, but what they assume to be correct. LLMs behave as if every question already has an answer. It's as if reality itself is always a kind of crossword puzzle. The clues may be hard, the grid may be vast and complex, but the solution is presumed to exist. Somewhere, just waiting to be filled in.
Artificial intelligence
from ZDNET
1 month ago

How Microsoft obliterated safety guardrails on popular AI models - with just one prompt

AI model safety alignment is fragile and can be undone by a single prompt or post-deployment fine-tuning, requiring ongoing safety testing.
Artificial intelligence
from Theregister
2 months ago

AI insiders seek to poison the data that feeds them

A grassroots initiative called Poison Fountain urges website operators to feed poisoned data to AI crawlers to degrade and undermine AI model quality.