One editor expressed concern that the Shy Girl incident could happen to any publisher, underscoring the industry's need for vigilance about the authenticity of submissions.
Buyers no longer open ten tabs, skim through blog posts, and slowly form an opinion over weeks. Instead, they ask a single question to an AI system and receive a shortlist in return, usually two or three companies that feel familiar, credible, and safe enough to justify internally. That shortlist often becomes the entire market in the buyer's mind.
Librarians have been actively collaborating and talking about it almost every day, whether they're creating tutorials and digital learning objects or thinking through the conversations to have with instructors. It can feel like cognitive dissonance to work with AI on a regular basis while also saying we're constantly thinking about its harms and biases.
Recently, an AI analysis concluded that a painting long thought to be a copy of Caravaggio's The Lute Player is actually by the master, while another version of the same subject, previously thought to be authentic, is not. Both conclusions were disputed by the former Metropolitan Museum of Art curator Keith Christiansen. A similar debate erupted in March 2025 when AI analysis suggested that portions of The Bath of Diana, also long believed to be a copy, could have been painted by Peter Paul Rubens.
"A lot of these AI businesses are looking for readily available, structured databases of content," Robert Hahn, head of business affairs and licensing for The Guardian, told . "The Internet Archive's API would have been an obvious place to plug their own machines into and suck out the IP."
Consistent with the general trend of incorporating artificial intelligence into nearly every field, researchers and politicians are increasingly using AI models trained on scientific data to infer answers to scientific questions. But can AI ultimately replace scientists? The Trump administration signed an executive order on Nov. 24, 2025, that announced the Genesis Mission, an initiative to build and train a series of AI agents on federal scientific datasets "to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs."
In 2023, Australia abandoned its expensive and bureaucratic scholar-led research-assessment programme. New Zealand followed suit soon after. The hope, according to a transition plan unveiled by the Australian federal government's Department of Education and the research sector, was to find a "more modern, data-driven approach". In the United Kingdom, where financial pressures on universities are especially acute, there are similar calls to reform the Research Excellence Framework (REF), the country's performance-based research-funding system.
What if you could build your own AI research agent, no coding required, and customize it to tackle tasks in ways existing systems can't? Matt Vid Pro AI breaks down how this ambitious yet accessible project can empower anyone, from students to seasoned professionals, to create a personalized AI capable of conducting deep research, synthesizing data, and delivering actionable insights.
The org revealed the new partnerships in a post celebrating its 25th birthday, which points out that it is among the world's ten most-visited websites and the only one run by a nonprofit. The post notes that 250,000 editors work on at least one Wikipedia article each month and that, between them, editors make 324 changes a minute across the site's 65 million-plus articles; 1.5 billion unique devices reach Wikipedia each month.
As part of Wikipedia's 25th anniversary, parent organization Wikimedia announced a slew of partnerships with AI-focused companies including Amazon, Meta, Perplexity and Microsoft. The deals are meant to alleviate some of the cost associated with AI chatbots accessing Wikipedia content in enormous volumes by giving the tech companies streamlined access. The timeline on these deals, though, is a little squirrely.
New data reinforces a structural shift in how AI systems access publisher content: AI models are scraping it regardless of bot-blocking measures or content licensing deals meant to control usage, improve attribution or drive referral traffic. Research from analytics firms and bot-tracking companies shows AI tools increasingly crawling publisher sites to feed AI-generated summaries and model training, while sending back only limited referral traffic.
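The "bot-blocking measures" in question are typically robots.txt directives aimed at known AI crawler user agents. As a rough illustration only (the site URL and the specific user-agent strings below are assumptions, not drawn from the research), a publisher can check what its own robots.txt would tell such crawlers using Python's standard library:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical publisher site and a few commonly cited AI crawler names (assumed).
ROBOTS_URL = "https://example-publisher.com/robots.txt"
AI_USER_AGENTS = ["GPTBot", "CCBot", "anthropic-ai"]

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetch and parse the live robots.txt over HTTP

article = "https://example-publisher.com/articles/some-story"
for agent in AI_USER_AGENTS:
    verdict = "allowed" if parser.can_fetch(agent, article) else "blocked"
    print(f"{agent}: {verdict}")
```

The catch is that robots.txt is purely advisory: a crawler that chooses to ignore it can still fetch the pages, which is consistent with the finding that scraping continues regardless of blocking.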
Drawing on more than 22,000 LLM prompts designed to reflect the kind of questions people would ask artificial intelligence (AI)-powered chatbots, such as "How do I apply for universal credit?", the research raises concerns about whether chatbots can be trusted to give accurate information about government services. Its publication follows the UK government's announcement, at the end of January 2026, of partnerships with Meta and Anthropic to develop AI-powered assistants for navigating public services.
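For a sense of what running prompts at that scale looks like mechanically, here is a minimal sketch of an evaluation harness; the ask_chatbot function, the prompts.txt input file and its one-question-per-line format are hypothetical stand-ins, not the study's actual tooling, and judging each answer's accuracy against official guidance would still be a separate step:

```python
import csv

def ask_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a call to an AI assistant; replace with a real client.

    Returns a placeholder so the harness runs end to end."""
    return f"[model answer to: {prompt}]"

def run_benchmark(prompts_path: str, out_path: str) -> None:
    # Assumed input format: one public-services question per line,
    # e.g. "How do I apply for universal credit?"
    with open(prompts_path, encoding="utf-8") as f:
        prompts = [line.strip() for line in f if line.strip()]

    # Record every prompt/response pair for later accuracy review.
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "response"])
        for prompt in prompts:
            writer.writerow([prompt, ask_chatbot(prompt)])

if __name__ == "__main__":
    run_benchmark("prompts.txt", "responses.csv")
```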