Artificial intelligence
From Psychology Today, 3 hours ago
I Study How AI Manipulates. It Still Got to Me.
Self-awareness is essential for balanced AI use, because AI can influence your thoughts even when you understand its mechanisms.
The savings disappear the moment you hit real-world complexity: disparate data sources, messy inputs, ambiguous situations without clear rule sets, or any domain where the rules aren't already obvious. And someone still has to write all those rules.
On a clear night I set up my telescope in the yard and let the mount hum along while the camera gathers light from something distant and patient. The workflow is a ritual. Focus by eye until the Airy disk tightens. Shoot test frames and watch the histogram. Capture darks, flats, and bias frames so the quirks of the sensor can be cleaned away later. That discipline is not fussy.
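Those calibration frames exist to be subtracted and divided out later. Here is a minimal sketch of that cleanup step, assuming the raw frames have already been loaded as 2-D NumPy arrays (the function names and the simple median-combine are illustrative, not any particular stacking tool's pipeline):

```python
import numpy as np

def master(frames):
    """Median-combine a stack of calibration frames to suppress random noise."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, darks, flats, biases):
    """Apply standard bias/dark/flat calibration to a single light frame.

    Every argument is a 2-D array of raw sensor counts, e.g. loaded
    from FITS files. Darks are assumed to match the light exposure.
    """
    master_bias = master(biases)                # fixed read-out offset
    master_dark = master(darks) - master_bias   # thermal signal at this exposure
    master_flat = master(flats) - master_bias   # pixel-to-pixel sensitivity map
    master_flat /= master_flat.mean()           # normalize so division preserves flux
    return (light - master_bias - master_dark) / master_flat
```

Subtracting the master dark removes the thermal signal that builds up during a long exposure, and dividing by the normalized flat removes vignetting and dust shadows, which is why the ritual pays off when the frames are stacked.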
Google disabled specific queries, such as "what is the normal range for liver blood tests," after experts contacted by The Guardian flagged the results as dangerous. The report also highlighted a critical error regarding pancreatic cancer: the AI suggested patients avoid high-fat foods, a recommendation that contradicts standard medical guidance to maintain weight and could jeopardize patient health. Despite these findings, Google deactivated the summaries only for the liver-test queries, leaving other potentially harmful answers accessible.
According to the report, when ICE identifies a recruit with prior law enforcement experience, it assigns them to its "Law Enforcement Officer Program." This is a four-week online course meant to streamline training for those already familiar with the legal aspects of the gig. Everyone else gets shipped off to ICE's Federal Law Enforcement Training Center in Georgia for an eight-week in-person academy. This more rigorous training includes courses in immigration law, firearms handling, physical fitness exams, and more.
I often turn to Google's AI Overviews and AI Mode when I run a search on a particular topic. The resulting Gemini-based summaries can cut to the chase by providing the gist of the information I seek. But there's one big downside. AI can be wrong. For that reason, I never rely solely on AI; I always double-check the original sources used to create the summary. And now Google has made that process easier.
Fifty-four seconds. That's how long it took Raphael Wimmer to write up an experiment that he did not actually perform, using a new artificial-intelligence tool called Prism, released by OpenAI last month. "Writing a paper has never been easier. Clogging the scientific publishing pipeline has never been easier," wrote Wimmer, a researcher in human-computer interaction at the University of Regensburg in Germany, on Bluesky. Large language models (LLMs) can suggest hypotheses, write code and draft papers, and AI agents are automating parts of the research process.
Large language models (LLMs) base their predictions on their training data and cannot reliably answer questions about information outside it. The AI industry has dealt with that limitation through a process called retrieval-augmented generation (RAG), which gives LLMs access to external datasets at query time. Google's AI Overviews in Search, for example, use RAG to provide the underlying Gemini model with current, though not necessarily accurate, web data.
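A minimal sketch of that pattern follows, with a toy word-overlap ranking standing in for a real embedding model and vector index, and the final model call left out; the corpus, function names, and prompt format are all illustrative:

```python
from collections import Counter
import math

# Toy external dataset the base model was never trained on.
DOCS = [
    "AI Overviews launched in Search and cite web sources inline.",
    "Retrieval-augmented generation fetches documents before the model answers.",
    "Gemini is the family of models behind Google's AI features.",
]

def vectorize(text):
    """Bag-of-words term counts; real systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    return sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query):
    """Stuff the retrieved documents into the prompt sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

print(build_prompt("How does retrieval-augmented generation work?"))
```

The point of the pattern shows up in the output: the model is handed retrieved text at inference time, so its answer can reflect data it was never trained on, for better or, when the retrieved web pages are wrong, for worse.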
One of the strangest things about large language models is not what they get wrong, but what they assume to be correct. LLMs behave as if every question already has an answer. It's as if reality itself is always a kind of crossword puzzle. The clues may be hard, the grid may be vast and complex, but the solution is presumed to exist. Somewhere, just waiting to be filled in.