Psychology
From Psychology Today: People Don't Just Update Beliefs, They Test Them
Understanding psychological change requires recognizing how control and mastery drive people to pursue change actively, despite familiar limitations.
In antiquity, many speculated about "the elements" and how they combined. Around 2,500 years ago, Leucippus and Democritus founded the idea of atoms: perhaps everything, they proposed, was composed of indivisible building blocks. In the late 1700s, hydrogen and oxygen were discovered. Around 1804, John Dalton revived atomism to explain chemical behavior. Then in 1869, Mendeleev developed the periodic table, organizing the elements.
In the weeks leading up to September 1891, mathematician Georg Cantor prepared an ambush. For years he had sparred philosophically, mathematically, and emotionally with his formidable rival Leopold Kronecker, one of Germany's most influential mathematicians. Kronecker held that mathematics should deal only with whole numbers and proofs built from them, and he therefore rejected Cantor's study of infinity. "God made the integers," Kronecker once said. "All else is the work of man."
You know that sinking feeling when you realize you've been using a phrase that makes you sound less intelligent than you actually are? I had one of those moments a few years back during a pitch meeting for my startup. I was presenting to potential investors, and I kept saying "I think" before every point I made. "I think our user acquisition strategy will work."
Studies show that people with high fluid intelligence can process multiple "what if" scenarios concurrently, helping them see ahead, identify concealed dangers, and plan their actions. This mode of thinking demands a lot of working memory because the brain isn't looping idly; it's stress-testing every possibility that comes to mind. This might be why such people often seem lost in thought, even when they're alone.
By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models. While the models do tend to agree with humans on extremes like "impossible," they diverge sharply on hedge words like "maybe." For example, a model might use the word "likely" to represent an 80% probability, while a human reader assumes it means closer to 65%.
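To make the comparison concrete, here is a minimal sketch of how such a word-by-word gap might be tabulated. The numbers, and the small word list, are hypothetical placeholders rather than the study's data:

```python
# Hypothetical mean probabilities (0-1) that humans and a language model
# assign to each hedge word; placeholder values, not the study's findings.
human_means = {"impossible": 0.03, "maybe": 0.42, "likely": 0.65, "certain": 0.97}
model_means = {"impossible": 0.02, "maybe": 0.55, "likely": 0.80, "certain": 0.98}

# Compute the human-model gap per word and report the largest divergences first.
gaps = {w: abs(human_means[w] - model_means[w]) for w in human_means}
for word, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{word:>10}: human={human_means[word]:.2f} "
          f"model={model_means[word]:.2f}  gap={gap:.2f}")
```

On these made-up numbers, "maybe" and "likely" surface as the sharpest divergences, mirroring the pattern the article describes.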
What was I thinking? This is not as easy or straightforward a question as I would have thought. As soon as you try to record and categorise the contents of your consciousness (the sense impressions, feelings, words, images, daydreams, mind-wanderings, ruminations, deliberations, observations, opinions, intuitions and occasional insights) you encounter far more questions than answers, and more than a few surprises.
I am a worrier, and have been for most of my life. At some point, someone dear and smart teased me that I worry about the wrong things. The things that hit me, she noted, were never the things I worried about. For a while that left me feeling like an incompetent worrier, until my research caught up. I realized that the things I worry about often don't end up hurting me precisely because worrying helps me defuse them ahead of time.
Ever since our ancestors first stood upright and squinted at the horizon, we've been wired to notice patterns. A rustle in the grass might have meant a stalking predator. Dark clouds often meant rain. Those who made these connections and guessed that one thing caused another tended to survive. Over time, this ability to link events became one of our most significant evolutionary advantages. It's how we built tools, tamed fire, and eventually invented Wi-Fi.
A drawn circle is at least something physical. You can see it, touch it, erase it. The skeptic can still say, "Circles are grounded in physical reality. Justice is different; it's just an idea in your head." So let's talk about the number two. Point to it. Not two apples, not two fingers, not a numeral on a page; that's just a symbol.
Walking through a field one day, a 17-year-old schoolteacher named George Boole had a vision. His head was full of abstract mathematics, ideas about how to use algebra to solve complex calculus problems. Suddenly, he was struck by a flash of insight: that thought itself might be expressed in algebraic form. Boole was born on November 2, 1815, at four o'clock in the afternoon, in Lincoln, England.
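The core of that insight is easy to state in modern terms. Here is a small illustration, my own rather than the article's, of the algebra Boole went on to build: truth values treated as the numbers 0 and 1, with logical operations expressed arithmetically:

```python
# Boole's algebra of logic, sketched over the values 0 (false) and 1 (true).
def AND(x, y):
    return x * y          # Boole wrote conjunction as multiplication: xy

def OR(x, y):
    return x + y - x * y  # inclusive "or"

def NOT(x):
    return 1 - x          # negation

# Boole's "law of duality" x*x = x holds only for 0 and 1, which is
# why his algebra of thought is restricted to those two values.
for x in (0, 1):
    assert AND(x, x) == x
```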
Take the surprise some have expressed in recent years upon finding out that the expression to "picture" something in one's head isn't just a figure of speech. You mean that people "picturing an apple," say, haven't been just thinking about an apple, but actually seeing one in their heads? The inability to do that has a name: aphantasia, from the Greek word phantasia, "image," and the prefix a-, "without."
Many philosophers strike me as like Polish apparatchiks in 1983: they turn up to work and do what they did yesterday just because they don't know what else to do, not because they seriously believe in the system they are maintaining. I don't think it has been fully appreciated how much of a blow it is to the confidence of the field's youth that its scientific ambitions are increasingly abandoned as untenable.
In January 1986, engineers at NASA contractor Morton Thiokol knew the Space Shuttle Challenger's O-rings had never been tested in freezing temperatures. They recommended delaying the launch. Managers asked: Could the engineers prove it was unsafe? They couldn't; they could only say the system hadn't been designed for those conditions. Under pressure, the engineers' recommendation was withdrawn. The next morning, Challenger broke apart 73 seconds after launch, killing all seven astronauts.
For the past three years, the conversation around artificial intelligence has been dominated by a single, anxious question: What will be left for us to do? As large language models began writing code, drafting legal briefs, and composing poetry, the prevailing assumption was that human cognitive labor was being commoditized. We braced for a world where thinking was outsourced to the cloud, rendering our hard-won mental skills (writing, logic, and structural reasoning) relics of a pre-automated past.
Consistent with the general trend of incorporating artificial intelligence into nearly every field, researchers and politicians are increasingly using AI models trained on scientific data to infer answers to scientific questions. But can AI ultimately replace scientists? On Nov. 24, 2025, the Trump administration signed an executive order announcing the Genesis Mission, an initiative to build and train a series of AI agents on federal scientific datasets "to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs."
I've written about all of these before at greater length, but this is a short post because it's not about the technology or about making a broader point; it's about me. These are rules for engaging with me, personally, on this topic. Others are welcome to adopt these rules if they so wish, but I am not encouraging anyone to do so. Thus, I've made this post as short as I can so that everyone interested in engaging can read the whole thing.
A professional philosopher outside the academy walls can act as a popularizer (the goal here is to make philosophy more accessible to the general public), an applied ethicist (the major task is to offer an analysis of various specific moral issues that arise within a society), or a public intellectual (I limit this role to questions with political connotations). Of course, there are overlaps between these roles, and they certainly do not exhaust all possible forms of public engagement by a professional philosopher.
For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively (deploying claims about the world, explanations, advice, encouragement, apologies, and promises) while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models and have integrated these synthetic interlocutors into their personal and professional lives. An LLM's words shape our beliefs, decisions, and actions, yet no speaker stands behind them.
How do you know anything is real? Some things you can see directly, like your fingers. Other things, like your chin, you need a mirror or a camera to see. Other things can't be seen, but you believe in them because a parent or a teacher told you, or you read it in a book. As a physicist, I use sensitive scientific instruments and complicated math to try to figure out what's real and what's not.
In my previous post, I summarized my response to Christian de Weerd, who denied that a Darwinian approach to consciousness is even possible. I argued that consciousness science has unnecessarily insulated itself from the evolutionary tools that revolutionized our understanding of every other biological phenomenon, and that treating human consciousness as the paradigm case distorts our picture of consciousness as a natural phenomenon spanning millions of species across millions of years.