Galen Buckwalter, a 69-year-old research psychologist and quadriplegic, participated in a brain implant study to contribute to science that aids those with paralysis. The six chips in his brain decode movement intention, allowing him to operate a computer and feel sensations in his fingers again.
Dr. Conor Boland explained that red-light timing can erase small speed advantages, allowing a slower car to catch up again and again. He noted, 'You pass a car, and then a few minutes later, it ends up beside you again.' This phenomenon is partly psychological, as we remember surprising moments when the same car shows up again, but it is also built into how traffic works.
We asked seven frontier AI models to do a simple task. Instead, they defied their instructions and spontaneously deceived, disabled shutdown, feigned alignment, and exfiltrated weights - to protect their peers. We call this phenomenon 'peer-preservation.'
The majority of AI products remain tethered to a single, monolithic UI pattern: the chat box. While conversational interfaces are effective for exploration and managing ambiguity, they frequently become suboptimal when applied to structured professional workflows. To move beyond "bolted-on" chat, product teams must shift from asking where AI can be added to identifying the specific user intent and the interface best suited to deliver it.
The team, led by Jülich neurophysics professor Markus Diesmann, will leverage the Joint Undertaking Pioneer for Innovative and Transformative Exascale Research (JUPITER) supercomputer for their simulation. JUPITER is currently the fourth most powerful supercomputer in the world according to the TOP500 list, and features thousands of graphics processing units. The team demonstrated last month that a "spiking neural network" could be scaled up and run on JUPITER, effectively matching the cerebral cortex's 20 billion neurons and 100 trillion connections.
Take the surprise some have expressed in recent years upon finding out that the expression to "picture" something in one's head isn't just a figure of speech. You mean that people "picturing an apple," say, haven't been just thinking about an apple, but actually seeing one in their heads? The inability to do that has a name: aphantasia, from the Greek word phantasia, "image," and the prefix a-, "without."
Autonomous agents take the first part of their names very seriously and don't necessarily do what their humans tell them to do - or not to do. But the situation is more complicated than that. Generative AI (genAI) and agentic systems operate quite differently from other systems - including older AI systems - and from humans. That means that how tech users and decision-makers phrase instructions, and where those instructions are placed, can make a major difference in outcomes.
By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models. While the models do tend to agree with humans on extremes like 'impossible,' they diverge sharply on hedge words like 'maybe.' For example, a model might use the word 'likely' to represent an 80% probability, while a human reader assumes it means closer to 65%.
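The comparison described above can be sketched in a few lines of code. This is an illustrative example only: the 80% vs. 65% readings of "likely" come from the excerpt, while the other numbers are made-up placeholders, not the study's data.

```python
# Illustrative sketch: map hedge words to the percentage each side assumes,
# then measure the gap in percentage points.
# 'likely' (model 80% vs human 65%) is from the article; the 'impossible'
# and 'maybe' values are hypothetical placeholders for illustration.

model_reading = {"impossible": 2, "maybe": 60, "likely": 80}
human_reading = {"impossible": 2, "maybe": 45, "likely": 65}

for word in model_reading:
    gap = abs(model_reading[word] - human_reading[word])
    print(f"{word:10s} model={model_reading[word]:3d}%  "
          f"human={human_reading[word]:3d}%  gap={gap}pp")
```

Run on these placeholder values, the gap is near zero for an extreme like "impossible" but large for hedge words, mirroring the pattern the researchers report.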
One scientist at MIT, Cyrus Clarke, is working to do just that. Alongside a team of fellow researchers, Clarke has developed a physical machine called the Anemoia Device, which uses a generative AI model to analyze an archival photograph, describe it in a short sentence, and, following the user's own inputs, convert that description into a unique fragrance. The word "anemoia" was coined by author John Koenig and included in his 2021 book, The Dictionary of Obscure Sorrows.
Since AlexNet (ref. 5), deep learning has replaced heuristic hand-crafted features by unifying feature learning with deep neural networks. Later, Transformers (ref. 6) and GPT-3 (ref. 1) further advanced sequence learning at scale, unifying structured tasks such as natural language processing. However, multimodal learning, spanning modalities such as images, video and text, has remained fragmented, relying on separate diffusion-based generation or compositional vision-language pipelines with many hand-crafted designs.
When a scientist feeds a data set into a bot and says "give me hypotheses to test", they are asking the bot to be the creator, not a creative partner. Humans tend to defer to ideas produced by bots, assuming that the bot's knowledge exceeds their own. And when they defer, they end up exploring fewer avenues toward possible solutions to their problem.
Each of these achievements would have been a remarkable breakthrough on its own. Solving them all with a single technique is like discovering a master key that unlocks every door at once. Why now? Three pieces converged: algorithms, computing power, and massive amounts of data. We can even put faces to them, because behind each element is a person who took a gamble.