Productivity
from TNW | Artificial Intelligence
Why probability, not averages, is reshaping AI decision-making
ChanceOmeters measure uncertainty directly, improving decision-making by providing odds rather than relying solely on averages.
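The difference between an average and odds is easy to show with a small simulation. The sketch below is purely illustrative (the distribution and the shortfall threshold are invented for the example): the mean of a noisy outcome looks comfortable, while the probability of a bad day is the number a decision-maker actually needs.

```python
import random

random.seed(0)

# Hypothetical daily outcome: healthy on average, but with a fat left tail.
outcomes = [random.gauss(100, 40) for _ in range(10_000)]

average = sum(outcomes) / len(outcomes)
# The question an odds-based view answers: how often do we fall short?
odds_of_shortfall = sum(o < 50 for o in outcomes) / len(outcomes)

print(f"average outcome: {average:.1f}")
print(f"P(outcome < 50): {odds_of_shortfall:.2%}")
```

The average alone hides that roughly one day in ten lands below the shortfall line; reporting the probability surfaces the risk directly.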
Generative AI is now part of the workflow for scholars across many disciplines, but the broader scientific community would benefit from taking stock of how this technology could truly benefit our work and how it might distract. We hope the symposium can provide clarity.
Weather impacts sales. Every retailer knows it. But for most, the likelihood of rain, snow, or sleet on the third of March somewhere in the Midwest rarely informs a decision. Vendors such as Weather Trends have offered accurate, long-range forecasts for more than 20 years. The opportunity, though, is not predicting the weather; it's knowing what to do with the data. AI might change that.
Time pressure, limited information, confusion, fatigue, and mortality salience combine to set the stage for decision-making errors, sometimes with grave consequences. An example is the downing of Iran Air Flight 655 by a missile launched by the USS Vincennes in 1988, resulting in the death of 290 passengers and crew. In a time of heightened tension between the U.S. and Iran, the captain of the Vincennes misidentified the airliner as an incoming hostile aircraft and ordered his crew to shoot it down.
Imagine you're selecting an influencer to work with on your new campaign. You've narrowed it down to two, both in the right area, both creating the right sort of content. One has 24.6 million subscribers, the other 1.4 million. Which do you choose? Now imagine you could find out the first had 8.7 million unique viewers last month, while the second had 9.9 million. Do you want to change your mind?
When discussing their results, they tell us that Facebook's reporting or Google Analytics show the ad campaigns as barely breaking even. Yet they keep investing in this channel. They reason that Facebook can only see a fraction of the sales, so if Facebook is reporting a 1x return on ad spend (ROAS) then it's probably at least 2x in reality.
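The back-of-the-envelope adjustment they describe can be written down explicitly. The function below is a sketch of that reasoning, not a measured model: the visibility fraction (how much of the sales the platform can attribute) is an assumption the advertiser supplies.

```python
def adjusted_roas(reported_roas: float, visibility_fraction: float) -> float:
    """Scale a platform-reported ROAS by the assumed share of
    conversions the platform can actually see and attribute."""
    if not 0 < visibility_fraction <= 1:
        raise ValueError("visibility_fraction must be in (0, 1]")
    return reported_roas / visibility_fraction

# Facebook reports a 1x ROAS but is assumed to see only half the sales:
print(adjusted_roas(1.0, 0.5))  # 2.0
```

If the platform sees half the conversions, a reported 1x return implies roughly 2x in reality, which is exactly the logic behind continuing to invest in the channel.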
A traveler might search for a weekend getaway and still see travel ads weeks later, long after returning home. The data was right. The timing wasn't. AI-driven marketing has the potential to close that gap, but only if it understands context. Personalization built solely on identity or past behavior can reveal who someone is, but not when or why they're ready to act. As AI takes center stage in marketing strategy, context is emerging as the differentiator that turns reactive automation into predictive intelligence.
Every year, poor communication and siloed data bleed companies of productivity and profit. Research shows U.S. businesses lose up to $1.2 trillion annually to ineffective communication: about $12,506 per employee per year. This stems from breakdowns that waste an average of 7.47 hours per employee each week on miscommunications. The damage isn't only interpersonal; it's structural. Disconnected and fragmented data systems mean that employees spend around 12 hours per week just searching for information trapped in those silos.
The title "data scientist" is quietly disappearing from job postings, internal org charts, and LinkedIn headlines. In its place, roles like "AI engineer," "applied AI engineer," and "machine learning engineer" are becoming the norm. This shift from data scientist to AI engineer raises an important question for practitioners and leaders alike: what actually changes when a data scientist becomes an AI engineer, and what stays the same? More importantly, what skills matter if you want to make this transition intentionally rather than by accident?
What happens under the hood? How is the search engine able to take that simple query and search among the billions, even trillions, of images available online? How does it find this one photo, or similar ones, from all of that? Usually, an embedding model is doing this work behind the scenes.
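The core retrieval step can be sketched in a few lines. The embeddings below are random stand-ins for what a real image model (CLIP or similar) would produce, and the index is tiny rather than billions of items; the point is only the mechanics: every image becomes a vector, and search is a nearest-neighbor lookup by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in embeddings: each stored image is a point in a shared vector space.
index = rng.normal(size=(1_000, 128))
index /= np.linalg.norm(index, axis=1, keepdims=True)  # unit-normalize

# The query is a slightly perturbed copy of image 123 -- a near-duplicate photo.
query = index[123] + 0.05 * rng.normal(size=128)
query /= np.linalg.norm(query)

# On unit vectors, cosine similarity is just a dot product;
# the highest-scoring stored image is the search result.
scores = index @ query
best = int(np.argmax(scores))
print(best)  # 123 -- the near-duplicate is retrieved
```

Production systems replace the brute-force dot product with an approximate nearest-neighbor index so the lookup stays fast at billions of vectors, but the geometry is the same.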
SHAP for feature attribution: SHAP quantifies each feature's contribution to a model prediction.

LIME for local interpretability: LIME builds simple local models around a prediction to show how small changes influence outcomes. It answers questions like: "Would correcting age change the anomaly score?" "Would adjusting the ZIP code affect classification?"

Explainability makes AI-based data remediation acceptable in regulated industries.
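The intuition behind both techniques can be shown with a toy perturbation study. The sketch below is not the actual shap or lime libraries; it is a hand-rolled, one-feature-at-a-time probe (the "anomaly scorer" and its weights are invented for the example) that answers the same kind of question: how much does nudging a single feature move the prediction?

```python
import numpy as np

def local_attribution(predict, x, eps=1.0):
    """Toy perturbation probe in the spirit of LIME: nudge one feature
    at a time and record how the model's output moves."""
    base = predict(x)
    effects = {}
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps           # perturb feature i only
        effects[i] = predict(x_pert) - base
    return effects

# Hypothetical anomaly scorer that weights feature 0 (say, age) heavily:
def anomaly_score(x):
    return 3.0 * x[0] + 0.5 * x[1]

effects = local_attribution(anomaly_score, np.array([2.0, 4.0]))
print(effects)
# Feature 0 moves the score 3.0 per unit step; feature 1 only 0.5 --
# so "correcting age" would change the anomaly score far more.
```

Real SHAP values additionally average over coalitions of features so the attributions sum to the prediction, but the local sensitivity idea is the same.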