From TNW | Artificial Intelligence
Why probability, not averages, is reshaping AI decision-making
ChanceOmeters measure uncertainty directly, improving decision-making by providing odds rather than relying solely on averages.
We're investing a lot in AI - we're doing a lot, but we're stopping at individual productivity. We're not taking the next step. You can't just screw AI on everything - it only makes you faster. It means you need to think about, 'how are our teams collaborating? How are people collaborating?' You probably need to change the way you work.
For more than two millennia, mathematicians have produced a growing heap of pi equations in their ongoing search for methods to calculate pi faster and faster. The pile of equations has now grown into the thousands, and algorithms can now generate an infinitude more. Each discovery has arrived alone, as a fragment, with no obvious connection to the others. But now, for the first time, centuries of pi formulas have been shown to be part of a unified, previously hidden structure.
Pi is an infinitely long decimal number that never repeats. How do we know? Well, humans have calculated it to 314 trillion decimal places and didn't reach the end. At that point, I'm inclined to accept it. (Strictly speaking, we know because pi was proved irrational by Johann Lambert in 1761, so its decimal expansion can never terminate or repeat.) I mean, NASA uses only the first 15 decimal places for navigating spacecraft, and that's more than enough for earthly applications.
Sometimes the reason pi shows up in randomly generated values is obvious—if there are circles or angles involved, pi is your guy. But sometimes the circle is cleverly hidden, and sometimes the reason pi pops up is a mathematical mystery!
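One classic place pi emerges from randomly generated values is Monte Carlo sampling: the fraction of random points in the unit square that land inside the inscribed quarter-circle approaches pi/4, because that ratio is exactly the quarter-circle's area. A minimal sketch of the idea (not from the article):

```python
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting how many fall inside the quarter-circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The hit ratio approximates the quarter-circle's area, pi/4.
    return 4 * inside / samples

print(estimate_pi(1_000_000))  # roughly 3.14; converges slowly as samples grow
```

The circle here is explicit, but the same trick underlies many of the "mysterious" appearances of pi: some hidden quantity turns out to be the area or angle of a circle in disguise.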
Weather impacts sales. Every retailer knows it. But for most, the likelihood that it might rain, snow, or sleet on the third of March somewhere in the Midwest is rarely used. Vendors such as Weather Trends have offered accurate, long-range forecasts for more than 20 years. But the opportunity is not predicting the weather; it's knowing what to do with the data. AI might change that.
Which Algorithm Is This? If you step back, this maps almost perfectly to the Top K Frequent Elements problem. We usually solve it for integers in a list. Here, the "elements" are audience profiles: age and body-type combinations. First, define what an audience profile looks like: case class Profile(age: Int, height: Int, weight: Int) What we want is a function like this:
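The article's snippet is in Scala; as a language-neutral sketch of the same idea, here is the Top K Frequent Elements pattern applied to audience profiles in Python, with a `Profile` record mirroring the article's case class (the sample data is invented for illustration):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes Profile hashable, so it can be counted
class Profile:
    age: int
    height: int
    weight: int

def top_k_profiles(profiles: list[Profile], k: int) -> list[tuple[Profile, int]]:
    """Return the k most frequent profiles with their counts.

    Counter.most_common(k) uses a heap internally, so this avoids
    fully sorting all distinct profiles when k is small.
    """
    return Counter(profiles).most_common(k)

audience = [
    Profile(25, 180, 75),
    Profile(25, 180, 75),
    Profile(40, 165, 60),
    Profile(25, 180, 75),
    Profile(40, 165, 60),
    Profile(31, 170, 68),
]
print(top_k_profiles(audience, 2))
# The two most common profiles: (25, 180, 75) seen 3 times, (40, 165, 60) seen twice
```

The only requirement for reusing the integer-list solution is that the elements be hashable, which is exactly what the frozen record provides.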
In the weeks leading up to September 1891, mathematician Georg Cantor prepared an ambush. For years he had sparred - philosophically, mathematically and emotionally - with his formidable rival Leopold Kronecker, one of Germany's most influential mathematicians. Kronecker thought that mathematics should deal only with whole numbers and proofs built from them and therefore rejected Cantor's study of infinity. "God made the integers," Kronecker once said. "All else is the work of man."
The NFL is no stranger to innovation. Over the years, teams have adopted new strategies, technologies, and data-driven approaches to stay ahead of the competition. One of the most significant advancements in recent years is the rise of sophisticated analytics and modeling. These tools have become essential for teams seeking to improve player performance, game strategy, and overall team development.
A traveler might search for a weekend getaway and still see travel ads weeks later, long after returning home. The data was right. The timing wasn't. AI-driven marketing has the potential to close that gap - but only if it understands context. Personalization built solely on identity or past behavior can reveal who someone is, but not when or why they're ready to act. As AI takes center stage in marketing strategy, context is emerging as the differentiator that turns reactive automation into predictive intelligence.
Most beginner data portfolios look similar. They include a few cleaned datasets, some charts or dashboards, and a notebook with code and commentary. Again, nothing here is wrong. But hiring teams don't review portfolios to check whether you can follow instructions. They review them to see whether you can think like a data analyst. When projects feel generic, reviewers are left guessing:
When discussing their results, they tell us that Facebook's reporting or Google Analytics show the ad campaigns as barely breaking even. Yet they keep investing in this channel. They reason that Facebook can only see a fraction of the sales, so if Facebook is reporting a 1x return on ad spend (ROAS) then it's probably at least 2x in reality.
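That back-of-the-envelope adjustment can be made explicit: if the platform observes only a fraction of the sales it actually drove, the true ROAS is the reported figure divided by that visibility fraction. A hedged sketch (the 50% visibility figure is an assumption for illustration, not a number from the article):

```python
def adjusted_roas(reported_roas: float, visibility: float) -> float:
    """Scale platform-reported ROAS by the share of attributable
    sales the platform can actually observe (0 < visibility <= 1)."""
    if not 0 < visibility <= 1:
        raise ValueError("visibility must be in (0, 1]")
    return reported_roas / visibility

# If Facebook sees only half the sales it drove, a reported 1x ROAS
# implies roughly 2x in reality, which is the advertisers' reasoning above.
print(adjusted_roas(1.0, 0.5))  # 2.0
```

The hard part, of course, is estimating the visibility fraction itself; the arithmetic is only as good as that assumption.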
Every year, poor communication and siloed data bleed companies of productivity and profit. Research shows U.S. businesses lose up to $1.2 trillion annually to ineffective communication: about $12,506 per employee per year. This stems from breakdowns that waste an average of 7.47 hours per employee each week on miscommunications. The damage isn't only interpersonal; it's structural. Disconnected and fragmented data systems mean that employees spend around 12 hours per week just searching for information trapped in those silos.
The title "data scientist" is quietly disappearing from job postings, internal org charts, and LinkedIn headlines. In its place, roles like "AI engineer," "applied AI engineer," and "machine learning engineer" are becoming the norm. This Data Scientist vs AI Engineer shift raises an important question for practitioners and leaders alike: what actually changes when a data scientist becomes an AI engineer, and what stays the same? More importantly, what skills matter if you want to make this transition intentionally rather than by accident?
What happens under the hood? How is the search engine able to take that simple query and look through the billions, even trillions, of images available online? How does it find that one photo, or similar ones, out of all that? Usually, an embedding model is doing this work under the hood.
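The core mechanic can be sketched without any real model: embed every image as a vector, embed the query the same way, and return the stored vectors closest to it by cosine similarity. A toy version with hand-made 3-dimensional "embeddings" (real systems use learned vectors with hundreds of dimensions and approximate nearest-neighbor indexes; the filenames and vectors below are invented):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Rank stored image embeddings by similarity to the query embedding."""
    ranked = sorted(index, key=lambda name: cosine_similarity(query, index[name]),
                    reverse=True)
    return ranked[:k]

# Toy index: in practice these vectors come from an embedding model.
index = {
    "beach_sunset.jpg": [0.9, 0.1, 0.0],
    "city_night.jpg":   [0.1, 0.9, 0.2],
    "beach_day.jpg":    [0.8, 0.2, 0.1],
}
print(search([0.85, 0.15, 0.05], index))  # the two beach photos rank first
```

Because similar content maps to nearby vectors, "find this photo or ones like it" reduces to a nearest-neighbor lookup in embedding space.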
OpenAI's GPT-5.2 Pro does better at solving sophisticated math problems than older versions of the company's top large language model, according to a new study by Epoch AI, a non-profit research institute.
SHAP for feature attribution: SHAP quantifies each feature's contribution to a model prediction. LIME for local interpretability: LIME builds simple local models around a prediction to show how small changes influence outcomes. It answers questions like: "Would correcting age change the anomaly score?" "Would adjusting the ZIP code affect classification?" Explainability makes AI-based data remediation acceptable in regulated industries.
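The LIME-style questions above can be illustrated without the library: perturb one input field, re-score the record, and see how the prediction moves. A toy anomaly scorer and perturbation check (the scoring rule and field names are invented for illustration; real workflows would use the `lime` or `shap` packages against an actual model):

```python
def anomaly_score(record: dict) -> float:
    """Toy scorer: flags records with an implausible age or a malformed
    ZIP code (rules invented purely for this sketch)."""
    score = 0.0
    if record["age"] < 0 or record["age"] > 120:
        score += 0.8
    if len(str(record["zip"])) != 5:
        score += 0.2
    return score

def effect_of_fix(record: dict, field: str, new_value) -> float:
    """How much does correcting one field change the anomaly score?
    Perturbing inputs locally is the spirit of LIME's explanations."""
    fixed = {**record, field: new_value}
    return anomaly_score(fixed) - anomaly_score(record)

record = {"age": 230, "zip": 10001}
print(effect_of_fix(record, "age", 23))  # -0.8: fixing age removes the age penalty
```

A negative effect means the proposed correction lowers the anomaly score, which is exactly the kind of evidence a reviewer in a regulated setting can audit.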