Productivity
From TNW | Artificial Intelligence
Why probability, not averages, is reshaping AI decision-making
ChanceOmeters measure uncertainty directly, improving decision-making by providing odds rather than relying solely on averages.
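The shift from averages to odds can be sketched in a few lines of Python. The two revenue scenarios and the 120-unit threshold below are invented for illustration: both distributions share the same mean, so an average-only summary cannot tell them apart, while the probability of clearing the threshold can.

```python
import random

def summarize(outcomes, threshold):
    """Summarize simulated outcomes as an average and as odds."""
    mean = sum(outcomes) / len(outcomes)
    prob = sum(1 for x in outcomes if x >= threshold) / len(outcomes)
    return mean, prob

random.seed(42)
# Toy revenue scenarios: identical means can hide very different risks.
stable = [random.gauss(100, 5) for _ in range(10_000)]
volatile = [random.gauss(100, 40) for _ in range(10_000)]

for name, outcomes in [("stable", stable), ("volatile", volatile)]:
    mean, prob = summarize(outcomes, threshold=120)
    print(f"{name}: mean={mean:.1f}, P(revenue >= 120)={prob:.2f}")
```

The averages are nearly identical, but the odds of exceeding 120 differ by orders of magnitude, which is exactly the signal a decision-maker needs.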
Super shoes and ultralight gear make a difference, but the interwovenness of technology and running is only set to increase, thanks to new advances in artificial intelligence (AI) that can analyze our running form against the ideal, assess our nutrition from a simple photo to help us plan our diets, and offer guidance on training and recovery.
We're investing a lot in AI - we're doing a lot, but we're stopping at individual productivity. We're not taking the next step. You can't just bolt AI onto everything - that only makes you faster. You need to think about, 'How are our teams collaborating? How are people collaborating?' You probably need to change the way you work.
Hyperscalers and major data platform vendors offer integrated services across storage, analytics, and model infrastructure. MariaDB's differentiation will likely depend on whether the combined platform can deliver operational speed and simplicity that organizations find easier to run than those larger stacks.
Weather impacts sales. Every retailer knows it. But for most, the likelihood that it might rain, snow, or sleet on the third of March somewhere in the Midwest is rarely used. Vendors such as Weather Trends have offered accurate, long-range forecasts for more than 20 years. But the opportunity is not predicting the weather; it's knowing what to do with the data. AI might change that.
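What "knowing what to do with the data" might mean can be sketched as a simple expected-value calculation. The product, demand figures, and probabilities below are hypothetical; the point is that a rain probability from a forecast vendor becomes actionable only once it is wired into an ordering decision.

```python
def expected_demand(p_rain, demand_if_rain, demand_if_dry):
    """Expected units sold, given a rain probability from a forecast."""
    return p_rain * demand_if_rain + (1 - p_rain) * demand_if_dry

# Hypothetical: umbrellas sell 300 units on rainy days, 40 on dry days.
for p_rain in (0.1, 0.5, 0.9):
    units = expected_demand(p_rain, demand_if_rain=300, demand_if_dry=40)
    print(f"P(rain)={p_rain:.0%} -> plan for {units:.0f} units")
```

A real system would add lead times, margins, and the cost of over- versus under-stocking, but the forecast only creates value at this decision step.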
Imagine you're selecting an influencer to work with on your new campaign. You've narrowed it down to two, both in the right area, both creating the right sort of content. One has 24.6 million subscribers, the other 1.4 million. Which do you choose? Now imagine you could find out the first had 8.7 million unique viewers last month, while the second had 9.9 million. Do you want to change your mind?
The NFL is no stranger to innovation. Over the years, teams have adopted new strategies, technologies, and data-driven approaches to stay ahead of the competition. One of the most significant advancements in recent years is the rise of sophisticated analytics and modeling. These tools have become essential for teams seeking to improve player performance, game strategy, and overall team development.
Picture this: a couple walks into a restaurant on a Friday night. They glance around, choose their table, and settle into their seats. Before they've even opened their menus, their server already has a pretty good idea whether they'll leave 10% or 25%. It sounds like mind reading, but after talking with dozens of servers over the years, I've learned it's more like pattern recognition honed by thousands of interactions.
"The job didn't fail. It just... never finished." That was the worst part. No errors. No stack traces. Just a Spark job running forever in production - blocking downstream pipelines, delaying reports, and waking up on-call engineers at 2 AM. This is the story of how I diagnosed a real Spark performance issue in production and fixed it - not by adding more machines, but by understanding Spark properly.
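One classic cause of a Spark job that "never finishes" is key skew: one hot key lands on a single shuffle partition, and that one straggler task runs while every other task sits idle. The toy simulation below (plain Python, not PySpark; the customer keys and partition count are invented) shows why, and how key salting spreads the hot key. In real Spark you would salt the key column and adjust the join or aggregation accordingly, or enable adaptive skew-join handling.

```python
import random
import zlib
from collections import Counter

N_PARTITIONS = 8

def partition(key: str) -> int:
    """Deterministic hash partitioner, standing in for Spark's shuffle."""
    return zlib.crc32(key.encode()) % N_PARTITIONS

def salted_partition(key: str) -> int:
    """Mix a random salt into the key, as key salting does."""
    salt = random.randrange(N_PARTITIONS)
    return (zlib.crc32(key.encode()) + salt) % N_PARTITIONS

random.seed(0)
# A skewed dataset: 90% of rows share one hot key.
keys = ["hot_customer"] * 9000 + [f"cust_{i}" for i in range(1000)]

plain = Counter(partition(k) for k in keys)
salty = Counter(salted_partition(k) for k in keys)

print("max rows in one partition, plain :", max(plain.values()))
print("max rows in one partition, salted:", max(salty.values()))
```

Unsalted, one partition carries at least 9,000 of the 10,000 rows; salted, the load flattens to roughly 10,000 / 8 per partition, which is why the slowest task (and the whole job) finishes.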
At that point, backpressure and load shedding are the only things that keep a system operating at all. If you have ever been in a Starbucks overwhelmed by mobile orders, you know the feeling. The in-store experience breaks down. You no longer know how many orders are ahead of you. There is no clear line, no reliable wait estimate, and often no real cancellation path unless you escalate and make noise.
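The fix for the Starbucks failure mode is the same in software: bound the queue, reject fast when it is full, and keep queue depth visible so wait estimates stay honest. A minimal sketch (the class, capacity, and 90-seconds-per-order figure are all invented for illustration):

```python
from collections import deque

class BoundedOrderQueue:
    """Toy order queue that sheds load instead of growing without bound."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()
        self.rejected = 0

    def submit(self, order: str) -> bool:
        # Load shedding: reject immediately with a clear signal instead
        # of accepting work the system cannot finish in time.
        if len(self.queue) >= self.capacity:
            self.rejected += 1
            return False
        self.queue.append(order)
        return True

    def wait_estimate(self, secs_per_order: float) -> float:
        """Honest wait estimate: queue depth is visible, not hidden."""
        return len(self.queue) * secs_per_order

q = BoundedOrderQueue(capacity=5)
results = [q.submit(f"latte-{i}") for i in range(8)]
print(results)                       # the last three orders are shed
print("wait:", q.wait_estimate(90), "seconds")
```

A rejected order is a better customer experience than an accepted one that silently takes forty minutes, which is the whole argument for load shedding.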
A traveler might search for a weekend getaway and still see travel ads weeks later, long after returning home. The data was right. The timing wasn't. AI-driven marketing has the potential to close that gap - but only if it understands context. Personalization built solely on identity or past behavior can reveal who someone is, but not when or why they're ready to act. As AI takes center stage in marketing strategy, context is emerging as the differentiator that turns reactive automation into predictive intelligence.
The more attributes you add to your metrics, the more complex and valuable questions you can answer. Every additional attribute provides a new dimension for analysis and troubleshooting. For instance, adding an infrastructure attribute, such as region, can help you determine whether a performance issue is isolated to a specific geographic area or is widespread. Similarly, adding business context, like a store location attribute for an e-commerce platform, allows you to understand whether an issue is specific to a particular set of stores.
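The idea can be sketched with plain dictionaries: each metric sample carries its attributes, and the same data can then be sliced along any dimension. The latency values, regions, and store IDs below are invented for illustration.

```python
from collections import defaultdict

# Each metric sample carries attributes (dimensions) for later slicing.
samples = [
    {"latency_ms": 120, "region": "us-east", "store": "store-12"},
    {"latency_ms": 950, "region": "eu-west", "store": "store-31"},
    {"latency_ms": 130, "region": "us-east", "store": "store-12"},
    {"latency_ms": 990, "region": "eu-west", "store": "store-44"},
]

def avg_by(samples, attribute):
    """Aggregate a metric along any attribute to localize an issue."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[s[attribute]].append(s["latency_ms"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

print(avg_by(samples, "region"))  # eu-west stands out -> regional issue
print(avg_by(samples, "store"))
```

Slicing by region immediately shows the problem is regional, not global; the same samples sliced by store would answer the e-commerce question instead, which is the payoff of recording attributes up front.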
Every year, poor communication and siloed data bleed companies of productivity and profit. Research shows U.S. businesses lose up to $1.2 trillion annually to ineffective communication: about $12,506 per employee per year. This stems from breakdowns that waste an average of 7.47 hours per employee each week on miscommunications. The damage isn't only interpersonal; it's structural. Disconnected and fragmented data systems mean that employees spend around 12 hours per week just searching for information trapped in those silos.
When discussing their results, they tell us that Facebook's reporting or Google Analytics shows the ad campaigns as barely breaking even. Yet they keep investing in this channel. They reason that Facebook can only see a fraction of the sales, so if Facebook is reporting a 1x return on ad spend (ROAS), then it's probably at least 2x in reality.
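The marketers' back-of-the-envelope reasoning is a one-line correction: if the platform attributes only a fraction of conversions, scale the reported ROAS up by the inverse of that fraction. The 50% coverage figure below is the hypothetical from the excerpt, not a measured value.

```python
def adjusted_roas(reported_roas: float, attribution_coverage: float) -> float:
    """If the platform only sees a fraction of conversions, scale up."""
    if not 0 < attribution_coverage <= 1:
        raise ValueError("coverage must be in (0, 1]")
    return reported_roas / attribution_coverage

# Hypothetical: Facebook sees only half the sales it actually drove.
print(adjusted_roas(1.0, 0.5))  # reported 1x -> roughly 2x in reality
```

The hard part in practice is estimating the coverage fraction itself, typically via holdout tests or incrementality studies rather than a guess.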
What happens under the hood? How is the search engine able to take that simple query and search through the billions, even trillions, of images available online? How is it able to find this one, or similar photos, from all of that? Usually, an embedding model is doing this work under the hood.
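The core mechanic can be sketched in a few lines: an embedding model maps each image to a vector, and search becomes ranking stored vectors by similarity to the query vector. The three-dimensional vectors and file names below are toys; real embeddings have hundreds of dimensions and are served from an approximate nearest-neighbor index, not a linear scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": in practice a neural model produces these vectors.
index = {
    "sunset_beach.jpg": [0.9, 0.1, 0.0],
    "city_night.jpg":   [0.1, 0.9, 0.2],
    "beach_day.jpg":    [0.7, 0.3, 0.2],
}

def search(query_vec, index, k=2):
    """Rank stored images by similarity to the query embedding."""
    ranked = sorted(index, key=lambda n: cosine(query_vec, index[n]),
                    reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05], index))
```

The beach-like query vector retrieves the two beach images and leaves the city shot behind, which is exactly the "find this one or similar photos" behavior, scaled down.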
The title "data scientist" is quietly disappearing from job postings, internal org charts, and LinkedIn headlines. In its place, roles like "AI engineer," "applied AI engineer," and "machine learning engineer" are becoming the norm. This Data Scientist vs AI Engineer shift raises an important question for practitioners and leaders alike: what actually changes when a data scientist becomes an AI engineer, and what stays the same? More importantly, what skills matter if you want to make this transition intentionally rather than by accident?
SHAP for feature attribution: SHAP quantifies each feature's contribution to a model prediction. LIME for local interpretability: LIME builds simple local models around a prediction to show how small changes influence outcomes, answering questions like: "Would correcting age change the anomaly score?" "Would adjusting the ZIP code affect classification?" Explainability is what makes AI-based data remediation acceptable in regulated industries.
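The "would correcting age change the anomaly score?" question can be illustrated with a LIME-style probe: perturb one feature of the instance at a time and measure how a black-box score moves. The scoring rule, field names, and candidate fixes below are all invented for illustration, and a real LIME fits a weighted linear surrogate over many random perturbations rather than the one-fix-at-a-time deltas shown here.

```python
def anomaly_score(record):
    """Toy stand-in for a black-box model scoring data-quality anomalies."""
    score = 0.0
    if record["age"] < 0 or record["age"] > 120:
        score += 0.7   # implausible age dominates the score
    if len(record["zip"]) != 5:
        score += 0.3   # malformed ZIP contributes less
    return score

def local_effects(record, corrections):
    """LIME-style probe: apply one candidate fix at a time and report
    how much the score moves relative to the uncorrected record."""
    base = anomaly_score(record)
    return {field: anomaly_score({**record, field: value}) - base
            for field, value in corrections.items()}

record = {"age": -4, "zip": "021"}
print(local_effects(record, {"age": 34, "zip": "02139"}))
```

The output attributes most of the anomaly to the age field, which is the kind of per-prediction explanation that lets a reviewer in a regulated setting accept or reject an automated fix.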
Traffic. Focusing on traffic obscures the purpose of AI answers: to satisfy a need on the spot, not to generate clicks. AI-generated answers do not typically include links to branded websites. Google's AI Overviews, for example, sometimes links product names to organic search listings, but even then, visibility does not equate to traffic. A merchant's products could appear in an AI answer and receive no clicks.
Databricks' Mosaic AI Research team has added a new framework, MemAlign, to MLflow, its managed machine learning and generative AI lifecycle development service. By replacing repeated fine-tuning with a dual-memory system, MemAlign reduces the cost and instability of training LLM judges, offering faster adaptation to new domains and changing business policies. It is designed to help enterprises lower the cost and latency of training LLM-based judges, in turn making AI evaluation scalable and trustworthy enough for production deployments.