The findings confirm research that I conducted more than 20 years ago. Under the guise of the Comedy Research Project, Timandra Harkness and I performed a randomised clinical trial to assess whether science can be funny.
Owens described how Infowars aimed to create a cinematic experience, stating, 'We would go out there, we would shoot videos like we were in the weeds, we were showing what was really going on. But it was nonsense. It was lies.'
The savings disappear the moment you hit real-world complexity: disparate data sources, messy inputs, ambiguous situations without clear rule sets, or any domain where the rules aren't already obvious. And someone still has to write all those rules.
Organizations are drowning in dashboards, KPIs, performance metrics, behavioral traces, biometric indicators, predictive scores, engagement rates, and AI-generated forecasts. We have more data than we know what to do with. We pretend that the mere presence of data guarantees clarity. It does not. That's data hubris—the arrogant belief that because something can be measured, it can be mastered.
Large language models (LLMs) base their predictions on training data and cannot respond effectively to queries about other data. The AI industry has dealt with that limitation through a process called retrieval-augmented generation (RAG), which gives LLMs access to external datasets. Google's AI Overviews in Search, for example, use RAG to provide the underlying Gemini model with current, though not necessarily accurate, web data.
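The retrieval step described above can be sketched in a few lines. This is a toy illustration, not Google's implementation: the retriever ranks documents by simple keyword overlap (real systems use vector embeddings), and the prompt-building function stands in for the final LLM call.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query, return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model can answer from external data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical document store for illustration.
docs = [
    "The Gemini model powers Google's AI Overviews.",
    "Retrieval-augmented generation supplies external data to an LLM.",
    "Bananas are a good source of potassium.",
]
print(build_prompt("What is retrieval-augmented generation?", docs))
```

The key point the sketch makes concrete: the model itself is unchanged; RAG simply stuffs retrieved text into the prompt, which is why the quality of the answer depends on the quality (and accuracy) of whatever the retriever finds.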
Most days, an email lands in my inbox with the promise to amplify my growth: my newsletter subscribers, the reach of my podcasts, the number of client leads, etc. I've gotten used to random people pitching me on their services, and some of the messages expertly prey on my insecurities as a business owner ("you're leaving so much on the table," and the like). I never answer any of them, but I sometimes wonder which ones might actually be legit.