Context matters... A lot
Data science · From Medium · 1 day ago
Large language models excel at many tasks but struggle without sufficient context, which can lead to misleading answers despite their capabilities.
Instructions I created. Instructions I am continuing to hone - instructions that required me to study my own old essays, identifying what I do when I write. The sentence rhythms. The way I move between timescales. The zooming in and out from concept to detail. The instructions tell Claude how I would like ideas composed. I pull together concepts and experiences from my lived expertise to formulate a point of view - in this case, on this new AI technology.
LLMs have made AI assistants a standard feature across SaaS products. AI assistants let users instantly retrieve information and interact with a system through text-based prompts. Mathias Biilmann, in his article "Introducing AX: Why Agent Experience Matters," discusses two distinct approaches to building AI assistants. The Closed Approach involves a conversational assistant embedded directly within a single SaaS product. Examples include Zoom's AI Companion, Salesforce CRM's Einstein, and Microsoft's Copilot. The Open Approach involves external conversational assistants, such as Claude, ChatGPT, and Gemini, that sit outside any single product.
The normative form for interacting with what we think of as "AI" is something like this:

- there's a chat
- you type a question
- you wait for a few seconds
- you start seeing an answer
- you start reading it
- you read or scan some more, tens of seconds longer, while the rest of the response appears
- you maybe study the response in more detail
- you respond
- the loop continues
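The steps above can be sketched as a loop. This is a minimal, hypothetical sketch: `respond()` is a stub standing in for a real LLM API call, and the message-dictionary shape is an assumption, not any particular vendor's API.

```python
def respond(history):
    """Stub standing in for an LLM call (hypothetical); echoes the last question."""
    return f"Answer to: {history[-1]['content']}"

def chat_loop(questions):
    """Run the type-question / read-answer loop over a scripted list of questions."""
    history = []
    for question in questions:
        # you type a question
        history.append({"role": "user", "content": question})
        # you wait; the answer appears
        answer = respond(history)
        # you read it, then the loop continues with your next question
        history.append({"role": "assistant", "content": answer})
    return history

transcript = chat_loop(["What is AX?", "Why does context matter?"])
```

Each turn appends two entries to the transcript, which is exactly why context accumulates (and why it matters): every new answer is generated against the full history so far.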