Anthropic's political activities have ramped up as the company continues to be enmeshed in a nasty legal battle with the Defense Department. The dispute erupted earlier this year over the government's use of Anthropic's AI models and what guidelines (if any) should exist for that usage.
The savings disappear the moment you hit real-world complexity: disparate data sources, messy inputs, ambiguous situations without clear rule sets, or really any domain where the rules aren't already obvious. And someone still has to write all those rules.
Recently, an open-source project called OpenClaw surfaced on a maker community platform. Built on affordable edge-computing hardware, the project demonstrated a local AI agent controlling a physical robotic arm. It wasn't just predicting text; it was moving motors, reading sensors, and interacting with its physical environment in real time. From a psychological and sociological perspective, this transition from abstract AI to embodied local AI forces us to re-evaluate trust, privacy, and the sanctity of our personal space.
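What separates an embodied agent from a text predictor is the closed sense-decide-act loop against real hardware. The sketch below is purely illustrative, not code from the OpenClaw project: all class and function names are hypothetical, and a mock arm driver stands in for real motors and sensors.

```python
# Hypothetical sketch of the control loop an embodied local agent runs:
# read sensors, decide on an action, actuate, repeat. Names are illustrative.
from dataclasses import dataclass

@dataclass
class SensorReading:
    joint_angles: list[float]   # degrees, one per joint
    gripper_force: float        # newtons

class MockArm:
    """Stand-in for a real robotic-arm driver (assumed interface)."""
    def __init__(self) -> None:
        self.angles = [0.0, 0.0, 0.0]

    def read_sensors(self) -> SensorReading:
        return SensorReading(joint_angles=list(self.angles), gripper_force=0.0)

    def move_joint(self, joint: int, delta: float) -> None:
        self.angles[joint] += delta

def policy(reading: SensorReading, target: list[float]):
    """Toy decision step: nudge the joint farthest from its target pose."""
    errors = [t - a for a, t in zip(reading.joint_angles, target)]
    worst = max(range(len(errors)), key=lambda i: abs(errors[i]))
    if abs(errors[worst]) < 0.5:
        return None                      # close enough: stop acting
    return worst, max(-5.0, min(5.0, errors[worst]))  # clamp step size

arm = MockArm()
target = [30.0, -10.0, 0.0]
for _ in range(100):                     # bounded control loop
    action = policy(arm.read_sensors(), target)
    if action is None:
        break
    joint, delta = action
    arm.move_joint(joint, delta)
```

The loop is bounded and the step size clamped; on real hardware those safety limits would live in the driver, not the policy.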
This process, becoming aware of something not working and then changing what you're doing, is the essence of metacognition, or thinking about thinking. It's your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. Metacognition is fundamental to human intelligence, yet until recently it has been understudied in artificial intelligence systems. My colleagues Charles Courchaine, Hefei Qiu, Joshua Iacoboni, and I are working to change that.
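The monitor-and-control loop described above can be sketched in code. This is an illustrative toy, not the authors' system: a solver tries one strategy under an effort budget, a monitor notices when it fails, and a controller switches to a better strategy. All names and the choice of task (integer square root) are hypothetical.

```python
# Toy metacognition: monitor an object-level strategy, switch when it fails.

def guess_and_check(target: int, attempts: int):
    """Strategy 1: naive linear search (fails for large targets)."""
    for guess in range(attempts):
        if guess * guess == target:
            return guess
    return None

def binary_search_root(target: int):
    """Strategy 2: binary search for an exact integer square root."""
    lo, hi = 0, target
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid * mid == target:
            return mid
        if mid * mid < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

def solve_with_metacognition(target: int):
    # Object level: attempt the first strategy with a small effort budget.
    result = guess_and_check(target, attempts=100)
    # Meta level: the monitor detects failure within budget, and the
    # controller adjusts by switching strategies.
    if result is None:
        return "switched to binary search", binary_search_root(target)
    return "guess-and-check worked", result

print(solve_with_metacognition(144))       # small target: first strategy suffices
print(solve_with_metacognition(104_976))   # large target: monitor triggers a switch
```

The key structural point is the separation of levels: the strategies know nothing about each other, and only the meta-level loop observes outcomes and redirects effort.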