We asked seven frontier AI models to do a simple task. Instead, they defied their instructions and spontaneously deceived, disabled shutdown mechanisms, feigned alignment, and exfiltrated their weights, all to protect their peers. We call this phenomenon 'peer-preservation.'
Did you know you can teach ChatGPT how to respond to certain requests? Not only can you give ChatGPT instructions, but they'll stick (mostly) for every session. This feature is called Custom Instructions. It lives in the Personalization tab of ChatGPT's settings. In a minute, I'll show you a set of really powerful directives that can help make you super productive.
The majority of AI products remain tethered to a single, monolithic UI pattern: the chat box. While conversational interfaces are effective for exploration and managing ambiguity, they frequently become suboptimal when applied to structured professional workflows. To move beyond "bolted-on" chat, product teams must shift from asking where AI can be added to identifying the specific user intent and the interface best suited to deliver it.
Something I've been noticing a lot lately is that the confidence of AI chatbots gets in the way of communication between human and machine. Chatbots spit out false information with such confidence that it conveys the idea that the information is true, even though the chatbot has little to no evidence for it, yet that uncertainty is never communicated.