For a typical example, observe an average stand-up meeting: the people who talk the most get the attention. In her article, software engineer Priyanka Jain tells the story of two colleagues assigned the same task. One posted updates, asked questions, and collaborated loudly. The other stayed silent and shipped clean code. Both delivered, yet only one was praised as a "great team player."
AI systems continued to advance rapidly over the past year, but the methods used to test and manage their risks did not keep pace, according to the International AI Safety Report 2026. The report, produced with input from more than 100 experts across over 30 countries, found that pre-deployment testing increasingly fails to reflect how AI systems behave once deployed in real-world environments. That gap creates challenges for organizations that have expanded their use of AI across software development, cybersecurity, research, and business operations.
Organizations traditionally approach security risk through a narrow lens, often equating "security" primarily with cybersecurity. While cybersecurity is critically important, it represents only one subset of a much broader security landscape. Cybersecurity focuses on the protection of technologies that collect, store, process, and transmit data. By contrast, security-related risk encompasses all forms of loss arising from the failure to protect organizational assets.
A secure software development life cycle means baking security into planning, design, building, testing, and maintenance, rather than sprinkling it on at the end, Sara Martinez said in her talk Ensuring Software Security at Online TestConf. Testers aren't just bug finders but early defenders, building security and quality in from the first sprint. Culture first, automation second, and continuous testing and monitoring all the way: that's how you make security a habit instead of a fire drill, she argued.
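The "build it in from the first sprint" point is easiest to see in a test. Below is a minimal sketch, assuming a Python codebase tested with pytest; the sanitize_username function and its validation rules are hypothetical, invented here for illustration, not something from Martinez's talk.

```python
# Hypothetical example: a security-focused unit test written in the same
# sprint as the feature it guards. Function name and rules are illustrative.
import re

import pytest

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")


def sanitize_username(raw: str) -> str:
    """Accept only usernames matching a strict allow-list; reject loudly."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw


@pytest.mark.parametrize("payload", [
    "alice; DROP TABLE users;--",   # SQL injection attempt
    "<script>alert(1)</script>",    # stored-XSS attempt
    "../../etc/passwd",             # path traversal attempt
    "",                             # empty input
])
def test_rejects_hostile_input(payload):
    # Hostile inputs must raise, never pass through silently.
    with pytest.raises(ValueError):
        sanitize_username(payload)


def test_accepts_normal_input():
    assert sanitize_username("alice_01") == "alice_01"
```

The point is the timing: the hostile-input cases are written alongside the feature, so injection handling is a requirement from day one rather than a post-release patch.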
This extends to the software development community, where AI coding assistants are now near-ubiquitous as teams face pressure to deliver more output in less time. While the efficiency gains are real, these teams too often fail to build adequate safety controls and practices into their AI deployments. The resulting risks leave their organizations exposed, and when a security gap surfaces, developers struggle to trace where - and how - it was introduced.
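One minimal sketch of what such a safety control might look like, assuming a Python codebase: a gate script that scans source files for a few risky constructs before generated code is merged. The construct list is illustrative only; a real pipeline would use an established scanner such as Bandit rather than a hand-rolled check.

```python
# Hypothetical pre-merge gate: flag a few risky Python constructs so a
# reviewer sees them, with file and line number, before the code ships.
import ast
import sys

RISKY_CALLS = {"eval", "exec"}  # dynamic code execution


def scan(path: str) -> list[str]:
    """Return line-numbered findings for one Python source file."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Bare eval()/exec() calls.
        if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
            findings.append(f"{path}:{node.lineno}: call to {func.id}()")
        # Any .run/.call/.Popen call with shell=True (deliberately broad,
        # to catch subprocess invocations that interpolate shell strings).
        if isinstance(func, ast.Attribute) and func.attr in {"run", "call", "Popen"}:
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    findings.append(f"{path}:{node.lineno}: shell=True subprocess")
    return findings


if __name__ == "__main__":
    hits = [hit for path in sys.argv[1:] for hit in scan(path)]
    print("\n".join(hits) if hits else "no risky constructs found")
    sys.exit(1 if hits else 0)
```

Run against a set of files (python scan.py app/*.py), it exits non-zero on any hit, so a CI job or pre-commit hook can block the change and leave a line-numbered trail of exactly where the risky code entered.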