On May 13, major international news outlets focused on AI security, highlighting OpenAI's new platform, the continuing fallout from Claude Mythos, and efforts to thwart weaponized AI attacks.
OpenAI Launches Cybersecurity Platform 'Daybreak'
OpenAI has officially entered the cybersecurity market with the launch of its enterprise platform, 'Daybreak.' This platform combines the GPT-5.5 model with OpenAI's proprietary security engine, 'Codex Security,' designed to support everything from software vulnerability detection to patch validation and automation. Unlike traditional security vendor solutions, OpenAI positions Daybreak as a 'control layer for application security infrastructure.'
Industry experts view Daybreak as a direct competitor to Anthropic's 'Project Glasswing' and 'Claude Mythos.' Market research firm Future Group assesses that OpenAI aims to establish a governance role above the app security agent layer.
Ongoing Fallout from Claude Mythos; EU and U.S. Involvement
The fallout surrounding Anthropic's cybersecurity-focused model, Claude Mythos, continues to reverberate. While OpenAI has agreed to provide the EU with cyber access to GPT-5.5, Anthropic has yet to finalize its European rollout of Mythos; a month after the model's launch, the European Commission still has not secured access rights.
In the U.S., the situation is urgent. White House Chief of Staff Susie Wiles, Treasury Secretary Scott Bessent, and National Cyber Director Sean Cairncross are directly involved in responding to Mythos, with the White House officially opposing Anthropic's plans to expand Mythos access. Consequently, major AI firms such as Google and Microsoft are entering governance collaborations, including pre-launch model review agreements with the U.S. Department of Commerce's Center for AI Standards and Innovation (CAISI).
Google Preemptively Blocks AI-Driven Zero-Day Attack Attempts
Google's Threat Intelligence Group (GTIG) announced that it has preemptively blocked a 'large-scale exploit operation' in which hackers used AI models to find and exploit zero-day vulnerabilities. Google confirmed that its Gemini model was not used in the attacks. The incident marks official recognition of AI-weaponized cyber threats, signaling that competition in AI security has escalated beyond corporate rivalry into serious threat response.
* This article has been translated by AI.
Copyright ⓒ Aju Press All rights reserved.
