The Korea Artificial Intelligence Safety Institute (AISI) is emerging as a global AI safety and security partner of the American AI company Anthropic. The Ministry of Science and ICT has proposed establishing a security cooperation framework centered on AISI, a move that could align South Korea's AI safety policies and cybersecurity responses with global AI safety governance.
According to the Ministry of Science and ICT on May 11, the government has proposed to Anthropic a plan to build an AI safety and cybersecurity cooperation system centered on AISI.
Launched in November 2024 as an organization under the Electronics and Telecommunications Research Institute (ETRI), AISI is the world's sixth dedicated AI safety agency, established after the AI Seoul Summit held in May of the same year. It is responsible for AI risk assessment, policy research, and building global cooperation systems, and is currently working toward gradual independence from ETRI.
The proposal is drawing attention because it aligns with Anthropic's global security partnership initiative, Project Glasswing, an industry-wide collaboration that uses AI to detect and patch software security vulnerabilities worldwide.
Led by Anthropic, the initiative reportedly includes major global tech companies such as Amazon Web Services (AWS), Google, Microsoft, Apple, and NVIDIA. The UK AI Safety Institute is the only foreign public institution involved.
Additionally, companies in security, infrastructure, and finance, including Cisco, CrowdStrike, Broadcom, Palo Alto Networks, Cloudflare, and JPMorgan Chase, are also listed as partners.
At the core of the project is Anthropic's proprietary AI model, Claude Mythos, which the company says outperforms all but top security experts at vulnerability detection.
Mythos has reportedly identified a vulnerability in OpenBSD that had gone undetected for 27 years and another in the open-source video processing software FFmpeg that had persisted for 16 years. Notably, the FFmpeg flaw had survived more than 5 million automated security tests.
Experts believe that joining Project Glasswing would deepen South Korea's integration into the global AI security cooperation framework and accelerate its response to vulnerabilities.
Eom Heung-yeol, a professor in the Department of Information Security at Soonchunhyang University, stated, "The key to AI-based security is ultimately a race against time in detecting and patching vulnerabilities. By participating in Glasswing, we can receive advance information on vulnerabilities and patches from global tech giants, significantly shortening our domestic response time."
He added, "Given the likelihood that North Korea and others will develop AI-based security and attack models, the nation that discovers vulnerabilities first and establishes a defense system will gain an advantage. It is crucial to reduce the current vulnerability sharing and patch response time from months to a matter of days or weeks."
* This article has been translated by AI.
Copyright ⓒ Aju Press All rights reserved.
