Naver's sovereign AI plan questioned over use of China's open-source technology

By Kim Yeon-jae Posted : January 8, 2026, 17:31 Updated : January 8, 2026, 17:31
Bae Kyung-hoon, deputy prime minister and science and ICT minister, and Ha Jung-woo, senior presidential secretary for AI future planning, pose with other attendees at a briefing on the independent AI foundation project at the COEX Auditorium in Seoul.
A presentation of homegrown AI foundation models at COEX in Seoul. / Aju Business Daily, Yoo Dae-gil

SEOUL, Jan. 8 (AJP) - Naver’s push to develop a so-called “sovereign AI” model has come under scrutiny after disclosures that its government-backed foundation model incorporated components from Alibaba’s open-source Qwen system, raising questions about South Korea’s technological independence and its ambition to build advanced artificial intelligence from scratch.

Industry sources said on Thursday that Naver Cloud’s HyperCLOVA X Seed 32B Sync model used a vision encoder from Alibaba’s Qwen. Vision encoders convert visual data into numerical representations that allow AI systems to process images and other non-text inputs.
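For illustration only, the snippet below sketches what a vision encoder does in a multimodal system, using the openly available CLIP vision tower from the Hugging Face transformers library as a stand-in; it is not the Qwen component cited in the report, and the image filename is a placeholder.

```python
# Minimal sketch of a vision encoder: turning an image into a grid of
# numerical feature vectors that a language model can attend to.
# Uses the open CLIP vision tower as a stand-in for illustration.
from PIL import Image
import torch
from transformers import CLIPImageProcessor, CLIPVisionModel

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("sample.jpg")                       # placeholder image path
inputs = processor(images=image, return_tensors="pt")  # resize, normalize, batch

with torch.no_grad():
    outputs = encoder(**inputs)

# One embedding per image patch plus a class token; a multimodal LLM
# projects these vectors into its own token space before generating text.
patch_embeddings = outputs.last_hidden_state           # shape: (1, 50, 768)
print(patch_embeddings.shape)
```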

Naver Cloud defended the decision, saying the integration was a strategic choice aimed at ensuring compatibility with the global AI ecosystem and improving overall performance. The revelation, however, has challenged the domestic industry narrative that South Korea’s sovereign AI initiatives are fully homegrown.

Several industry officials said reliance on global open-source models was a pragmatic response to tight deadlines and limited resources under government-led projects.

“Given the compressed timelines and performance requirements, developers had little choice but to rely on proven open-source tools to deliver functioning multimodal AI,” one industry official said, declining to be named.

Chinese open-source models have gained traction globally by offering open-weight architectures that allow developers to modify and optimize systems freely. In contrast, leading U.S. models such as OpenAI’s ChatGPT and Google’s Gemini operate largely as closed systems.

Data from developer platform OpenRouter and venture capital firm Andreessen Horowitz show that usage of Chinese-developed open-source models rose sharply, from 1.2 percent in late 2024 to about 30 percent by August last year. Even U.S.-based companies such as Nvidia and Perplexity, as well as Stanford University, have reportedly used Alibaba’s Qwen for specific applications.

At the same time, the performance gap between Chinese and Western models is narrowing. Stanford University's Institute for Human-Centered AI said in its AI Index 2025 report that the U.S. lead on benchmarks such as Massive Multitask Language Understanding shrank from double digits in late 2023 to between 0.3 and 3.7 percentage points by the end of 2024.

Research firm Epoch AI estimated that leading Chinese models now trail top Western systems by an average of about three months.

“China is strengthening its position in the open-source AI ecosystem through large-scale state support and regulatory easing,” another industry official said. “Firms developing sovereign models may need selective collaboration to remain competitive.”

Performance data has also tempered expectations. In December, a Sogang University research team tested five government-backed AI models on university entrance exam-style questions, including the mathematics section of South Korea's CSAT. While startup Upstage's model scored 58 points, other domestic models recorded results in the 20-point range.

By contrast, leading global systems such as GPT-5.1 and DeepSeek V3.2 scored above 70, highlighting the gap that remains despite state-backed efforts.