Jensen Huang’s remarks were more than a CEO’s political view. They amounted to a signal of how governments and big tech may align in the AI era. Speaking recently at the Milken Global Conference in Los Angeles, he said he “fully trusts the government to use technology properly.” He also said he would not “stand in the way” of a country using technology to protect families.
Those comments land in the middle of a debate roiling the U.S. tech sector: How far should AI companies go in meeting national security demands, and how much distance should private firms keep from government power?
At the center of the dispute is Anthropic. While working with the U.S. Department of Defense, Anthropic has maintained that its AI models should not be used for “mass surveillance of Americans” or for “fully autonomous weapons,” arguing that ethical boundaries are needed. The U.S. government viewed that stance as a national security risk. The Pentagon ultimately designated Anthropic a “supply chain risk company,” and President Donald Trump and Defense Secretary Pete Hegseth publicly criticized it.
Nvidia took a different path, joining OpenAI, Google, Microsoft and Amazon Web Services in a classified-work agreement with the Defense Department and effectively accepting the U.S. government’s use of its technology for “lawful purposes.”
The broader point is that the United States no longer treats AI as just an industrial technology. Like semiconductors, AI has become a strategic national asset. If nuclear technology defined Cold War power competition, today’s AI race is reshaping military, economic and diplomatic order. That is why Washington is pressing AI companies for stronger cooperation.
The risks begin there. National security matters, but an overly close relationship between government and big tech can create new dangers. AI is not only a weapon. It can enable surveillance, intelligence analysis, behavior prediction and even public opinion manipulation, placing it directly in some of democracy’s most sensitive domains.
Anthropic’s concern reflects that reality. Fully autonomous weapons and large-scale surveillance systems can slip beyond human control. Ethical debate over AI-based weapons is spreading quickly worldwide, and international standards remain unsettled on how far to permit systems that act without a person’s final judgment.
At the same time, proponents argue the United States cannot tie its own hands as China and Russia accelerate AI militarization. Huang’s remarks align with a pragmatic view: private companies may not be able to refuse national security demands outright.
History offers parallels. The internet began as a U.S. Defense Department project, and GPS was also military technology. Much of today’s civilian infrastructure originated in security needs. AI is moving along the same track.
But AI differs from those earlier technologies. While the internet and GPS mainly provided connectivity and location, AI can intervene in human decision-making itself, predicting choices, shaping behavior and combining information to produce new conclusions. That makes it more politically and socially consequential than many past military technologies.
What is needed, then, is not a simple yes-or-no argument but clear standards and principles.
First, the principle of human control should be preserved. Modern battlefields are already moving at extreme speed, with hypersonic missiles, drone swarms and real-time cyber conflict making it difficult for humans to approve every tactical decision. Still, unlimited acceptance of weapons that fully exclude humans carries serious risk. At minimum, the world needs shared understanding that human responsibility must remain in areas such as mass destruction and strategic weapons.
Second, democratic oversight matters. National security depends on secrecy, and military operations and intelligence work cannot be fully public. But placing government-big tech cooperation behind total secrecy also clashes with democratic principles. Limited checks — through legislatures, courts and independent oversight — are needed. Neither full disclosure nor total secrecy is a workable answer.
Third, discussion of international norms should begin. A comprehensive agreement will be difficult amid U.S.-China-Russia competition. Yet during the Cold War, the United States and the Soviet Union pursued the Nuclear Non-Proliferation Treaty and strategic arms limitation even as they competed. Competition and rules can advance together as risks grow. AI should be no different, starting with narrow areas such as banning fully autonomous nuclear strike systems.
South Korea is not insulated from these pressures. AI and semiconductors have already become national security industries, and companies such as Samsung Electronics and SK hynix sit at the core of the global AI supply chain. South Korean firms may increasingly face pressure to choose between the United States and China. The issue goes beyond exports, tying together diplomacy, security and technological sovereignty.
Huang’s remarks, then, are not simply a personal statement. They underscore that in the AI era, technology companies may find it harder to remain purely private actors. They are also a warning that governments cannot justify every use of technology under the banner of “security.”
The central challenge of the AI era is where to draw the line between national security and technology ethics, and between military efficiency and democratic control. The world is now entering a more intense phase of competition and debate over that new boundary.
* This article has been translated by AI.
Copyright ⓒ Aju Press All rights reserved.
