Nvidia CEO Jensen Huang’s words were brief but widely noted: “I believe the government will use technology correctly.”
He made the remark at the Milken Global Conference in Los Angeles, and it amounted to more than routine CEO opinion: it underscored how the global order around artificial intelligence is shifting.
Huang called AI company Anthropic “a great company,” but said he disagreed with efforts to limit AI use for national security purposes. He added that “a CEO is not an elected official,” signaling that companies should not stand in the way if a government seeks to use technology to protect the public.
The attention reflects a deeper split inside the U.S. AI industry.
One camp views AI as a core strategic asset for national security. The other warns that militarizing AI could create uncontrollable risks. Huang has publicly aligned himself more closely with the first camp.
For years, Silicon Valley often tried to keep its distance from state power, shaped by a strong libertarian culture and skepticism of regulation and military involvement. A prominent example was the employee backlash at Google over the Pentagon's drone-analysis effort known as Project Maven, with many arguing that AI should not become a tool of war.
That mood has shifted sharply.
One driver has been China. The United States has grown more alarmed as it watches China treat AI not merely as an industrial technology but as a national strategic capability. China is already pursuing a military-civil fusion strategy, effectively erasing the boundary between civilian and military technology. Across drones, facial recognition, surveillance systems, intelligence analysis and cyberwarfare, AI is moving quickly into military structures.
The war in Ukraine has also been a jolt, showing how AI and data can reshape modern battlefields. Satellite-image analysis, drone strikes, real-time information processing and the speed of electronic-warfare responses have changed dramatically. Warfare has become less about raw firepower and more about who can process data and make decisions faster.
The U.S. security establishment is absorbing that lesson. OpenAI, Google, Microsoft, Amazon Web Services and Nvidia have expanded cooperation with the U.S. Defense Department for the same reason.
During the Cold War, U.S. security relied heavily on traditional defense contractors such as Lockheed Martin. Now the structure is changing. Warships and fighter jets alone are no longer enough to sustain dominance. National security is increasingly tied to semiconductors, cloud computing, AI models and data centers — a new “digital security complex” layered on top of the traditional military-industrial base.
Huang’s comments fit within that broader shift.
His stance is closer to realism than idealism: if rivals are arming, the argument goes, the United States cannot tie its own hands. The U.S. government has also begun treating AI as strategic infrastructure. U.S. controls on semiconductor exports to China are not simply a trade issue, the column argues, but part of an AI power competition.
Concerns about militarizing AI, however, have not disappeared — and may be intensifying.
AI differs from nuclear weapons. Nuclear arms are confined to specific facilities, but AI permeates daily life, from search and finance to health care, media, education, transportation and social media. The line between military and civilian AI is also blurring, with the same models used for intelligence analysis and consumer services.
As a result, the debate is not only about weapons. It is increasingly about how to govern entire social systems.
That is also why Anthropic has clashed with the U.S. Defense Department. The company has not rejected military use outright, but has sought limits on mass surveillance of Americans and on fully autonomous weapons. The problem, the column says, is that the U.S. government has grown increasingly uncomfortable with such constraints.
One reason is speed: the pace of the battlefield is beginning to exceed human decision-making.
Modern combat is accelerating. In environments shaped by hypersonic missiles, drone swarms and real-time cyberattacks, relying on human reaction alone may no longer be feasible. When thousands of drones move at once, people cannot press an approval button for each action. Militaries, the column argues, will push for more AI automation.
That creates a major ethical collision.
How far should AI be allowed to go in making judgments and deciding to strike? Keeping strict human control can reduce military efficiency; allowing full automation can cause accountability to collapse.
Who is responsible — the developer, the soldier, the state, or the algorithm?
The world does not yet have an answer.
International debate is shifting from whether militarization can be stopped to how far it should be permitted. The issue is not simply idealism versus realism, the column argues, but a clash between national survival and democratic values.
National security is inherently secretive, while democracy demands oversight. If governments and big tech expand AI military projects in secrecy, public control can weaken. If everything is disclosed, security functions can be undermined.
The AI era is forcing democracies to confront a familiar question: where to draw the line between security and civil liberties.
That dilemma existed in the Cold War, when the United States and the Soviet Union competed to build nuclear weapons while also pursuing arms-control agreements — not out of trust, but because the risks became too great.
AI may follow a similar path, but with key differences. Nuclear weapons were largely monopolized by states; AI is driven by private companies, and models can spread globally over the internet far faster than nuclear technology ever did.
That is one reason setting AI rules may be harder than nuclear arms control.
Still, the column argues, some boundaries are necessary. That is why discussions are emerging about limiting at least the most dangerous areas, such as fully autonomous nuclear launch systems or mass civilian surveillance. In practice, it says, limited norms focused on what to prohibit may come before any broad AI disarmament.
Silicon Valley is changing along with the debate.
AI companies are moving deeper into national security systems as investment needs soar and data centers and semiconductor supply chains become tied to national strategy. They are no longer just startups, but increasingly part of strategic industries.
That is why Huang’s remark matters, the column concludes.
It was not simply a pro-government statement, but a sign of where power is shifting in the AI era — and of technology companies beginning to align again with the state.
South Korea, the column adds, is not immune. AI is linked not only to platforms but also to semiconductors, defense, finance, health care, media, education and information systems. Seoul is likely to treat AI not only as industrial policy but also as a security strategy.
The key, it argues, is direction, not speed.
As competition intensifies, governments may seek stronger control. If security logic overwhelms everything, democratic freedoms can shrink quickly. If ethics and regulation dominate, anxiety about falling behind in technology will grow.
What matters next is not a simple yes-or-no debate, the column says, but building social standards around how far to allow AI, what risks to accept and who bears final responsibility.
Huang described the reality. Anthropic warned of the risks. The world, the column concludes, is beginning to search for a new balance between the two.
