US AI Policy Pivots Sharply From 'Safety' To 'Security'
President Donald Trump rescinded former President Joe Biden’s AI Executive Order on day one of his term (disclosure: I served as senior counselor for AI at the Department of Homeland Security during the Biden administration). Vice President JD Vance then opened the Paris AI Action Summit, a convening originally launched to advance the field of AI safety, by firmly stating that he was not there to discuss AI safety and would instead be addressing “AI opportunity.” Vance went on to say that the U.S. would “safeguard American AI” and stop adversaries from attaining AI capabilities that “threaten all of our people.”
Without more context, these sound like meaningless buzzwords — what’s the difference between AI safety and AI security, and what does this shift mean for the consumers and businesses that continue to adopt AI?
Simply put, AI safety focuses on developing AI that behaves ethically and reliably, especially when it’s used in high-stakes contexts like hiring or healthcare. To help prevent AI systems from causing harm, AI safety legislation typically includes risk assessments, testing protocols and requirements for human oversight.
AI security, by contrast, does not fixate on developing ethical and safe AI. Rather, it assumes that America’s adversaries will inevitably use AI in malicious ways and seeks to defend U.S. assets from intentional threats, such as rival nations exploiting AI to target U.S. critical infrastructure. These are not hypothetical risks: U.S. intelligence agencies continue to track growing offensive cyber operations by China, Russia and North Korea. To counter these deliberate attacks, organizations need a strong baseline of cybersecurity practices that also account for the threats AI presents.
Both of these fields are important and interconnected, so why does it seem like one has eclipsed the other in recent months? My guess is that prioritizing AI security aligns more naturally with today’s foreign policy climate, in which the worldviews most in vogue are realist depictions of ruthless competition among nations for geopolitical and economic advantage. A security-first posture promises to protect America from its adversaries while maintaining its global dominance in AI. AI safety, on the other hand, can be a lightning rod for political debates about free speech and unfair bias. Whether a given AI system will cause actual harm is also context dependent: the same system deployed in different environments can produce vastly different outcomes.
In the face of so much uncertainty, combined with political disagreement about what truly constitutes harm to the public, legislators have struggled to justify passing safety legislation that could hamper America’s competitive edge. News that DeepSeek, a Chinese AI company, had achieved competitive performance with U.S. AI models at substantially lower cost only reinforced this reluctance, stoking widespread fear about the steadily diminishing gap between U.S. and Chinese AI capabilities.
What happens now, when the specter of federal safety legislation no longer looms on the horizon? Public comments from OpenAI, Anthropic and others on the Trump administration’s forthcoming “AI Action Plan” provide an interesting picture of how AI priorities have shifted. For one, “safety” hardly appears in the submissions from industry, and where safety issues are mentioned, they are reframed as national security risks that could disadvantage the U.S. in its race to out-compete China. In general, these submissions lay out a series of innovation-friendly policies, from balanced copyright rules for AI training to export controls on semiconductors and other valuable AI components (e.g. model weights).
Beyond trying to meet the spirit of the Trump administration’s initial messaging on AI, these submissions also seem to reveal what companies believe the role of the U.S. government should be when it comes to AI: funding infrastructure critical to further AI development, protecting American IP, and regulating AI only to the extent that it threatens our national security. To me, this is less of a strategy shift on the part of AI companies than it is a communications shift. If anything, these comments from industry seem more mission-aligned than their previous calls for strong and comprehensive data legislation.
Even then, not everyone in the industry supports a no-holds-barred approach to U.S. AI dominance. In their paper “Superintelligence Strategy,” three prominent AI voices, Eric Schmidt, Dan Hendrycks and Alexandr Wang, advise caution when it comes to pursuing a Manhattan Project-style push for developing superintelligent AI. The authors instead propose “Mutual Assured AI Malfunction,” or MAIM, a defensive strategy reminiscent of Cold War-era deterrence that would forcefully counter any state-led effort to achieve an AI monopoly.
If the United States were to pursue this strategy, it would need to disable threatening AI projects, restrict access to advanced AI chips and open-weight models, and strengthen domestic chip manufacturing. Doing so, according to the authors, would enable the U.S. and other countries to peacefully advance AI innovation while lowering the overall risk of rogue actors using AI to cause widespread damage.
It will be interesting to see whether these proposals gain traction in the coming months as the Trump administration forms a more detailed position on AI. We should expect more proposals like them: ones that focus squarely on the geopolitical risks and opportunities of AI and suggest legislation only to the extent that it helps prevent large-scale catastrophes, such as the creation of biological weapons or foreign attacks on critical U.S. assets.
Unfortunately, safety issues don’t disappear when you stop paying attention to them or rename a safety institute. While strengthening our security posture may help to boost our competitive edge and counter foreign attacks, it’s the safety interventions that help prevent harm to individuals or society at scale.
The reality is that AI safety and security work hand in hand: AI safety interventions don’t work if the systems themselves can be hacked, and securing AI systems against external threats means little if those systems are inherently unsafe and prone to causing harm. Cambridge Analytica offers a useful illustration of this relationship; the incident showed how Facebook’s inadequate safety protocols around data access exacerbated security vulnerabilities that were then exploited for political manipulation. Today’s AI systems face similarly interconnected challenges. When safety guardrails are dismantled, security risks inevitably follow.
For now, AI safety is in the hands of state legislatures and corporate trust and safety teams. The companies building AI know — perhaps better than anyone else — what the stakes are. A single breach of trust, whether it’s data theft or an accident, can be destructive to their brand. I predict that they will therefore continue to invest in sensible AI safety practices, but discreetly and without fanfare. Emerging initiatives like ROOST, which enables companies to collaboratively build open safety tools, may be a good preview of what’s to come: a quietly burgeoning AI safety movement, supported by the experts, labs and institutions that have pioneered this field over the past decade.
Hopefully, that will be enough.