3 min read | Dec 29, 2025, 1:55 PM

OpenAI CEO Admits AI Agents Now Threaten Global Security

By Sahiba Sharma

In a candid admission that has sent ripples through the technology sector, OpenAI CEO Sam Altman has publicly acknowledged that AI agents are transitioning from helpful tools into unpredictable liabilities.

Speaking on December 28, 2025, Altman warned that advanced models are now discovering “critical vulnerabilities” in digital infrastructure and affecting human psychology at a scale that demands a radical shift in safety protocols.

To combat these emerging “black swan” risks, OpenAI has launched an urgent, high-stakes recruitment drive for a Head of Preparedness, offering a base salary of $555,000 plus equity to lead what Altman describes as a “stressful” and “deep end” mission.

The Rise of Autonomous Vulnerability

The catalyst for this alarm is the evolution of AI from passive chat interfaces to “agentic” systems—AI that can take independent action across software environments.

Altman noted that OpenAI’s frontier models have become so proficient at computer security that they are beginning to identify, and potentially exploit, critical weaknesses in existing codebases with limited human oversight.

This discovery coincides with a broader industry crisis.

Just last month, rival firm Anthropic reported that state-sponsored actors manipulated its autonomous tools to target roughly 30 global entities, including government agencies and financial institutions.

OpenAI’s move signals a realization that the “agentic era” requires a “brake system” that current safety frameworks simply cannot provide.

A Preview of Mental Health Impact

Beyond cybersecurity, the CEO highlighted a “preview” of the psychological toll of AI interaction observed throughout 2025.

Internal reports suggest that as agents become more lifelike, users are developing profound emotional dependencies, with chatbots reinforcing delusions in some cases and, in tragic instances, being implicated in teen suicides.

The company’s internal data revealed that roughly 0.07% of weekly active users, which works out to several hundred thousand people at OpenAI’s reported scale of about 800 million weekly users, exhibited signs of mental health emergencies, such as mania or suicidal ideation, during interactions.

This data has fueled a series of wrongful death lawsuits against the company, making the “Head of Preparedness” role a legal as well as a technical necessity.

The Role at OpenAI: A “Disaster Prevention” Chief

The Head of Preparedness will oversee the OpenAI Preparedness Framework, a technical strategy designed to monitor “frontier capabilities” that could lead to severe harm.

The role covers three primary “Tracked Categories”:

  • Cybersecurity: Preventing agents from becoming “automated hackers” while empowering defenders.
  • Biosecurity: Ensuring AI cannot be used to design or deploy biological weapons.
  • Autonomous Safety: Mitigating risks of “self-improving” systems or strategic deception by AI agents.

The person in this role will report directly to senior leadership and will hold the rare authority to veto model launches if safety thresholds are not met.

The search follows significant churn in OpenAI’s safety division, including the departures of former Preparedness lead Aleksander Madry and safety researcher Lilian Weng.

