
"Prevention" Not Just 'Preparedness'
Here's Why OpenAI Needs a "Head of Prevention", Not Just a Head of Preparedness!
When Sam Altman announced the role of Head of Preparedness at OpenAI, it was widely interpreted as a sign of maturity, responsibility, and foresight. On the surface, that interpretation is reasonable.
But on closer examination, both the title and the job framing reveal a conceptual limitation that matters far more than semantics.
Preparedness is necessary. Preparedness is responsible. Preparedness is… late.
If the goal is to protect humans from the psychological, social, cyber, and biological risks of increasingly powerful AI systems, then preparedness alone is insufficient.
What is required is prevention.
Preparedness Is a Reactive Posture Disguised as Foresight
Preparedness, by definition, assumes that harm is not merely possible but likely, and that the organization’s role is to be ready when it occurs. It emphasizes readiness, response capacity, and mitigation after thresholds have already been crossed.
This framing implicitly accepts:
That certain harms are inevitable
That systems will be deployed first and understood fully later
That human adaptation failures will be addressed once they surface
In public health, emergency management, and clinical psychology, preparedness is never the primary line of defense. It is a downstream function.
Upstream, the focus is prevention.
The absence of a prevention-centered framework in the role description is not trivial. It shapes how risks are conceptualized, which expertise is prioritized, and when interventions occur.
A Public Health Lens Reveals the Gap
Public health does not organize itself around preparedness alone. It operates using a three-tier prevention model:
Primary prevention – stopping harm before it starts
Secondary prevention – detecting early warning signs and intervening quickly
Tertiary prevention – reducing damage after harm has occurred
Applied to AI, this framework exposes the limits of the current framing and points toward a more rigorous, human-centered approach.
Primary Prevention: Designing AI That Does Not Create Harm in the First Place
Primary prevention is about structural design decisions. It asks how systems can be built so that entire classes of harm never emerge.
In the context of advanced AI, primary prevention would include:
Model designs that discourage psychological dependency and emotional substitution
Guardrails against automation bias and over-deference
Limits on persuasive optimization that exploits cognitive and emotional vulnerabilities
Human-in-the-loop architectures that preserve judgment rather than erode it
Explicit protection of agency, meaning, and skill retention
This is not a post-deployment safety exercise. It is an upstream design philosophy.
As I stated in a previous article, the earliest and most dangerous risks of AI are not technical failures but human adaptation failures. Primary prevention targets those risks before they become normalized.
Preparedness does not do this. Prevention does.
Secondary Prevention: Detecting Psychological and Social Drift Early
Secondary prevention focuses on early detection. In medicine, this means screening and early intervention. In AI, it should mean continuous monitoring of human impact signals.
This includes:
Early signs that cognitive offloading is hardening into dependency
Rising anxiety, burnout, or identity disruption among users
Changes in decision quality under the speed and volume that AI assistance enables
Cultural or demographic groups experiencing disproportionate harm
Subtle normalization of abdicated responsibility (“the system decided”)
These signals are psychological and behavioral, not merely technical. They do not show up first in system logs. They show up in human behavior.
A prevention-oriented role would institutionalize mechanisms to identify and respond to these signals before they metastasize.
Preparedness tends to notice them only after they become crises.
Tertiary Prevention: Mitigating Harm After It Has Occurred
Tertiary prevention is the domain most people implicitly associate with preparedness: response, recovery, damage control.
In AI terms, this includes:
Crisis response to misuse or mass harm
Remediation for psychological or social damage
Policy changes after public failure
Retrofitting safeguards once trust is lost
This work is necessary. But it is also the most expensive, least effective, and most reputationally damaging stage at which to intervene.
A role overly centered on preparedness risks over-investing here while under-investing upstream.
Why This Distinction Matters Strategically
Words shape systems. Calling the role Head of Preparedness subtly orients the organization toward reaction rather than prevention, mitigation rather than design discipline, and response rather than foresight.
A prevention-centered framing would:
Legitimize psychological, behavioral, and cultural expertise at the highest level
Shift risk assessment earlier in the product lifecycle
Embed human well-being as a design constraint, not an afterthought
Reduce the likelihood that OpenAI is perpetually “prepared” for harms it could have prevented
As I have highlighted in a previous article, the greatest AI risks will not arrive as sudden catastrophes. They will emerge gradually, through convenience, emotional reliance, cognitive erosion, and loss of agency.
Preparedness reacts to that erosion.
Prevention stops it.
The Opportunity OpenAI Is Missing
This role represents a rare opportunity to redefine what responsible AI leadership looks like.
A truly future-facing position would not ask: “How do we prepare for harm?”
It would ask: “How do we design systems so that humans remain psychologically resilient, agentic, and intact in the first place?”
That question belongs as much to public health, psychology, and human development as it does to computer science.
Until prevention becomes the explicit organizing principle, preparedness will remain necessary… but insufficient.
Here’s a Closing Thought
Civilizations rarely fail because they were unprepared for disaster.
They fail because they normalized preventable harm.
If OpenAI’s ambition is not only to build powerful systems but to steward their integration into human life responsibly, then the next evolution of this role is clear.
Preparedness should be a function.
Prevention should be the mission.
