December 30, 2025 · 5 min read

Here's Why the Head of Preparedness at OpenAI Should Be a Psychologist, Not a Technologist

When Sam Altman recently announced the role of "Head of Preparedness" at OpenAI, the title itself carried a quiet but critical message.

This is not a conventional safety role. It is not a narrow research appointment. It is not simply about model robustness, alignment techniques, red-teaming, or security hardening.

It is about anticipating and defending against the human consequences of increasingly powerful artificial intelligence.

The job description explicitly references risks spanning mental health, cybersecurity, and biological weapons. What is often overlooked is that these domains share a common failure vector.

That vector is human behavior under pressure.

The Central Risk Is Not Model Capability, It Is Human Adaptation

Much of the AI safety discourse remains anchored in technical failure modes: adversarial attacks, distributional shift, misalignment, misuse, and escalation pathways. Yes, these matter. But history shows that transformative technologies rarely cause their first harms by malfunctioning.

They cause harm because humans adapt to them poorly.

Advanced AI systems introduce:

  • Cognitive offloading at unprecedented scale

  • Automation bias and authority bias reinforced by probabilistic confidence

  • Emotional dependence on responsive, personalized systems

  • Degradation of judgment under speed, abundance, and complexity

  • Identity erosion as expertise, authorship, and competence are outsourced

These are not engineering problems alone. They are psychological, cultural, and ethical problems embedded in sociotechnical systems.

A Head of Preparedness who is trained only to think in terms of compute, benchmarks, threat models, and control layers will miss the earliest warning signs.

Preparedness requires a different lens.

Preparedness Is a Human-Centered Discipline

Preparedness is anticipatory by nature. It is concerned not with what happens after failure, but with how failure incubates long before it is visible in metrics, incidents, or headlines.

True preparedness asks:

  • How do humans behave when systems become faster, smarter, and more persuasive than they are?

  • What psychological capacities weaken when friction is removed from thinking?

  • When does augmentation quietly become substitution?

  • When does optimization undermine meaning, agency, and responsibility?

  • How do fear, dependency, and denial distort decision-making at scale?

These questions sit at the intersection of psychology, behavioral science, ethics, systems theory, and technology governance.

They cannot be answered by technical expertise alone.

Mental Health Is Not a Peripheral Risk, It Is a Primary One

The explicit inclusion of mental health and well-being in the preparedness mandate is not symbolic. It is an acknowledgment that the most immediate harms of powerful AI will manifest internally, not externally.

We are already observing early signals:

  • Heightened anxiety driven by algorithmic comparison and productivity pressure

  • Burnout accelerated by always-on automation

  • Decision paralysis as AI generates endless options without the values needed to choose among them

  • Loss of professional identity as cognitive labor is automated

  • Emotional regulation outsourced to non-human agents

As models become more agentic, multimodal, and contextually adaptive, their psychological influence will increase, not decrease.

Preparedness requires the ability to recognize when a system is technically aligned but psychologically destabilizing.

That recognition demands deep training in human cognition, emotion regulation, trauma response, motivation, and resilience, especially under conditions of rapid change.

Cultural and Contextual Sensitivity Is a Safety Requirement

AI systems do not interact with a single, homogeneous population. They are deployed across cultures, histories, belief systems, power dynamics, and social stressors.

Preparedness must therefore account for:

  • Cross-cultural differences in authority, trust, and deference

  • The influence of colonial, racial, economic, and historical trauma on technology adoption

  • Divergent norms around autonomy, collectivism, and responsibility

  • Unequal exposure to harm due to structural vulnerability

A preparedness leader must understand that psychological impact is not uniform. The same system can empower one population while destabilizing another.

Experience working with diverse populations, across cultures and contexts, is not a “soft” qualification. It is essential for anticipating asymmetric risk.

Even Cybersecurity and Biosecurity Fail First at the Human Level

In domains often framed as purely technical, the dominant failure modes remain human:

  • Overconfidence in safeguards

  • Complacency under familiarity

  • Groupthink in high-status teams

  • Moral disengagement under abstraction

  • Poor judgment under stress and time pressure

Preparedness requires understanding how intelligent, ethical people rationalize dangerous decisions when incentives, fear, or urgency distort perception.

This is the domain of behavioral science, clinical psychology, and decision theory, not just threat modeling.

The Missing Perspective in AI Preparedness

What is still underrepresented in AI governance is deep expertise in:

  • Psychological resilience under sustained disruption

  • How humans adapt to loss of agency and meaning

  • The emotional and identity costs of automation

  • The difference between perceived safety and lived safety

A preparedness leader grounded in psychology and human development brings a complementary form of intelligence, one that anticipates second- and third-order effects that technical teams often encounter too late.

This perspective does not compete with engineering or science.

It completes them.

Preparedness Is Ultimately About Human Flourishing

The deeper question behind the Head of Preparedness role is not only:
“How do we prevent catastrophic misuse?”

It is:
“What kind of humans will exist alongside these systems?”

A society can be computationally efficient, economically optimized, and statistically safe, while being psychologically fragile, dependent, and disconnected from meaning.

Preparedness, at its highest level, is about ensuring that human agency, judgment, dignity, and resilience are not collateral damage of progress.

That requires leadership grounded not only in algorithms and automation, but in a profound understanding of the human mind.

So… What’s Needed?

The next generation of AI risk will not announce itself as failure.

It will arrive quietly, through convenience, speed, confidence, and emotional comfort.

Preparedness is the discipline of seeing that future early, naming it accurately, and designing systems that protect not just humanity’s survival, but its psychological integrity.

For that task, technical brilliance is necessary.

But it is not sufficient.

The role demands a leader who understands how humans think, feel, adapt, and break, before the breaking becomes irreversible.


Dr. Marcus Mottley

Author & Creator, Clinical Psychologist, Executive, Positive Psychology & Neuroscience Coach

