AI Didn’t Decide This
How leaders turn technology into an alibi, and why transformations fail when accountability disappears.
The other night, I was watching a clip of Josh Johnson on The Daily Show. An audience member asked him a familiar question: did he worry about AI “taking over” comedy?
His answer was, of course, funny. On reflection, it was also intriguing.
He said, in effect: AI isn’t a person with arms and legs that walks into a room and does your job better than you. AI doesn’t fire people. AI doesn’t walk into leadership meetings and decide which roles to eliminate.
In other words, someone else makes that decision.
A leader. An executive. A board. A human. Someone decides that parts of a job can now be done cheaply enough to justify no longer paying for the whole person. And once that decision is made, we describe it as inevitable. Technological. Neutral.
That framing stayed with me. Not because it was about comedy, but because it sounded exactly like how transformation decisions are explained inside large organizations: “AI is replacing jobs”, “The system is changing”, and “The market is forcing our hand.”
Each phrase pulls responsibility one step farther away from the people making the call. Over time, that language does something subtle: it turns leadership decisions into acts of fate.
When Tools Become Alibis
I’ve spent the last several years working in and around transformation inside large, complex, matrixed organizations. And one pattern shows up with striking consistency:
Transformation rarely fails because leaders lack capability. It falters when accountability thins out.
I posit that AI is becoming (or has become, depending on your particular point of view) the newest place to hide.
Not because AI is uniquely powerful, but because it’s abstract enough to absorb responsibility. When leaders say “AI is taking jobs,” they’re not making a technical statement. They’re making a rhetorical one. They’re framing a choice as inevitability.
That distinction matters.
Despite the rhetoric (note the difference between causal and correlational claims), research consistently shows that AI and automation tend to reshape tasks rather than eliminate jobs outright. Work is increasingly being redesigned as a partnership between humans and intelligent systems, with judgment, interpretation, and context remaining human responsibilities.¹
Which means that when roles disappear, it isn’t because AI decided they should. It’s because leaders chose which parts of the work—and which parts of the human—they were no longer willing to fund.
Outputs vs. Experience
Most AI conversations reduce work to outputs. Inputs and outputs. Speed and cost.
But many roles—far more than we like to admit—aren’t primarily about output. They’re about experience. Judgment. Trust. Relationship. Sense-making.
A good comedian isn’t just delivering jokes.
An inspiring leader isn’t just issuing direction.
An effective salesperson isn’t just closing deals.
A thorough clinician isn’t just following protocol.
In each case, the value is created in the quality of interaction: in reading the room, adjusting in real time, and building credibility over repeated exchanges.
In short, AI can replicate outputs, but it cannot replicate presence, especially the kind of presence that is distinct, consistent, and felt. Research on psychological safety reinforces this point. Leaders’ behaviors—clarity, emotional consistency, inclusion, support for risk-taking—directly affect trust, learning, and performance, especially in periods of uncertainty and change.² These qualities aren’t soft add-ons. They’re structural enablers of adaptability.
The net effect: when leaders reduce work to tasks alone, they don’t just lower costs. They quietly weaken the system’s ability to think, learn, and recover.
The Pattern Beneath the Technology
This is where AI fits into a much older transformation story.
Every large-scale transformation reaches a moment when leaders must decide what they are willing to disrupt and what they are willing to protect: Incentives. Identities. Power structures. Human capital.
Those decisions are uncomfortable. They generate tension. They require explanation.
So leaders naturally (but not nefariously as some might argue) reach for abstractions:
“The system made us do it.”
“The market left us no choice.”
“This is just how the technology is evolving.”
I believe AI is simply the latest abstraction. It allows leaders to discuss efficiency without discussing experience. To talk about cost without talking about values. To talk about progress without talking about loss. And, most importantly, it allows leaders to avoid saying the quiet part out loud: “This is a trade-off we are choosing.”
Influence, Leadership, and Accountability
In a recent essay, I wrote that influence is the most precise measure of leadership. I still believe that. Influence reveals whether people follow your thinking when hierarchy disappears. But transformation introduces a harder test.
Influence tells you whether you can lead. Transformation tests whether you’re willing to own the consequences of leading.
You can be persuasive, trusted, and respected, and still fail at transformation if you won’t stand behind the trade-offs your decisions create.
Research on transformational leadership supports this distinction. Leadership styles that emphasize empowerment, trust, and psychological support are associated with stronger engagement and organizational citizenship—but only when leaders model responsibility rather than deflect it.³
In my mind, this isn’t a capability gap; it’s an accountability gap.
The Question Leaders Should Sit With
So maybe the real question isn’t whether AI can do parts of the job. Maybe it’s this: what parts of the work are we still willing to pay humans to do, and why?
That question forces clarity.
It exposes values.
It demands ownership.
And it’s the question many transformation efforts avoid. Because once you answer it honestly, you can no longer pretend the decision was inevitable.
Final Reflection
AI is not taking jobs in the literal sense. Leaders and managers (aka humans) are deciding which parts of people they’re willing to keep and which they’re willing to let go.
How they talk about those decisions matters. Not just for morale, but for trust. Not just for optics, but for whether the system they’re reshaping can sustain the outcomes they want.
Transformation doesn’t fail because technology moves too fast. It falters when leaders shift responsibility elsewhere.
Simple, not easy.
References
1. McKinsey Global Institute, “Agents, robots, and us: Skill shifts in the age of AI.”
2. Edmondson, A., et al., research on psychological safety and team performance.
3. Peer-reviewed studies on transformational leadership and engagement (2023–2024).