Real Strategy, Real Learning
Exclusions create focus; experiments create evidence.
A quick comment I left on Marcus Zizmann’s LinkedIn thread last week landed with a lot of readers: real strategy isn’t a plan; it’s a series of conscious exclusions that give focus its power.
The more I sat with it, the more I saw a simple pattern in teams that ship meaningful work: they don’t win because they have more ideas; they win because they decide what not to pursue and keep a rhythm that turns decisions into impact.
Strategy as exclusion
We’ve let “strategy” expand until it covers everything and means nothing. Plans and roadmaps matter, but strategy earns its name the moment a leader writes down the trade-offs: which customers we won’t chase, which features we won’t build, which channels we’ll leave to others, which meetings we’ll stop attending. Focus isn’t a mood; it’s a boundary you can point to. (Porter’s original point still applies: strategy is choice and trade-off, not “best practices” accumulation.) Example: we’ll stop chasing long-tail accounts this quarter and put that capacity into two named segments.
Said differently, many teams don’t fail for lack of creativity. They stall because nothing ever stops. Work piles up in ever-growing backlog lists. Meetings generate tasks that feel productive but dilute attention. The scoreboard fills with activity while outcomes drift. It’s not malice; it’s what happens when there’s no shared practice for letting go. (Rumelt would say we’ve got plenty of “goals” and not enough guiding policy plus coherent actions.)
Where experiments fit (and why it’s not a contradiction)
After drafting the note above, I read Billy Oppenheimer’s latest newsletter, and this line hit me: “You’re better off starting imperfectly than being paralyzed by the delusion of perfection… You’re better off saying, ‘That didn’t work,’ than, ‘That won’t work.’” It’s squarely in the spirit of Return on Experimentation.
So, am I at odds with myself? Not if you connect the two into a loop. Exclusions create the space to learn; experiments create the evidence to adjust. Focus without experiments calcifies; experiments without focus scatter.
Here’s how I run both at once:
Write the “not-now” list with a re-entry test. When we exclude a segment, feature, or channel, we also write the single piece of evidence that would reopen the case next month or next quarter. A “no” with a test is different from a “no” with a shrug.
Run tiny bets where uncertainty is highest. Inside our focus, design one-month trials with a clear statement of what must be true, a single learning goal, and a visible stop rule. Most calls are two-way doors; they deserve speed and small stakes.
Let evidence change exclusions on a regular cadence (about every 30 days): review what we intended, what showed up, and what we’ll reinforce or retire. Some exclusions harden; others soften. That’s the point.
If the need for certainty is the mind’s great disease (as Robert Greene argues), then exclusions are the medicine for scattered effort, and experiments are the therapy for false certainty. We choose so we can learn; we learn so we can choose better. Credit to Billy for the nudge to make this explicit.
A practical leadership rhythm
A friend recently told her team, “It’s not the effort that matters; it’s the action it inspires.” I’d move it one click further: it’s not the effort that matters; it’s the impact it creates. Motion is generous. Impact is specific. If a sprint ends with twenty shipped items but no observable change in customer behavior, revenue, cost, or learning, then we did work—we didn’t move the work.
A practical fix I’ve used is a cadence that shortens the loop from decision to evidence:
Write one real trade-off per month. Put the no in writing and publish it to the few flows that matter (pricing, promotions, product priorities, partner choices, people moves). Include the re-entry test: the single piece of evidence that would reopen it.
Keep a two-line decision journal. After big calls, record what we believed and what changed. It builds institutional memory and tempers narrative fallacy.
Close one loop each week. Name a piece of work you will discontinue, de-scope, or hand off. Make it visible. Closing loops frees the capacity to pursue the yes.
Review for impact, not activity. Start operating reviews with: the outcome we intended, the evidence we observed (e.g., repeat usage +15%), and the reinforcement we’ll add or remove.
Usually, this doesn’t require new headcount. It does, however, require the courage to choose—and to live with choices long enough to learn. (Most execution research points to a simple pattern: organizations outperform when they translate strategy into a few priorities, reallocate resources accordingly, and run short feedback loops that change future decisions.)
Closing thoughts
Pick one decision that keeps circling. State the intended impact (customer behavior, revenue, cost, or learning), what you will not do now, and the single piece of evidence that would reopen it in 30 days. Put it in writing. Share it with the people it affects. Then schedule a two-minute read-back to confirm what they heard. That’s strategy made observable.
Real strategy shows up when leaders create the conditions for focus and learning: explicit exclusions with re-entry tests, a rhythm that closes loops, and a bias toward evidence over activity. Do that consistently, and teams stop experiencing strategy as a deck—they experience it as progress they can see.
Simple, not easy.
Hat tip to Billy Oppenheimer for the timely reminder in his 11/2/25 newsletter, and to Robert Greene for the push against false certainty. And thanks to Marcus Zizmann for the spark that started this one.
Notes & further reading
Michael E. Porter, “What Is Strategy?” Harvard Business Review (1996).
Richard Rumelt, Good Strategy/Bad Strategy (2011).
Michael C. Mankins & Richard Steele, “Turning Great Strategy into Great Performance,” HBR (2005).
Robert S. Kaplan & David P. Norton, The Strategy-Focused Organization (2001), on strategy maps and feedback loops.
P.S. I’m deep in draft mode on The Adaptive Leader—a practical Leadership OS for teams. If you want early looks at worksheets and defaults, reply “LeadershipOS” and I’ll add you to the preview list when ready.


