What Counts as Contribution Now?
In an assisted world, the scarcest skill is framing the work that matters
In my last essay, I spent time sitting with a tension that many leaders recognize but don’t always name: the widening gap between how value is actually created and how it is formally recognized. I posited that most meaningful work now happens through teams, networks, and layered coordination. Yet our performance systems still tend to resolve that shared effort back into individual narratives.
That tension isn’t new. But it is becoming more visible.
Over the past few years, in conversations with peers, operators, hiring managers, and leaders across different industries, I’ve noticed a shift in how people describe the work. Not necessarily more difficult. Not necessarily more complex. But harder to cleanly attribute. The through-line in many of these conversations isn’t confusion about effort—people know they’re working hard—but more ambiguity about where individual contribution begins and ends once work becomes deeply interdependent.
And just as organizations adapt to that reality (or not), another layer has entered the system: assistance is becoming ambient. Tools that accelerate drafting, analysis, synthesis, coding, and content generation are increasingly embedded in everyday work. In many environments, they are no longer novel. They are simply present, another part of the production landscape.
It would be easy to frame this as a sudden disruption. I’m not sure that’s quite right.
I believe another way to see it is this: just as we are trying to resolve the tension between value being created across teams and performance management at the individual level, we are now seeing value emerge across teams and tools simultaneously. In other words, the boundary between individual effort and system-enabled output was already softening; assistance is now making that boundary even more visible.
Which raises a more interesting question than the usual productivity debate. Not whether tools are helping, and not whether output is increasing, but perhaps something more structural:
What does contribution actually mean in an environment where capability is increasingly distributed across people, teams, and intelligent assistance?
That’s the question worth exploring next.
Contribution Was Already Blurry Before AI
Long before intelligent assistance became part of the daily workflow, contributions in modern organizations were already difficult to isolate cleanly.
The other day, I was in a meeting where the facilitator was filling out a simple tracker for a deliverable. One of the header columns was “Owner.” In the box below it, they listed four individuals… and then a separate team.
I chuckled to myself. When a deliverable has twenty-plus “owners,” it effectively has none.
The following week, I was filling out a similar tracker. I used “Single Point of Contact (SPOC)” as the header because I prefer it to “Owner” in matrixed organizations. And yet, when it came time to fill in the “single” point of contact, I still typed two names because I knew the deliverable required a true partnership.
My colleague gently nudged me: “So two owners then?”
Forehead slap.
Knowing the work genuinely required two people to drive it, I adjusted the header to “SPOC or Pod” to give the team a bit of flexibility without opening the door to twenty-plus names. I could have added a second column for “support,” but I’ve found that’s often how trackers become a Pandora’s box of half-ownership, where everybody supports everything.
And therein lies the paradox: in a complex system, who’s actually doing the work?
Consider the typical path of meaningful work inside a large, matrixed environment. An insight surfaces in one function. It is pressure-tested in another. Someone else translates it into an executable plan. A partner flags risk early enough to avoid rework. A manager removes friction that would have stalled momentum. A leader creates the air cover that allows the work to move at all.
By the time the outcome becomes visible, the fingerprints are everywhere.
Calling it dysfunction or bureaucracy is reductionist (and cognitively lazy); it’s simply how complex organizations create value. The higher the stakes and the broader the impact, the more likely it is that multiple people shaped the result in ways that don’t show up neatly in a final deliverable.
Even in roles that appear highly individual on the surface, the work has rarely been fully solitary. Marketers rely on agencies (yes, they still do). Product leaders rely on engineering. Strategy teams rely on data partners. Commercial teams rely on operations and supply. Knowledge work, in particular, has long been a layered act of synthesis rather than a single-threaded act of production.
Which means the attribution challenge did not begin with generative tools. It was already present in every cross-functional initiative, every co-created strategy, every piece of work that required translation across org-chart-based boundaries. Many organizations simply learned to live with a workable approximation: that individual performance could be reasonably inferred even when the underlying work was deeply collaborative.
For a long time, that approximation was good enough. The signals were imperfect, but directionally useful. Managers could still triangulate effort, judgment, ownership, and follow-through by observing behavior over time. The system held.
What is changing now is not the existence of shared work. It’s the compression of visible effort.
When assistance accelerates drafting, analysis, or synthesis, some of the traditional proxies leaders used to gauge contribution—time spent, volume produced, visible grind—become less reliable. Output may remain high while the path to that output becomes less observable.
And that is where the attribution question becomes more structurally interesting. If the contribution was already distributed across teams and portions of execution are increasingly augmented, the real signal of individual value may be migrating elsewhere.
Not disappearing.
But moving.
The question is where.
The Shift from Doing the Work to Framing the Work
For much of the last two decades, many performance systems implicitly rewarded the ability to produce: write the deck, build the model, generate the analysis, and draft the plan. Execution artifacts (aka deliverables) served as the most visible proof that work was happening and that someone was driving it forward. That logic made sense in a world where production itself was the primary constraint.
But in an assisted environment, production is becoming less scarce in certain domains. Drafts appear faster. Analyses materialize more quickly. First passes that once took days can now take hours. In some cases, minutes.
In other words, when the cost of producing the artifact falls, the value of merely producing it tends to fall with it. In this context, I offer that a different set of capabilities begins to matter more: the people who create disproportionate value are often the ones who framed the right problem in the first place or asked the sharper question that changed the direction of the work.
They’re the ones who knew what not to pursue, integrated disparate inputs into something decision-useful, or created the conditions for the work to land, stick, then scale.
These moves are less visible than raw production. They rarely show up cleanly in a version history. They’re harder to count, harder to screenshot, and harder to summarize in a self-assessment. But in complex environments, they are often where leverage lives.
You can see this shift in high-functioning teams. The people who consistently move the work forward are not always the ones generating the most pages. They are often the ones who reduce ambiguity, surface the real trade-off, sequence the work intelligently, or help others along.
In other words, in many assisted, knowledge-heavy environments, the center of gravity shifts from doing the work to framing the work that matters.
This does not make execution unimportant. Poor execution still destroys value quickly. But as certain forms of production become more assisted, the relative premium on judgment, discernment, and contextual intelligence increases.
Which creates a subtle but important leadership question: if performance systems still primarily reward visible output, but the highest leverage is increasingly upstream—in how work is defined, shaped, and focused—what exactly are we training people to optimize for?
That tension is still early. It’s uneven across roles and industries. But it’s getting harder to ignore, especially in environments where augmentation is already embedded in day-to-day workflow.
And it sets up the next layer of the conversation. Once contribution becomes harder to see directly, a natural question follows close behind:
How should organizations actually evaluate and reward it?
What Leaders May Need to Start Rewarding Differently
If you buy into the idea that contribution is shifting (has shifted)—from visible production toward problem framing, judgment, and integration—then current performance management systems may not be fit for purpose.
Many were built for a world where effort was easier to observe, and outputs were easier to attribute. You could point to the model, the deck, the campaign, the code. The line between input and ownership, while never perfect, was at least legible enough to support compensation, development, and promotion decisions.
But, that line is becoming more interpretive (and often highly subjective). Not because individual contribution has disappeared, but because more of the highest-leverage work now happens between the outputs: in how problems are scoped, how trade-offs are surfaced, how teams are aligned, and how ambiguity is reduced early enough to matter.
Then, if that is directionally true, leaders may need to broaden what they look for when assessing performance—not replacing the old signals, but supplementing them.
In practice, that shift starts with what leaders choose to notice:
Did this person improve the quality of decisions around them?
Did they reduce meaningful friction for others?
Did they elevate the team's thinking, not just the volume of output?
Did they focus their effort on the work that actually moved the needle?
Did they help the system move more coherently, not just more quickly?
Those are harder signals to capture. They require more judgment. They introduce more interpretation into calibration conversations that many organizations have spent years trying to standardize in an attempt to remove “hard to measure” modifiers.
So, as I argued in my previous essay, the likely path forward is not a sudden replacement of individual performance models. It is a gradual broadening of what “strong performance” is understood to include—especially in roles where coordination, judgment, and contextual leadership now drive a disproportionate share of value.
Some organizations are already experimenting: expanding evaluation language to include enterprise contribution, explicitly recognizing cross-functional value creation, incorporating peer signal more thoughtfully (with mixed success), or separating output metrics from system impact indicators.
I’ll readily admit that none of these is a silver bullet. Each introduces trade-offs of its own. More qualitative judgment can increase bias if leaders are not well-calibrated (often labeled performative optics or, worse, office politics). Moreover, more peer input can create noise if trust is low, and more system-level metrics can blur accountability if poorly designed.
This is why the question is not simply “What should we reward now?” It’s more along the lines of…
How do we evolve recognition systems carefully enough to reflect how value is actually created… without losing the clarity and accountability organizations still require to function?
That is not a design tweak. It is an operating system tension.
And it brings the conversation full circle. If the nature of contribution is expanding—from individual output to system-shaping impact—then both leaders and individual contributors are being asked to update their mental models at the same time. Leaders must learn to see differently, and individuals must learn to signal value differently.
Closing Thoughts: Seeing Contribution More Clearly
If there is a through-line across all of this, it is not that individual contribution is disappearing. It is that the signal is getting harder to read.
Work is more assisted. More interdependent. More distributed across systems and teams. The visible artifacts still matter, but they no longer tell the full story of where value is actually being created.
Which brings us back to the core tension.
Most organizations are still wired to recognize contribution in ways that assume relatively clean lines of ownership. Concurrently, the work itself is becoming more networked, more augmented, and more shaped by moments that do not neatly resolve into a single name.
Both realities are true. Both are likely to persist.
So the practical question—for leaders and individual contributors alike—is not whether this tension will disappear. It is how clearly we are willing to see it.
For individuals, this may mean getting more deliberate about where and how you create distinct value inside increasingly crowded, tool-enabled environments. Not just producing more, but focusing on the moments that actually move the system: framing the right problems, improving the quality of decisions, reducing the friction others are carrying… and making the evidence of that work visible.
For leaders, it may mean slowly expanding the aperture on what strong contribution looks like—especially in roles where the highest leverage now lives in judgment, integration, and influence rather than pure output.
None of this requires abandoning individual accountability. But it may require updating our interpretation of it. Because the risk is not that performance systems are broken. It is that, over time, they may start to miss where the most meaningful work is actually happening.
And that raises a deeper question—one worth exploring next:
In an assisted world, how should contribution be made visible… and what, if anything, should people be expected to disclose about how the work actually gets done?
That’s where the conversation is heading.
Simple, not easy.