If AI can automate or outperform parts of my job, then what value will I be providing?
I keep hearing versions of this question in my conversations and research. It rarely shows up this directly. More often, it appears as hesitation. As resistance. As a quiet sense that something fundamental is shifting at work, even though job titles and day-to-day responsibilities may not have changed yet.
Over the past few weeks, I’ve been trying to put words to that shift more clearly.
In earlier posts, I wrote about how the introduction of AI tools into the workplace doesn’t just change how work gets done. It changes how people think about the value they provide. I also explored why resistance to AI often has less to do with age or technical ability and more to do with trust, identity, and relevance.
Which brings me back to relevance and value.
Underneath nearly every conversation about tools, adoption, and productivity runs a quieter theme. People are trying to understand how they’ll continue to contribute in meaningful and valuable ways as work evolves, especially as AI is increasingly framed as “taking over.”
Most roles today include some repetitive or rule-driven tasks. These might involve summarizing information, generating drafts, analyzing patterns, or surfacing options. AI is exceptionally good at this kind of work, particularly when it comes to producing large volumes of content and information quickly.
As professionals begin to recognize this, a natural protective response often emerges, quietly framed as: “But these tasks are why I matter.”
Here’s the thing. That was never the case.
Over time, routine or mechanical tasks tend to become familiar. They give us a sense of productivity, which often becomes shorthand for value. And while repetitive work matters because it needs to be done, it was rarely the most valuable way humans contributed, even long before AI entered the conversation.
The most valuable contributions have always been quieter and more intuitive: knowing which questions to ask, understanding tradeoffs, reading the room, and applying judgment shaped by experience and human nuance, often with incomplete information.
What’s different now is that AI is making this distinction harder to ignore.
AI can generate ideas, options, and information at scale. It can produce drafts, summaries, and variations far faster than any human. But it cannot determine what’s accurate, relevant, or appropriate for a specific situation, audience, or moment. It can’t decide what should be prioritized, what should be discarded, or how something should be shaped for the greatest positive impact.
That work still belongs to humans.
And this is where much of the discomfort around AI actually sits.
Not in what AI can do, but in what this shift begins to reveal and require. Value has never been about producing output alone. It has always lived in judgment, context, and responsibility. What’s different now is that many people are being asked to see their work through that lens for the very first time.
When that realization surfaces, it often collides with how we’ve been taught to think about intelligence and value at work. If intelligence is treated as a hierarchy, something to dominate or compete with, then yes, AI feels threatening. But work has never rewarded intelligence alone. It has rewarded the ability to decide what matters and to live with the consequences of those decisions.
That is precisely where AI consistently falls short.
In deciding what matters.
In understanding nuance.
In taking responsibility for what happens next.
And that’s where the opportunity and the need for humans sit today.
AI can support deeper, more nuanced work. It can’t replace it. When a person’s sense of value has been closely tied to task execution, this shift understandably feels harder. When value has been rooted, even implicitly, in judgment, perspective, and responsibility, AI may feel more useful than threatening.
That difference in reaction makes sense.
For some people, this moment feels destabilizing, especially if their role has never been articulated beyond “what they do.” For others, it feels exciting and full of possibility. Not everyone is comfortable with reinvention, and not everyone needs constant change in order to do meaningful work.
What is changing isn’t whether humans are needed. It’s where human effort makes the biggest difference.
Less time doing.
More time overseeing and deciding.
Less time producing raw output.
More time shaping direction, meaning, and outcomes through critical thinking and choice.
A steadier way to think about AI and the future of work is to consider not only where AI can help, but also how your institutional knowledge of your industry, company, and customers, along with your judgment and sense of context, shapes its use, and how those skills will keep developing over time.
Those opportunities aren’t shrinking. They’re simply shifting.
AI doesn’t erase your value. It exposes where your true value has been all along.
If you’ve been feeling unsettled, you’re not behind.
You’re in the middle of an evolution.
And before rushing to decide what that evolution should look like, it can be grounding to pause and notice how you’re actually feeling, without judgment. Not to change it. Just to understand it, and to honor how the person you’ve been has served you well as you prepare to move forward.
In the next post, I’ll explore how to engage with AI in a way that supports clearer thinking and leverages your expertise and experience to create more impactful work and better-informed decisions.
