Why You Feel Worse Since AI Showed Up at Work

I've been having the same conversation with people for about six months now.
A senior manager finishes a quarterly review of their AI rollout. Productivity numbers are up. Throughput is up. Cycle times are down. Every metric the company cares about looks better than it did a year ago.
Then they tell me, quietly, that their team feels worse than they've ever felt about their work.
The numbers are right. The feeling is also right. The disconnect between the two is the most important AI conversation nobody is having out loud.
This article is the one I've been wanting to write for a while. It's not anti-AI. I'm pro-AI, and the rest of my writing proves it. But there's a second-order effect of AI deployment that the productivity metrics are missing, and ignoring it is going to cost companies more than they realise.
The strange pattern showing up everywhere
I've now seen the same shape of conversation in financial services, manufacturing, healthcare, professional services, and media. The details change. The pattern doesn't.
| What companies report | What employees say privately |
|---|---|
| "Productivity is up 15-25%" | "I don't feel like I'm doing real work anymore" |
| "Cycle times improved 40%" | "I'm just editing AI output all day" |
| "Quality metrics are stable or higher" | "I miss the thinking part of my job" |
| "Team capacity expanded 30%" | "I don't know what I'm getting better at anymore" |
| "Employee NPS unchanged" | "I'm tired in a different way than I used to be" |
Both lists are accurate. Both come from the same workplaces. The reason this is hard to talk about is that the productivity story is true and the meaning story is true, and they're describing the same change from different angles.
Most company leadership is reading the first list and concluding the AI deployment is working. Most of the people doing the work are living the second list and wondering why they feel hollow.
What's actually happening
The honest mechanism isn't mysterious. It looks like this.
Before AI, knowledge work was a mix of thinking and doing. You analysed a problem. You drafted a solution. You refined the draft. You sent it. The thinking was the work and the doing was the work and you couldn't easily separate them.
AI changed the ratio. The drafting got automated. The doing got compressed. What's left is the thinking and the editing.
That sounds like an upgrade. In productivity terms it is one. But there are two problems hiding inside the ratio change.
Problem one: the doing part wasn't filler. A lot of people's sense of competence came from the doing. Producing the deck. Writing the analysis. Building the spreadsheet. When AI does the doing and you do the editing, you've kept the cognitive load but lost the craft satisfaction. You're tired in the same way you were before. You're not proud in the way you were before.
Problem two: editing is harder than drafting, but feels less valuable. When you draft something from scratch, you can see your fingerprints on it. When you edit AI output, you're doing more difficult cognitive work in some ways (evaluating, correcting, judging), but the output doesn't feel like yours. The cognitive effort goes up. The sense of authorship goes down.
The combination is what people are reporting. Same effort, less satisfaction. Same skill, less pride. Same output, less meaning.
Why this matters for the business, not just the employee
If you're reading this as a leader and your reaction is "this is an employee wellness issue, not an operating issue," I'd push back.
The companies I'm watching most closely right now are not the ones with the biggest productivity gains from AI. They're the ones where productivity gains and meaning loss are roughly balanced. Here's why that matters.
| If meaning erodes faster than productivity rises | What you see in 12-18 months |
|---|---|
| Your best people are also your most ambitious | They leave for jobs that feel more meaningful |
| Discretionary effort drops | The 20% productivity gain becomes a 10% gain in practice |
| Innovation comes from craft, not efficiency | Your team stops generating non-obvious ideas |
| Quality requires emotional investment | Edge cases start slipping through |
| Junior talent loses development paths | You break your pipeline for senior roles in 5 years |
This isn't speculation. Harvard Business School raised the question explicitly in December 2025. McKinsey's most recent workforce research flagged the same pattern. The companies that win the AI deployment race on quarterly metrics may lose the talent race over the next three years if they don't address this.
The productivity dashboard doesn't show this. It can't. By the time it does, you've already lost the people who would have caught it.
What the productivity metrics are missing
Most AI deployment dashboards measure five things: speed, throughput, error rates, cost reduction, and adoption rate.
None of those metrics capture what I'm describing.
Here's what they should also be measuring, in 2026:
| Hidden metric | Why it matters | How to measure it |
|---|---|---|
| Cognitive load relative to perceived value | High effort, low satisfaction predicts attrition | Quarterly survey, anonymous |
| Craft hours per week | The work people would do if AI weren't available | Time-tracking with category tags |
| Decision authorship | Whether employees feel they own their outputs | Direct question in 1:1s |
| Skill growth trajectory | Whether people are getting better at something they care about | Annual review framing |
| Voluntary discretionary effort | Hours spent on work outside the assigned scope | Project participation rates |
These are messier metrics than throughput. They take work to collect. They aren't easy to dashboard. They are also the metrics that predict whether your AI deployment is going to deliver value beyond the first 18 months.
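To make the idea concrete, here is a minimal sketch of how these signals could discount a headline productivity number. Everything here is invented for illustration: the field names, the 1-5 survey scales, the baseline craft hours, and the weighting are all assumptions, a toy heuristic rather than a validated model.

```python
from dataclasses import dataclass

@dataclass
class TeamQuarter:
    """One team's quarterly readings. All field names are illustrative."""
    throughput_gain: float       # e.g. 0.20 for a 20% gain over baseline
    craft_hours_per_week: float  # self-reported hours of from-scratch work
    authorship_score: float      # 1-5 survey: "the output feels like mine"
    effort_vs_value: float       # 1-5 survey: the effort felt worth the result

def meaning_adjusted_gain(q: TeamQuarter, baseline_craft_hours: float = 8.0) -> float:
    """Discount the headline gain by meaning-erosion signals.

    Toy heuristic: each survey score below the midpoint (3.0) and each
    lost craft hour shaves a fraction off the reported productivity gain.
    """
    survey_penalty = (max(0.0, (3.0 - q.authorship_score) / 3.0)
                      + max(0.0, (3.0 - q.effort_vs_value) / 3.0))
    craft_penalty = max(0.0, 1.0 - q.craft_hours_per_week / baseline_craft_hours)
    discount = min(1.0, 0.5 * survey_penalty + 0.5 * craft_penalty)
    return q.throughput_gain * (1.0 - discount)

team = TeamQuarter(throughput_gain=0.20, craft_hours_per_week=2.0,
                   authorship_score=2.0, effort_vs_value=2.5)
print(f"Headline gain: {team.throughput_gain:.0%}, "
      f"meaning-adjusted: {meaning_adjusted_gain(team):.0%}")
```

The exact weights don't matter; the point is that a team reporting a 20% throughput gain alongside low authorship scores and collapsed craft hours is, in practice, delivering much less than the dashboard claims.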
If you're a leader running an AI deployment, the question to ask your team this quarter isn't "is the AI helping?" It's "do you feel proud of the work you're doing now?" The second question matters more than the first.
What I'd tell any leader right now
Four practical things, in order of how cheaply they can be implemented.
One: protect craft hours. Pick the part of each role that the person came into the job to do. Make sure they still get to do it. If a marketer became a marketer to write, make sure they still write things from scratch sometimes, even if AI could do it faster. The craft hours aren't waste. They're how skill stays alive and how the job stays interesting.
Two: name the trade-off out loud. The worst version of this story is the one where employees feel hollow but assume they're alone in feeling it. The best version is the one where leadership says explicitly: "Yes, the work has changed. Some of what you used to do is now done by AI. Here's what we're keeping for you, and here's what we're investing in for you to grow into." Naming the trade-off doesn't solve it. Not naming it makes it worse.
Three: redesign roles, don't just augment them. The companies getting this right aren't keeping the same job descriptions and adding AI on top. They're rewriting the role around what AI can do and what humans should do. The new role descriptions look different. They emphasise judgement, taste, relationship, and creativity. The old role descriptions emphasised execution. If you haven't rewritten the descriptions, you're asking people to do less of what they were hired for without explaining what they're now hired for.
Four: invest in skill development that compounds. AI compresses the time to competence on tactical tasks. That should free up time for deeper skill investment, not eliminate skill investment entirely. The companies that invest in the next layer of capability while AI handles the current layer are the ones that will own the next decade. The ones that pocket the productivity gain and skip the investment will plateau.
The honest framing for 2026
AI is going to keep making knowledge work more productive. That's not in dispute. The dispute is whether the productivity gain comes at a meaning cost, and whether that cost shows up on the balance sheet eventually.
My honest answer: yes, it does, and yes, it will. The companies that pretend otherwise are running a quiet experiment on their best people. Some of those experiments will succeed. Most won't.
The companies that take this seriously are doing three things differently.
| What most companies are doing | What the thoughtful ones are doing |
|---|---|
| Measuring AI by productivity gains | Measuring AI by productivity gains and engagement signals |
| Adding AI on top of existing roles | Redesigning roles around AI from the ground up |
| Celebrating cycle-time improvements | Celebrating cycle-time improvements and craft preservation |
| Assuming employee resistance is luddism | Asking why employees feel the way they feel and taking the answer seriously |
The second column isn't slower. It produces better results over 24-36 months. The first column produces better quarters and worse years.
The bottom line
Your AI is making your team more productive. It's also making their work feel different in a way that matters more than the numbers admit.
The leaders who win the next three years won't be the ones with the biggest productivity dashboards. They'll be the ones who deployed AI without breaking the parts of work that made people want to do it.
That's a harder problem than throughput. It also has a higher ceiling.
If your AI deployment is producing the metrics but draining the team, you don't have a productivity success. You have a productivity success and a meaning problem, and the second one is going to eat the first one. You can address it now, while it's still measurable in surveys and informal conversations.
Or you can address it later, when your best people resign.
Your call.