AI Raises Wages by 21%
But Not How You Think

There's a new Stanford working paper making the rounds, and it says something that contradicts almost everything you hear about AI and jobs.
AI is raising average wages by 21%. Not just for tech workers. Not just for people who build AI. For everyone who uses it.
And it's reducing wage inequality at the same time.
If you've been following the AI discourse for the last two years, this sounds completely backwards. The narrative has been: AI automates jobs, people lose work, wages fall, inequality gets worse.
So what's actually happening?
The Mechanism Nobody Talks About
The paper's core finding is simple, but the mechanism is subtle.
AI doesn't raise wages by creating more jobs. It raises wages by flattening the skill curve.
Here's what that means:
Before AI, complex tasks required specialized expertise. Writing legal contracts needed years of law school. Financial modeling needed an MBA and experience. Data analysis needed stats training.
These high barriers to entry meant experts could command premium salaries. If you didn't have the credentials and experience, you couldn't compete for those roles.
AI changes this completely.
Now, a paralegal with ChatGPT can draft contracts that are 80% as good as what a partner would write. A junior analyst with Claude can build financial models that would've taken a senior analyst days. Someone with basic Excel skills and AI can do data work that used to require R or Python expertise.
The barrier didn't disappear. But it dropped from "years of training" to "willingness to learn the tool."
What This Does to Wages
This creates a weird dynamic.
Lower-skilled workers can now compete for higher-paying roles that were previously locked behind expertise walls. They're not replacing experts entirely, but they're encroaching on work that used to be exclusive.
So those workers' value goes up. They can negotiate better pay because they're doing more valuable work.
At the same time, the monopoly that experts had on certain tasks weakens. There's more competition. But here's the key: the total amount of valuable work being done increases, because more people can do it.
The Stanford paper calls this "task simplification." I'd call it "expertise arbitrage."
A Concrete Example
Let's say you're a marketing person at a mid-sized company. Before AI, if you wanted to analyze customer sentiment from 10,000 support tickets, you'd need to:
Hire a data scientist
Wait weeks for them to set up pipelines
Get a report back with charts you half-understand
Hope the insights are actionable
Now? You upload the tickets to Claude, ask it to categorize sentiment and extract themes, export to a spreadsheet, and you have actionable insights in 30 minutes.
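If you wanted to script that workflow rather than paste tickets into a chat window, the shape of it is roughly this. This is a minimal sketch: the keyword-based `classify_sentiment` is a hypothetical stand-in for the real model call (in practice you'd send each ticket, or batches of them, to an LLM API and parse the labels it returns), but the stub keeps the pipeline runnable end to end.

```python
from collections import Counter

def classify_sentiment(text: str) -> str:
    # Placeholder for the model call. A real version would send the
    # ticket text to an LLM and parse the label from its response;
    # this keyword lookup just makes the sketch self-contained.
    negatives = {"broken", "refund", "angry", "slow", "crash"}
    positives = {"love", "great", "thanks", "fast", "helpful"}
    words = set(text.lower().split())
    if words & negatives:
        return "negative"
    if words & positives:
        return "positive"
    return "neutral"

def summarize_tickets(tickets):
    # Count how many tickets fall into each sentiment bucket.
    counts = Counter(classify_sentiment(t) for t in tickets)
    return dict(counts)

tickets = [
    "Love the new dashboard, thanks!",
    "App keeps crashing, I want a refund",
    "How do I export my data?",
]
print(summarize_tickets(tickets))
```

The point isn't the code itself; it's that the glue (load tickets, label each one, tally the results, export) is simple enough that the model call is the only part that used to require a specialist.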
You didn't replace the data scientist. But you stopped needing them for this specific task. And because you can now do work that used to require a specialist, your value just went up.
If you're smart, you use this to negotiate a raise. Or you take on higher-level projects. Or you move to a role that pays better because you can now demonstrate skills you didn't have before.
The Second-Order Effect
This is where most people miss the point.
The first-order effect of AI is: "It automates tasks."
Everyone stops there and assumes automation = job loss = lower wages.
But the second-order effect is: "It democratizes expertise."
When expertise gets democratized, skill barriers fall. When skill barriers fall, more people can compete for better-paid work. When more people compete for better-paid work, wages rise on average.
This is counterintuitive. It feels wrong. But that's what the paper's model suggests is happening.
The catch? You have to actually use the tools.
The Divide That's Opening Up
Here's the uncomfortable part.
The 21% wage increase isn't distributed evenly. It goes to people who adapt.
If you're using AI tools to do work you couldn't do before, you're in the group that benefits. If you're refusing to touch ChatGPT because "it's cheating" or "I don't trust it," you're opting out of the wage bump.
And the gap between these two groups is widening fast.
I see this in my own work. I use Claude every day for tasks I used to struggle with: structuring reports, analyzing data, drafting emails that don't sound robotic. It makes me faster and better at my job.
But I also know people who refuse to use AI at all. They're not bad at their jobs. But they're stuck doing things the hard way while everyone else is accelerating.
In two years, the difference in output between these groups will be massive. And output determines pay.
What This Means for You
If the Stanford paper is right, and I think it is, the next few years will be defined by a simple question:
Are you learning to use AI tools, or are you ignoring them?
Because the people who learn will have more leverage. They'll be able to do work that used to require specialists. They'll negotiate better pay. They'll move into roles that were previously out of reach.
The people who don't will find themselves competing with fewer advantages. The things they used to be good at will matter less. The skills they built over years will be worth less.
This isn't a judgment. It's just how leverage shifts work.
The Uncertainty
I want to be clear about something: this is early data.
The Stanford paper is a working paper, not peer-reviewed yet. The 21% figure could change. The mechanism could be more complicated than I'm describing.
And there are real risks. What happens when AI gets good enough that even with the tools, you don't need as many people? What happens to the people who can't adapt, either because they're in roles that don't translate or because they don't have access to the tools?
I don't know. Nobody does.
But what I do know is this: right now, in 2026, if you're willing to learn AI tools, you can do work that was previously locked behind expertise walls. And that shift is changing wage dynamics in ways most people aren't paying attention to.
The Other Side of the Story
One last thing: there's a second Stanford paper from August 2025 that tells the opposite story.
Erik Brynjolfsson's team at the Stanford Digital Economy Lab analyzed the same ADP payroll data and found that early-career workers (ages 22-25) in AI-exposed jobs saw a 16% relative decline in employment. Meanwhile, older workers (30+) in the same fields stayed stable or even grew 6-12%.
Same data source. Different conclusions.
Which one is right? Probably both.
The Althoff/Reichardt paper I've been talking about models long-term equilibrium. It's asking: "Once everything settles, what happens to wages?" Their answer: wages rise because skill barriers fall.
Brynjolfsson's paper tracks what's happening right now. It's asking: "Who's losing jobs today?" Their answer: younger workers in AI-exposed roles.
Short-term: job losses for people at the bottom of the ladder. Long-term: wage gains as the ladder gets shorter and more people can climb it.
The transition period? That's where we are. And it's messy.
If you're 24 and can't find a software engineering job because companies are using AI instead of hiring junior devs, the 21% wage increase doesn't help you. Not yet, anyway.
If you're 35 and learned to use Claude to do work you used to delegate to juniors, you're in the group that benefits.
The gap between these experiences is what makes the AI conversation so confusing. Both things are happening at once.
Bottom Line
AI doesn't kill jobs by default. It shifts leverage.
People who use it get more bargaining power. People who don't, lose theirs.
The 21% wage increase isn't a guarantee. It's a possibility. But only if you adapt.
And for people just starting out? The road just got harder. Entry-level positions are disappearing faster than senior roles. That's the first-order effect.
But if you can get through that bottleneck—if you can learn to use AI tools instead of competing with them—the second-order effect kicks in. You'll have access to work that used to require years of experience. And that's where the wage bump comes from.
So: what are you learning right now?
Disclaimer: I'm not an economist. I'm a business analyst learning AI tools and trying to understand how this changes work. This is my interpretation of the research, not financial or career advice. Do your own research. Think for yourself.
Sources
Main Paper (21% wage increase):
Althoff, L., & Reichardt, H. (2025). "Task-Specific Technical Change and Comparative Advantage." CESifo Working Paper No. 12403. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6084170
Direct PDF: https://hugoreichardt.github.io/pdf/tstc_compadvantage.pdf
Contradicting Paper (16% employment decline for young workers):
Brynjolfsson, E., Chandar, P., & Chen, S. (2025). "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence." Stanford Digital Economy Lab. PDF: https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf
TIME coverage: https://time.com/7312205/ai-jobs-stanford/
Author Pages:
Lukas Althoff (Stanford): https://lukasalthoff.com/
Hugo Reichardt (Barcelona School of Economics): https://www.hugoreichardt.com/research
Erik Brynjolfsson (Stanford Digital Economy Lab): https://digitaleconomy.stanford.edu/
That’s it for today! ☺️
Disclaimer:
This blog reflects my personal learning journey and experiments with technology. These are my own experiences and observations as I explore the fascinating world of tech and AI.
I love experimenting with AI! Developed with research, image generation and writing assistance using AI. 😊

