How GenAI Boosts Productivity Without Replacing Workers

Three Stanford studies quantify what generative AI actually does to workforce productivity—and the answer is more nuanced than either optimists or skeptics suggest. The gains are real (up to 87% task acceleration for software developers), but they skew toward less experienced workers, and entry-level employment in automation-heavy fields is already declining.

Introduction

The question of whether generative AI destroys jobs or creates better ones has generated more heat than light. A new body of research from Stanford cuts through the noise with something rarer than punditry: actual data. Three separate studies, all rooted in real workforce behavior, point to the same conclusion: GenAI lifts productivity significantly, but the gains are uneven, the risks are real, and the story depends heavily on who you are and what you do.

The Chile Study: 80% of Workers Stand to Gain

Published in early 2025, a Stanford Graduate School of Business study led by Professor Gabriel Weintraub is the most comprehensive task-level analysis of GenAI’s workplace impact to date. Rather than categorizing entire job titles as “at risk” or “safe,” Weintraub and his co-authors—Victor Morales and Alvaro Soto from Chile’s National Center for Artificial Intelligence, and Juan Eduardo Carmach of the trade federation Sofofa—scored each of the more than 200,000 interdependent tasks that make up the country’s 100 most common jobs.

The scoring criterion is precise: can GenAI reduce the completion time of a task by at least 50% without compromising quality? The researchers call this an “acceleration opportunity.” Their finding: nearly half of all tasks across these jobs qualify. More striking still, 80% of Chilean workers are in occupations where GenAI can accelerate at least 30% of their daily work.
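The scoring rule is simple enough to sketch in code. The snippet below is an illustrative reconstruction, not the study's actual methodology or data: the task inventory, thresholds applied per task, and the `Task` structure are all hypothetical, chosen only to show how the 50% acceleration criterion and the 30%-of-daily-work bar compose.

```python
# Hypothetical sketch of the task-level scoring rule described above.
# Each task carries an estimated share of completion time GenAI could save
# and a flag for whether output quality is preserved. A task counts as an
# "acceleration opportunity" only if both conditions hold.

from dataclasses import dataclass

@dataclass
class Task:
    occupation: str
    time_reduction: float    # fraction of completion time saved (0.0-1.0)
    quality_preserved: bool  # True if quality is not compromised

def is_acceleration_opportunity(task: Task) -> bool:
    # The study's criterion: at least 50% faster, with no quality loss.
    return task.time_reduction >= 0.5 and task.quality_preserved

def occupation_acceleration_share(tasks: list[Task]) -> dict[str, float]:
    """Share of each occupation's tasks that qualify as opportunities."""
    totals: dict[str, int] = {}
    hits: dict[str, int] = {}
    for t in tasks:
        totals[t.occupation] = totals.get(t.occupation, 0) + 1
        if is_acceleration_opportunity(t):
            hits[t.occupation] = hits.get(t.occupation, 0) + 1
    return {occ: hits.get(occ, 0) / n for occ, n in totals.items()}

# Illustrative data only -- not the study's 200,000-task inventory.
tasks = [
    Task("software developer", 0.7, True),
    Task("software developer", 0.6, True),
    Task("software developer", 0.2, True),
    Task("physician", 0.6, False),  # faster but quality suffers: not counted
    Task("physician", 0.3, True),
]

shares = occupation_acceleration_share(tasks)
# An occupation clears the "30% of daily work" bar if its share >= 0.3.
high_exposure = {occ for occ, share in shares.items() if share >= 0.3}
```

Note how the quality flag does real work here: a physician task that GenAI could do 60% faster still fails the criterion if quality degrades, which is one reason medical roles score low overall.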

The average efficiency gain across AI-boosted roles is 48%. Software developers top the list at 87% acceleration potential, followed by policy specialists at 84% and data analysts at 80%. The study also found that Chile’s public administration sector could unlock over $1.1 billion in annual value if GenAI tools were deployed to handle tasks like document drafting, data entry, and form processing—work that currently occupies a substantial portion of the 84,000 government employees whose roles show high AI acceleration exposure.

Importantly, the study does not claim that these jobs will be eliminated. The word throughout is “accelerated”—tasks done faster, freeing workers to focus on the judgment-intensive work that GenAI cannot replicate. Senior executives and medical professionals, for instance, showed lower acceleration scores precisely because their roles hinge on human context, relationships, and oversight that resist AI substitution.

Experience Levels Matter More Than Job Titles

A parallel strand of research from Stanford’s Digital Economy Lab sharpens this picture considerably. The landmark “Generative AI at Work” study—led by Erik Brynjolfsson and colleagues—observed more than 5,000 customer support agents at a large enterprise software company as they were given access to an AI assistant in a staggered rollout. The average productivity gain was 15%, measured by issues resolved per hour. But the distribution was anything but uniform.

New and lower-skilled agents improved both their speed and the quality of their outputs. Senior agents—the ones who had spent years mastering the craft—saw modest speed gains but small declines in resolution quality. The explanation is structural: the AI was trained on patterns derived from the best-performing agents. When those same agents used it, the tool essentially averaged down their most nuanced instincts. For everyone else, it was like having a highly experienced colleague whispering in their ear.

This dynamic maps directly onto findings we covered earlier this year in our analysis of why AI makes experienced developers 19% slower. The friction isn’t with AI adoption—it’s with the assumption that AI assistance is uniformly beneficial regardless of baseline skill.

The Entry-Level Employment Concern

No honest account of GenAI's workplace impact can ignore the third strand of Stanford research, published in August 2025: "Canaries in the Coal Mine?" by Erik Brynjolfsson, along with co-authors Bharat Chandar and Ruyu Chen. Using high-frequency payroll data from ADP covering millions of U.S. workers, the paper documents a 13% relative decline in employment for early-career workers in the occupations most exposed to generative AI—beginning in late 2022, shortly after ChatGPT's public release.

The numbers are concrete. Entry-level workers in high-AI-exposure jobs experienced a 6% employment decline between late 2022 and July 2025. Their peers in low-AI-exposure roles saw 6–9% growth over the same period. Older workers in AI-exposed fields were largely unaffected—or even benefited.
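A quick back-of-the-envelope calculation shows how the quoted absolute figures relate to a relative decline in the paper's range. This is illustrative arithmetic on the rounded numbers above, not the paper's regression-based estimate:

```python
# Relative employment change of the AI-exposed group versus its
# low-exposure peers, computed from the rounded growth rates quoted above.

def relative_change(exposed_growth: float, unexposed_growth: float) -> float:
    """How much the exposed group's employment changed relative to the
    unexposed group's, expressed as a fraction."""
    return (1 + exposed_growth) / (1 + unexposed_growth) - 1

# Entry-level, high-exposure jobs fell ~6%; low-exposure peers grew 6-9%.
gap_low = relative_change(-0.06, 0.06)   # vs. 6% peer growth
gap_high = relative_change(-0.06, 0.09)  # vs. 9% peer growth

print(f"relative decline: {gap_low:.1%} to {gap_high:.1%}")
```

The result lands between roughly -11% and -14%, bracketing the paper's headline 13% relative decline.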

The authors are careful to distinguish between automation and augmentation. Occupations where AI primarily automates tasks (replacing human output) show the entry-level employment declines. Occupations where AI augments work—helping people do more of what they were already doing—do not show the same pattern. The trouble is that automation-heavy use cases disproportionately affect roles that companies have traditionally used as training grounds for early-career workers.

What This Means for Organizations Deploying GenAI

Taken together, these studies paint a picture that resists easy interpretation. GenAI is unambiguously a productivity tool. The gains are real, the mechanisms are understood, and the Weintraub study provides a credible task-level framework for identifying where to deploy it. But the benefits skew toward workers who are newer, less specialized, or in roles with high proportions of structured, document-heavy tasks. The most experienced people in an organization may see limited benefit—or subtle quality degradation—if AI assistance is layered onto work that depends on tacit knowledge and contextual judgment.

For organizations, this creates a practical tension. Deploying GenAI to accelerate onboarding and reduce the experience gap is defensible and evidence-backed. Deploying it in ways that reduce headcount at entry levels, without building alternative development pathways, risks eroding the organizational knowledge base that makes senior expertise possible in the first place.

The Brynjolfsson “Canaries” paper raises a harder question: if companies systematically hire fewer junior employees because AI handles their previous workload, where do future senior experts come from? That’s not a technology question—it’s a talent strategy question that GenAI does not answer.

Conclusion

Stanford’s research makes one thing clear: the binary framing of “AI replaces workers vs. AI helps workers” is too blunt to be useful. The actual picture is a distribution—some workers gain a great deal, some gain modestly, some see subtle quality tradeoffs, and a specific cohort of early-career workers in automation-heavy fields is already experiencing measurable displacement. Organizations that take the task-level evidence seriously, segment their workforce by exposure type, and build deliberate development pipelines for junior talent will be better positioned than those treating GenAI as a uniform productivity lever that lifts all boats equally. The boats are different sizes, and they respond differently to the tide.
