Why AI Isn't Coming for Your Jobs Anytime Soon

Publish Date: December 27, 2025
Category: AI Expertise and Resources
Author: Ani Bisaria

The AI job apocalypse has been six months away for about three years now. The headlines get more breathless. The LinkedIn gurus get more annoying.

Yet, my friends, I am here today, in your feed, as the anti-Paul Revere — not sounding an alarm, but telling everyone to go back to bed.

AI is augmenting work, not replacing workers — and won't for a while. The reasons are technical and financial.

Could that change fast? GPT-3 to GPT-4 took under three years. But "could" isn't "is."

Augmentation is here. Replacement isn't — yet.

Let's look at the data, shall we?

Part 1 | The Numbers Don't Support Mass Replacement

PwC's 2025 Global AI Jobs Barometer analyzed nearly a billion job ads across six continents, yet somehow couldn't come up with a shorter name for the report.

They found that job numbers are growing in virtually every type of AI-exposed occupation — even the ones considered most "automatable."

  • Industries heavily exposed to AI are seeing 3x higher revenue growth per employee compared to less-exposed sectors.
  • Workers with AI skills now command a 56% wage premium, up from 25% just a year ago.
  • Skills in AI-exposed jobs are evolving 66% faster than in other roles.

The World Economic Forum's Future of Jobs Report 2025 projects a net gain of 78 million jobs globally by 2030. Yes, 92 million roles will be displaced — but 170 million new ones will be created.

The report surveyed over 1,000 employers representing 14 million workers. The consistent message: technology skills and human skills — creative thinking, resilience, adaptability — are rising in importance together. Not independently.

Part 2 | Why AI Still Struggles with Real-World Complexity

Here's what we say to each other that never makes it into the hype cycle (the same cycle that makes us feel like we're behind, or missing something, or just not using it right): AI remains quite limited at handling unpredictable, nuanced, real-world tasks.

MIT's Iceberg Index study from November 2025 found that AI can currently perform tasks equivalent to about 11.7% of the U.S. workforce — representing roughly $1.2 trillion in wages. Sounds significant until you realize this measures technical capability, not inevitable job losses. The actual visible impact right now is concentrated in computing and tech roles, accounting for just 2.2% of wage exposure.

AI still struggles with maintaining long-term context, interpreting sarcasm and cultural references, and adapting to dynamic scenarios without hallucinating.

I made up one of those struggles. If you can't tell which — AI might be coming for your job.

Two techniques are supposed to fix this litany of issues most commonly found in HR departments: Retrieval-Augmented Generation (RAG) and multi-agent systems. Fine-tuning, longer context windows, chain-of-thought prompting — all help at the margins. But RAG and multi-agent systems are attracting the most "this will replace humans" funding. So that's what we'll focus on.

RAG: Looking things up doesn't mean understanding them.

RAG gives AI the ability to retrieve external information before answering — essentially letting it look things up rather than relying purely on training data. This reduces hallucinations and allows access to current information. Useful stuff. An approach I wish my friends took before telling me something they saw on TikTok.

For structured queries against clean documentation — legal research, technical lookups, customer support — RAG is genuinely good. The limitation shows up when context gets messy.
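Mechanically, the happy path is simple. Here's a minimal sketch of the retrieve-then-generate loop, with TF-IDF standing in for a real embedding model and a placeholder where the LLM call would go; the documents and function names are illustrative, not any particular vendor's API.

```python
# Minimal retrieve-then-generate sketch. TF-IDF stands in for a real
# embedding model; generate() is a placeholder for an actual LLM call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise plans include a dedicated support channel.",
    "API rate limits reset every 60 seconds.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by vector similarity to the query; return the top k."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def generate(prompt: str) -> str:
    """Stand-in for the model call (OpenAI, Anthropic, a local model, etc.)."""
    return f"[LLM answers from: {prompt[:60]}...]"

query = "How long do customers have to ask for a refund?"
context = "\n".join(retrieve(query))
print(generate(f"Answer using only this context:\n{context}\n\nQuestion: {query}"))
```

Rank by similarity, stuff the winners into the prompt, answer from them. That's the whole trick, which is also why it inherits the retriever's blind spots.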

Research from Google's ICLR 2025 paper on "Sufficient Context" found something counterintuitive: providing more context can actually degrade performance when it's irrelevant or excessive. The model struggles to prioritize what matters.

RAG's embedding models capture semantic similarity — words that appear in similar contexts. They miss nuance.

"I'm doing great" said sarcastically won't retrieve documents about problems. "Per my last email" won't surface conflict resolution. And "I think we should see other people" pulls up networking events.

Multi-agent systems: More agents often means worse results.

Multi-agent systems distribute tasks among specialized AI agents that collaborate. One handles emotional tone, another does factual retrieval, a coordinator synthesizes everything. Sounds elegant. Sounds clean. Sounds like a consulting deck devoid of any accountability.
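Here's the shape of it in miniature, with plain functions standing in for what would be separate LLM calls in production (the agent names and logic are toys):

```python
# Toy coordinator pattern: each "agent" is a function here; in a real
# system each would be its own model call with its own prompt.
from typing import Callable

def tone_agent(text: str) -> str:
    return "frustrated" if "!" in text else "neutral"

def facts_agent(text: str) -> str:
    return f"docs retrieved for: {text[:40]}"

def coordinator(text: str, agents: dict[str, Callable[[str], str]]) -> str:
    # Specialists see only the raw input; the coordinator merges their outputs.
    results = {name: agent(text) for name, agent in agents.items()}
    return f"synthesis of {results}"

print(coordinator("Why is my invoice wrong again!",
                  {"tone": tone_agent, "facts": facts_agent}))
```

Every hand-off in that pipeline is a seam where context can leak and errors can compound. Which brings us to the data.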

A December 2025 Google/MIT study tested 180 configurations across five architectures, three major AI providers (OpenAI, Google, Anthropic), and four benchmarks:

  • For sequential tasks — where each step depends on the previous one — multi-agent systems reduced performance by 39-70% compared to single agents.
  • Once a single agent hits 45%+ accuracy, adding more agents yields diminishing or negative returns.
  • In "independent" systems where agents work in parallel without communicating, errors were amplified 17.2x compared to single-agent baselines.
  • Even centralized architectures with coordinators still amplified errors 4.4x.

To be fair, multi-agent systems can improve performance by up to 80% on genuinely parallelizable tasks — analyzing separate financial metrics, processing independent documents.

The problem is most jobs aren't parallelizable.
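A back-of-the-envelope model shows why. If each agent in a sequential chain is right 90% of the time and every hand-off has to succeed, accuracy decays geometrically; this even assumes errors are independent, which is generous:

```python
# Chain accuracy decays geometrically with each required hand-off.
per_agent_accuracy = 0.90

for n_agents in (1, 3, 5, 8):
    chain_accuracy = per_agent_accuracy ** n_agents
    print(f"{n_agents} sequential agents: {chain_accuracy:.0%}")
# 1 -> 90%, 3 -> 73%, 5 -> 59%, 8 -> 43%
```

Eight hand-offs and you're below a coin flip, and real agents compound each other's mistakes rather than failing independently.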

Why this matters for job automation.

Real-world jobs rarely decompose into clean, parallelizable subtasks. They involve sustained context across hours or days, emotional intelligence, social nuance, physical adaptability, and ethical judgment that can't be delegated to retrieval systems.

In fairness, I've met plenty of middle managers who struggle with all five.

When someone tells you AI is about to replace knowledge workers, ask them:

  • Which RAG configuration handles six weeks of accumulated project context?
  • Which multi-agent system navigates the unspoken politics of an organizational decision?
  • Which LLM explains why I, a 3x founder, still can't convert a PDF to Word?

The researchers themselves describe these as interim solutions that "patch limitations in current LLMs without achieving true contextual understanding." Useful for specific applications. Insufficient for the broad cognitive work most jobs require.

Part 3 | The Economics Don't Add Up for Mass Automation

The AI you're using right now is heavily subsidized.

OpenAI lost $5 billion in 2024 on $3.7 billion in revenue: roughly $2.25 out for every dollar in. Only 5% of ChatGPT users pay anything. The free tier loses money on every prompt. A single query on their most advanced models can hit $1,000 in compute.

And that's consumer pricing! Developers pay per token through the API, and the math is brutal at scale. What a Plus user gets for $20/month would cost $45-120 through the API. The subscription isn't a business model. It's a customer acquisition cost.
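The arithmetic is easy to run yourself. The token prices and usage below are illustrative assumptions, not anyone's current rate card, but they show how quickly a heavy user blows past $20:

```python
# Back-of-the-envelope monthly API cost for a heavy chat user.
# All prices and usage figures are assumptions for illustration.
price_in_per_m = 2.50    # $ per million input tokens (assumed)
price_out_per_m = 10.00  # $ per million output tokens (assumed)

messages_per_day = 50
input_tokens_per_msg = 10_000  # history gets re-sent with every turn
output_tokens_per_msg = 500
days = 30

input_millions = messages_per_day * input_tokens_per_msg * days / 1e6    # 15.0
output_millions = messages_per_day * output_tokens_per_msg * days / 1e6  # 0.75

monthly = input_millions * price_in_per_m + output_millions * price_out_per_m
print(f"~${monthly:.0f}/month")  # ~$45, vs. a $20 subscription
```

And that's the low end of the range; longer contexts and pricier models push it toward $120.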

I was at the University of San Francisco during the Uber/Lyft wars — a rider and a driver at various points. Drivers got wild bonuses. Riders got $4.99 trips across town. Both companies bled billions wooing supply and demand.

The assumption: once they dominated, they'd raise prices and reach profitability. It took Uber over a decade.

  • In H1 2025, AI startups raised $83 billion — 58% of all venture funding globally.
  • OpenAI hit a $500 billion valuation with no clear path to profitability. I've been fundraising recently; apparently, VCs call this business model "conviction."
  • When subsidies dry up, prices rise. The "AI is cheap enough to replace workers" argument assumes current pricing — which won't last.

Again, the data tells a different story: AI adoption is driving wage growth, not headcount cuts. 77% of employers plan reskilling. 94% of employees use AI to enhance their work, not replace themselves.

The ROI favors humans plus AI, not AI instead of humans. That math only improves when subsidies end.

The bull case: costs drop faster than prices rise, and we hit sustainable unit economics before the music stops. It's possible. But it's a bet, not a certainty — and the current numbers certainly don't support it.

Part 4 | The Counterarguments

Some will point to projections of displacement — and they're not wrong to take them seriously.

The WEF notes 41% of employers plan workforce reductions. MIT's Iceberg Index shows AI could already perform tasks equivalent to 11.7% of current work, and that figure is grounded in task-level measurement, not speculation. Amazon explicitly cited AI in cutting 14,000 jobs. Google and Microsoft laid off thousands while ramping up AI deployment. Inference costs dropped 280x in two years.

These aren't hypotheticals. This is happening — but context still matters.

That same WEF report projects a net gain of 78 million jobs. MIT's broader research consistently shows AI adoption correlates with increased employment — not reductions. And displacement tends to happen slower than technologists predict.

Those plummeting inference costs? Usage outpaced the savings. Enterprise AI bills are climbing into the tens of millions. Costs dropped 280x; spending exploded. Agentic AI is as hungry as I am on the drive from AUS to Terry Black's.

The companies replacing workers are mostly tech — the lowest-hanging fruit. The 2.2% visible wage exposure we discussed? That's where the cuts are concentrated. Extrapolating from Amazon to accountants — as some with an Axios "white-collar bloodbath" headline bookmarked like to do — is a leap the data doesn't support.

As for RAG and multi-agent systems rapidly overcoming context limitations: we've seen the research. They introduce their own inefficiencies. For sequential workflows — which describes most jobs — they make things worse, not better.

The displacement figures reflect real capabilities, not inevitabilities. The path from "technically possible" to "widely deployed" runs through economics, organizational change, and all the messiness of actual implementation.

Part 5 | Conclusion: The Long View

Could deeper disruptions emerge with AGI or advanced robotics by 2040+? Possibly.

The real wildcard is agentic AI — systems that can plan, execute, and iterate autonomously across multi-step tasks. That's where the research gets murkier and the timelines get harder to predict. But agentic systems still hit the same walls: error propagation, context degradation, economics that don't yet pencil. The path from here to there runs through everything I've described.

The jobs aren't going away — they're getting harder. The 59% of workers who need reskilling by 2030 aren't being replaced. They're being asked to do more: cybersecurity analysts defending an attack surface that's expanding faster than headcount, healthcare techs working alongside diagnostic AI, supply chain managers optimizing systems too complex to run manually.

For the next decade, AI is a productivity booster, not a job destroyer.

The panic serves interests: consultants selling "AI readiness" assessments to spooked executives, politicians campaigning on regulating something they can't define, investors inflating incestuous tech valuations. It just doesn't reflect the numbers.

The future isn't humans vs. AI.

It's humans with AI vs. humans without it.

The apocalypse can wait.

Note: Full Disclosure | The Ad Spend

After writing all of this, I should mention: I'm building a company to go after that 2.2%.

The Ad Spend is an AI-powered ad analytics platform. We monitor campaigns, surface anomalies, generate reports, answer questions about performance data. The kind of work marketers spend hours on — we do in seconds.

I have enough self-awareness to recognize this may come across as hubris, hypocrisy, and hot air, and enough conviction in what we're building to say it anyway: I don't think it contradicts anything I just wrote.

The limitations I described are real. They just apply unevenly. We found a narrow wedge where the constraints don't hold:

  • Ad platforms don't store change history. Meta has none. Google keeps 90 days, then erases it. When performance shifts, you're guessing what changed. We built a system that captures every setting, every tweak, every test, hashed and versioned like Git commits: structured data an LLM can actually use, efficiently (a simplified sketch follows this list).
  • The work is parallelizable. Monitoring 50 campaigns simultaneously, comparing performance across platforms, detecting anomalies at 3am — these decompose cleanly. No sequential dependencies. No weeks of accumulated meeting context. Data in, insight out.
  • And the economics work because we built for them. The marginal cost of processing another account is negligible. We're not burning VC money hoping unit economics materialize. They already do.
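For the curious, here's the versioning pattern in miniature. A simplified sketch with made-up field names, not our production code:

```python
# Git-style snapshots of campaign settings: each snapshot is hashed and
# linked to its parent, so "what changed before the dip?" becomes a diff.
import datetime
import hashlib
import json

def snapshot(settings: dict, parent_hash: str | None) -> dict:
    payload = json.dumps(settings, sort_keys=True)  # canonical form
    return {
        "parent": parent_hash,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "settings": settings,
        "hash": hashlib.sha256((payload + str(parent_hash)).encode()).hexdigest(),
    }

v1 = snapshot({"budget": 500, "audience": "US, 25-44"}, parent_hash=None)
v2 = snapshot({"budget": 750, "audience": "US, 25-44"}, parent_hash=v1["hash"])

changed = {k for k in v2["settings"] if v2["settings"][k] != v1["settings"][k]}
print(changed)  # {'budget'}
```

Two hashes and one diff, and the question of what changed before the dip stops being a guessing game.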

So yes — we're going after some of that 2.2%. The tedious parts. The context-switching between six dashboards. The 2am "something looks wrong" anxiety.

What we're not replacing: the creative spark, the client relationship, the organizational politics no dashboard can decode, the read on the room that doesn't show up in the data.

That's not a contradiction. That's the whole point.


References

Altman, S. [@sama]. (2025, January 5). Insane thing: we are currently losing money on OpenAI pro subscriptions! [Post]. X. https://x.com/sama

Crunchbase. (2025, December). 6 charts that show the big AI funding trends of 2025. https://news.crunchbase.com/ai/big-funding-trends-charts-eoy-2025/

Google Research. (2025). Sufficient context: A new lens on retrieval augmented generation systems. Proceedings of ICLR 2025. https://research.google/blog/deeper-insights-into-retrieval-augmented-generation-the-role-of-sufficient-context/

Khatib, O. (2025, September). The fragile future of AI: Beyond venture capital subsidies. Medium. https://medium.com/@olikhatib/the-fragile-future-of-ai-beyond-venture-capital-subsidies-46abac932c3b

Kim, Y., Liu, X., et al. (2025). Multi-agent vs. single-agent systems: A systematic evaluation across 180 configurations. Google DeepMind & MIT. https://venturebeat.com/orchestration/research-shows-more-agents-isnt-a-reliable-path-to-better-enterprise-ai

McKinsey & Company. (2025). The state of AI in 2025: Agents, innovation, and transformation. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

McKinsey Global Institute. (2025). Agents, robots, and us: Skill partnerships in the age of AI. https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai

Mollick, E., & Brynjolfsson, E. (2025). How artificial intelligence impacts the US labor market. MIT Sloan Management Review. https://mitsloan.mit.edu/ideas-made-to-matter/how-artificial-intelligence-impacts-us-labor-market

Oak Ridge National Laboratory & MIT. (2025, November). The Iceberg Index: Measuring AI's potential to automate U.S. jobs. https://www.cnbc.com/2025/11/26/mit-study-finds-ai-can-already-replace-11point7percent-of-us-workforce.html

PwC. (2025). The fearless future: 2025 Global AI Jobs Barometer [Report]. https://www.pwc.com/gx/en/issues/artificial-intelligence/job-barometer/2025/report.pdf

Terry, H.P. (2025). AI's brutally concentrated economics: 3% of investments generate 60% of returns. The Low-Down. https://www.thelowdownblog.com/2025/10/ais-economics-are-brutally-concentrated.html

The Information. (2024). OpenAI financial analysis. [As cited in multiple sources regarding $5B losses and spending ratios]

World Economic Forum. (2025). The future of jobs report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/
