INTRODUCTION
Here is a number that should stop you mid-scroll. In 2025, venture capital firms poured 258.7 billion dollars into artificial intelligence companies, according to the OECD. That represented 61 percent of all global venture capital. But the total enterprise spending on generative AI products that same year? Roughly 37 billion dollars, per Menlo Ventures. For every dollar these companies earned, investors bet nearly seven hoping for more. Something fundamental does not add up.
And here is the twist nobody expected. While headlines scream about mass layoffs and robot replacements, the World Economic Forum projects a net gain of 78 million jobs worldwide by 2030 because of AI and related technologies. Germany has 109,000 unfilled IT positions right now, according to Bitkom. India faces a shortfall of hundreds of thousands of AI specialists, per a joint report by Deloitte and NASSCOM.
But if AI is creating jobs and attracting record capital simultaneously, why are the smartest researchers in the field starting to worry? Because AI has developed a problem that sounds like science fiction but was documented in Nature in 2024. It is starting to eat itself. And that changes the entire equation.

The Jobs That Were Supposed to Disappear
In November 2016, Geoffrey Hinton stood before an audience at the Creative Destruction Lab in Toronto and made a prediction that became famous. He said people should stop training radiologists because deep learning would outperform them within five years. That prediction did not age well. Mayo Clinic's radiology department has grown by 55 percent since then, expanding to over 400 doctors, as its chair of radiology confirmed to the New York Times. In 2025, Hinton himself acknowledged he was wrong about the timing.
This is not one bad guess. It is a pattern. Sam Altman wrote in his January 2025 Reflections blog post that AI agents might materially change company output that year. By December, analysts said it had not happened at scale. Mustafa Suleyman posted on X in June 2023 that LLM hallucinations would be largely eliminated by 2025. They remain a major unsolved problem. And the widely circulated claim that AI would replace 80 percent of software developers by 2025? That was actually a distortion of a Gartner prediction about upskilling needs, not replacement. The U.S. Bureau of Labor Statistics projects 15 percent growth in software developer jobs through 2034. Much faster than average.
The actual data paints a very different picture from the scary headlines. PwC's 2025 Global AI Jobs Barometer studied close to a billion job postings across six continents and found that occupations with higher AI exposure grew 38 percent between 2019 and 2024. LinkedIn's Economic Graph data, presented at Davos 2026, shows AI has already created roughly 1.3 million new roles globally in just two years. And the Information Technology and Innovation Foundation calculated that in 2024, AI created about 119,900 direct jobs in the United States while displacing only 12,700. A ratio of almost ten to one.

New Jobs Nobody Predicted
AI did not merely protect existing positions. It invented entirely new ones. The job title "prompt engineer" did not exist five years ago. Neither did AI ethics officer, MLOps engineer, AI red teamer, LLM fine-tuning specialist, or Chief AI Officer. These roles pay real salaries at real companies. According to Glassdoor data, the median total compensation for a prompt engineer sits around 126,000 dollars. At Google, that number climbs to roughly 245,000.
The wage premium tells an even sharper story. PwC's barometer found that jobs requiring AI skills command a 56 percent wage premium over comparable positions without them. That is up from 25 percent the year before. The 2024 Work Trend Index from Microsoft and LinkedIn reported that members adding AI skills to their profiles increased 142 times between late 2023 and early 2024. Not 142 percent. 142 times.
But while these new jobs multiply at the edges of the labor market, something else is happening in the corporate middle. And it connects directly to why AI may be consuming its own foundation.

The Disappearing Middle
Something strange is taking shape in the business world. Big companies are getting leaner. Tiny AI startups are reaching extraordinary valuations. And the layer between them is thinning fast.
Consider the extremes. Safe Superintelligence, a startup with about 20 employees and zero revenue, reached a valuation of 32 billion dollars. Cursor, an AI coding tool, hit 500 million in annual recurring revenue with a small team. Mistral AI in France reached a 6.2 billion dollar valuation with just 55 people in its early stages. Gumloop raised 17 million dollars with only two full-time founders.
Meanwhile, Intel is cutting from 125,000 to a target of 75,000 employees. Dell's headcount dropped from 133,000 to 120,000 in a single fiscal year, per SEC filings. Microsoft laid off more than 15,000 people in 2025 while posting record quarterly revenue of 70.1 billion dollars. Fortune reported that 2025 layoffs targeted middle management the most, using the phrase "hollowing-out strategy." Korn Ferry data showed manager headcount dropped 6.1 percent between May 2022 and May 2025.
In Germany, the AppliedAI Institute tracked 935 AI startups by 2025, up 36 percent from the year before. Over two billion euros were invested in German AI startups through July 2025 alone. But here is the critical detail. A PitchBook analysis found that without AI deals, European venture capital deal value actually declined from 45.3 billion to 42.7 billion euros. AI is not just a sector anymore. It is becoming the only sector attracting serious money.
Sam Altman has predicted the rise of ten-person billion-dollar companies. Anthropic CEO Dario Amodei told a conference audience in May 2025 that a one-person billion-dollar company could happen by 2026, with 70 to 80 percent confidence. If AI makes companies so efficient they need fewer people, and if AI startups need almost no staff, then who exactly is buying all these AI products?

The Snake Eating Its Own Tail
Now we arrive at the center of this story. The part that was published not in a tech blog or an opinion column but in Nature, one of the most respected scientific journals in the world.
In July 2024, researchers led by Ilia Shumailov at the University of Oxford published a paper showing what happens when AI models train on data generated by other AI models. The result is something they called model collapse. After multiple rounds of recursive training, their language model produced complete nonsense. A question about medieval church architecture generated random, meaningless words. The researchers demonstrated that this degradation is irreversible. Once the tails of the original data distribution disappear, the model cannot recover them.

This is not a small technical glitch. It is a fundamental limit. And it is already visible in the real world. Graphite's analysis of 65,000 English articles from Common Crawl found that as of May 2025, roughly 52 percent of new articles were AI-generated. Ahrefs studied 900,000 newly created web pages and found that 74.2 percent contained some AI-generated content. Europol has warned that up to 90 percent of online content could be synthetically generated by 2026.
Think about what this means. AI learns from internet data. The internet is now increasingly filled with AI-generated content. So AI is learning from itself. Like a snake eating its own tail.
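The dynamic behind model collapse can be sketched with a toy simulation. This is a deliberately simplified stand-in for the Oxford experiment, not their actual setup: here the "model" is nothing more than the empirical distribution of its training data, and each generation retrains on samples drawn from the previous generation's output.

```python
import random

def next_generation(samples, n):
    # Retrain the "model" on its own output: the new generation is
    # simply resampled from the empirical distribution of the old one.
    return random.choices(samples, k=n)

random.seed(42)
# Generation 0: human-written data with a Zipf-like long tail of rare words.
vocab = list(range(1000))
weights = [1.0 / (rank + 1) for rank in vocab]
data = random.choices(vocab, weights=weights, k=5000)
initial_vocab = set(data)

for _ in range(50):
    data = next_generation(data, 5000)

final_vocab = set(data)
# Rare words vanish first, and once a word's count hits zero no
# later generation can ever resample it back into existence.
print(f"distinct words: generation 0 = {len(initial_vocab)}, "
      f"generation 50 = {len(final_vocab)}")
```

Run after run, the distinct-word count only ever shrinks: resampling can delete a rare word but never reinvent it, which mirrors the paper's point that the loss of the distribution's tails is irreversible.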
Ilya Sutskever, co-founder of OpenAI, told the audience at the NeurIPS conference in December 2024 that the industry has reached peak data. He compared data to fossil fuels. It was created over decades of human activity, models consumed it, and now usable supplies are running thin. Researchers at Epoch AI published a peer-reviewed analysis at ICML 2024 estimating that quality-adjusted public text data — roughly 300 trillion tokens — will be exhausted between 2026 and 2032.

The Stack Overflow Problem
Here is a concrete example of AI devouring its own food supply. Stack Overflow used to be the largest source of human-written programming knowledge on the internet. Developers asked questions, other developers answered them, and AI models learned from these exchanges. At its peak around 2014, the platform received roughly 200,000 new questions every month, according to data from the Stack Exchange Data Explorer.
Then AI coding assistants arrived. Developers stopped asking questions on Stack Overflow because they could query ChatGPT or GitHub Copilot instead. By December 2025, new questions had plummeted: Futurism reported the monthly count fell to around 3,600. The slide from a peak of 200,000 to fewer than 4,000 is a decline of roughly 98 percent in about a decade.
The irony is hard to miss. AI coding tools were trained on Stack Overflow answers written by humans. Those humans stopped writing because they switched to AI. When these tools need fresh human-generated code to improve, they find mostly their own previous outputs. The snake, once again, has found its tail.
The Stack Overflow 2025 Developer Survey, with over 49,000 responses from 177 countries, revealed a growing trust problem. While 84 percent of developers use or plan to use AI tools, only 33 percent trust their accuracy. That trust figure was 43 percent just one year earlier. The most common complaint? Solutions that are almost right. Close enough to look correct. Wrong enough to break production code.
A randomized controlled trial by researchers at METR, a nonprofit AI evaluation organization, found something even more troubling. They studied 16 experienced open-source developers and discovered that while developers believed AI made them 20 percent faster, they actually took 19 percent longer on average. The perception of speed was real. The speed itself was not. A separate Microsoft study of roughly 200 engineers using GitHub Copilot for three weeks found no statistically significant changes in coding time or pull request activity.

The 259 Billion Dollar Question
Now let us follow the money. Because the financial picture reveals something that should give pause to anyone paying attention.
OpenAI earned about 3.7 billion dollars in revenue in 2024 while losing roughly 5 billion. Their valuation climbed to 500 billion dollars by October 2025. Deutsche Bank analysts estimated that OpenAI would accumulate approximately 143 billion dollars in negative cumulative cash flow between 2024 and 2029 before potentially reaching profitability. No startup in history has operated at losses on anything approaching that scale.

Anthropic announced a 14 billion dollar annualized revenue run rate by early 2026 at a 380 billion dollar post-money valuation. That is a 27 times revenue multiple. Their gross margins sat around 40 percent, revised down from an earlier 50 percent projection, well below the roughly 77 percent analysts typically require to justify such valuations.
The most extreme case may be xAI, Elon Musk's AI company. Bloomberg reported it was burning through roughly one billion dollars per month at a valuation between 230 and 250 billion, against approximately 500 million in annualized revenue. That implies a revenue multiple somewhere between 460 and 500.
Here is the uncomfortable truth. Not a single AI company at the foundation model layer is profitable. Not OpenAI. Not Anthropic. Not xAI. Not Mistral. Not Cohere. Not Stability AI. The only clearly profitable companies in the AI ecosystem are those selling infrastructure to AI builders, mainly NVIDIA with 130.5 billion in revenue and strong margins. The gold miners lose money. The shovel sellers get rich.

Sequoia Capital's David Cahn identified what he called a 600 billion dollar revenue gap between what companies spend on AI infrastructure and what they actually earn from it. The four biggest tech companies — Amazon, Microsoft, Google, and Meta — committed roughly 380 to 400 billion dollars in AI capital expenditure for 2025. Goldman Sachs analyst Ryan Hammond found these same companies took on 121 billion dollars in debt in a single year, roughly four times their five-year average.
A Bank of America survey in October 2025 found that a record 54 percent of global fund managers consider AI stocks to be in bubble territory. Ray Dalio told CNBC that markets are about 80 percent into a bubble on his proprietary indicator, comparing the situation to 1929 and the 2000 dot-com crash. MIT Nobel laureate Daron Acemoglu published a peer-reviewed analysis in Economic Policy arguing that AI would contribute no more than 0.53 to 0.66 percent in total factor productivity growth over a decade — far below the transformative projections that justify current valuations.

Why CEOs Keep Promising the Revolution
If AI is not replacing workers at the pace promised, why do the biggest names in technology keep insisting it will? The answer involves stock prices more than science.
When Oracle announced an AI data center partnership with OpenAI, its stock jumped roughly 40 percent. J.P. Morgan's Michael Cembalest calculated that AI-related stocks accounted for 75 percent of all S&P 500 returns since ChatGPT launched. Every time a CEO announces an AI transformation plan, the market rewards them.
Harvard Business Review published a study in January 2026 based on a survey of more than 1,000 global executives. The findings were striking. Sixty percent of surveyed companies had reduced headcount in anticipation of what AI might do in the future. Only 2 percent had made significant layoffs tied to actual, measurable AI implementation. Companies are firing people for capabilities that do not yet exist.
The SEC has started cracking down on something called AI washing — companies exaggerating their AI capabilities to attract investors. In March 2024, the commission charged Delphia and Global Predictions in the first-ever AI washing enforcement actions, issuing fines of 225,000 and 175,000 dollars respectively. DLA Piper reported that AI-related securities class action filings doubled from 7 in 2023 to 14 in 2024, with 12 more filed through September 2025 alone.
PwC's 29th Annual Global CEO Survey, released in January 2026 after surveying 4,454 CEOs across 95 countries, found that only 12 percent reported both cost and revenue benefits from AI. Fifty-six percent saw no significant financial benefit at all. The MIT NANDA initiative, covered by Harvard Business Review in August 2025, found that 95 percent of enterprise AI pilot projects failed to deliver measurable returns despite an estimated 30 to 40 billion dollars in collective investment.

So Where Does This Leave Us?
Let us assemble the pieces. AI has genuinely created more jobs than it has destroyed. That is not sentiment. That is data from the World Economic Forum, PwC, LinkedIn, the U.S. Bureau of Labor Statistics, and Bitkom. Germany's unfilled IT positions are real. The 1.3 million new AI roles are real. The 56 percent wage premium is real.

But the industry building this technology faces three interconnected problems. First, a scientific wall. Model collapse is not speculation. It is documented in Nature. Training data is finite. More than half the internet is now AI-generated content feeding back into AI training loops. Epoch AI's peer-reviewed analysis projects exhaustion of quality public text data within the decade.
Second, a financial crisis that looks increasingly structural. Nearly 259 billion dollars invested against roughly 37 billion in enterprise revenue. Zero profitable companies at the foundation model layer. Projected cumulative losses of 143 billion for OpenAI alone through 2029. Goldman Sachs documenting a 121 billion dollar debt surge among the largest technology firms in a single year.
Third, a widening credibility gap. Ninety-five percent of enterprise AI pilots failing to produce measurable value. CEO predictions about replacement timelines consistently proving wrong. Developer trust in AI accuracy falling from 43 to 33 percent in one year. Actual coding speed studies showing null results or even slowdowns despite subjective perceptions of improvement.

The most honest way to describe the current situation is this. AI is a real and useful technology that has been wrapped in a financial structure requiring it to be world-changing to justify its price tag. The technology genuinely helps people work better. It does not genuinely replace them at scale. And the gap between those two realities is where hundreds of billions of dollars sit, waiting to see which version of the story survives contact with the real world.

CONCLUSION
AI is not going away. It has already changed how people work, study, and build businesses. The jobs it creates pay well and keep multiplying. The startups it powers are smaller, faster, and occasionally worth billions with fewer than 50 people on payroll.
But AI is also consuming itself. The training data is finite. The models increasingly learn from their own outputs. The companies building foundation models are losing billions. And the bold predictions from industry leaders continue to miss their deadlines.
The future probably looks something like this. AI will keep generating new jobs. AI companies will keep hemorrhaging cash until the investment bubble corrects. And the people who benefit most will be those who treat AI as a powerful tool rather than a magic replacement for human cognition. Because in the end, the most valuable asset in an AI-saturated world is still a person who knows how to think clearly.

Frequently Asked Questions
Is AI actually creating more jobs than it eliminates?
Yes. Data from PwC, LinkedIn, and the Information Technology and Innovation Foundation consistently shows AI creating jobs at roughly ten times the rate it displaces them. The World Economic Forum projects a net gain of 78 million jobs globally by 2030.
What is model collapse in artificial intelligence?
Model collapse occurs when AI systems train on data generated by other AI models rather than original human-created content. Researchers at Oxford documented this phenomenon in Nature in 2024, showing that recursive training causes irreversible quality degradation within several generations.
Are AI companies actually profitable?
No major foundation model company is profitable as of early 2026. OpenAI lost roughly 5 billion dollars in 2024 on 3.7 billion in revenue. NVIDIA, which sells infrastructure to AI companies, is the primary profitable player in the ecosystem.
How much money has been invested in AI compared to its actual revenue?
Venture capital invested approximately 259 billion dollars in AI companies in 2025, while total enterprise spending on generative AI products reached about 37 billion. Sequoia Capital identified this as a 600 billion dollar gap between infrastructure spending and end-user revenue.
What is AI washing and why does it matter?
AI washing is when companies exaggerate their use of artificial intelligence to attract investors or customers. The SEC filed its first enforcement actions against this practice in March 2024, charging two investment firms with making false AI claims. Related securities lawsuits doubled between 2023 and 2024.

