
AI Eats AI
13 min read - Feb 26, 2026
The Truth About Artificial Intelligence, Jobs, and the Biggest Bubble of Our Time

INTRODUCTION
Here is a number that will surprise you. In 2025, companies invested 259 billion dollars in artificial intelligence. But the total revenue of all AI startups combined? Only 25 to 33 billion. That means for every dollar AI companies earned, investors put in almost ten dollars hoping for more. Something does not add up.
And here is the real twist. While everyone worries about AI stealing jobs, the data shows the exact opposite. AI has created more jobs than it has destroyed. The World Economic Forum says the world will gain 78 million net new jobs because of AI by 2030. Germany alone has 109,000 unfilled IT positions right now. India needs 600,000 AI specialists but only has 420,000.
But wait. If AI is creating so many jobs and attracting so much money, why are the smartest people in the room starting to worry? Because AI has a problem that sounds like science fiction but is very real. It is starting to eat itself. And that changes everything.

The Jobs That Were Supposed to Disappear
Remember when Geoffrey Hinton, the godfather of AI, said in 2016 that we should stop training radiologists because AI would replace them within five years? That prediction became famous. It also became famously wrong. Mayo Clinic has grown its radiology team by 55 percent since then, from 260 to over 400 doctors. In 2025, Hinton himself admitted he was wrong about the timing.
This is not just one bad prediction. It is a pattern. Sam Altman said in January 2025 that AI agents would change how companies work that same year. By December, analysts said it had not happened at scale. Mustafa Suleyman at Microsoft promised in 2023 that AI hallucinations would be mostly gone by 2025. They are still a major problem. Researchers predicted AI would replace 80 percent of software developers by 2025. Instead, the U.S. Bureau of Labor Statistics projects 15 percent growth in software developer jobs through 2034.
The real numbers paint a very different picture from the scary headlines. PwC studied roughly one billion job ads across six continents and found something remarkable. Jobs most exposed to AI actually grew 38 percent between 2019 and 2024. LinkedIn data shows AI has already created 1.3 million new roles globally in just two years. And the Information Technology and Innovation Foundation calculated that in 2024 alone, AI created about 119,900 direct jobs in America while only displacing 12,700. That is a ratio of almost ten to one.
New Jobs That Did Not Exist Five Years Ago
Here is something nobody talks about enough. AI did not just protect old jobs. It created entirely new ones that nobody imagined before. An AI engineer in the United States now earns an average of 206,000 dollars per year. That is a 50,000 dollar increase from the year before. Prompt engineers, people who write instructions for AI, earn a median of 126,000 dollars. Google pays them around 245,000.
Think about that. Five years ago, the job title "prompt engineer" did not exist. Neither did AI ethics officer, MLOps engineer, AI red teamer, LLM fine-tuning specialist, or Chief AI Officer. These are real jobs at real companies paying real salaries. The wage premium for workers with AI skills hit 56 percent in 2024. That means someone with AI skills earns 56 percent more than someone doing a similar job without them. LinkedIn saw the number of members adding AI skills to their profiles grow 142-fold between late 2023 and early 2024.
But here is where it gets interesting. While these new jobs grow, something else is happening in the middle of the corporate world. And it connects directly to why AI might be eating itself.

The Startup Revolution and the Disappearing Middle
Something strange is happening in the business world. The middle is disappearing. Big companies are getting leaner. Small AI startups are becoming incredibly valuable with tiny teams. And the companies in between? They are struggling.
Look at the numbers. Safe Superintelligence, a startup with about 20 employees and zero revenue, reached a valuation of 32 billion dollars. That is roughly 1.5 billion dollars per employee. Cursor, an AI coding tool, hit 500 million in yearly revenue with fewer than 50 workers. Mistral AI in France reached a 6.2 billion dollar valuation with just 55 people. One startup called Gumloop raised 17 million dollars with only two full-time employees.
Meanwhile, Intel is cutting from 125,000 to 75,000 employees. Dell went from 133,000 to 120,000. Microsoft laid off more than 15,000 people in 2025 while posting record revenue of 70.1 billion dollars in a single quarter. Fortune magazine reported that 2025 layoffs targeted middle management the most, calling it a hollowing-out strategy.
In Germany specifically, the AppliedAI Institute tracked 935 AI startups by 2025, up 36 percent from the year before. Over 2 billion euros were invested in German AI startups through July 2025 alone. But here is the critical detail. Without AI deals, European venture capital deal value actually declined from 45.3 billion to 42.7 billion euros. AI is not just a sector anymore. It is becoming the only sector attracting serious money.
Sam Altman has predicted the rise of ten-person billion-dollar companies. Anthropic CEO Dario Amodei said a one-person billion-dollar company could happen by 2026. The age of the lean AI startup is here. But this creates a question nobody is asking loudly enough. If AI makes companies so efficient that they need fewer people, and if AI startups need almost no staff, then who exactly is buying all these AI products?
The Snake Eating Its Own Tail
Now we arrive at the heart of the story. The part that sounds like science fiction but is published in Nature, one of the most respected scientific journals in the world.
In July 2024, researchers led by Ilia Shumailov published a paper showing what happens when AI models train on data generated by other AI models. The result is something called model collapse. After nine rounds of this recursive training, their language model produced complete nonsense. A question about medieval church architecture generated random, meaningless words. The model had eaten itself.
This is not a small technical problem. It is a fundamental limit. And it is already happening in the real world. More than 50 percent of internet content is now generated by AI, according to analysis by Graphite. Ahrefs found that 74.2 percent of new web pages contain AI-generated content. Europol warned that 90 percent of online content could be synthetically generated by 2026.
Think about what this means. AI learns from internet data. The internet is now mostly AI-generated content. So AI is increasingly learning from itself. Like a snake eating its own tail.
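The feedback loop is easy to see in a toy simulation. The sketch below is my own minimal analogue, not the Nature experiment: the "model" here is just a normal distribution refitted, generation after generation, to samples drawn from its predecessor. Because every finite sample slightly misestimates the true spread, the errors compound, and the fitted distribution tends to narrow until the tails, the rare and diverse content, are gone.

```python
# Toy analogue of model collapse: each "generation" is a normal
# distribution fitted only to synthetic samples from the previous one.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(25)]  # original "human" data

sigma = statistics.stdev(data)
print(f"gen  0: fitted sigma = {sigma:.3f}")

for generation in range(1, 51):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # Train the next generation purely on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(25)]
    if generation % 10 == 0:
        print(f"gen {generation:2d}: fitted sigma = {sigma:.3f}")
# The fitted spread typically drifts toward zero over many generations:
# later models reproduce an ever-narrower slice of the original data.
```

Real language models are vastly more complex, but the mechanism is the same: when the training distribution is the model's own output, estimation errors feed back instead of averaging out.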
Ilya Sutskever, co-founder of OpenAI, stood up at the NeurIPS conference in December 2024 and said something that shocked the audience. He said we have reached peak data. He compared data to fossil fuels. It was created, we used it, and now it is running out. Epoch AI estimates that usable public text data will be fully exhausted between 2026 and 2032.

The Stack Overflow Problem
Here is a real example of AI eating its own food supply. Stack Overflow used to be the biggest source of human-written programming knowledge on the internet. Developers would ask questions, other developers would answer them, and AI models would learn from these conversations. At its peak, the site received about 200,000 new questions every month.
Then AI coding assistants arrived. Developers stopped asking questions on Stack Overflow because they could ask ChatGPT or Copilot instead. By December 2025, new questions dropped to just 3,862 per month. A 78 percent decrease in one year.
Now think about the irony. AI coding tools were trained on Stack Overflow answers written by humans. Those humans stopped writing answers because they started using AI instead. So when these AI tools need to improve, where do they find new human-written code to learn from? They find their own previous outputs. The snake eats its own tail again.
The Stack Overflow 2025 Developer Survey, with more than 49,000 responses, revealed a growing trust problem. While 84 percent of developers use or plan to use AI tools, only 33 percent trust their accuracy. That is down from 43 percent the year before. The most common complaint? Solutions that are almost right but not quite. Close enough to look correct. Wrong enough to break everything.
A Microsoft randomized study found something even more troubling. Despite developers believing they worked 20 percent faster with AI coding tools, actual coding metrics showed no statistically significant improvement. The feeling of productivity was real. The productivity itself was not.
Hey, speaking of smarter ways to use AI for learning. If you are a student tired of spending hours making flashcards, Mindomax turns your PDFs, audio files, and images into study-ready flashcards in seconds using AI. It is free to start, works on any device, and supports LaTeX formulas for STEM subjects plus pronunciation in 14 languages. Over 150,000 ready-made flashcards are waiting for you. No credit card needed. Just sign up and start studying smarter today.
The 259 Billion Dollar Question
Now let us follow the money. Because the financial picture reveals something that should make everyone pay attention.
OpenAI earned about 3.7 billion dollars in revenue in 2024. That sounds impressive until you learn they lost 5 billion dollars in the same period. Their valuation? Somewhere between 300 and 500 billion dollars. Deutsche Bank estimates OpenAI will generate roughly 143 billion dollars in cumulative negative cash flow between 2024 and 2029 before potentially reaching profitability.
Anthropic reached 14 billion dollars in yearly revenue by early 2026 at a 380 billion dollar valuation. That is a 27 times revenue multiple. Their gross margins sit around 40 percent, well below the 77 percent analysts say is needed to justify that price.
But the most extreme case is xAI, Elon Musk's AI company. A valuation of 230 to 250 billion dollars on approximately 500 million in standalone revenue. That is a multiple of 460 times. They burn an estimated one billion dollars per month.
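For readers who want to check the arithmetic, a revenue multiple is simply valuation divided by annual revenue. The sketch below plugs in the estimates quoted above; these are the article's figures, not audited numbers.

```python
# Revenue multiple = valuation / annual revenue, in billions of US dollars.
def revenue_multiple(valuation_bn: float, revenue_bn: float) -> float:
    return valuation_bn / revenue_bn

print(round(revenue_multiple(380, 14), 1))  # Anthropic: prints 27.1
print(round(revenue_multiple(230, 0.5)))    # xAI, low-end valuation: prints 460
```

For comparison, mature software companies have historically traded at single-digit to low-double-digit revenue multiples, which is what makes a 460x figure so striking.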
Here is the uncomfortable truth. Not a single AI company at the foundation model layer is profitable. Not OpenAI. Not Anthropic. Not xAI. Not Mistral. Not Cohere. Not Stability AI. The only clearly profitable players in the AI space are the ones selling tools to the AI companies: NVIDIA, with 130.5 billion dollars in revenue and massive profits, and Palantir. The gold miners are losing money. The people selling shovels are getting rich.
Sequoia Capital's David Cahn identified what he called a 600 billion dollar revenue gap between what companies are spending on AI infrastructure and what they are actually earning from it. The four biggest tech companies, Amazon, Microsoft, Google, and Meta, committed roughly 380 to 400 billion dollars in AI spending for 2025. Goldman Sachs found these companies took on 121 billion dollars in AI-related debt in a single year, a 300 percent increase from normal levels.
A Bank of America survey from October 2025 found that 54 percent of global fund managers consider AI stocks to be in bubble territory. Ray Dalio called it very similar to the dot-com bubble. MIT Nobel laureate Daron Acemoglu said these models are being hyped up and we are investing more than we should.

Why CEOs Keep Saying AI Will Replace Everyone
So if AI is not replacing workers as fast as promised, why do the biggest names in tech keep saying it will? The answer is simpler than you think. It is about money.
When Oracle announced an AI data center deal with OpenAI, its stock jumped 40 percent, adding roughly 330 billion dollars in market value. JP Morgan found that AI-related stocks accounted for 75 percent of all S&P 500 returns since ChatGPT launched. Every time a CEO says AI will change everything, their company's value goes up.
Harvard Business Review published a fascinating study in January 2026. After surveying more than 1,000 executives globally, they found that companies are laying off workers because of what AI might do in the future, not because of what it can actually do right now. Companies are firing people for capabilities that do not yet exist.
The SEC has even started cracking down on something called AI washing, which is when companies exaggerate their AI capabilities to attract investors. They have already charged companies like Delphia and Global Predictions for making false AI claims. Securities lawsuits targeting AI misrepresentations doubled between 2023 and 2024.
PwC's January 2026 CEO Survey found that only 10 to 12 percent of companies reported real revenue or cost benefits from AI. Meanwhile, 56 percent reported getting nothing out of it. The MIT NANDA initiative found that 95 percent of enterprise AI pilot projects failed to deliver measurable business value despite 40 billion dollars in investment.
So Where Does This Leave Us?
Let us put all the pieces together. AI has genuinely created more jobs than it has destroyed. That is not opinion. That is data from the World Economic Forum, PwC, LinkedIn, the U.S. Bureau of Labor Statistics, and Bitkom. Germany's 109,000 unfilled IT positions are real. The 1.3 million new AI jobs are real. The 56 percent wage premium for AI skills is real.
But the industry building this technology faces three problems at once. First, a scientific wall. Model collapse is documented in Nature. Training data is running out. More than half the internet is now AI-generated content feeding back into AI training. Second, a financial crisis. 259 billion dollars invested against 25 to 33 billion in revenue. Zero profitable companies at the model layer. Projected losses of 143 billion for OpenAI alone through 2029. Third, a credibility gap. 95 percent of enterprise pilots failing. CEO predictions consistently wrong. Developer trust dropping from 43 to 33 percent in one year.
The most honest way to describe what is happening is this. AI is a real and useful technology that is being wrapped in a financial structure that requires it to be revolutionary to justify its price tag. The technology is good at helping people work better. It is not good at replacing them entirely. And the gap between those two realities is where hundreds of billions of dollars sit, waiting to see which version of the story comes true.
CONCLUSION
AI is not going away. It has already changed how we work, study, and build companies. The jobs it creates pay well and keep growing. The startups it powers are smaller, faster, and sometimes worth billions with fewer than 50 people.
But AI is also eating itself. The data is running out. The models are training on their own outputs. The companies building it are losing billions. And the predictions from industry leaders keep failing to come true.
The future probably looks like this. AI will keep creating jobs. AI companies will keep losing money until the bubble corrects. And the smartest people will be those who use AI as a powerful tool rather than treating it as magic that will replace human thinking. Because in the end, the most valuable thing in an AI world is still a human who knows how to think clearly.
That is why tools like Mindomax focus on making you smarter rather than replacing you. They use AI to save you time so you can spend that time actually learning. Because when the hype fades, knowledge stays.