Is AI a Bubble or a Continuing Bonanza?
In August of 2025, Sam Altman, CEO of OpenAI, told a group of reporters that he wanted to spend trillions of dollars on AI infrastructure in the not-too-distant future (Bloomberg). His plan for funding those trillions was a financial instrument that had not yet been invented. Who would invent this incredible financial instrument? Probably OpenAI’s flagship product, ChatGPT, Altman suggested.
It’s the kind of logical loop that can make a listener tilt their head in a slow, wide, windmilling circle—a logic endemic to some of the largest financial bubbles: tulips, subprime mortgages, cryptocurrency. Why wouldn’t a single Semper Augustus tulip, with its white petals adorned in fiery streaks of red, be worth more than a high-quality Amsterdam canal house? Giving high-interest mortgages to borrowers with poor credit was actually democratizing home ownership, not introducing systemic risks to the market. Of course Bitcoin was a “swarm of cyber hornets serving the goddess of wisdom, feeding on the fire of truth, exponentially growing ever smarter, faster, and stronger behind a wall of encrypted energy” (Michael Saylor on X, August 2, 2025).
In 2026, Alphabet, Amazon, Meta, and Microsoft are expected to collectively spend around $650 billion to scale up AI-related infrastructure, up from $410 billion in 2025, proving that a mixture of hype and potential can nearly approximate the powers of Altman’s incredible and still uninvented financial instrument (Reuters).
Bridgewater Associates, a premier asset management firm, warns that this is “the most dangerous phase” of the AI boom: enormous investments must eventually be paid off with enormous profits. Where will those profits come from? A precise and comprehensive answer to that question hasn’t been invented yet.
The Ghost of Bubbles Past
“Everybody wants to ride an up escalator, and AI has been one for a while,” says Jim Hoover, DBA, director of the Business Analytics and Artificial Intelligence Center and clinical professor of marketing in the Warrington College of Business at the University of Florida. “But now you’re seeing a lot of oscillation in the market. People are worried: is today the day the escalator stops going up?”
If the current AI boom is actually a bubble, then it’s probably of the same phenotype as the dot-com era: massive investment, intense speculation, and a fervent belief in the underlying tech’s potential. The internet was going to revolutionize society. The problem was the market’s estimation of how quickly that would happen.
Investors poured money into unproven business models, and when dot-com companies failed to generate reliable revenue, the bubble burst. (For the sake of brevity, and my own argument, I’m skipping over a few historical factors, such as the Fed tightening monetary policy.)
But, crucially, the zealots of the dot-com era were not directionally wrong: the internet did revolutionize society; Pets.com was a good idea, just too early.
“When ChatGPT came out in 2022, everybody could understand what the potential of this thing was,” Dr. Hoover says. “They could see the potential the same way they could see the potential for the internet. And everyone threw money at it because they were really worried about missing out on a huge opportunity.”
Usually, when a company invests in digitization, such as an ERP or CRM system, or considers a major IT acquisition or implementation, it builds a detailed business case that maps out where and when the return on that massive investment will come. It’s called due diligence. It doesn’t happen much during speculative bubbles. You can’t map out the returns from instruments and applications that haven’t been discovered yet. And if you wait until those returns can be mapped, you risk being left behind.
“Over the next two to three years, businesses that have made all this AI investment are going to begin to ask the really hard questions about return on investment,” Dr. Hoover says. “And once they get asked, smart business people figure out ways to actually achieve that value.”
The Hunt for Value
Trillions of dollars of AI investment suggest trillions of dollars of value is out there, somewhere. How much of it is already here? PwC’s 29th Global CEO Survey found that, of more than 4,400 chief executives polled, 30 percent saw an increase in revenue from AI, but only 12 percent saw increased benefits paired with decreased spending, and around 55 percent saw no benefit at all from AI tools (PwC 2026). A larger, more recent study by the National Bureau of Economic Research was more dour: 80 percent of firms reported that AI had no impact on their productivity or employment (NBER 2026).
Unlike the dot-com era, today’s public is far less enthusiastic about the underlying tech of the AI boom. A survey of 2,354 American adults found that while a majority did find value in AI assistant tools, 71 percent would not pay extra for those features in products they already used, and between 28 and 38 percent would stop using a product if they couldn’t turn off or remove its AI assistant features (Aberdeen Research 2025).
Consumer sentiment can change quickly, but right now, many users feel that AI tools are trying to solve problems they don’t want help with. As I write this article in Google Docs, every time I stop to think, a semi-transparent line of text appears on the blank white page offering to “Help me write,” and there is no obvious way to turn it off.
In the near term, AI’s provable economic value is most likely to be disproportionately enterprise-driven, and one of the quickest and easiest ways that value can manifest is through reductions to the workforce. Does that mean AI is coming for your job? Claims of layoffs related to AI efficiency are tricky to assess: on one hand, they confirm our biggest fears and hopes around AI’s potential; on the other, they’re a great way to spin a negative market event (layoffs) into a positive one (we’re doing so great with AI!).
“It takes a bit longer to figure out how to generate revenue than it does to figure out how to be more efficient,” Dr. Hoover says. “The latter often translates to: do we need to hire as many people, or can we perhaps lay some people off?”
Customer service and marketing teams were, perhaps predictably, the most wounded by the efficiency of AI chatbots. More surprising is the chilling effect on entry-level software engineers and data scientists, who now face the coding capabilities of tools like Anthropic’s Claude.
But anecdotal evidence points to AI intensifying work rather than replacing it (HBR 2026). This article’s research, fact-checking, and transcription would have been deeply onerous without AI’s help, and the final product poorer for it—but the prose and structure remain fully human, rather than the unreadable machine slop that AI generates when it “helps” you write. Altogether, it’s the same amount of time spent, with a better product as a result. Voilà, value.
The next frontier is AI agents, which can write and run their own code to take multiple actions on a user’s behalf with limited human supervision. People are still working out the kinks: the director of safety and alignment at Meta’s superintelligence lab set up an OpenClaw agent to sort her inbox, and it deleted everything, despite her repeated and frantic requests for it to stop (Gizmodo).
But the potential remains enormous. A software engineer working with a suite of AI agents has far more horsepower than one working on their own: OpenClaw acts as a team of super coders, even if one of those super coders occasionally deletes all of their team leader’s emails.
“The development of AI agents is showing a lot more promise than just the chatbot implementation of AI,” Dr. Hoover says. “Companies still need to get people at the bottom of the [CS/SE/data science] ladder, but they’re getting a lot more efficiency out of every person they hire. That’s one area where AI has proven to have generated some real value.”
Staying Relevant in an AI-Powered World
How do you stay relevant when everything’s changing so fast? The question is an acute one for both MBA candidates and the programs they attend. While many college-level professors teach the same course year after year, Dr. Hoover has had to overhaul his MBA course repeatedly over the last two to three years to stay at the forefront of AI innovation. He expects students to use AI tools and develop the fluency they’ll need after graduation.
But he also focuses on teaching his students to thrive in dynamic environments, arming them with evaluation skills that can be applied to future AI solutions.
“We’ve come up with a strategic framework for students to figure out where the opportunities are with AI,” Dr. Hoover says. “How do you evaluate whether or not your AI tools are capable of delivering? What are the data needs you have? How do you address issues with AI like bias? How do you implement these AI tools and do change management? These are the key issues facing AI now.”
Evaluation skills are paramount in an accelerating industry. The typical software development life cycle (SDLC)—basically, the time it takes to acquire, implement, and scale a new digital solution—runs around a year. Some vendors brag about being able to shave that down to six months.
But new AI models can come out overnight, and the difference between model updates can be stark in both directions: the seismic advances of Claude Opus 4.6; the ‘lazy’ GPT-4 update; GPT-4o’s sycophancy problems; the ‘too-woke’ image generation of Gemini. Business leaders have to weigh the potential rewards of racing to implementation versus the potential cost (both reputational and financial) of haste.
The web is strewn with stories of AI’s failures, which range from humorous to horrifying: in 2023, a California car dealership’s chatbot agreed to sell a new Tahoe for a dollar (Gizmodo); in 2025, ChatGPT helped a 16-year-old draft a suicide note and tie a noose (Raine v OpenAI 2025). A company as large as OpenAI can withstand the backlash from something like the latter; an upstart might not even be able to handle the former.
“This is a whole area that I am telling my MBA students is going to be a field that doesn’t exist today: people assessing the outputs of AI models,” Dr. Hoover says. “If you rush to implement a new AI model into your solution without doing testing, the consequences can be serious. You have to test AI solutions, because they really are not human.”
Mapping Success and Failure
In February of 2026, software and data stocks lost $300 billion worth of market value over concerns about how AI agents and coding tools would impact their business (WSJ). A few weeks later, a report by Citrini Research explored the possibility that AI agents would succeed to such an extent that they would pulverize the financial, labor, and housing markets—stocks of major companies mentioned in the report dropped sharply as a result (WSJ). These spikes of volatility point to a market that’s uncertain what success and failure look like: no one can predict a future that’s changing so quickly, except to predict that it will keep changing.
“My opinion for the long run is that there will be some consolidation in AI like there was for the dot-com boom,” Dr. Hoover says. “And that consolidation is going to result in mega-companies—like Nvidia, Microsoft, and Google—that have the capital to be able to buy up some of the companies that have something valuable, but who haven’t figured out how to monetize it well enough to survive. So part of what I’m suggesting to senior leaders is to consider whether or not the AI tool provider they’re partnering with is going to be around in a couple of years. You don’t want to spend millions of dollars on a solution and have that company go out of business.”
The AI boom is objectively weird. Unlike the dot-com era, there’s a non-zero chance that the underlying tech kills everyone on the planet: Dario Amodei of Anthropic once put his ‘probability of doom’ at 25 percent (he’s since backed down on that figure); Elon Musk, championing his own AI solution, xAI, put the chances of doom at only 20 percent (he still wants to put data centers in space).
Some of this can be interpreted as an extremely morbid type of marketing: only a very powerful technology could wipe out life on earth, and the potential cuts both ways. The US Department of Defense is pressuring Anthropic to allow Claude, which was used in the extrajudicial arrest of Venezuela’s Nicolas Maduro, to power autonomous killing devices (sounds valuable!); if Anthropic will not, one of its competitors—OpenAI, xAI, or Google—might (NYT).
Despite any ambient snark present in this article (it might soon be all we humans have left), AI will almost certainly be, like the internet, part of our lives for the foreseeable future. Those who succeed in that future will prepare themselves for both bullish and bearish outcomes: approaching new advances with a mix of skepticism and curiosity, constantly learning, and demanding that potential be translated—at some point, hopefully sooner rather than later—into true value.
“We are going to figure out how these tools work best,” Dr. Hoover says. “They’re going to keep getting better. But we still don’t know all the business applications that are going to be successful in the future.”