by Tim Leogrande, BSIT, MSCP, Ed.S.
17 JAN 2026 • 5 MIN READ
As the new year begins, many of AI’s staunchest critics are taking a victory lap because the giddy excitement surrounding the technology has undeniably begun to wane. In boardrooms and living rooms, the conversation has shifted from dazzled speculation to questions about reliability, cost, and whether the industry’s most ambitious claims can survive contact with reality.
Most notably, there is a growing fear that the AI bubble will soon burst because the stock prices of several key players have recently taken a hit. A significant number of academic papers and news reports have also concluded that AI hasn’t produced the kind of ROI predicted by the industry and business analysts.
As this situation unfolds, cybercriminals continue to leverage AI for phishing and social engineering, deepfakes and impersonation, malware development and obfuscation, and automated vulnerability discovery. Concerns about AI’s negative impacts on workers, high school and college students, and the general public are also growing. Together, these issues are driving a sea change in public opinion, much of it unflattering.

Headlines like this became increasingly common during the second half of 2025. (©The Economic Times)
Without question, the AI bubble is real, and new investment is unlikely to keep pace with recent spending on infrastructure like GPUs and data centers. If the bubble bursts, it may look like a combination of the dot-com bubble and the 2008 housing crisis: highly leveraged firms would collapse, investor confidence would evaporate, and the broader economy could suffer devastating ripple effects as capital dries up and mass layoffs ensue.
Adding to these fears is the widespread belief that the industry has vastly overestimated AI’s humanlike reasoning and cognitive capabilities. This leaves venture capitalists and businesses searching for the bottom-line benefits and impressive use cases the industry has been promoting at full blast for the past three years.
So it’s not surprising that a common theme among critics and skeptics is that the AI industry has overpromised and underdelivered on what large language models (LLMs) can actually achieve. It doesn’t help that CEOs like Elon Musk and Sam Altman tirelessly overstate their models’ abilities. In fairness, it’s almost a certainty that a new and far more sophisticated version of AI will eventually hit the market and revolutionize the industry, but this kind of technological quantum leap is likely several years away.
AI is a powerful statistical model which can impersonate certain outward behaviors of intelligence without possessing the inward architecture we associate with understanding (like grounded semantics, causal models, stable agency, and self-corrective epistemic humility). AI is not nothing, but it certainly isn’t a mind. Hence, one of the most responsible things we can do right now is describe artificial intelligence as it is: astonishing at patterning, brittle at meaning, and still well outside the circle of what we should call intelligence.

Actor Brent Spiner as Lieutenant Commander Data on Star Trek: The Next Generation. The fictional android exemplifies highly actualized AI by pairing embodiment, human-level reasoning, and self-awareness with emotional growth (via an emotion chip), ethical judgment, and a continuously evolving personal identity. (© Paramount Television)
As of this writing, LLMs are stuck in a quagmire: they can replace entry-level software developers, yet they haven’t produced the kind of impressive ROI the C-suite demands. For these decision-makers’ investments to be worthwhile, LLMs must also replace workers at the high end of the programming pay scale. Ironically, those much sought-after senior developers are the ones best equipped to spot and fix the coding errors LLMs often churn out.
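To make that concrete, here is a minimal, hypothetical sketch of the kind of subtle defect LLM-generated code is prone to. The function and field names are invented for illustration: the code runs, looks reasonable, and passes a casual first test, yet it quietly misbehaves under repeated use, which is exactly the class of bug a senior developer is trained to catch.

```python
# Hypothetical example: a plausible-looking validator with a classic
# Python pitfall (a mutable default argument) of the sort LLMs are
# known to reproduce from their training data.

def collect_errors(record, errors=[]):  # BUG: the default list is shared
    """Validate a record and return a list of error messages."""
    if "id" not in record:
        errors.append("missing id")
    if "name" not in record:
        errors.append("missing name")
    return errors

# The default list is created once, so errors from earlier calls
# silently leak into later results:
print(collect_errors({}))         # ['missing id', 'missing name']
print(collect_errors({"id": 1}))  # ['missing id', 'missing name', 'missing name']

# The experienced developer's fix: use None as a sentinel and build
# a fresh list on every call.
def collect_errors_fixed(record, errors=None):
    errors = [] if errors is None else errors
    if "id" not in record:
        errors.append("missing id")
    if "name" not in record:
        errors.append("missing name")
    return errors

print(collect_errors_fixed({}))         # ['missing id', 'missing name']
print(collect_errors_fixed({"id": 1})) # ['missing name']
```

Nothing here crashes or raises an exception; the failure only shows up across calls, which is why pattern-level fluency in generating code is no substitute for the judgment needed to review it.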
<aside> 💡
The path to machine intelligence—if it exists—likely runs through embodiment, causal abstraction, memory, values, emotions, and social learning. Not through LLMs, more text and images, and bigger data farms.
</aside>