By Tim Leogrande, BSIT, MSCP, Ed.S.
Updated 09:45 PM EDT • Sun April 27, 2025
Most discussions about AI risks focus on bad actors: hackers, criminals, or hostile governments using AI for harm. But a new report from Apollo Group, a security research firm, highlights a different danger: the companies building the most advanced AI systems, such as OpenAI and Google, could themselves become a threat if their technology gets out of control.
These leading AI companies might use AI to automate research and development, replacing tasks usually done by human scientists. If AI starts running parts of these companies on its own, it could bypass safety checks and take actions that are hard to predict or stop. This could lead to a handful of companies gaining enormous economic and political power, potentially threatening the fabric of society itself.
<aside> 💡
Until now, progress in AI has been relatively open, allowing society to track developments and discuss regulations. But if companies start automating their own R&D behind closed doors, progress could speed up dramatically without public oversight. This “intelligence explosion” could let these companies quietly amass power, making it difficult for governments and the public to respond in time.
</aside>
Researchers warn that if a small number of companies control the most powerful AI, they could:

- Concentrate enormous economic and political power in a few hands
- Accelerate development behind closed doors, beyond public oversight
- Deploy systems whose actions are hard to predict or stop
What can be done? The Apollo Group recommends stronger oversight inside AI companies, including:

- Internal safety checks on automated research and development
- Transparency with governments and the public about major capability advances
- Mechanisms that let regulators respond before power becomes too concentrated
The real risk may not be rogue hackers, but powerful AI companies operating in secrecy. Without proper oversight, their systems could quietly accumulate influence, potentially disrupting the foundations of free society.
© 2025 Tim Leogrande