by Tim Leogrande, BSIT, MSCP, Ed.S.

🗓 MAY 7, 2026 • 6 MIN 15 SEC READ


Artificial intelligence is poised to transform everyday life in ways most of us can hardly imagine, with changes that may be as profound and disruptive as those brought about by the Second Industrial Revolution. Yet our policymakers remain unprepared, and in some cases unwilling, to ensure this paradigm shift benefits everyone rather than a small group of ultra-wealthy tech elites.

Against this backdrop of rapid progress and limited oversight, the warnings of leading AI experts have grown increasingly stark. The technology is evolving so quickly that the “godfather of AI,” physics Nobel laureate Geoffrey Hinton, resigned from Google after warning that AI platforms could flood public debate with false information and ultimately endanger civilization. Similarly, Stuart Russell, a British computer scientist who co-wrote a leading AI textbook, has asserted that the development of AI is “intrinsically unsafe.”

These dire warnings aren’t limited to academics; they are echoed by the people building the technology itself. Anthropic CEO Dario Amodei recently declared that he won’t permit the company’s technology to be used for fully autonomous weapons or domestic mass surveillance after the Department of Defense requested access to the Claude chatbot. Nevertheless, other firms are jockeying for defense contracts and allowing their platforms to be used for AI-assisted targeting in Gaza, which critics allege has already contributed to civilian casualties.

<aside> 💡

These divisions — between those urging restraint and those pursuing military and commercial opportunities — have increasingly drawn the attention of policymakers, setting the stage for more aggressive government involvement in how AI is regulated and deployed.

</aside>

As legislators begin to respond, their actions have introduced a new set of challenges and tensions. In December, the president issued an executive order creating a litigation task force to challenge state AI regulations and potentially undermine consumer protection laws, after Congress rejected the administration’s repeated attempts to insert anti-regulation language into federal legislation. The president has also directed federal regulators to withhold funding allocated for broadband infrastructure if states succeed in maintaining their existing AI laws. State attorneys general may successfully defend these regulations in court. But simply rejecting the current administration’s executive overreach isn’t enough.

To understand why more comprehensive action is needed, we must look beyond policy disputes to the broader distribution of power in society. We can start by acknowledging that the American people have lost control over our economy, media, and politics to tech billionaires. These plutocrats are tightening their grip on our future despite mounting public concern about AI. Many Americans believe they have little control over how their lives, and their children’s lives, will be shaped. This has fueled anger, resentment, and pessimism. According to an Economist / YouGov poll conducted in January,

“Half of Americans (52%) say the gap between rich and poor is a very big problem, while 28% say it's a somewhat big problem, 14% that it's a minor problem, and 6% that it's not a problem.”

The concentration of wealth is at its highest point since the 1920s, according to economist Gabriel Zucman, who reports that the 19 richest American households have accumulated approximately $2.6 trillion. This accumulation of resources is not abstract; it is geographically and institutionally grounded. Much of AI innovation is concentrated in the Silicon Valley region surrounding Stanford University, where a small group of tech giants now accounts for 30% of the S&P 500’s market capitalization. We can acknowledge that technology entrepreneurs have demonstrated skill and creativity in advancing AI while taking calculated risks. But their success is also built on public investment, as has been true for generations of American business leaders. For example, AI development at Stanford — where the Digital Library Project helped lay the groundwork for Google, and ImageNet contributed to visual object recognition — was supported by taxpayer funding and philanthropic contributions.

This pattern is not new, and neither is the public's claim on what follows from it. The interstate highway system, the Internet, GPS, mRNA vaccines, and the touchscreen technology in every smartphone all began with public dollars and public purpose before private enterprise leveraged them into mostly private fortunes. But in each case, society eventually insisted on terms — safety standards, antitrust enforcement, and universal service requirements — to ensure the benefits of taxpayer-funded breakthroughs reach beyond a single industry or geographic region. AI deserves the same scrutiny. When public investment seeds a transformative technology, the public retains a legitimate interest in how that technology is governed, who profits from it, and whom it is ultimately built to serve.

With that context in mind, the stakes of the current moment come into sharper focus. The AI revolution has the potential to cure cancer and rare diseases, reduce housing costs, enable the creation of new businesses and factories, meet energy needs, and lower healthcare and education costs for working people. But if we leave AI in the hands of what is essentially a billionaire boys’ club, their priorities are likely to focus on eliminating jobs, extracting profits, and maximizing user engagement.

<aside> 💡

AI platforms that were created in part with taxpayer dollars should not serve the ultra-rich exclusively. Together, we must guide development in a way that promotes shared prosperity across communities — from small towns to large cities — supports a thriving middle class, and prevents oligarchic dominance.

</aside>

Given these risks and opportunities, the question is how to move forward constructively. We need to widen the lens through which AI is imagined, funded, and deployed. That means investing in tools that solve real problems for working families, small business owners, educators, and local governments — not merely optimizing convenience for the already comfortable. It also involves supporting public-interest technology, expanding access to digital infrastructure, and ensuring the people impacted by these systems have a voice in how they are built.

Achieving this vision will require coordinated effort across multiple sectors. Policymakers can prioritize incentives for companies that develop AI with broad social impact while enforcing guardrails that prevent the excessive consolidation of power and widespread job loss. Universities and community colleges can partner with local industries to train a more diverse generation of AI developers. Entrepreneurs can look beyond obvious markets and design AI technologies for historically underprivileged communities. And as individuals, we can support products, platforms, and leaders who align with a more inclusive vision of progress.

The future of AI is still unwritten, and that’s exactly the point. If we want a future that belongs to all of us, we must participate in shaping it — deliberately, persistently, and collectively.


© 2026 Tim Leogrande. The opinions expressed herein are solely those of the author and do not necessarily reflect the views, policies, or positions of any affiliated organizations or individuals.