History Repeating: What Net Neutrality Teaches Us About AI Governance

If the debates around artificial intelligence regulation feel strangely familiar, there's a good reason. We've been here before—just with different technology.

A decade ago, the internet was the battleground. Today, it's AI. But the fundamental questions remain the same: Who controls access to transformative technology? How do we balance innovation with fairness? And what happens when a handful of powerful companies hold all the keys?

When Google Broke the Internet's Trust

In August 2010, Google and Verizon announced a deal that sent shockwaves through the tech community. Google, the company that had championed an open internet, appeared to abandon its principles in a compromise that critics called a betrayal.

The issue was Net Neutrality—the idea that all internet traffic should be treated equally. The Google-Verizon proposal would have created two internets: one wired and relatively open, the other wireless and controlled by telecommunications giants.

This wireless exemption wasn't just a technical detail. It would have allowed carriers like Verizon to charge premium prices for faster service, prioritize their own content, and even block competing applications. Journalist Jeff Jarvis captured the dystopian vision in a single word: "schminternet"—a private, corporate-controlled network masquerading as the internet.

The proposal raised three major red flags:

Weakening the regulator: The deal attempted to sideline the Federal Communications Commission, reducing its authority from setting broad rules to merely adjudicating disputes case-by-case.

The loophole that could swallow everything: A vague provision for "additional online services" created an escape hatch wide enough for carriers to drive almost any new technology through, exempt from neutrality rules.

Who decides what's "lawful"?: By limiting protections to "lawful content," the agreement gave carriers potential power to throttle or block content at their discretion—a chilling prospect for free speech.

For Google, the backlash was intense. The company's "Don't Be Evil" motto suddenly rang hollow. Had Google sacrificed its ideals to secure favorable treatment from a powerful carrier partner?

The Two-Tier Problem: Then and Now

The nightmare scenario in 2010 was straightforward: a fractured internet where access and quality depended on your relationship with telecommunications gatekeepers. Innovation would stall because upstart competitors couldn't afford premium placement.

Fast forward to today, and the AI landscape looks eerily similar.

A small number of corporations—primarily Big Tech giants—possess the computing power, proprietary data, and capital necessary to build the most advanced AI systems. These foundation models are the new infrastructure, the engines powering everything from chatbots to scientific research.

Everyone else? They're increasingly dependent on those who control the models. The emerging two-tier system looks like this:

The elite tier: Companies with billions to spend on computation, exclusive access to vast datasets, and the ability to customize and optimize AI systems to their needs.

Everyone else: Startups, researchers, and smaller organizations facing high API costs, restrictive terms of service, and performance limitations. They can't train competitive models from scratch, so they must accept whatever terms the gatekeepers offer.

The result is the same bottleneck that net neutrality advocates warned about: concentrated power stifling the diverse, decentralized innovation that makes technology transformative.

New Gatekeepers, Same Problems

In 2010, the gatekeepers were ISPs. They controlled the physical pipes through which information flowed and could theoretically decide what moved fast and what moved slow—or didn't move at all.

Today's gatekeepers are foundation model developers. They control the intelligence itself—the algorithms, training methods, and datasets that determine what AI can and cannot do. Their position gives them extraordinary leverage:

  • They set prices and access terms

  • They can favor their own applications over competitors

  • They decide which use cases are permitted

  • They shape the trajectory of AI development through their architectural choices

Just as ISPs could have throttled Netflix to favor their own streaming services, foundation model developers can prioritize their own AI applications while limiting competitors' access to the underlying technology.

The Innovation Paradox

Both debates feature the same paradox: Does regulation help or hurt innovation?

The case against regulation argues that rules slow progress. AI is evolving rapidly, and today's regulations might be obsolete tomorrow. Heavy-handed oversight could discourage the massive investments needed to advance the technology. Better to let competition and market forces drive development.

The case for regulation counters that without guardrails, innovation actually dies. When a few companies control essential infrastructure—whether internet pipes or AI models—they can lock out competitors and dictate terms. True innovation requires a level playing field, which markets alone won't provide. Monopolies don't innovate; they extract rents.

History suggests the pro-regulation camp has a point. The internet thrived precisely because it was open and decentralized. The most transformative innovations—search engines, social media, streaming services—came from upstarts challenging incumbents, not from the telecommunications companies that controlled the infrastructure.

Shaping the Rules

Another parallel: industry players don't wait passively for regulation. They actively shape it.

The Google-Verizon deal was an attempt to pre-empt FCC action with a "compromise" that protected industry interests. Today, major AI developers are pursuing a similar strategy.

Many advocate for "risk-based" frameworks that classify foundation models as low-risk infrastructure while placing regulatory burdens on end-use applications. This conveniently shifts oversight away from their core business—the proprietary models that generate their competitive advantage—and onto the startups and developers building on top of them.

It's a clever move. But it's also precisely the kind of self-serving regulation that net neutrality advocates warned against: rules written by incumbents to protect incumbents.

Learning From the Past

The Net Neutrality battles weren't just about internet speeds or carrier profits. They were about a fundamental question: Who gets to shape the future of a transformative technology?

We face the same question with AI. The technology's potential is immense—for scientific discovery, economic productivity, and human flourishing. But that potential depends on access. If AI becomes the exclusive domain of a few corporations, we'll get the innovations those corporations want, optimized for their business models.

The 2010 debates teach us that foundational technologies consolidate power quickly if left unchecked. They show us how easily principles ("Don't Be Evil") can bend when business interests intervene. And they remind us that "innovation" arguments often serve those who benefit from the status quo.

As AI accelerates toward an uncertain future, these lessons matter. The choices we make now—about access, oversight, and power—will determine whether AI becomes a force for broad-based innovation or another tool of concentrated corporate control.

History doesn't repeat itself, but it often rhymes. The question is whether we're listening.