Has Labour Abandoned Democratic Governance of AI—And Will We All Pay the Price?

By Francesca Tabor

I helped build a social network in the 2010s. We genuinely believed we were connecting people and making the world better. It took years to understand what we'd actually built: systems that interfered in democratic processes, amplified disinformation, and concentrated unprecedented power in companies accountable only to shareholders.

By the time we understood the damage, the infrastructure was too embedded to change. Politicians would shake their heads at committee hearings and promise "light touch regulation" whilst platforms mediated more of public life each month. We'd say "innovation" and they'd hear "don't interfere."

I'm watching the exact same pattern unfold with AI, except the consequences will be an order of magnitude worse. And this time, it's a Labour government leading the charge.

This is not just disappointing. It is a betrayal of everything Labour is supposed to represent.

What Happened This Week

Three announcements landed in the same week, and if you put them together, the picture is chilling:

First, Dario Amodei—CEO of Anthropic—published an essay predicting that 50% of entry-level white-collar jobs will disappear within five years. He acknowledged that AI systems already exhibit deception in testing and warned that AI companies themselves represent a governance risk because they "control large data centers, train frontier models, and have daily contact with millions of users."

Second, the government announced it would train 10 million workers in "AI skills" through partnerships with Microsoft, Google, Amazon, and Anthropic—the same companies driving the displacement Amodei describes.

Third, Anthropic was selected to build the AI assistant for GOV.UK services, beginning with employment support.

Read that again. Citizens displaced by AI, having been trained to use AI through programmes run by the companies driving that transformation, will access government services through AI systems built by the company whose CEO predicts their displacement.

The circular logic would be darkly funny if the stakes weren't so high.

The Fundamental Question No One Is Asking

Here is what I cannot understand: when did we decide that mediating citizen access to government services was a technical implementation question rather than a democratic governance question requiring public consent?

Not "Should AI systems intermediate between citizens and their government?" Not "At what pace should this transformation occur?" Not "Who should control the infrastructure of public administration?"

Just: "Which company should we partner with?"

This is not governance. This is abdication dressed up as modernisation.

The government's approach to AI gets this exactly backwards. We're building infrastructure now that will determine fundamental aspects of democratic life—who gets benefits, how people find work, what constitutes a legitimate claim—whilst treating these as procurement decisions, not political choices.

What I Learned Building the Last Disruptive Technology

When we built our social network, we told ourselves: connect people, let them share freely, the wisdom of crowds will sort out truth from lies. We believed this. It was beautiful and simple and completely wrong.

What actually happened: algorithmic amplification privileged engagement over accuracy. Bad actors gamed recommendation systems. Polarisation was more profitable than consensus. Democratic discourse was mediated by systems optimising for attention, not understanding.

By the time regulators caught up, 2 billion people were already using the platform. The infrastructure was embedded. Alternatives were foreclosed. "Breaking up" the company wouldn't change the fundamental architecture. The damage was structural, not just a matter of market concentration.

We said "move fast and break things" because we thought the things that would break were business models, not democracy itself.

AI is following the identical pattern, except instead of mediating social connection, it will mediate access to employment, healthcare, government services, and justice. The infrastructure is going in right now—not in five years, now—and when we discover the failure modes, it will be too late to redesign.

And unlike my startup, where the worst we could do was show people enraging content, AI systems will have the authority to deny benefits claims, determine eligibility for support, and guide citizens through rights they may not even know they have.

What could possibly go wrong?

The Training Programme Is Not What You Think It Is

The government is calling it "AI skills" training. Let's be precise about what this means.

Ten million workers will be trained to use proprietary tools built by Microsoft, Google, Amazon, and Anthropic. Not trained in the foundational concepts of AI—statistics, machine learning, algorithmic bias, system design. Trained to use products. Trained to become dependent on platforms that will then be sold to their employers.

This creates three problems simultaneously:

First, market capture. You are using public funds to train workers in specific commercial products, creating massive market advantages for those companies and public dependency on their platforms.

Second, skills obsolescence. When one of those vendors redesigns its interface or pivots its product strategy, those skills evaporate. You're not building adaptable capability—you're building product lock-in.

Third, the employment contradiction. If Amodei is correct that AI is advancing "from lower cognitive ability upward," then training people to use AI tools doesn't create alternative employment—it makes them more effective at automating their own roles. You're training people to optimise themselves out of jobs.

Why is Labour's response to corporate-driven transformation focused on helping workers adapt rather than governing whether and how that transformation proceeds?

The Outsourcing of Democratic Authority

Here's what really concerns me. The government has outsourced the architecture of citizen-government interaction to a private American AI company.

Even if civil servants can eventually maintain the system independently, Anthropic has already defined:

  • What problems AI should solve in government services

  • How citizens access their government

  • The technical infrastructure underlying public administration

  • What questions citizens can ask and what answers are legitimate

  • The training data and value judgements embedded in system behaviour

This is not a "partnership." This is a fundamental transfer of state capacity.

Anthropic now controls the architecture of how millions of citizens access their government. That is power. That is not "implementation."

The Questions That Should Have Been Asked First

If the GOV.UK AI assistant provides incorrect information about benefits eligibility or employment rights, who is legally accountable? Anthropic? Civil servants? Ministers?

If the system becomes the primary interface for accessing government services, how meaningful is the right to opt out? If you're digitally excluded or if you don't trust AI, and the phone lines have been cut to save money, and the Jobcentres have reduced hours—where exactly do you go?

If Anthropic's business model changes, or the company is acquired, or it decides UK government contracts aren't profitable enough—what then? Do we have contingency infrastructure? Source code escrow? The capacity to maintain and modify the system independently?

If AI systems systematically misunderstand certain accents, or disadvantage people with non-standard work histories, or embed middle-class assumptions about "appropriate" benefit claims—who catches this, and how quickly?

These are not technical questions. These are questions about democratic accountability that should have been resolved before procurement, not discovered during implementation.

And here's the one that nobody wants to ask: when Anthropic executives eventually leave the company, which government roles or regulatory bodies might they join? When DSIT officials who championed this partnership leave government, which AI companies might employ them?

I've seen this pattern in healthtech, in social media, in fintech. The revolving door between regulator and regulated doesn't require corruption—just the assumption that the people who build systems and the people who govern them share the same worldview and interests. They don't.

What Democratic Governance Would Actually Look Like

I'm not suggesting we ban AI. I'm suggesting we govern it. There is a difference, though you wouldn't know it from listening to government ministers or to their critics in the tech sector, both of whom seem to believe that any framework beyond "please try not to build Skynet" constitutes Luddism.

Democratic governance would start with a simple principle: companies deploying AI systems that eliminate substantial numbers of jobs, or that mediate citizen access to essential services, should be required to seek permission—not from politicians who can be lobbied, but from citizens who will bear the consequences.

This is not radical. We don't allow pharmaceutical companies to deploy drugs without proving safety. We don't allow developers to build wherever they please without planning permission. We don't allow banks to gamble with depositors' money without oversight. Why should we allow technology companies to restructure entire labour markets, or intermediate government services, without democratic scrutiny?

The mechanism is straightforward:

Any AI deployment that is projected to affect more than 10,000 workers, whose impact is concentrated in economically vulnerable regions, or that becomes the primary interface for essential public services would require assessment by an independent statutory body—call it the National Technology Assessment Office.

That body would examine evidence:

  • How many jobs are affected, and where?

  • What alternatives exist for displaced workers?

  • Can productivity gains be captured and redistributed?

  • Are the company's projections credible, or the usual venture capital optimism?

  • What are the failure modes and who bears the risk?

  • What happens to democratic accountability when systems are opaque?

Crucially, the final decision would rest not with technocrats but with citizens' assemblies—randomly selected groups of ordinary people who would review evidence, question executives, listen to workers, and decide: yes, no, or only under these conditions.

Not because ordinary citizens possess mystical wisdom, but because they possess something more important: legitimate democratic authority and an actual stake in the outcome.

If this sounds impossible, consider that we ask juries of randomly selected citizens to decide whether someone should be imprisoned for life. We trust them with liberty. Why would we not trust them with livelihoods and access to public services?

Make Companies Pay for the Damage They Cause

Here is what usually happens with productivity gains from AI: they accrue to shareholders and executives whilst costs are externalised to communities and the welfare state. The company becomes more efficient. GDP rises. And the workers who lost their jobs encounter a benefits system that now requires them to prove they're looking for work through an AI system that doesn't understand their experience.

A serious government would say: fine, deploy your AI, but you must pay for the social costs. Not as corporate social responsibility—as legal obligation.

A levy of 5% on productivity gains for the first five years, paid into a Social Gains Fund dedicated to supporting displaced workers and diversifying regional economies. If your AI saves you £100 million in labour costs, you can afford £5 million to ensure the workers you've displaced have a genuine chance of rebuilding their lives.

The Fund would pay for actual support—not tick-box training courses, but sustained help: wage insurance while people retrain, outplacement services, relocation assistance if necessary, and honest assessment of what's actually possible. If retraining won't work for this particular person in this particular circumstance, we say so, and we provide dignified alternatives.

And here's the critical part: companies would be required to report actual impacts quarterly. Jobs lost, wages affected, regions impacted. If the numbers deviate significantly from predictions—if they promised 5,000 job losses but it's actually 8,000—deployment gets paused, fines get levied, and in extreme cases, authorisation gets revoked.

No more "oops, our modelling was optimistic." No more "unforeseen circumstances." You make predictions, you're held to them, or you face consequences.

Phasing: Governing on Human Timescales

I've spent enough time in product management to know that "move fast" is usually code for "externalise the costs of our mistakes."

AI deployment can be phased over five years instead of eighteen months. Yes, that is slower than optimal from a pure efficiency standpoint. But optimal for whom? And optimal only if you ignore which costs?

Phasing requirements—mandating that high-impact AI deployments roll out gradually, region by region—give workers time to adapt, communities time to prepare, and government time to respond when predictions prove wrong. Which they often do.

This isn't "holding back progress." It's governing on a human timescale rather than a financial quarter. It's recognising that people are not infinitely adaptable, communities are not disposable, and efficiency is not the only value that matters.

Labour should understand this instinctively. Labour's history is the history of saying: the pace of economic change must be moderated by democratic accountability. The party was founded because industrial capitalism, left to its own devices, destroyed lives faster than society could repair them.

That insight doesn't expire simply because the technology has changed.

Why Labour Won't Do This (And Why They Should Anyway)

I can already hear the objections:

"Business will revolt." Possibly. Though business revolts against every regulation until it becomes normalised, at which point they compete within the new framework and lobby to prevent anyone changing it.

"We'll lose investment to America." Perhaps. Or perhaps we'll attract different investment—from companies that want democratic legitimacy and social licence, not just regulatory arbitrage. There's a market for "AI you can trust because citizens actually agreed to it."

"China won't do this." Correct. China also doesn't have democracy, independent courts, or free trade unions. If our only competitive strategy is racing China to the bottom on labour rights and democratic accountability, we've already lost.

"It's too complex." Everything worth doing in government is complex. That's not an excuse for abdication.

The real reason Labour won't pursue this approach is simpler: it requires political courage. It means telling tech companies and their venture capital backers that efficiency isn't the only value that matters. It means telling voters that the state can actually govern technology rather than simply accommodate it. It means building new institutions, which is always harder than signing partnership agreements.

And it means accepting that some highly profitable activities might be delayed, modified, or rejected if they would cause unacceptable social harm.

This should not require courage. This should be the bare minimum of what we expect from a Labour government.

What's Actually at Stake

Let me be absolutely clear about what we're discussing.

We're not debating whether AI is "good" or "bad." We're deciding whether democratic societies can govern the most consequential economic transformation of our lifetimes, or whether we'll simply allow it to happen to us whilst pretending this constitutes policy.

If we choose the latter—if Labour chooses the latter—the consequences are predictable because I've seen them before:

Entire occupational categories will vanish faster than new ones emerge. Regional economies will collapse. Wealth will concentrate. Essential public services will be mediated by systems optimised for efficiency rather than justice. Trust in democratic institutions will erode further because people will correctly perceive that government serves corporate interests rather than protecting citizens from them.

And in ten years, when voters in decimated communities ask why no one protected them, politicians will say what they always say: "We didn't see it coming. Technology moves so fast. Who could have predicted this?"

Except we can predict it. We are predicting it. The predictions are remarkably consistent. What varies is whether we believe democratic government has any meaningful role beyond preparing sympathy statements and hoping market forces eventually sort things out.

A Personal Note on Why This Matters

I don't want to write op-eds. I don't want to be this person—the disillusioned technologist warning about systems she helped build. I want to build things that make life better.

But I've seen this story before. I know how it ends.

When we built social media platforms, we told ourselves the benefits would be distributed and the harms would be manageable. We were wrong on both counts. The benefits concentrated. The harms metastasised. And by the time we understood what we'd built, it was too late to redesign the fundamental architecture.

I'm watching my profession make identical mistakes at vastly larger scale, and I'm watching a Labour government—a Labour government—facilitate it.

You still have the opportunity to govern deployment rather than merely manage its consequences. But that window closes rapidly once infrastructure becomes embedded and alternatives become foreclosed.

The companies building these systems understand perfectly well what they're doing. They're moving fast precisely because they know that once the infrastructure is in place, regulation becomes nearly impossible. Not because of technical constraints, but because of political economy—governments become dependent on systems they don't control and can't replace.

This is not a conspiracy. It's just strategy.

And government's role is to recognise this dynamic and assert democratic authority before the window closes.

The Question for Labour

Why did you get into politics?

If the answer involves protecting working people from exploitation, or ensuring economic change serves the common good, or maintaining democratic accountability over essential public functions—then govern AI accordingly.

If the answer is managing Britain's adjustment to whatever tech companies decide to do next, using infrastructure provided by those same companies, whilst calling it "partnership"—then at least be honest about what you're doing.

You're not governing. You're administering someone else's decisions.

AI will transform Britain. The question is whether that transformation happens through democratic deliberation, with workers' voices heard, communities protected, and gains shared—or whether it happens the way social media happened: fast, concentrated, profitable for a few, corrosive of the social fabric, and ultimately beyond democratic control.

We can do better. Labour should do better. Whether you will is a test not just of this government, but of whether democratic governance means anything at all in the face of technological change.

I hope you pass it. I don't expect you will.

And I'm quite certain that in a decade, when we're surveying the wreckage, someone will write an essay remarkably similar to this one, asking why no one saw it coming.

We saw it coming. We're seeing it coming right now.

The question is whether anyone in power has the courage to do anything about it.