I recently gave a short speech at a Henry Jackson Society meeting on AI and the Future of Work.
This is what I said.
For some time, including in my book Living with the Algorithm, I have argued that the central question of our time is whether AI becomes our servant or master. Nowhere is that question more concrete, or more urgent, than in its impact on employment.
The Nature of the Disruption
Previous industrial revolutions automated physical labour. They followed a recognisable pattern: displacement of manual work, followed eventually by the creation of new cognitive roles at higher wages. The Fourth Industrial Revolution breaks that pattern. Generative AI is automating cognitive labour. It competes not so much with the factory worker as with the consultant, the lawyer, the analyst, the graduate.
The numbers are stark. McKinsey estimates that up to 30 percent of hours worked globally could be automated by 2030. In the UK, between 10 and 30 percent of current jobs are highly automatable over the next two decades. IBM calculates that 120 million workers worldwide will need retraining as a direct consequence of AI deployment.
Critically, AI does not automate occupations wholesale — it automates tasks. Ethan Mollick, Professor of Management at the Wharton School of Business, found in his landmark study of management consultants that those using AI completed 12 percent more tasks, 25 percent faster, with 40 percent higher quality scores. Productivity gains of that magnitude are genuinely transformative.
But we must not allow those gains to obscure the distributional question. The gains flow disproportionately to those who own and deploy the technology. Andy Haldane, the former Chief Economist at the Bank of England, has warned explicitly of the “dark side of technological revolutions” — past transitions created prolonged periods of stagnation for workers even as aggregate wealth grew. AI risks replicating that pattern, but faster, and targeted at white-collar workers historically insulated from displacement.
The net employment effect may prove broadly neutral over twenty years — but that aggregate picture conceals enormous sectoral divergence. Health, education, and professional services should see net job creation. Manufacturing, transport, and public administration face net long-term decreases of up to 25 percent. And the geographic concentration of those losses will not be random — it will deepen existing regional inequalities.
The Societal Consequences
If we fail to manage this transition, three consequences deserve particular attention.
The most immediate is the erosion of the middle-class compact — the assumption that educational investment and cognitive work provide economic security. We are already seeing early signals: unemployment rates among recent graduates in AI-exposed disciplines are rising. The Law Society warns that AI is creating a two-tier profession: large firms able to absorb the cost of legal AI tools, and smaller high-street practices that simply cannot. The hollowing out of professional services is not confined to the City — it threatens the economic fabric of smaller towns across the country.
And it is not only professionals who are exposed. Britain’s creative industries contribute £124 billion to the economy and employ over two million people. They are one of our genuine global strengths — and they are under acute threat. AI companies are training their models on the work of British creators without consent or compensation. Writers, musicians, visual artists, and filmmakers are watching their life’s work absorbed into systems that then compete directly with them. This is not a future risk either. It is happening now, and the legal uncertainty is making it worse: nobody is investing with confidence, and the creative workers who should be benefiting from AI-driven productivity are instead funding it involuntarily.
The second consequence is capital concentration. Left unchecked, the owners of the new machines will capture an ever-larger share of income at the direct expense of labour. This is not a hypothetical — it is the direction of travel visible in current data.
Third is the harm being inflicted within workplaces right now through algorithmic management. AI systems are increasingly used to monitor workers, set targets, and make hiring and firing recommendations — often without meaningful human review. Amazon famously abandoned an AI recruiting tool after discovering it systematically downgraded female candidates. A Harvard Business Review study published just two months ago found that AI did not reduce workload but consistently intensified it — employees used efficiency gains to take on more tasks and work through breaks. The researchers’ verdict was unambiguous: fatigue, burnout, and a growing inability to step away from work. This is not a future risk. It is happening now.
The Government Response
The disruption is already underway, and the response must match its scale. I want to propose a three-part framework.
First, a genuine Future of Work Strategy — not a review, not a taskforce, but a strategy with teeth. This means dedicated ministerial responsibility for automation and workforce transition, coordinated across Treasury, DWP, DSIT, and the Department for Education. It means a place-based industrial strategy that recognises AI displacement will be geographically concentrated. And it requires honesty that voluntary commitments from tech companies are insufficient. The law must shape how this technology is deployed in workplaces, not merely encourage best practice.
Second, an Accountability for Algorithms Act — and alongside it, a single cross-sector AI regulator with genuine technical expertise. The current patchwork of the ICO, Ofcom, the FCA, and the CMA competing for AI territory leaves large companies navigating the complexity with teams of lawyers while small businesses and workers are left entirely unprotected. A coherent regulatory architecture must underpin everything that follows.
Within that architecture, the deployment of AI in employment decisions — hiring, performance management, dismissal — must be subject to statutory oversight. Employers should be required to conduct and disclose Algorithmic Impact Assessments before deployment, with mandatory equality audits to identify discriminatory bias. We must enshrine a human-in-command principle: decisions affecting people’s livelihoods must be taken by human beings, with AI in a supporting rather than determining role. And we need a Digital Bill of Rights — giving every citizen a statutory right to explanation and appeal where AI has a significant impact on their life.
The creative industries require their own specific remedy: confirmed copyright protection and mandatory training transparency for AI models, creating the conditions for an opt-in licensing model that fairly compensates creators. The current uncertainty serves no one. Transparency and fair compensation build the trust that drives adoption — for creators and technology companies alike. We should also introduce new image and personality rights to protect individuals from unauthorised deepfakes.
The EU AI Act already categorises AI hiring tools as high-risk, mandating strict assessments and human oversight. We should go further than the EU, not lag behind it.
Third, retraining at genuine scale. IBM’s figure of 120 million workers requiring retraining globally should prompt an intervention proportionate to the scale of the challenge. The Government has made a start — the AI Skills Boost programme targets 10 million workers upskilled by 2030, and the £187 million TechFirst programme extends AI learning into every secondary school. I welcome all of that. But welcome is not the same as sufficient. Only 21 percent of UK workers currently feel confident using AI at work.
We need Personal Learning Accounts that give individuals real purchasing power over their own upskilling — not courses curated by the same tech companies deploying the automation. And we need an educational pivot toward STEAM — adding Arts to STEM — because the skills AI struggles to replicate are precisely creativity, critical reasoning, and social intelligence. As François Chollet demonstrated last month with his ARC-AGI 3 benchmark, puzzles that any untrained person can solve still defeat the leading AI systems. That jaggedness maps almost precisely onto the skills our education system should be prioritising.
OpenAI’s recent blueprint deserves credit for acknowledging that displacement effects are structural, not cyclical. But what is conspicuously absent is any mechanism for holding AI companies to account for the algorithmic management systems already reshaping work today. An Accountability for Algorithms Act would do more to protect workers in the near term than any aspirational wealth fund whose governance remains entirely unspecified. One cannot help noticing that a company racing to build the very technology it warns about has a considerable interest in shaping debate towards redistribution and away from regulation. The question is not whether these ideas are worth discussing. It is whether discussion is a substitute for binding law.
A Word on Geopolitics
Economic vulnerability creates political vulnerability. A workforce experiencing rapid, unmanaged displacement — particularly one that perceives that displacement as benefiting a narrow technological elite — is a workforce susceptible to political dislocation. The governance of AI in the workplace is a question of democratic resilience.
US Senator Mark Warner — a self-described pro-AI, pro-tech voice — has been sounding this alarm with increasing urgency. Speaking at the Hill and Valley Forum last month, he predicted college graduate unemployment would rise from its current 9 percent to as high as 35 percent within two years. Earlier, at the CNBC CFO Council Summit, he warned explicitly that without managed transition, societal backlash from both left and right would follow on a scale that was “unprecedented.” Westminster should be listening.
Conclusion
I am not a technological pessimist. AI, deployed well, can liberate workers from drudgery, expand economic opportunity, and drive productivity gains that benefit society broadly. Regulation, properly designed, creates the certainty and trust that innovation requires.
But the deployment of this technology is not a force of nature. It is the product of decisions made by managers, executives, and policymakers. Every algorithm deployed without adequate oversight is a decision someone made. Every retraining programme unfunded is a decision someone made. Every year we delay binding legislative frameworks is a decision someone made.
Alan Turing observed in 1951 that at some stage we should expect the machines to take control. Stuart Russell has noted, mordantly, that our collective response has been rather like receiving a message from an alien civilisation announcing its arrival in fifty years, and replying that we are currently out of the office.
On the future of work, we cannot afford to be out of the office any longer. The question is whether we will govern this technology, or allow it to govern us. I know which answer I intend to argue for.