I recently took part in the launch of the National Hiring Strategy by the newly formed Association of RecTech Providers. This is what I said.

Good afternoon. It is a real privilege to welcome 200 of the UK’s leading HR, talent acquisition, and hiring professionals to the Terrace Pavilion for the launch of the first National Hiring Strategy.

This is an important moment: a collective commitment to make UK hiring fundamentally faster, fairer, and safer. The current state of UK hiring presents both an economic and a social challenge. On average, hiring takes almost 50 days, and the outcomes speak for themselves: roughly 40 percent of new hires quit within three months. This inefficiency costs our economy millions annually and represents squandered human potential.

The National Hiring Strategy aims to tackle these issues head-on. The RecTech Roadmap—a key component of this strategy—provides the strategic blueprint for deploying technology to revolutionise how we hire. I welcome the formation of the Association of RecTech Providers. They will steer this change, set industry standards, and help ensure the UK gains global leadership.

Artificial Intelligence sits at the heart of this transformation, and it offers extraordinary opportunities. The efficiency gains are real and significant. AI tools can handle high-volume, repetitive tasks—screening CVs, scheduling interviews, processing applications—dramatically reducing time-to-hire. Some examples show reductions of up to 70 percent. That’s remarkable.

But speed alone isn’t the goal. What excites me most is AI’s potential to drive genuine inclusion. Technology, and AI in particular, can enable greater labour market participation for those currently shut out: carers, people with disabilities or chronic illnesses, neurodiverse individuals, older workers, and parents. AI can help us match people based on skills, passions, and circumstances—not just past work experience. It can help us create a world where work fits around people’s lives, rather than the other way around. That’s the vision I want to see realised.

However—and this is crucial—AI also has the potential to make hiring more problematic, more unfair, and more unsafe if we’re not careful. We must build robust ethical guardrails around these powerful tools.

I’ve always believed that AI has to be our servant, not our master.

Fairness must be a key goal. The core ethical challenge is that machine learning models trained on historical data often reproduce past patterns of opportunity and disadvantage. They can penalise groups previously excluded—candidates with career gaps, for instance, or underrepresented minorities.

This isn’t hypothetical. We’ve seen AI systems reduce the representation of ethnic minorities and women in hiring pipelines. Under the Equality Act 2010, individuals are legally protected from discrimination caused by automated AI tools.

But we need proactive auditing: regular, detailed bias assessments to identify, monitor, and mitigate unintended discrimination. These audits aren’t bureaucratic box-ticking—they’re critical checks and balances for ethical use.
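To make that concrete, here is a minimal sketch of one very basic check such an audit might include: comparing the selection rates an AI screening tool produces for different candidate groups against the widely cited four-fifths rule. The group labels, numbers, and threshold below are entirely hypothetical, and a real audit would go much further, with proper statistical testing, intersectional analysis, and ongoing monitoring.

    # Illustrative bias check: compare selection rates across candidate groups.
    # All data below is hypothetical and for illustration only.
    hypothetical_outcomes = {
        # group label: (candidates shortlisted by the tool, total applicants)
        "group_a": (120, 400),
        "group_b": (45, 300),
    }

    # Selection rate = shortlisted / applicants for each group.
    selection_rates = {
        group: shortlisted / applicants
        for group, (shortlisted, applicants) in hypothetical_outcomes.items()
    }

    highest_rate = max(selection_rates.values())

    # Flag any group whose rate falls below 80% of the highest group's rate
    # (the "four-fifths rule" screening threshold).
    for group, rate in selection_rates.items():
        impact_ratio = rate / highest_rate
        status = "review for possible adverse impact" if impact_ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")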

While we don’t yet have specific AI legislation in the UK, recruiters must comply with existing data protection laws. Data minimisation is essential. Audits have raised concerns when AI tools scrape far more information than needed from job networking sites, sometimes without candidates’ knowledge.

Transparency matters profoundly. Recruiters must inform candidates when AI tools are used, explaining what data is processed, the logic behind predictions, and how data is used for training. If this processing isn’t clearly communicated, it becomes “invisible”—and likely breaches GDPR fairness principles. Explanations should be simple and understandable, not buried in technical jargon.

And the human touch should always be maintained. AI should complement, not replace, the human aspects of recruitment.

This should remain the case despite the more nuanced provisions introduced under the Data Use and Access Act. The strict prohibition on significant decisions based solely on automated processing now applies only to decisions involving special category data (for example health, racial origin, genetics, or biometrics), and of course recruiters will hold some of that kind of information.

But even where personal data is not “special category,” organisations must provide specific safeguards:

  • Individuals must be informed about the automated decision.
  • They must have the right to make representations and to contest the decision.
  • Human intervention must be offered on request or as required by law.

Judgment, empathy, and responsible innovation should remain at the core of how we attract and engage talent.

Businesses also need clear policies for accountability and redress. Individuals must be able to contest decisions where their rights have been violated.

The launch of this National Hiring Strategy provides a critical opportunity. The firms that succeed will be those that blend machine efficiency with human empathy. They will recognise that technology is a means to an end: creating opportunities, unlocking potential, and building a labour market that works for everyone.

In doing so, they will help ensure we reach a faster, fairer, and safer UK labour market—without taking destructive shortcuts that leave people behind.

We stand at a moment of genuine possibility. The technology exists. The expertise is in this room. The Strategy provides the framework. Let’s embrace AI’s potential with optimism, but remember that, at the end of the day, hiring isn’t about algorithms or efficiency metrics—it’s about people, their livelihoods, and their futures. Thank you.