An update on our approach to regulating artificial intelligence

Published: 30 April 2024

Summary

The Equality and Human Rights Commission is the independent equality regulator for England, Scotland and Wales and a UN-recognised ‘A’ status National Human Rights Institution. We have a statutory mandate to advise government and Parliament on matters relating to equality and human rights, and to promote and protect equality and human rights across Britain.

Artificial intelligence (AI) has been a priority for us since 2022. It is widely recognised that, while AI has great potential to benefit society, it also comes with a wide range of risks, including risks of bias and discrimination as well as risks to human rights. We therefore have an important role in supporting responsible and fair innovation and use of AI.

The Post Office Horizon computer system scandal that resulted in the prosecution of innocent postmasters shows clearly the risks from over-reliance on the outputs of computer systems. Today’s AI systems are many times more powerful and complex than the Horizon system. Strong, effective and sufficiently resourced regulation of AI is therefore essential to mitigate the risks and build trust in AI systems.

However, we are a small strategic regulator. Our budget has remained static at £17.1 million since 2016, amounting to a real-terms cut of more than 30% over that period. Our ability to scale up and respond to the risks that AI presents to equality and human rights is therefore limited. While we have an important role in regulating AI, we must prioritise our work. The approach to regulating AI set out here reflects these limitations.

Introduction

The Equality and Human Rights Commission is a small strategic regulator in comparison to many of the other existing regulators being called upon to play a part in regulating AI, and our remit is broad. Since we were established, our wider approach as the regulator of the Equality Act 2010 has been to take focused, strategic regulatory action, prioritising a small number of important issues rather than seeking to address each and every breach of the law.

Our size and resource constraints have meant that we have taken a similar approach to our developing work to regulate AI, initially prioritising an understanding of how our distinctive regulatory levers, such as the Public Sector Equality Duty, apply.

We recognise the potential benefits for society represented by the advent and use of AI. For instance, it is already delivering significant benefits in healthcare, such as supporting cancer diagnosis. But AI also comes with risks, both in the outputs it produces and in the ways it may be used irresponsibly. That is why we introduced a specific focus on AI in our current strategic plan for 2022–25.

We have learned a great deal in the first two years of our focus on AI, at the same time as witnessing an explosion in AI innovation.

We have sought to work constructively with government and other regulators to identify and establish a robust and cohesive framework for regulating AI. In our response to the government’s White Paper ‘A pro-innovation approach to regulating AI’, we set out our ambition to be an effective regulator of AI, supporting responsible and fair innovation and use of AI. But we were also clear that this ambition, and the expectations from government, must be matched by additional funding to allow us to scale up our work and meet the challenge.

Our regulatory responsibilities

We have a remit that has the potential to cut across all sectors, resulting in areas of regulatory responsibility where we are the sole regulator and others where our regulatory remit interacts with other regulators. There will be instances where, even when we are not the sole regulator of a duty bearer, we may have the most appropriate regulatory tools to take action to address an issue. There is a leadership and convening role for us to play in relation to the fairness principle set out in the government’s White Paper, where there is clearly strong alignment between our remit and the ambitions of the White Paper.

The use of AI has the potential to result in breaches of the Equality Act 2010 and the Human Rights Act 1998 in many ways. Since we identified AI as one of our strategic priorities in 2022, we have significantly developed our understanding of the potential risks AI presents in our specific regulatory remit. However, of necessity, this understanding is largely based on a principles-led approach, rather than a detailed examination of the specific risks across all sectors.

Working with other regulators

We have developed strong relationships with other regulators in this space both individually and through the Digital Regulators Collaboration Forum (DRCF). There is a unique role for us to play in providing expertise on equality and human rights, supporting other regulators to understand and mitigate the equality and human rights risks posed by AI and playing a leading role as convener of the fairness principle, as detailed in the government White Paper. We also have a particular role to play in supporting, guiding and enforcing compliance with the Public Sector Equality Duty, as well as other areas where we are the sole regulator or where our regulatory tools are the most impactful to ensure compliance.

In February 2023, we signed a memorandum of understanding with the Information Commissioner’s Office (ICO) specifically to facilitate closer cooperation on our respective AI work. We engage regularly to provide advice and support on guidance and other materials, for example through our work together on the Fairness Innovation Challenge led by the Department for Science, Innovation and Technology.

Our approach to date

Since prioritising AI in our 2022–25 strategic plan, we have focused on developing our understanding of the key issues relevant to our broad remit, establishing working relationships with key actors and identifying opportunities for us to take action across our regulatory remit.

We have issued guidance on AI and the Public Sector Equality Duty and have undertaken a deep-dive compliance monitoring exercise with local authorities and selected other public bodies. We plan to build on this guidance by updating our Public Sector Equality Duty and data protection guidance, as well as other supplementary materials, such as good practice case studies.

We have also engaged extensively with the UK government as it has developed its proposals to create a regulatory framework for AI. This resulted in a greater appreciation of equality and human rights in the White Paper and the fairness principle. We continue to have constructive engagement regarding the central functions to be played by government, as the wider approach to regulating AI develops.

We have undertaken more focused work on a limited number of specific issues, exploring how our particular regulatory tools can be used. This includes:

  • engaging with police forces and oversight bodies to improve legal compliance in their use of facial recognition technology (FRT)
  • working with the College of Policing on its development of new guidance (Approved Professional Practice) for all new data-driven technology
  • exploring potential discrimination arising from online recruitment through social media platforms
  • supporting litigation to challenge the potential discriminatory use of FRT in the workplace
  • progressing our work with the Local Government Association, ICO, Responsible Technology Adoption Unit (RTAU) and others to support local authorities (and more broadly the public sector) to better factor in equality considerations when procuring AI-based technologies
  • publishing guidance, such as ‘Artificial intelligence: meeting the Public Sector Equality Duty’

Lastly, we are partnering with the Responsible Technology Adoption Unit (formerly the Centre for Data Ethics and Innovation, part of the Department for Science, Innovation and Technology), Innovate UK, and the ICO to drive the development of new socio-technical solutions to address bias and discrimination in AI systems.

We have looked internally to consider where and how we might use AI, either now or in the future. We developed an internal policy to guide our exploration and create a framework for the responsible use of AI, incorporating the principles set out in the government’s White Paper.

Our capability

We are a small regulator with a large and broad remit. As such, our regulatory approach must be tight and focused. Our current strategic plan sets out six strategic priorities that form the basis of our key programmes of work. We have a small programme team of eight full-time staff working on issues related to AI, with approximately a further ten members of staff working on AI-related projects at any one time. It should be noted that these staff are equality and human rights specialists, not technology experts. Given our current resourcing, we are unable to increase our capacity to regulate AI or to introduce the technical roles that we might wish. While we believe there are significant opportunities for us to achieve impact through the use of our regulatory tools across a range of issues, we must limit our efforts to a small number of issues that we believe present the greatest risks to equality and human rights.

While we have wide-ranging expertise on equality and human rights, we do not have the technical expertise on AI that other regulators have. While proposals from the government have set out shared resources for regulators to draw on to fill this gap, these are unlikely to close the capability gap we currently face in the short or medium terms. In addition, we face significant resource challenges when seeking to use our regulatory powers against the large multinational technology firms that we may seek to hold to account.

Over the coming 12 months, we will continue to work on the projects detailed in our 2024–25 business plan, themselves drawn from the priorities set in our strategic plan for 2022–25. We are in the process of developing our strategic plan for 2025–28, and as part of this process, we will continue to develop our strategic approach to regulating AI. To regulate AI effectively in line with the duties the government intends to place on us, we require additional resources in the short and medium terms, alongside a clearer understanding of our long-term resourcing needs, to fulfil our role as an effective regulator in this space. We have made this clear both in our engagement on the White Paper and in correspondence with government.

Implementing the principles

The government’s ‘A Pro-innovation Approach to AI Regulation’ White Paper set out that regulators should apply five principles in their regulation of AI. The principles are:

  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress

Government also set out expectations that regulators should develop or update guidance to take account of these principles and provide clarity to business.

In our response to the consultation on the White Paper, we offered broad support for the principles-based approach, notwithstanding our own view that the principles lacked sufficient emphasis on equality and human rights. We also made it clear that the expectation to implement the principles was additional to our own stated commitments in our strategic plan. These additional expectations from government must be matched by additional resourcing to allow us to scale up our work. As yet, government has not provided any additional resources.

While all principles are relevant to the equality and human rights frameworks, we have a clear and unique role in supporting the fairness principle, both with regulated bodies and across the regulatory community. While there are other definitions of fairness in law, fairness is a core principle underpinning equality and human rights and drives our work.

We are determined to ensure equality and human rights are central to the development and use of AI. That is why we are participating in the Fairness Innovation Challenge, alongside the ICO and the RTAU. We have also taken part in a workshop with the DRCF to explore fairness across regulatory remits. We will also incorporate the principles within our planned compliance and enforcement work and within our role in advising governments and parliaments.

However, going beyond this would require a significant reallocation of our resources. This is not possible within our current strategic commitments and existing budget. We therefore have no plans for the remainder of our current strategic plan to develop dedicated guidance around the White Paper principles.

Our strategic plan for 2025–28 will set out our priorities for the longer term, including our approach to regulating AI. This will set out how we intend to balance our limited resources and prioritise the work that we do. We will be consulting publicly on a draft in the summer of 2024.

Regulating artificial intelligence and tackling digital exclusion

Artificial intelligence is a fast-developing technology. While there are significant opportunities, there are also significant risks of discrimination. Government and others are increasingly looking to us for support, and we are developing our approach to regulating the equality and human rights implications of the spread of technology. The AI Regulation White Paper places significant expectations on regulators. We will continue to develop our approach to regulation in this space, improving our understanding, engaging with other regulators and making use of our powers.

In 2024–25 we will focus predominantly on: our role in reducing and preventing digital exclusion, particularly for older and disabled people in accessing local services; the use of AI in recruitment practices; developing solutions to address bias and discrimination in AI systems; and police use of facial recognition technology (FRT). We are concerned that the use of FRT could become ingrained and normalised, both in policing and elsewhere, in a way that is impossible to reverse once established. We will also partner with the Responsible Technology Adoption Unit (formerly the Centre for Data Ethics and Innovation) on the Fairness Innovation Challenge to develop tools for tackling algorithmic bias and discrimination.
