
UK’s civil and political rights record under review: challenges of new digital technologies

Published: 12 March 2020

Later this month, the UN will be reviewing evidence about the UK’s compliance with the International Covenant on Civil and Political Rights – an international human rights treaty that protects a range of human rights, from fair trial rights to the right to privacy.

In its submission to the UN, published today, the Equality and Human Rights Commission has raised wide-ranging concerns about the UK’s civil and political rights record. These include a number of long-standing issues – such as the adequacy of steps taken to tackle violence against women and girls, and concerns around immigration detention – as well as emerging issues, such as the human rights challenges posed by new digital technologies. This blog focuses on one such issue: the use of automated facial recognition technology and predictive policing programmes by the police.

Imagine you’re a 14-year-old child on your way home from school one afternoon, walking down your local high street, when suddenly you’re jumped on by several men in hoods and hats. They grab you by the arms and pull you over to a side street, where they surround you and hold you. They say they’re police officers and they suspect you of carrying a knife. They question you, demand your ID and your phone, and then fingerprint you to check your identity. After several minutes of intensive questioning, they say you can go on your way, and they melt away into the crowded high street.

It’s a frightening scenario. But this is a real-life example of what happened when a black child was misidentified by the police’s live facial recognition surveillance in London. This surveillance technology has been used to scan millions of people at protests, street carnivals, football matches, high streets, shopping centres and transport hubs in a number of locations across the UK.

We believe facial recognition surveillance infringes people’s fundamental right to a private life, and that its use in public spaces has a chilling effect on people’s rights to freedom of expression and assembly.

We welcome the EHRC’s submission to the UN for the upcoming review of the UK’s civil and political rights record, and its analysis that the legal framework supposedly authorising the use of live facial recognition is insufficient and that the surveillance is inherently disproportionate. As the EHRC has also acknowledged, there is evidence documenting gender and racial bias within facial recognition technology.

Technology, bias and threats to fairness in the justice system

Bias is a common theme in new data-driven technologies, because they are designed and trained on historical data that reflects structural inequalities and biases – especially within the criminal justice system.
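To make that mechanism concrete, here is a minimal sketch in Python, using entirely hypothetical numbers. It shows how a ‘risk model’ that never sees a protected characteristic can still reproduce bias, because it is trained on labels produced by past policing decisions and on a feature – here, a postcode – that acts as a proxy for a heavily policed community.

```python
from collections import Counter

# Hypothetical historical records: the "flagged" labels come from past
# police decisions, not from ground truth, and postcode X1 stands in
# for an area that has been policed more heavily.
history = (
    [{"postcode": "X1", "flagged": True}] * 30
    + [{"postcode": "X1", "flagged": False}] * 70
    + [{"postcode": "Y2", "flagged": True}] * 10
    + [{"postcode": "Y2", "flagged": False}] * 90
)

# A naive "risk model": score each postcode by its historical flag rate.
counts = Counter((r["postcode"], r["flagged"]) for r in history)

def risk(postcode):
    flagged = counts[(postcode, True)]
    not_flagged = counts[(postcode, False)]
    return flagged / (flagged + not_flagged)

print(risk("X1"), risk("Y2"))  # 0.3 vs 0.1
# Anyone from postcode X1 now scores three times "riskier" than anyone
# from Y2, purely because that area was flagged more often in the past.
```

The model never takes race or income as an input, yet it faithfully reproduces whatever disparities the historical labels contain.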

Yet several emerging technologies are being used in the criminal justice system to profile and predict people’s supposed ‘criminality’. This carries clear risks of discrimination, as well as entrenching privacy-intrusive mass data surveillance, reversing the presumption of innocence and potentially infringing people’s right to a fair trial.

Predictive policing systems attempt to predict future criminality, either of individuals or within neighbourhoods, usually based on police records. These systems often focus on street-based crimes, rather than financial or white-collar crimes.

Police records represent the people who are policed, not simply the people who commit crimes, and that data reflects the historical, institutionally biased over-policing of black and poor communities. When black people are almost 10 times more likely to be stopped and searched than white people across England and Wales, the risk of perpetuating bias via these new predictive policing systems requires urgent attention.
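The feedback loop this creates can be sketched in a few lines of code. The simulation below is purely illustrative – every number in it is an assumption – but it shows how a system that dispatches patrols in proportion to past recorded crime, and can only record crime where patrols are present, preserves and compounds an initial disparity even when the true underlying crime rates are identical.

```python
import random

random.seed(1)

# Two areas with IDENTICAL true crime rates (an assumption for the
# illustration); area_1 simply starts with more records because it was
# historically patrolled more heavily.
true_rate = {"area_1": 0.5, "area_2": 0.5}
records = {"area_1": 60, "area_2": 40}

PATROLS_PER_ROUND = 100

for _ in range(20):
    total = sum(records.values())
    updated = {}
    for area, count in records.items():
        # Patrols are dispatched in proportion to *recorded* crime...
        patrols = round(PATROLS_PER_ROUND * count / total)
        # ...and crime can only be recorded where a patrol is present.
        observed = sum(random.random() < true_rate[area] for _ in range(patrols))
        updated[area] = count + observed
    records = updated

print(records)
# Despite identical underlying crime, area_1 typically ends with around
# 50% more recorded crime: the initial disparity is fed back into the
# system as "evidence" and never washes out.
```

Nothing in the loop corrects for where the police were looking; the system treats its own past outputs as neutral ground truth.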

Police have even attempted to assess individual people’s risk of committing a crime in the future, using artificial intelligence (AI) fed by crude commercial data profiles containing racist stereotypes: for example, people profiled in the ‘Asian Heritage’ category were described as ‘generally in low-paid, routine occupations in transport or food service’. It was only after we uncovered this that the police force in question dropped the discriminatory data profiles and the commercial data provider renamed its crude stereotypes. However, the AI software remains in use and is being adopted more widely, while frequently evading scrutiny.

We need to be proactive in identifying not only the opportunities but the threats, harms and risks posed by emerging technologies to our fundamental rights, and take swift action to protect them.