Imagine you’re a 14-year-old child on your way home from school one afternoon, walking down your local high street, when suddenly you’re jumped by several men in hoods and hats. They grab you by the arms and pull you into a side street, surrounding you and holding you there. They say they’re police officers and they suspect you of carrying a knife. They question you, demand your ID and your phone, and then fingerprint you to check your identity. After several minutes of intensive questioning, they say you can go on your way, and they melt away into the crowded high street.
It’s a frightening scenario. But it is a real-life example of what happened when a black child was misidentified by the police’s live facial recognition surveillance in London. This surveillance technology has been used to scan millions of people at protests, street carnivals, football matches, high streets, shopping centres and transport hubs across the UK.
We believe facial recognition surveillance infringes people’s fundamental right to a private life, and that its use in public spaces has a chilling effect on people’s rights to freedom of expression and assembly.
We welcome the EHRC’s submission to the UN for the upcoming review of the UK’s civil and political rights record, and its analysis that the legal framework supposedly authorising the use of live facial recognition is insufficient and that the surveillance is inherently disproportionate. As the EHRC has also acknowledged, there is evidence documenting gender and racial bias within facial recognition technology.
Technology, bias and threats to fairness in the justice system
Bias is a common theme in new data-based technologies, because they are designed and trained using historical data that reflects structural inequalities and biases – especially within the criminal justice system.
Yet several emerging technologies are being used in the criminal justice system to profile and predict people’s supposed ‘criminality’. This carries clear risks of discrimination – not to mention entrenching privacy-intrusive mass data surveillance, reversing the presumption of innocence and potentially infringing people’s right to a fair trial.
Predictive policing systems attempt to predict future criminality, either of individuals or within neighbourhoods, usually based on police records.
These systems are often focused on street-based crimes, rather than financial or white-collar crimes.
Police records represent the people who are policed, not simply people who commit crimes, and that data reflects the historic and institutionally biased over-policing of black and poor communities. When black people are almost 10 times more likely to be stopped and searched than white people across England and Wales, the risk of perpetuating bias via these new predictive policing systems requires urgent attention.
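The feedback loop at work here can be illustrated with a deliberately simple toy simulation – all numbers and area names below are hypothetical, and this is not a description of any real force’s system. Two areas are given an identical underlying offence rate, but one starts with more recorded crime because it has historically been policed more heavily. If patrols are then allocated in proportion to past records, and more patrols mean more offences are observed and recorded, the initial disparity reproduces itself in each subsequent year’s data even though underlying behaviour never differs.

```python
import random

# Toy illustration of the predictive-policing feedback loop described above.
# Both areas have the SAME underlying offence rate; only the historical
# records (and therefore where patrols are sent) differ. All figures are
# hypothetical and chosen purely for illustration.

TRUE_OFFENCES = 100           # underlying offences per year, identical in both areas
TOTAL_PATROLS = 100           # patrols available to allocate each year
DETECTION_PER_PATROL = 0.005  # chance that a given patrol records a given offence

# Last year's records: area A has historically been policed more heavily.
last_year = {"area_A": 60, "area_B": 40}

for year in range(1, 6):
    total = sum(last_year.values())
    this_year = {}
    for area, past_records in last_year.items():
        # Patrols follow past recorded crime, not true crime.
        patrols = TOTAL_PATROLS * past_records / total
        # More patrols mean more offences are observed and recorded.
        p_detect = min(1.0, patrols * DETECTION_PER_PATROL)
        this_year[area] = sum(
            random.random() < p_detect for _ in range(TRUE_OFFENCES)
        )
    print(f"Year {year}: recorded crime {this_year}")
    last_year = this_year
```

Run year after year, the records keep showing the historically over-policed area with markedly more crime, because the data measures where the police looked rather than what people actually did – which is precisely why training predictive systems on such records risks hard-wiring the original bias.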
Police have even attempted to assess individual people’s risk of committing a crime in the future, using artificial intelligence (AI) fed by crude commercial data profiles containing racist stereotypes: for example, people profiled in the ‘Asian Heritage’ category were described as ‘generally in low-paid, routine occupations in transport or food service’. It was only after we uncovered this that the police force in question dropped the discriminatory data profiles and the commercial data provider renamed its crude stereotypes. However, the AI software is still in use and is becoming more widely adopted, while frequently evading scrutiny.
We need to be proactive in identifying not only the opportunities but the threats, harms and risks posed by emerging technologies to our fundamental rights, and take swift action to protect them.