Artificial Intelligence (AI) is becoming a powerful force in everyday life. But one of the most important, and least understood, areas where AI is growing fast is policing.
At BLAM UK, our AI Accountability Project is focused on ensuring that Black communities understand how these technologies work, how they are used, and what rights and protections we need as they become more embedded in policing and surveillance.
What Is AI? A Simple Explanation
AI describes computer systems that can perform tasks that normally require human intelligence.
That includes things like:
- recognising faces
- spotting patterns in large amounts of data
- predicting what might happen next
- sorting people into categories
- making recommendations or decisions
AI works by “learning” from data. But if that data reflects racial bias, inequality, or over-policing of certain communities, AI can learn those same patterns and repeat and reinforce them at scale.
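
To make that concrete, here is a minimal sketch in Python using entirely hypothetical numbers. Both areas below have the same true rate of offending, but one was patrolled three times as heavily in the past, so it produced three times the records; a system that “learns” only from those records concludes it is three times as risky:

```python
# A minimal sketch with hypothetical numbers: how a model trained on
# skewed records repeats the skew. Both areas have the SAME true offence
# rate, but Area A was patrolled three times as heavily, so three times
# as many offences were recorded there.
TRUE_OFFENCE_RATE = 0.05                           # identical in both areas
PATROL_INTENSITY = {"Area A": 3.0, "Area B": 1.0}  # A was over-policed
POPULATION = 10_000

# Recorded offences scale with how hard police looked, not with behaviour.
recorded = {
    area: TRUE_OFFENCE_RATE * POPULATION * intensity
    for area, intensity in PATROL_INTENSITY.items()
}

# A naive "risk model" learns only from the recorded counts.
total = sum(recorded.values())
for area, count in recorded.items():
    print(f"{area}: learned risk {count / total:.0%}")
# Output: Area A 75%, Area B 25% -- an artefact of past policing
# decisions, not of any real difference in behaviour.
```

The data never records how hard police were looking, so the model cannot separate behaviour from attention; the skew simply becomes the prediction.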

How Is AI Being Used in Policing?
Across the UK, police forces increasingly use data-driven systems and algorithmic tools to support decision-making. While some of these tools are marketed as “smart”, “efficient”, or “objective”, in practice they often deepen existing inequalities.
Here are the main ways AI shows up in policing today:
1. Live Facial Recognition (LFR)
LFR scans people’s faces in real time and matches them against police watchlists (a simplified sketch of that matching step follows the list of concerns below). It has been used at:
- Notting Hill Carnival (for the first time in 2024)
- Stratford Westfield
- Oxford Circus
Concerns include:
- Racial bias: Black faces are more likely to be misidentified
- Lack of consent: People are scanned without knowing
- Over-policing of Black and migrant communities
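
To show what the matching step sketched above involves, here is a simplified illustration. Real systems compare high-dimensional face “embeddings” produced by a neural network; the three-number vectors, names, and threshold below are assumptions for illustration only. A misidentification is simply a stranger’s face landing on the wrong side of the threshold, and testing (including NIST’s 2019 demographic evaluations) has found such errors occur more often for Black faces:

```python
import math

# Simplified sketch of threshold-based watchlist matching. Real systems
# use high-dimensional embeddings; these tiny vectors are illustrative.

def cosine_similarity(a, b):
    """Similarity between two face embeddings, from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

watchlist_entry = [0.9, 0.2, 0.4]  # embedding of a person on the watchlist
passer_by = [0.8, 0.3, 0.5]        # a DIFFERENT person walking past the camera

THRESHOLD = 0.9  # set by the operator; lower it and false matches rise

score = cosine_similarity(watchlist_entry, passer_by)
if score >= THRESHOLD:
    # In deployment, this alert can mean officers stop the person.
    print(f"ALERT: flagged as a match (similarity {score:.2f})")
```

The threshold is an operational choice: set it lower to catch more genuine watchlist hits and the false alerts rise too, falling hardest on the groups the underlying model misreads most often.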
2. Predictive Policing Tools
These tools make predictions about where crime is likely to happen or who might be involved. They use past policing data, which is already racially skewed.
If Black communities were over-policed in the past, predictive tools simply reinforce that pattern, as the simulation below illustrates.
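
The feedback loop is easy to simulate. In this minimal sketch (hypothetical numbers again), both areas have an identical true offence rate; the only difference is the skewed historical record the tool starts from:

```python
# Minimal feedback-loop sketch: patrols follow past records, and new
# records follow patrols. Both areas have the SAME true offence rate.
TRUE_RATE = 0.05                         # identical everywhere
PATROLS_PER_YEAR = 1000
records = {"Area A": 60, "Area B": 40}   # skewed history of over-policing

for year in range(1, 6):
    total = sum(records.values())
    for area in records:
        patrols = PATROLS_PER_YEAR * records[area] / total  # follow the data
        records[area] += patrols * TRUE_RATE                # records follow patrols
    share_a = records["Area A"] / sum(records.values())
    print(f"Year {year}: Area A holds {share_a:.0%} of recorded crime")
# The 60/40 split never corrects itself, because the tool only ever sees
# its own output. Any "send extra officers to the hottest area" rule
# would make the gap grow rather than merely persist.
```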

3. “Heat Maps” and Risk Scoring Systems
AI tools can score areas or individuals as “high risk”, which influences how police deploy officers; a toy example of such a score follows the list below.
Issues include:
- Labelling young Black people as “high risk” based on postcode, school exclusions, or previous contact with police
- Reinforcing negative stereotypes
- Lack of transparency about how scores are calculated
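
Here is the toy example promised above. Every weight, input, and threshold is a hypothetical assumption; the real formulas are rarely published, which is precisely the transparency problem. Notice that none of the inputs measures anything a person has actually done:

```python
# A toy risk score with hypothetical weights. None of the inputs measures
# behaviour: postcode and "prior contact" largely reflect where police
# already patrol.
WEIGHTS = {
    "lives_in_flagged_postcode": 30,  # in practice, a proxy for race and class
    "school_exclusion": 25,
    "prior_police_contact": 35,       # includes stops that led to nothing
    "sibling_on_database": 10,
}
THRESHOLD = 50

def risk_label(person: dict) -> str:
    score = sum(WEIGHTS[factor] for factor, present in person.items() if present)
    return f"HIGH RISK ({score})" if score >= THRESHOLD else f"low risk ({score})"

# A child who has never offended, but lives on a flagged street and was
# once stopped and released, already crosses the threshold:
child = {
    "lives_in_flagged_postcode": True,
    "school_exclusion": False,
    "prior_police_contact": True,
    "sibling_on_database": False,
}
print(risk_label(child))  # -> HIGH RISK (65)
```

Because “prior police contact” and postcode are themselves products of past deployment decisions, a score like this launders over-policing into an apparently objective number.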
4. Gang Databases & Social Media Monitoring
AI is increasingly used to monitor posts, photos, group chats, and online activity. It can flag people based on:
- Music lyrics
- Clothing
- Friend networks
- Location data

This can lead to young Black people being labelled as “gang-associated” without evidence of criminal behaviour.
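
A minimal sketch of how this kind of flagging can work is below; the keywords, account names, and rules are entirely hypothetical. Note that nothing the code checks is evidence of a crime:

```python
# Hypothetical guilt-by-association flagging: music vocabulary and who
# you know, not conduct.
FLAGGED_TERMS = {"drill", "opps"}             # genre vocabulary
FLAGGED_ACCOUNTS = {"@friend_already_on_db"}  # someone already in a database

def flag(post: dict) -> bool:
    words = set(post["text"].lower().split())
    mentions = set(post["mentions"])
    return bool(words & FLAGGED_TERMS) or bool(mentions & FLAGGED_ACCOUNTS)

post = {"text": "new drill track out friday", "mentions": ["@friend_already_on_db"]}
print(flag(post))  # -> True
```

A teenager sharing a music link and tagging a friend satisfies both rules at once, and the “gang-associated” label can then follow them through other databases.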
5. Policing Children and Families Using Data
Policing doesn’t only affect adults. Data-driven systems are now used to:
- predict which children might be “at risk”
- monitor pupils in schools
- track families through multi-agency databases
Children with special educational needs, neurodivergence, or those from marginalised communities are disproportionately affected. This is why BLAM is also developing a child-friendly AI survey to understand young people’s experiences and needs.
Why This Matters for Black Communities
Our research shows that AI policing often repeats the same patterns of racialised over-surveillance that Black communities have faced for decades, but now at the speed and scale of technology.
Risks include:
- wrongful arrests
- discriminatory stop and search
- increased profiling of Black children
- reduced trust in public institutions
- long-term impacts on opportunities, safety and wellbeing
AI is not neutral. It reflects society, including its inequalities. This is why community knowledge and accountability are essential.
What We at BLAM UK Are Doing
Our AI Accountability Community Survey is gathering insights from Black people across England about:
- public experiences of policing
- views on AI technologies
- levels of trust and awareness
- hopes and concerns for the future
Your voice will help shape policy recommendations, community education, and advocacy for safer, fairer systems.
► Take the Survey (10–12 minutes):
https://qualtricsxmsmz4ftqrz.qualtrics.com/jfe/form/SV_51qvjQ9To6lwYLA
► Share the survey with your community
Final Thoughts
AI is reshaping policing right now, whether we know it or not. Understanding it is the first step toward protecting our rights, challenging harmful systems, and ensuring technology serves communities rather than harms them.
At BLAM UK, we are committed to building community power, transparency, and accountability around AI in policing. If you’d like to stay updated or collaborate, follow our social media channels and keep an eye out for upcoming workshops.
Further reading
- Big Brother Watch: “Facial Recognition in the UK”
  https://bigbrotherwatch.org.uk/campaigns/stop-facial-recognition/
- RUSI: “Machine Learning Algorithms and Police Decision-Making: Legal, Ethical and Regulatory Challenges”
  https://www.rusi.org/explore-our-research/publications/whitehall-reports/machine-learning-algorithms-and-police-decision-making-legal-ethical-and-regulatory-challenges
- StopWatch: “The Gangs Matrix”
  https://www.stop-watch.org/what-we-do/campaigns/the-gangs-matrix/
Get Involved:
- Amnesty International: “19,403 people called on the UK to ban ‘crime predicting’ technology”
  https://www.amnesty.org.uk/19000-people-called-uk-ban-crime-predicting-tech
