Face Scanning at London Bridge Station Now Live

New facial recognition cameras have been installed at one of the UK’s most heavily trafficked train stations, capturing the faces of millions of people as they pass through. These live cameras, deployed at London Bridge station, are part of a trial by the British Transport Police (BTP) to evaluate the effectiveness of this technology in a railway setting.

The system uses artificial intelligence to scan and compare faces against a database of serious criminals. The station, which handled over 54 million passengers last year, is now under increased surveillance. If the system identifies a match, it will trigger an alert for an officer, who will then manually review the potential suspect before taking any further action.

During the trial, passengers who do not wish to be scanned by the cameras will be provided with alternative routes to avoid them. However, privacy and civil liberties groups have raised concerns about the use of such technology. They argue that its deployment by police forces across the country lacks proper oversight.

Big Brother Watch has described the “mass biometric surveillance” as “disturbing.” According to Chief Superintendent Chris Casey from BTP, the trial aims to assess how well the technology functions in a railway environment. He emphasized that the initiative is part of the force’s commitment to using innovative tools to make the railways less welcoming to individuals wanted for serious criminal offenses, ultimately enhancing public safety.

He explained that the cameras work by scanning faces and matching them against a watchlist of offenders. If a match occurs, an alert is generated, and an officer reviews the situation to determine if further action is necessary. Passengers who choose not to enter the recognition zone will have access to alternative paths, and images of people who are not on the watchlist will be deleted immediately and permanently.

“We want to make the trial as effective as possible and we welcome your feedback,” said Casey. “You can scan the QR codes on the posters and share your thoughts.”

Further trials at other stations are expected, and BTP says they will be announced before they begin. Facial recognition technology has also been adopted by supermarkets and local councils to address anti-social behaviour. However, its implementation has sparked controversy, particularly after instances of misidentification, including a case where a Londoner was mistakenly accused of being a criminal by staff using the cameras at a Sainsbury's store.

Matthew Feeney from Big Brother Watch stated that while safety is a priority, subjecting law-abiding citizens to mass biometric surveillance is an excessive and alarming response. He highlighted that facial recognition remains unregulated in the UK, with police forces creating their own guidelines on its use and the criteria for adding individuals to watchlists.

Feeney expressed concern that the British Transport Police are proceeding with facial recognition deployments even before the Home Office completes its consultation on a legal framework for its use. He pointed out that the technology’s deployment is especially troubling in a democracy where neither the public nor Parliament has voted on its use.

“The UK stands out among democracies for the widespread use of live facial recognition,” he added. “The Government must take immediate steps to regulate police use of this technology.”

Recent legal challenges have questioned the Met Police’s use of live facial recognition (LFR) tools. Silkie Carlo of Big Brother Watch and anti-knife crime activist Shaun Thompson challenged the police’s use of LFR after Thompson was wrongly identified when passing a camera van in 2024. Lawyers representing the pair argued that the police’s use of LFR is increasing rapidly.

According to Dan Squires KC, the Met Police used facial recognition 231 times and scanned approximately 4 million faces last year. A judicial review is currently underway to examine the expansion of facial recognition across other police forces.

Ruth Ehrlich, director of external relations at Liberty, a human rights organization, criticized the government for continuing to deploy facial recognition technology while consultations are still ongoing. She noted that the technology has led to flawed outcomes, such as children being incorrectly placed on watchlists and Black individuals facing a higher risk of being wrongly identified.

“This has caused real harm to people’s lives,” she said. “It is the result of giving complex, powerful technology to police who lack the expertise to manage it safely.”

Ehrlich called for an immediate halt to the rapid rollout of facial recognition technology and urged the government to implement safeguards to protect individual rights. She emphasized the need for transparency, meaningful oversight, and a system of strong guardrails before handing police further AI tools.

How live facial recognition cameras work

The process begins by identifying a face in a still image or video, distinguishing facial features from the background or body. It then maps the face, measuring distances between key features to create a numerical representation. This data is quickly compared to large databases to find potential matches.
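The final comparison step described above can be sketched in code. The snippet below is a minimal, illustrative toy: it assumes faces have already been detected and reduced to numerical embedding vectors (real systems derive embeddings with hundreds of dimensions from a neural network; the three-dimensional vectors, watchlist names, and the 0.8 threshold here are invented for illustration). A probe face is compared against each watchlist entry by cosine similarity, and a match is reported only if the best score clears the threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, watchlist, threshold=0.8):
    """Return the watchlist name whose embedding best matches the probe,
    or None if no similarity exceeds the threshold (no alert is raised)."""
    best_name, best_score = None, threshold
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy watchlist: names and embeddings are hypothetical placeholders.
watchlist = {
    "suspect_a": [0.9, 0.1, 0.3],
    "suspect_b": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.31]  # embedding of a face seen by the camera
print(match_face(probe, watchlist))  # close to suspect_a, so a match is reported
```

In a deployed system the threshold is a critical tuning choice: set too low, it produces the kind of misidentifications the article describes; set too high, genuine matches are missed. A match here would only trigger a human review, mirroring the officer-verification step BTP describes.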
