
AI cameras powered by Amazon are being used to detect the emotions of unwitting train passengers in the UK


Network Rail did not respond to questions sent by WIRED about the trials, including questions about the current state of AI use, emotion detection, and privacy concerns.

A Network Rail spokesperson said: “We take the security of our rail network extremely seriously and use a range of advanced technologies across our stations to protect our passengers, our colleagues, and the rail infrastructure from crime and other threats. When deploying technology, we work with the police and security services to ensure that we are taking appropriate action, and we always comply with the relevant legislation regarding the use of surveillance technology.”

It is unclear how widely the emotion detection analytics were deployed, with documents at times stating that the use case should be “considered more cautiously” and reports from stations stating that it was “not possible to validate accuracy.” However, Gregory Butler, CEO of computer vision and data analytics company Purple Transform, which is working with Network Rail on the trials, said the capability had been discontinued during the trials and that no images were stored while it was active.

Network Rail’s documents on the AI trials describe a variety of use cases involving cameras sending automatic alerts to staff when they detect certain behaviours. None of the systems use controversial facial recognition technology, which aims to match people’s identities with those stored in databases.

“The main benefit is faster detection of trespass incidents,” Butler said. He added that his company’s analytics system, SiYtE, is in use at 18 sites, including train stations and along the tracks. In the past month, Butler said, the system has detected five serious cases of trespassing at two sites, including a teenager collecting a ball from the tracks and a man “spending over five minutes picking up golf balls along a high-speed line.”

At Leeds railway station, one of the busiest outside London, 350 CCTV cameras are connected to the SiYtE platform, Butler said. “Analytics are being used to measure people flow and identify issues like platform crowding and, of course, trespassing, where the technology can filter out track workers by their PPE uniforms,” he said. “AI helps the operators, who cannot monitor all cameras continuously, to assess and address safety risks and issues promptly.”

Network Rail documents claim that cameras used at one station, Reading, have allowed police to speed up investigations into bike thefts by pinpointing the bikes in the footage. “It was demonstrated that, while analytics could not confidently detect a theft, they could detect a person with a bicycle,” the documents said. They also add that new air quality sensors used in the trials could save staff time spent performing manual checks. One AI iteration uses data from sensors to detect “sweaty” floors, which have become slippery with condensation, and alerts staff when they need cleaning.

While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and debate about the use of AI in public spaces. Big Brother Watch’s Hurfurt said a document designed to assess the system’s data protection issues showed what appeared to be a “dismissive attitude” towards people who may have privacy concerns. To the question “Are some people likely to object or find it intrusive?” a staff member wrote: “Usually not, but there is no accounting for some people.”

At the same time, similar AI surveillance systems that use the technology to monitor crowds are increasingly being deployed around the world. During the Olympic Games in Paris, France, later this year, AI video surveillance will monitor thousands of people and try to pick out crowd surges, weapon use, and abandoned objects.

“Systems that do not identify people are better than those that do, but I do worry about a slippery slope,” said Carissa Véliz, an associate professor at the Institute for Ethics in AI at the University of Oxford. Véliz pointed to similar AI trials on the London Underground that initially blurred the faces of people who might have been dodging fares, but later changed approach, unblurring the photos and keeping images longer than originally planned.

“There is a very instinctive drive to expand surveillance,” Véliz said. “People like to see more, see further. But surveillance leads to control, and control leads to loss of freedom, threatening liberal democracies.”
