The “Google of CCTV”: surveillance in the age of AI

Facial recognition looks set to be one of the most promising fields of artificial intelligence for the foreseeable future, and it is being applied in many different ways. Take video surveillance, for example, which is being transformed by the ability to intelligently analyze video streams. For now, though, it is not without a few false positives…

Video surveillance

While the idea of placing the urban landscape under the authorities’ permanent “watchful eye” was born in the 18th century, it was only in the 1990s that video surveillance (recently rebranded as “video protection”) became widespread in cities. Before it could be rolled out, it first had to wait for multiplexing technology, which allows footage from several cameras to be recorded and viewed simultaneously, to catch up.

This “Eye of Power”, which the philosopher Michel Foucault wrote about, is becoming increasingly sophisticated. Manufacturers are falling over themselves to build artificial intelligence into security cameras, hoping to automate the recognition of the people and events being filmed. A recent experiment in London certainly produced many false positives (people wrongly flagged as suspects by the system), but the technology is getting sharper. At CES, Somfy announced a summer launch for an outdoor surveillance camera that can detect intrusions “intelligently” and “without a false alarm”, along with features aimed at more general use: watching the live video feed from a smartphone, alerts sent in real time, and videos automatically stored in the cloud… Beyond the algorithms themselves, “smart” CCTV relies on a combination of edge computing (intelligent image-analysis functions running on the camera itself) and distributed architectures mixing public clouds and private clusters.
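To make the “without a false alarm” claim concrete, here is a minimal sketch of one common edge-side filtering idea: only raise an alert when detection confidence stays high across several consecutive frames, so a one-off spurious detection is suppressed. The function name and thresholds are illustrative assumptions, not Somfy’s actual design.

```python
# Illustrative sketch of edge-side false-alarm suppression (hypothetical,
# not any vendor's real algorithm): an alert fires only when detection
# confidence stays above a threshold for several consecutive frames.
def should_alert(confidences, threshold=0.8, min_consecutive=3):
    """Return True if `confidences` contains `min_consecutive`
    frames in a row at or above `threshold`."""
    run = 0
    for c in confidences:
        run = run + 1 if c >= threshold else 0
        if run >= min_consecutive:
            return True
    return False

# A lone spike (a likely false positive) does not trigger an alert...
print(should_alert([0.10, 0.95, 0.20, 0.10]))   # → False
# ...but a sustained detection does.
print(should_alert([0.85, 0.90, 0.92, 0.88]))   # → True
```

Running this kind of check on the camera itself, before anything is sent to the cloud, is one reason manufacturers pair edge computing with distributed back-ends.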

From detection to raising the alarm

The French startup XXII recently presented a platform that analyzes images from camera models already in circulation. Thanks to machine learning, it promises to detect “bodily dynamics” and even hidden emotions. While an app that can recognize objects, faces, falls or emotions is already available on the market, XXII is working towards far more elaborate future uses: detecting the undetectable, but also automating alerts entirely, with no intermediary communication needed, rendering the surveillance control room obsolete. XXII also operates in China, where the market is more promising: biometric data processing is less regulated, and the use of facial recognition is already widespread.

Across the pond, the “Google of CCTV” came out at the end of 2017 and launched at CES. IC Realtime is marketing software named Ella, which lets you search for information in video footage across a network of surveillance cameras. With Ella, the point is not so much real-time surveillance as the creation of a gigantic database in the cloud: essentially, a search engine running over a catalogue of images. Following a burglary in an industrial park, for example, you can search the footage simply by typing “a man dressed in red” or “a Jeep vehicle”, and all the relevant footage appears within a few seconds. Other players, like Boulder AI, offer a camera and an analytics platform together, which ensures viewing can continue even when the internet connection is down.
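The underlying idea of a “search engine for footage” can be sketched simply: an upstream vision model tags each clip, and a query then becomes a lookup in that metadata catalogue. The sketch below assumes such pre-tagging; the names (`Clip`, `search_footage`) are hypothetical and do not reflect IC Realtime’s actual API.

```python
# Hypothetical sketch of tag-based search over pre-labeled CCTV clips.
# Assumes an upstream vision model has already tagged each clip; class and
# function names are illustrative, not any vendor's real interface.
from dataclasses import dataclass, field

@dataclass
class Clip:
    camera_id: str
    timestamp: str
    labels: set = field(default_factory=set)  # tags from the vision model

def search_footage(catalogue, query_terms):
    """Return clips whose labels contain every queried term."""
    terms = {t.lower() for t in query_terms}
    return [clip for clip in catalogue if terms <= clip.labels]

catalogue = [
    Clip("cam-01", "2018-01-12T03:14", {"man", "red", "jacket"}),
    Clip("cam-02", "2018-01-12T03:20", {"jeep", "vehicle"}),
    Clip("cam-03", "2018-01-12T04:02", {"man", "blue"}),
]

# A query like "a man dressed in red" reduces to matching its key terms.
hits = search_footage(catalogue, ["man", "red"])
print([c.camera_id for c in hits])  # → ['cam-01']
```

The heavy lifting in a real product is of course in the vision models producing the tags; once footage is labeled, the “search engine” part is comparatively simple database work.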

New fields of application

Such technology brings other applications to mind, and certain industry names, like Deepomatic, are repositioning themselves within the facial recognition market. Google, Facebook and IBM are the ones to watch in this emerging field, as they have the computing power required and could put their algorithms to work on huge video datasets.

The next frontier is predictive analysis: in a high school or a prison, for example, smart cameras could sound the alarm before a fight breaks out. Beyond the technological challenges, companies face ethical and behavioral ones. People are already looking for ways to hide from video surveillance, and obfuscation techniques are doing the rounds. Most important of all is how personal biometric data will be processed: possible algorithmic bias, liable to create sexist or racist cameras, puts new and highly sensitive social debates on the horizon.