
Embedded Vision: Primed To Take A Bite Out Of Crime


As I've mentioned with some regularity in the past, video surveillance and analytics technology is increasingly being used by law enforcement agencies worldwide to assist in the identification and prosecution of wrongdoers; via facial recognition, for example, or emotion discernment, or database searches for clothing matches, or license plate optical character recognition. And other, not-yet-discussed implementations of the general concept also exist. To wit, this particular post was first prompted by an article which appeared late last month on Wired Magazine's site, entitled "Unique Gait Can Give Crooks Away."

The piece discusses how, by measuring passersby's stride intensities, spans, speeds and cadences, it was possible (in a study conducted by an international team of bioengineers) to identify a specific individual within a data set of 104 candidates with "99.6 percent accuracy." Granted, in this particular case pressure sensors beneath subjects' feet were employed, specifically a technology called plantar pressure imaging. But I'm presuming that image data captured by one camera, or ideally a series of cameras at different viewing angles, could also serve this purpose.
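The core idea, stripped of the sensor details, is a matching problem: summarize each person's gait as a feature vector and identify a new measurement by finding the closest enrolled candidate. Here's a minimal sketch of that nearest-neighbor approach; the feature names, data, and noise model are all invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical gait-matching sketch: each candidate is enrolled as a vector of
# gait features (stride intensity, span, speed, cadence), and a fresh
# measurement is identified by nearest-neighbor search over the gallery.
rng = np.random.default_rng(0)

NUM_CANDIDATES = 104  # matches the study's data-set size
NUM_FEATURES = 4
enrolled = rng.normal(size=(NUM_CANDIDATES, NUM_FEATURES))  # illustrative data

def identify(sample: np.ndarray, gallery: np.ndarray) -> int:
    """Return the index of the enrolled candidate closest to the sample."""
    distances = np.linalg.norm(gallery - sample, axis=1)
    return int(np.argmin(distances))

# Simulate re-measuring candidate 42, perturbed by a little sensor noise.
probe = enrolled[42] + rng.normal(scale=0.05, size=NUM_FEATURES)
print(identify(probe, enrolled))
```

Real gait-recognition systems obviously use richer features and classifiers, but the enroll-then-match structure is the same.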

Speaking of emotion, I've been amazed since joining the Embedded Vision Alliance in mid-July at how many times I've encountered references to the movie "Minority Report", used to describe various nascent embedded vision technologies such as personalized digital signage and gesture-based user interfaces. Now you can add the central 'pre-crime' premise of the movie's plot to the list. Without giving too much away, for those of you who haven't yet seen it, diverse technologies find use in the future in identifying individuals who are about to commit a crime, so that they can be proactively apprehended in advance.

Well, as reported by DailyTech last week, the future is now. Specifically, according to documentation obtained by EPIC (the Electronic Privacy Information Center), the U.S. Department of Homeland Security is currently testing a pre-crime system called FAST (Future Attribute Screening Technology):

FAST is made up of algorithms that use factors including gender, age, ethnicity, heart rate, body movements, occupation, voice pitch changes, body heat fluctuations and breathing patterns to identify clues as to whether the individual(s) will commit a crime in the future. The idea behind FAST is to prevent crimes from happening before individuals even have a chance to commit them based on the factors listed above. It is able to do this through the use of sensors that collect audio recordings, video images and psychophysiological measurements.
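In essence, the article describes a multi-factor scoring scheme: each measured signal is compared against a baseline, and the deviations are combined into a single suspicion score. The sketch below illustrates that general idea only; the baselines, weights, and feature names are my own invention, and nothing here reflects FAST's actual (undisclosed) algorithms.

```python
# Hypothetical multi-signal anomaly score: weighted absolute deviations of
# each physiological reading from an assumed "calm" baseline. All numbers
# are illustrative, not from DHS documentation.
BASELINE = {"heart_rate": 70.0, "voice_pitch": 120.0,
            "body_temp": 36.6, "breath_rate": 14.0}
WEIGHTS = {"heart_rate": 0.02, "voice_pitch": 0.01,
           "body_temp": 1.0, "breath_rate": 0.1}

def anomaly_score(reading: dict) -> float:
    """Sum the weighted deviation of each signal from its baseline."""
    return sum(WEIGHTS[k] * abs(reading[k] - BASELINE[k]) for k in BASELINE)

calm = {"heart_rate": 72, "voice_pitch": 118,
        "body_temp": 36.7, "breath_rate": 14}
agitated = {"heart_rate": 110, "voice_pitch": 150,
            "body_temp": 37.4, "breath_rate": 24}

print(anomaly_score(calm) < anomaly_score(agitated))
```

Of course, "agitated" and "about to commit a crime" are very different things, which is exactly why systems like this draw privacy scrutiny.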

Finally, thanks to my co-worker Jeremy, I'm able to pass along a press release from yesterday, entitled "Notre Dame computer vision experts develop questionable observer detector". Admittedly, the headline partly caught my eye because I grew up only a few miles from the University of Notre Dame campus and, although I went to Purdue, still harbor a fondness for the Fighting Irish. But the embedded vision aspects of the story are equally intriguing.

Kevin Bowyer and Patrick Flynn, biometrics experts in Notre Dame's Computer Science and Engineering Department, began with the age-old assumption that criminals return to the scene of the crime. As the press release puts it, "law enforcement officials believe that perpetrators of certain crimes, most notably arson, do indeed have an inclination to witness their handiwork. Also, U.S. military in the Middle East feel that IED bomb makers return to see the results of their work in order to evolve their designs." As such, the QuOD (Questionable Observer Detector) system they developed doesn't attempt to explicitly identify individuals in a scene (by matching them up against a database of suspects, for example).

Instead, it strives to log how many times various individuals show up in a given period of time, no matter how much they might try to alter their appearance between visits. And as the press release states, "An individual is considered suspicious if he or she appears too frequently in the set of videos. The 'too many' number is determined by law enforcement officials based on the number of crimes and videos available."
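The bookkeeping behind that description is simple to sketch: group face detections into anonymous clusters (one cluster per apparent individual), count how many distinct videos each cluster appears in, and flag any cluster over the investigator-chosen threshold. The cluster IDs below stand in for QuOD's actual face-matching step, which I'm not attempting to reproduce; the data and names are purely illustrative.

```python
from collections import defaultdict

# Illustrative sketch of frequent-observer flagging: count distinct videos
# per anonymous face cluster and flag clusters exceeding a threshold.
def flag_frequent_observers(sightings, threshold):
    """sightings: iterable of (cluster_id, video_id) pairs.
    Returns the cluster_ids seen in more than `threshold` distinct videos."""
    videos_seen = defaultdict(set)
    for cluster_id, video_id in sightings:
        videos_seen[cluster_id].add(video_id)  # sets de-duplicate repeat sightings
    return {c for c, vids in videos_seen.items() if len(vids) > threshold}

# Hypothetical detections across three crime-scene videos.
sightings = [
    ("face_A", "fire1"), ("face_A", "fire2"), ("face_A", "fire3"),
    ("face_B", "fire1"), ("face_B", "fire2"), ("face_C", "fire2"),
]
print(flag_frequent_observers(sightings, threshold=2))
```

Note that counting distinct videos, rather than raw detections, keeps someone who lingers in front of one camera from being flagged.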
