A decade ago, computer face recognition usually involved little more than detecting faces in a crowd and maybe being able to match them to faces in a database.
“The first stage was sort of thinking about faces as nouns, as it were,” said Simon Lucey, an associate research professor in Carnegie Mellon University’s Computer Vision Group. “But now we’re launching into this very interesting space in terms of, what are faces doing, sort of verbs. So, rather than who is that person or where is that person, it’s, how is that face moving?”
Much of the facial recognition research currently being done at CMU is focused on figuring out how to read those movements as if they were a language. New advances pair phonemes, the fundamental units of speech, with action units, the fundamental units of facial expression.
“All of a sudden there’s been a lot of very good algorithms out there that can recognize those action units, pair them together and all of a sudden you can start doing these interesting things,” Lucey said.
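The kind of pairing Lucey describes can be sketched in code. The snippet below is a toy illustration, not CMU's actual algorithm: it maps a handful of Facial Action Coding System (FACS) action units (for example, AU6 is the cheek raiser and AU12 the lip corner puller) to coarse expression labels. The specific AU-to-expression table is a simplified subset chosen for illustration.

```python
# Toy sketch: labeling an expression from detected FACS action units (AUs).
# AU numbers are standard FACS codes; the expression table is simplified.

EXPRESSIONS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def label_expression(active_aus):
    """Return the first expression whose defining AUs are all active, else None."""
    active = set(active_aus)
    for name, required in EXPRESSIONS.items():
        if required <= active:  # all required AUs were detected
            return name
    return None
```

In a real system the detected action units come from a vision model rather than a hand-typed list, and the mapping is learned rather than hard-coded, but the basic "pair the units together" step looks like this lookup.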
The ability to “read” faces begins with the ability to quickly and precisely map a face in real time. Researchers at CMU have been able to do that for a few years. Right now, that kind of facial mapping technology is used in photo apps, similar to the way Snapchat allows users to swap faces or add dog ears and a snout to their faces. It’s also being used for things like online shopping, such as virtually trying on a pair of glasses to see how those new Ray-Bans will look before buying them.
Initially, the data crunching had to be done in the cloud. But Lucey said the key is to cut down the computing power required, so the code can run on smartphones without draining the battery.
“We’re not sure what it could be used for, but you need to start measuring things before you can start extending and creating,” Lucey said. “I think that’s the exciting point where we’re at with the technology.”
Lucey said those same tools could be used for very different applications.
“You can start doing things like … a diagnosis tool for perhaps people with autism or Asperger’s, or these types of things,” he said.
Lucey thinks of it as “reading” a face to see what is happening in the brain. A sort of facial MRI.
It starts with loading thousands of known facial responses into a database to build a dictionary. Then, the latest artificial intelligence tools can begin reading and analyzing other faces.
Researchers at CMU are also looking to use the precise facial mapping as a kind of DNA test.
The basic science might even have use in the entertainment industry. As a user watches a video on their phone, the distributor could get real-time feedback of how the viewer is reacting to the material.
Lucey said the same precise measuring technology used in facial mapping can also be used to build three-dimensional images and maps of just about anything. Those images could open the door to new 3-D printing capabilities, according to Lucey. They could also be used to help an online shopper know for certain that the couch on Craigslist is actually going to fit in their apartment.
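The couch example boils down to a simple geometric check once the 3-D measurements exist: compare the item's bounding box against the available space in every orientation. The sketch below is a toy version of that final step (the hard part, extracting accurate dimensions from photos, is what the CMU technology would supply); all numbers and names are made up.

```python
from itertools import permutations

def fits(item_dims, space_dims, clearance=0.0):
    """True if a box with item_dims (cm) fits inside space_dims (cm)
    in some axis-aligned orientation, with optional clearance per side."""
    padded = [d + clearance for d in item_dims]
    return any(
        all(p <= s for p, s in zip(perm, space_dims))
        for perm in permutations(padded)
    )
```

For example, a 200 x 90 x 85 cm couch fits through a 250 x 100 x 240 cm space, but not one that is only 180 x 100 x 150 cm in every orientation.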
In this week's Tech Headlines:
- The University of Pittsburgh is teaming up with pharmaceutical giant Pfizer to develop a computational model that would help identify the drivers of schizophrenia, Alzheimer’s disease and other related brain diseases. Kayhan Batmanghelich, assistant professor in Pitt’s Department of Biomedical Informatics, will lead the one-year study. In addition to using genotype data to look for predictive biomarkers, measurements from magnetic resonance brain images will be used to characterize abnormal brain variations. “By studying brain images and relating the variations of each brain region to the genetics and clinical observations of patients, we provide deeper insight about the underlying biology of the diseases,” said Batmanghelich. The study will use the publicly available datasets of ADNI (Alzheimer’s Disease Neuroimaging Initiative) and private datasets of the GENUS (Genetics of Endophenotypes of Neurofunction to Understand Schizophrenia) Consortium.
- Using a fingerprint to unlock a phone or computer may not be as safe as originally thought. In a rush to do away with problematic passwords, Apple, Microsoft and other tech companies are nudging consumers to use their fingerprints, faces or eyes as digital keys. But there are drawbacks: Hackers could still steal a fingerprint — or its digital representation. And police may have broader legal powers to make someone unlock their phone. Anil Jain, a computer science professor at Michigan State University, said, "We may expect too much from biometrics. No security systems are perfect." Jain helped police unlock a smartphone by using a digitally enhanced ink copy of the owner's fingerprints. Apple's iPhone 5S, launched in 2013, introduced fingerprint scanners to a mass audience, and rival phone makers quickly followed suit. Microsoft's Windows 10 allows users to unlock a PC by briefly looking at the screen. Samsung is now touting an iris-scanning system in its latest Galaxy Note devices. All those systems are based on the notion that each user's fingerprint — or face, or iris — is unique. But that doesn't mean they can't be reproduced.
The Associated Press contributed to this report.