Facial recognition software gets better by the day, with machine learning and artificial intelligence propelling the technology forward rapidly. Recent advances in AI have enabled systems to generate realistic images of human faces entirely from scratch. These systems use datasets comprising millions of images of real people to “learn,” over time, how to autonomously generate original images of their own. At the annual Black Hat USA event, this new technology was put to the test.
Tricking The System
Black Hat USA hosted a virtual event in August 2020 that brought together many cybersecurity and technology experts. Researchers from McAfee showed how they were able to use AI technology to successfully trick a facial recognition system into misclassifying one individual as an entirely different person. As an example, the researchers showed how, at an airport, an individual on a no-fly list could trick a facial recognition system used for passport verification into identifying him as another person (see the video at the end of the article). Following the event, Steve Povolny, head of advanced threat research at McAfee, said:
“The basic goal here was to determine if we could create a fake image, using machine learning models, which looked like one person to the human eye, but simultaneously classified as another person to a facial recognition system”
Researchers at McAfee built a machine-learning system and fed it 1,500 photos of two separate individuals. The images were captured from live video and were meant to accurately represent valid passport photos of the two people. The system continuously created and tested fake images of the two individuals by blending the facial features of both subjects. Over hundreds of training loops, the machine-learning system eventually generated images that fooled the facial recognition system: a valid passport photo of one individual was mistakenly identified as that of the second target individual.
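The iterative search described above can be sketched in miniature. McAfee's actual pipeline used CycleGAN over real photos; the toy below stands in for it with made-up numeric "face embeddings," a simple linear blend of the two subjects, and a cosine-similarity verifier, increasing the blend toward the target until the mock passport system accepts the candidate. Every vector, threshold, and step size here is invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def blend(source, target, alpha):
    """Linear blend: alpha=0 is pure source, alpha=1 is pure target."""
    return [(1 - alpha) * s + alpha * t for s, t in zip(source, target)]

def craft_fake(source, target, threshold=0.9, step=0.01):
    """Increase the blend toward the target until the verifier accepts it."""
    alpha = 0.0
    while alpha <= 1.0:
        candidate = blend(source, target, alpha)
        if cosine_similarity(candidate, target) >= threshold:
            return alpha, candidate  # smallest blend the system accepts
        alpha += step
    return None

# Toy embeddings for two individuals (made up for illustration).
person_a = [1.0, 0.0, 0.0]   # the attacker
person_b = [0.0, 1.0, 0.0]   # the accomplice the system should "see"
alpha, fake = craft_fake(person_a, person_b)
print(round(alpha, 2))  # the verifier first accepts at roughly alpha = 0.68
```

The real attack is far harder, since the crafted image must also look like the source person to a human eye, but the loop structure, propose, test, adjust, repeat, is the same.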
Real-World Threats
Povolny noted that the passport-verification attack scenario, though not the primary focus of the research, is possible to carry out. Because digital passport photos are now accepted, an attacker can produce a fake image of an accomplice, submit it with a passport application, and have the image saved in the passport database. If a live photo of the attacker is later taken at an automated passport-verification kiosk at an airport, for instance, the image would be identified as that of the accomplice.
“This does not require the attacker to have any access at all to the passport system; simply that the passport-system database contains the photo of the accomplice submitted when they apply for the passport. The passport system simply relies on determining if two faces match or do not match. All it does is verify if a photo of one person is identified against a saved photo in the back end. So such an attack is entirely feasible, though it requires some effort to pull off. It’s less likely that a physical passport photo that was mailed in, scanned, and uploaded to this database, would work for the attack,”
– Steve Povolny, McAfee Head Researcher
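The one-to-one check Povolny describes can be illustrated with a short sketch: reduce each photo to an embedding vector, then accept the match if the two embeddings are similar enough. The vectors and the 0.8 threshold below are invented for illustration; a real system would derive the embeddings from a face-recognition model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(live_embedding, stored_embedding, threshold=0.8):
    """One-to-one verification: do the two photos show the same person?"""
    return cosine_similarity(live_embedding, stored_embedding) >= threshold

# Toy embeddings (a real system would compute these from photos).
stored = [0.9, 0.1, 0.4]        # accomplice's photo, saved at application time
live_same = [0.85, 0.15, 0.38]  # adversarial live photo, engineered to match
live_diff = [0.1, 0.9, -0.2]    # an unrelated face

print(faces_match(live_same, stored))  # True  -> kiosk accepts
print(faces_match(live_diff, stored))  # False -> kiosk rejects
```

Because the system only answers "match or no match" against the stored photo, an attacker never needs access to the database itself, only a submitted photo engineered to land on the right side of the threshold.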
McAfee’s Generative Adversarial Networks
The facial recognition research involved the use of a “Generative Adversarial Network” (GAN) tool known as CycleGAN. GANs are AI neural networks capable of creating data similar to the data fed into them. For example, a GAN can use a set of real images of human faces to autonomously generate completely fake, but real-looking, images of human faces. GANs pair a generative network, which creates the synthetic data, with a discriminative network, which continuously assesses the quality of the generated content until it reaches an acceptable level.
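The adversarial back-and-forth between the two networks can be shown with a deliberately tiny stand-in. Real GANs train deep networks over images; here the "data" is a single number and both networks shrink to one or two scalar parameters, so this is a sketch of the training dynamic only, not of any real GAN implementation:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy 1-D "GAN": the real data is the number 4.0, the generator emits a single
# number mu_g, and the discriminator D(x) = sigmoid(w * x + b) scores how
# "real" a sample looks (1 = real, 0 = fake).
x_real = 4.0   # stand-in for the real dataset
mu_g = -1.0    # the generator's current output
w, b = 0.0, 0.0
lr = 0.1

for _ in range(10):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # pushing D(real) up and D(fake) down.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * mu_g + b)
    w += lr * ((1 - d_real) * x_real - d_fake * mu_g)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake), nudging mu_g in the
    # direction the discriminator currently calls "real".
    d_fake = sigmoid(w * mu_g + b)
    mu_g += lr * (1 - d_fake) * w

# After a few rounds, mu_g has moved from -1.0 toward the real data at 4.0.
print(round(mu_g, 2))
```

The same tug-of-war, a generator chasing whatever the discriminator currently accepts as real, is what let McAfee's system iterate toward images that fool a face-recognition model.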
CycleGAN itself, according to McAfee, is a GAN for image-to-image translation: translating an image of zebras into an image of horses, for example. Notably, it relies on significant features of an image for translation, such as eye placement, head shape, body size, and other attributes.
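A defining ingredient of CycleGAN is its cycle-consistency loss: translating an image into the other domain and back again should return the original. The sketch below illustrates that idea with lists of numbers standing in for images and simple linear maps standing in for the learned translators; both are invented for illustration, as real CycleGAN translators are deep networks:

```python
# CycleGAN trains two translators, G: X -> Y and F: Y -> X, and penalizes
# |F(G(x)) - x| so a round trip reproduces the original input.

def G(x):   # X -> Y, e.g. "zebra" domain to "horse" domain (toy linear map)
    return [2.0 * v + 1.0 for v in x]

def F(y):   # Y -> X, the learned inverse direction (toy linear map)
    return [(v - 1.0) / 2.0 for v in y]

def cycle_loss(x):
    """Mean absolute error between x and its round trip F(G(x))."""
    round_trip = F(G(x))
    return sum(abs(a - b) for a, b in zip(x, round_trip)) / len(x)

x = [0.2, -0.5, 0.9]
print(cycle_loss(x))  # ~0.0: F perfectly inverts G, so the cycle is consistent
```

During training, a nonzero cycle loss would push G and F toward being inverses of each other, which is what forces the translation to preserve the significant features of the input.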
CycleGAN Risks
Facial recognition technology is seeing increasing use, including by law enforcement agencies to identify suspects in photos or videos. McAfee conducted this research to understand the potential implications of facial recognition software and how it could be used against us.
What Can I Do As An Individual?
Facial recognition is a difficult technology to regulate, and many are concerned about the privacy risks that come with the software. The tool is already being used in ways many can’t fathom: scraping photos from the Internet and combining them with facial recognition to create a powerful identification tool. Clearview AI created an application that allows users (usually law enforcement) to upload and analyze a photo of an individual. In a matter of minutes, the requester can view every public photo of that same individual in the company’s database, which searches more than 3 billion images. These photos are displayed alongside links to where they reside online, such as social media.
Although there aren’t many actions you can take, you can reach out to companies like Clearview AI via the links on their privacy pages and request that your data be deleted. You could also write to your members of Congress and request that they pass comprehensive privacy reform legislation that protects all US citizens, instead of each state creating a patchwork of different privacy laws. Technology is rapidly outstripping our ability to regulate it with the laws we have on the books, and a holistic approach to privacy protection is needed. Reading articles like this one to stay informed, deleting your data where you don’t want it stored, and fighting for privacy rights are what we need to do to protect ourselves.
What Can I Do As A Business Owner?
Povolny also noted actions that organizations building facial recognition technology can take to reduce the likelihood of falling victim to such attacks:
“Anomaly testing, adversarial input, and more diverse training data are among the ways that vendors can improve facial recognition systems. Additionally, defense-in-depth, leveraging a second system, whether human or machine, can provide a much higher bar to exploitation than a single point of failure.”
Moving Forward
Businesses and individuals benefit from facial recognition technology through improved authentication security. Facial recognition software is widely used on mobile devices to secure the device itself and the applications on it. Security experts argue that facial recognition is stronger than most users’ poorly constructed passwords, making it an improvement in authentication and identity verification. Know, however, that hackers and security researchers alike are seeking ways to fool facial recognition tools and gain access to the accounts, devices, and applications those faces protect.
By becoming more aware of the benefits and challenges of facial recognition authentication, and of some of its law enforcement uses, you can become more secure.