Metrics that matter – How to evaluate identity verification technology

Face biometrics is rapidly gaining acceptance among consumers and businesses alike as a convenient and secure method of identity verification. The technology closes security gaps that are frequently exploited in solutions relying on something that can be lost or stolen, such as a password or the answer to a "secret" question, and defends against newer attacks such as SIM card fraud. Simply showing one's face for a selfie is also far less frustrating for users.

Face recognition technology has advanced dramatically in recent years thanks to advances in artificial intelligence, the widespread availability of high-quality yet inexpensive cameras, and the resulting creation of large amounts of publicly available data for training face recognition algorithms. Continuous improvements in computational power, including the availability of graphics processing units (GPUs), have made it possible to apply sophisticated machine learning models such as convolutional and deep neural networks to these systems and to run them on everyday devices. In addition to being highly accurate, today's algorithms are fast enough for large commercial authentication systems, even those with many millions of users. By strengthening security and improving the user experience, face biometrics has found its way into use cases ranging from unlocking mobile devices, to securing financial transactions and health records, to improving digital onboarding processes.
Real-life applications of facial biometrics for authentication raise the question of security. If a potential fraudster can easily access a representation of a person's face and present it as their own, can we rely on this method of authentication? For face biometrics to truly gain mainstream adoption as a better mode of authentication, it is essential to distinguish between a genuine (bona fide) live face and an attempt to spoof the system with an artificial representation of one. Automated presentation attack detection, and specifically liveness detection, has therefore become a necessary component of any face-based authentication system in which a trusted human is not supervising the authentication attempt.
Facial recognition works by comparing mapped features of an enrolled user, such as the distance between the eyes or the length of the jawline, to a biometric template in order to verify identity. It examines the image it sees and takes measurements. What it does not do is recognize the physical presence of a user versus a high-quality print or digital representation: a photograph or an image on a screen works just as well as the actual person. When face biometrics is used for authentication, a bad actor can exploit this limitation to trick the system into thinking it sees the authorized user. This is called a presentation attack, and such attacks have become easier for fraudsters thanks to ready online access to high-definition photos, screen images, masks, and videos that can be used to spoof a facial recognition system. Liveness detection works alongside a biometric system to measure and analyze physical characteristics and reactions in order to determine whether a biometric sample is being captured from a living subject who is present at the point of capture. The technology does not perform any matching; instead, it detects presentation attacks.
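The comparison step described above can be sketched in a few lines. The example below is a minimal illustration, not any vendor's actual matcher: it assumes the "mapped features" have already been extracted into fixed-length embedding vectors, and it uses cosine similarity with a tunable acceptance threshold (the function names and the threshold value are illustrative assumptions).

```python
import numpy as np

def match_score(probe: np.ndarray, template: np.ndarray) -> float:
    """Cosine similarity between a probe embedding and an enrolled template.

    Returns a value in [-1, 1]; higher means the faces measure more alike.
    """
    return float(np.dot(probe, template) /
                 (np.linalg.norm(probe) * np.linalg.norm(template)))

def verify(probe: np.ndarray, template: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept the claim if similarity meets the threshold.

    The threshold trades off false accepts against false rejects;
    0.6 here is an arbitrary placeholder, not a recommended value.
    """
    return match_score(probe, template) >= threshold

# Illustrative embeddings: an identical probe matches, an unrelated one does not.
enrolled = np.array([1.0, 0.0, 0.0])
same_face = np.array([1.0, 0.0, 0.0])
other_face = np.array([0.0, 1.0, 0.0])
print(verify(same_face, enrolled))    # True
print(verify(other_face, enrolled))   # False
```

Note that nothing in this comparison asks whether the probe came from a live person; a spoofed photo that produces a similar embedding would pass just as easily, which is exactly the gap liveness detection is meant to close.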