Low Bias, Trust, and Ethics
“Automated facial recognition abdicates a very human function, with all of its subjectivity, foibles, and partiality, to a machine. We want machine-based FR to be fairer and more objective, not hardwired with the very fallibility, unfairness, and prejudice prevalent in society.”
– Reza Rassool, CTO RealNetworks

Low Bias
Together with accuracy, speed, and compactness, a critical consideration in the development of the SAFR platform is its ability to address the inherent problems that can lead to bias.
Built on machine learning, the SAFR platform significantly reduces bias by training its algorithm on a globally diverse set of faces. This well-balanced data set includes strong representation across age, gender, skin tone, and geographic origin. SAFR also underwent rigorous testing by the National Institute of Standards and Technology (NIST), in which it was compared against other top algorithms specifically for signs of bias.
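For readers curious what a "well-balanced" data set means in practice, the sketch below shows one simple way to audit demographic representation in a labeled face dataset. It is an illustration only, not SAFR's actual tooling; the records, attribute names, and values are hypothetical.

from collections import Counter

# Hypothetical metadata records for a labeled face dataset.
# In practice these would come from the dataset's annotation files.
records = [
    {"age_band": "18-30", "gender": "female", "skin_tone": "dark",   "region": "Africa"},
    {"age_band": "31-50", "gender": "male",   "skin_tone": "light",  "region": "Europe"},
    {"age_band": "18-30", "gender": "female", "skin_tone": "medium", "region": "Asia"},
    # ... thousands more records in a real audit
]

def balance_report(records, attribute):
    """Return each attribute value's share of the dataset."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

for attribute in ("age_band", "gender", "skin_tone", "region"):
    print(attribute, balance_report(records, attribute))

A data set is considered balanced when no single value of an attribute dominates these shares, so the trained model does not see one group far more often than another.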
NIST Testing for Bias

SAFR showed the lowest bias of the 103 models tested

SAFR was measured as the least likely to return a false match based on any specific facial feature or face type (a simplified sketch of this kind of per-group comparison follows this list)

SAFR tested with an accuracy rate of 99.87% across all demographic groups
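To make the false-match finding concrete, the sketch below shows how a per-group false match rate can be computed and compared, the kind of check that bias evaluations of face recognition rely on. It is a simplified illustration, not the NIST methodology itself; the comparison records and group labels are hypothetical.

from collections import defaultdict

# Hypothetical comparison results: (group, same_person, matched_by_system).
# A false match is a comparison of two different people that the system
# nevertheless declares a match.
comparisons = [
    ("group_a", False, False),
    ("group_a", False, True),   # false match
    ("group_b", False, False),
    ("group_b", True,  True),
    # ... millions of impostor and genuine pairs in a real evaluation
]

def false_match_rate_by_group(comparisons):
    """Compute the false match rate separately for each demographic group."""
    impostor = defaultdict(int)     # different-person pairs per group
    false_match = defaultdict(int)  # of those, pairs the system matched
    for group, same_person, matched in comparisons:
        if not same_person:
            impostor[group] += 1
            if matched:
                false_match[group] += 1
    return {g: false_match[g] / impostor[g] for g in impostor if impostor[g]}

print(false_match_rate_by_group(comparisons))

A low-bias system keeps these per-group rates both low and close to one another; a large gap between groups is the signature of demographic bias.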
Guiding Principles
The SAFR team abides by a set of guiding principles as we design, develop, and distribute computer vision AI programs. These principles form our code of conduct: they value both our customers and the public, recognize the inherent dignity and equal worth of every person, help us earn and keep trust, make it easy for customers to protect data, and push us to take reasonable steps to work with trustworthy customers. Learn more about our Guiding Principles.