
The Bias in Face Recognition and What We Do About It

All facial recognition algorithms vary in their performance across different face types. In other words, recognition accuracy depends on the gender, age group, skin color, viewing angle, and lighting conditions of the person being identified. The same is true of the human brain.

What is causing the bias?

  • Training Datasets: the training data may contain insufficient examples of faces of ethnic minorities, or of younger and older individuals, because of population distribution or limited photo availability in the public domain.
  • Appearance Variability: there is a "natural" bias in how people change their appearance. For example, the accuracy in recognizing females is lower than for males because, on average, females apply makeup and change hairstyles more frequently. Males and females also age differently.
  • Photo Quality: the performance depends on the quality of the original photo in the watchlist database and of the CCTV camera footage. For example, a low-quality photo added to a watchlist may cause false alerts, potentially leading to false arrests and unnecessary encounters with police.
  • Imaging Physics: black skin reflects fewer photons than white skin, so the resulting photo of a Black person may lack sufficient dynamic range. In poor lighting conditions, the face detector may fail to detect a Black person's face at all (see the quality-check sketch after this list).
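As an illustration of the photo-quality and imaging-physics points above, the following minimal Python sketch checks whether a face crop is bright and contrasty enough before it is added to a watchlist. It uses OpenCV and NumPy; the thresholds and file name are illustrative assumptions, not IREX's actual enrollment criteria.

```python
import cv2
import numpy as np

# Illustrative thresholds -- not IREX's actual enrollment criteria.
MIN_MEAN_BRIGHTNESS = 60.0   # reject very dark crops (0-255 grayscale)
MAX_MEAN_BRIGHTNESS = 200.0  # reject overexposed crops
MIN_DYNAMIC_RANGE = 50.0     # 5th-to-95th percentile spread

def enrollment_quality_ok(face_crop_bgr: np.ndarray) -> bool:
    """Return True if a face crop looks bright and contrasty enough
    to be enrolled in a watchlist without raising the false-alert risk."""
    gray = cv2.cvtColor(face_crop_bgr, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(gray.mean())
    p5, p95 = np.percentile(gray, [5, 95])
    dynamic_range = float(p95 - p5)
    return (MIN_MEAN_BRIGHTNESS <= mean_brightness <= MAX_MEAN_BRIGHTNESS
            and dynamic_range >= MIN_DYNAMIC_RANGE)

if __name__ == "__main__":
    crop = cv2.imread("candidate_face.jpg")  # hypothetical file name
    if crop is None or not enrollment_quality_ok(crop):
        print("Rejected: photo quality too low for watchlist enrollment.")
    else:
        print("Accepted for enrollment.")
```

A check of this kind flags photos whose dynamic range is too narrow to support reliable matching, regardless of the subject's skin tone.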

To learn more about different types of AI bias, read A Draft Proposal for Identifying and Managing Bias in Artificial Intelligence by NIST (page 14).

What does IREX do about the face recognition bias?

The IREX service is engineered with system constraints and safeguards that address the bias issue on several fronts:
  • Bias Awareness: We ensure our clients are aware of the bias by sharing IREX Ethical AI Best Practices and by providing training.
  • User Warning: the IREX interface gives authorized users a clear warning about potential bias in recognition accuracy. For example, IREX uses three colors -- green, yellow, and red -- to indicate the confidence level, and it automatically recognizes demographic groups with an increased bias risk (e.g., Black females); see the first sketch after this list.
  • Bias Evaluation: IREX regularly submits its algorithms to the Face Recognition Vendor Test (FRVT) by the National Institute of Standards and Technology (NIST) to obtain an independent evaluation of their biases. The NIST reports are publicly available free of charge.
  • Dataset Balancing: the IREX AI team continuously optimizes the training dataset to compensate for performance bias across different face types; see the second sketch after this list.
  • Training Algorithm Optimization: IREX continuously improves its training algorithms to minimize bias caused by the datasets.
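To make the three-color confidence indicator concrete, here is a minimal Python sketch of how a match score could be mapped to green, yellow, or red, with a stricter bar for demographic groups that carry an elevated bias risk. The threshold values and function names are illustrative assumptions, not IREX's actual configuration.

```python
from enum import Enum

class ConfidenceBand(Enum):
    GREEN = "high confidence"
    YELLOW = "review recommended"
    RED = "low confidence"

# Illustrative thresholds -- IREX's real values are not public.
GREEN_THRESHOLD = 0.90
YELLOW_THRESHOLD = 0.75
BIAS_RISK_PENALTY = 0.05  # stricter bar for groups with elevated bias risk

def confidence_band(match_score: float, elevated_bias_risk: bool = False) -> ConfidenceBand:
    """Map a face-match score in [0, 1] to the color band shown to the operator."""
    green_cut = GREEN_THRESHOLD + (BIAS_RISK_PENALTY if elevated_bias_risk else 0.0)
    if match_score >= green_cut:
        return ConfidenceBand.GREEN
    if match_score >= YELLOW_THRESHOLD:
        return ConfidenceBand.YELLOW
    return ConfidenceBand.RED

print(confidence_band(0.92).name)                           # GREEN
print(confidence_band(0.92, elevated_bias_risk=True).name)  # YELLOW
```

The key design point is that the same raw score can yield a more cautious color when the system knows the match falls into a higher-risk demographic group, prompting the operator to verify before acting.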
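The dataset-balancing point can be illustrated with a common, generic technique: weighting training samples inversely to the size of their demographic group, so that under-represented groups contribute proportionally more to the training loss. This is a sketch of the general idea, not a description of IREX's actual training pipeline.

```python
from collections import Counter

def inverse_frequency_weights(group_labels: list[str]) -> list[float]:
    """Give each training sample a weight inversely proportional to the
    size of its demographic group, normalized so the weights average 1.0."""
    counts = Counter(group_labels)
    raw = [1.0 / counts[label] for label in group_labels]
    scale = len(raw) / sum(raw)
    return [w * scale for w in raw]

# Toy example: one group heavily over-represented in the training data.
labels = ["group_a"] * 8 + ["group_b"] * 2
print(inverse_frequency_weights(labels))
# group_a samples get 0.625 each, group_b samples get 2.5 each
```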


Watch the IREX Demo from the AI Ethics Webinar