Addressing bias in modern AI facial recognition systems

Patrica Raffaele

Feb 18, 2026

Addressing a critical challenge of bias in modern AI systems, Jema David Ndibwile, an assistant teaching professor at Carnegie Mellon University Africa (CMU-Africa), along with two research assistants, recently published a paper focused on ensuring that facial recognition security technologies perform reliably and equitably across diverse populations.

The paper, “Fairness-Aware Face Presentation Attack Detection Using Local Binary Patterns: Bridging Skin Tone Bias in Biometric Systems,” was published in the Journal of Cybersecurity and Privacy.

The work focuses on Presentation Attack Detection (PAD) for facial recognition in applications such as mobile phone authentication, online services, banking, travel, border security, and military security, among others. PAD is a security layer that protects face recognition systems from spoofing attacks, such as printed photos or replayed videos.
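Conceptually, PAD acts as a gate in front of the face matcher: a capture is first checked for liveness, and only genuine faces proceed to recognition. A minimal sketch with placeholder decision functions (the function names and toy inputs here are illustrative, not taken from the paper):

```python
# Toy stand-ins so the sketch runs end to end; a deployed system would
# use trained models for both decisions.
def looks_live(capture):
    # PAD decision: reject printed photos and replayed videos
    return capture.get("medium") == "live"

def matches_enrolled(capture, enrolled_identity):
    # Face-matcher decision: compare the capture to the enrolled template
    return capture.get("identity") == enrolled_identity

def authenticate(capture, enrolled_identity):
    """PAD gates recognition: spoofs are rejected before matching runs."""
    if not looks_live(capture):
        return "rejected: presentation attack suspected"
    if not matches_enrolled(capture, enrolled_identity):
        return "rejected: face does not match"
    return "accepted"

print(authenticate({"medium": "printed photo", "identity": "alice"}, "alice"))
# rejected: presentation attack suspected
print(authenticate({"medium": "live", "identity": "alice"}, "alice"))
# accepted
```

The ordering matters: because the liveness check runs first, a stolen photo of an enrolled user never reaches the matcher at all.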

The research demonstrates how fairness can be integrated into the security layer without sacrificing performance. Ndibwile noted that a key motivation for this research is that AI systems deployed globally often exhibit uneven behavior across regions and demographics, resulting in bias. Facial recognition systems are often built in the US, Asia, or Europe using data sets drawn largely from people with lighter skin, then deployed in Africa.

“These systems don’t perform well for authentication because of differences in skin tone that impact how facial recognition systems work for African faces when detecting attacks,” he explained.

The team, including Ntung Ngela Landon (MSIT ’26) and Floride Tuyisenge (MS ECE ’26), began with a literature review and a search for a data set on which a model could be trained, validated, and tested. The team worked with a large data set from the Chinese Academy of Sciences.

Tuyisenge explained that the research then focused on a simple algorithm, based on Local Binary Patterns, that could detect “fake” faces in 2D attacks. She focused on collecting, cleaning, and preparing the data, while Landon focused on pre-processing and using the data to train the model. They worked together on analyzing and writing up the results.
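Local Binary Patterns (LBP), named in the paper's title, is a classic lightweight texture descriptor: each pixel is encoded by comparing it with its neighbors, and the histogram of those codes becomes the feature a classifier sees. A minimal NumPy sketch of the basic 3x3 variant (the neighborhood size and binning here are illustrative assumptions, not details from the paper):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 Local Binary Pattern: each interior pixel becomes an
    8-bit code, one bit per neighbor that is >= the center pixel."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # eight neighbor offsets, clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes: the texture feature that a
    lightweight classifier (e.g. an SVM) can use to tell real skin
    texture from printed or replayed spoofs."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

# Toy usage on a random stand-in for a grayscale face crop.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
features = lbp_histogram(face)
print(features.shape)  # (256,)
```

Because the feature is just a 256-bin histogram fed to a small classifier, the approach is orders of magnitude cheaper than a deep network, which is consistent with the cost claim quoted below.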

“In this research, we achieved what we set out to do: We eliminated the bias,” Ndibwile said. “There’s no statistically significant difference between ethnic groups in terms of efficiency. And it is thousands of times smaller and cheaper than modern deep learning systems.” The system adapts image brightness and contrast so that features of darker skin tones remain clearly visible, which improves the accuracy of facial recognition.
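One standard way to adapt brightness and contrast is histogram equalization, which stretches an under-exposed face crop across the full intensity range before any texture analysis runs. The paper's exact normalization step may differ, so treat this NumPy version as an illustrative sketch:

```python
import numpy as np

def equalize(gray):
    """Histogram equalization for an 8-bit grayscale image.

    Each pixel intensity is remapped through the image's own cumulative
    distribution, so a dark, low-contrast capture spreads out over the
    full [0, 255] range. Assumes a non-constant image.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # cumulative count at the darkest used intensity
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]  # look up each pixel's new intensity

# A synthetic dark capture: intensities only span 0-63.
rng = np.random.default_rng(1)
dark = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)
bright = equalize(dark)
print(bright.min(), bright.max())  # 0 255
```

After equalization the darkest used intensity maps to 0 and the brightest to 255, so the same downstream texture features are computed on comparable contrast regardless of how light or dark the original capture was.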

The study contributes to CMU’s broader leadership in responsible, inclusive, and globally aware AI, highlighting how technical design choices can directly influence trust, safety, and ethical outcomes in real-world systems.


Tuyisenge said she drew on her focus in artificial intelligence and her interest in security for the project, and Landon credits her courses in cybersecurity, information security, ethical hacking, and security operations with providing the background to work on it.

Both appreciate that Ndibwile gave them the opportunity to work on the initial research and to continue it. The team is growing this semester as the research expands to 3D technology and new use cases.
