A team of researchers at MIT’s Media Lab has confirmed a troubling finding: facial recognition technology is subject to biases rooted in the data sets it is trained on, and the software’s algorithms perform reliably only under the specific conditions they were designed for.
“Joy Buolamwini, a researcher at the MIT Media Lab, recently built a dataset of 1,270 faces, using the faces of politicians, selected based on their country’s rankings for gender parity,” notes The Verge. Buolamwini then tested the accuracy of three facial recognition systems, built by Microsoft, IBM, and Megvii of China.
The results, originally published in the New York Times, revealed significant inaccuracies in gender identification that varied with a person’s skin color. “Gender was misidentified in less than one percent of lighter-skinned males; in up to seven percent of lighter-skinned females; up to 12 percent of darker-skinned males; and up to 35 percent in darker-skinned females,” reports The Verge.
“Overall, male subjects were more accurately classified than female subjects replicating previous findings (Ngan et al., 2015), and lighter subjects were more accurately classified than darker individuals,” Buolamwini wrote in a paper about her findings, which was co-authored by Timnit Gebru, a Microsoft researcher. “An intersectional breakdown reveals that all classifiers performed worst on darker female subjects.”
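The “intersectional breakdown” Buolamwini describes can be illustrated with a short sketch: given classifier predictions and true labels tagged by skin type and gender, tally the misclassification rate for each subgroup rather than only overall. The records below are invented purely for illustration and do not reproduce the study’s data or numbers.

```python
from collections import defaultdict

# Hypothetical records: (skin_type, gender, true_label, predicted_label).
# These values are invented for illustration only.
records = [
    ("lighter", "male",   "male",   "male"),
    ("lighter", "female", "female", "female"),
    ("darker",  "male",   "male",   "male"),
    ("darker",  "female", "female", "male"),    # a misclassification
    ("darker",  "female", "female", "female"),
    ("lighter", "male",   "male",   "male"),
]

# Tally errors and totals per (skin_type, gender) subgroup.
errors = defaultdict(int)
totals = defaultdict(int)
for skin, gender, truth, pred in records:
    key = (skin, gender)
    totals[key] += 1
    if truth != pred:
        errors[key] += 1

# Misclassification rate per subgroup: the intersectional breakdown.
for key in sorted(totals):
    rate = errors[key] / totals[key]
    print(f"{key[0]:7s} {key[1]:6s}: {rate:.0%} error")
```

The point of grouping by the pair (skin type, gender) rather than by either attribute alone is exactly what the paper highlights: an overall or single-attribute error rate can hide a much worse rate for one intersectional subgroup.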
This discovery isn’t the first time facial recognition technology has proven inaccurate or shockingly biased. A growing body of evidence points to the need for diverse data sets, and for diversity among the people who build these systems. Back in 2015, a software engineer accused Google of accidentally identifying his black friends as gorillas. The news immediately went viral and did not work out in Google’s favor.