Speech recognition tools are known to be imperfect systems that make occasional errors in comprehension. A new study, however, reveals that those errors aren’t so occasional for black users.
A group of researchers, mostly from Stanford University, found consistent racial gaps in comprehension by automated speech recognition systems. These platforms power not just virtual assistants but also services such as closed captioning and hands-free computing. The research was published in the journal Proceedings of the National Academy of Sciences.
The team evaluated five automated speech recognition systems from Amazon, Apple, Google, IBM, and Microsoft. The tools transcribed interviews with 42 white speakers and 73 black speakers. The results showed “substantial racial disparities,” with an average word error rate of 0.35 for black speakers, compared with 0.19 for white speakers.
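Word error rate is the standard metric for transcription accuracy: the minimum number of word substitutions, deletions, and insertions needed to turn the system’s output into the reference transcript, divided by the number of words in the reference. The following is a minimal illustrative sketch of that computation (not code from the study), so a rate of 0.35 means roughly one error for every three words spoken.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "a quick fox"))  # 0.5: one substitution, one deletion, 4 reference words
```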
“Our work illustrates the need to audit emerging machine-learning systems to ensure they are broadly inclusive,” the researchers wrote.