Voice assistants have been a growing topic of interest for the audio industry, and businesses are still finding ways to push the technology further. Last month, Spotify received a patent for a voice assistant that can respond to human emotions.
The patent was originally filed in 2018. The concept is that the AI would register the tone, cadence, volume, and pitch of a voice and turn those signals into a reading of emotion. If a person sounds sad, it could offer encouragement to cheer up, acknowledge the sadness, or offer tough love. It could also adjust music playback, skipping songs that spark anger, sadness, or other negative feelings in the user.
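The patent doesn't disclose an algorithm, but the basic pipeline it describes (extract acoustic features from a voice sample, then map them to an emotion label) can be sketched in a few lines. Everything below is illustrative: the feature choices (RMS energy as a stand-in for volume, zero-crossing rate as a crude stand-in for pitch), the function names, and the thresholds are all hypothetical, not taken from Spotify's filing; a real system would use a trained model rather than hand-set cutoffs.

```python
import math

def extract_features(samples):
    """Compute two simple acoustic features from a list of audio samples.

    These are illustrative proxies only: RMS energy stands in for volume,
    and zero-crossing rate stands in for pitch.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    zcr = crossings / len(samples)
    return {"rms": rms, "zcr": zcr}

def classify_emotion(features):
    """Map features to a coarse emotion label using made-up thresholds."""
    if features["rms"] < 0.1 and features["zcr"] < 0.05:
        return "sad"      # quiet, low-pitched speech
    if features["rms"] > 0.5:
        return "angry"    # loud speech
    return "neutral"

# Synthetic "voice": one second of a quiet, low-frequency sine wave
# at a 16 kHz sample rate, standing in for subdued speech.
quiet_low = [0.05 * math.sin(2 * math.pi * 100 * t / 16000)
             for t in range(16000)]
print(classify_emotion(extract_features(quiet_low)))  # → sad
```

In the patented scenario, a label like "sad" would then drive the assistant's response, whether that's a spoken acknowledgment or a change to the playback queue.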
“It’s an emerging field; I don’t think there is something widely used in the market today,” said Rahul Telang, a professor of Information Systems and Management at Carnegie Mellon University’s Heinz College, of speech emotion recognition. “Two or three years down the line, it could become mainstream.”
Telang also noted that it’s unclear at this stage if or how Spotify’s analysis of emotions might be connected to a user’s data. Feelings could add a new layer of privacy concerns for cautious listeners.
“If the user worries about the information being shared, they might be spooked rather than fall in love with [Spotify],” Telang said. “It’s a double-edged sword.”
Spotify also isn’t the only company exploring emotion in voice AI. Amazon and Google have published patent filings in a similar vein in recent years.