Human review of voice assistant recordings has been a hot topic of late, and companies are already taking action to improve privacy for their users.
Apple announced that it will globally suspend its grading program, in which contractors conduct quality control reviews of recordings made via Siri. The company was the subject of a report by The Guardian, which found that those contractors were hearing personal data, identifying information, and accidentally recorded conversations in the audio snippets they reviewed.
In addition to reassessing its own processes, Apple said it will issue a software update allowing users to choose whether they participate in the grading process.
Google faced a similar blow-up after a whistleblower contacted a Dutch broadcaster and described comparable conditions for the people reviewing the accuracy of Google Assistant recordings. Although Google also decided to stop its human review of audio recordings in July after it learned of the data leak, it is suspending the practice only in Europe.
That action came after a privacy watchdog in Germany told the tech giant it would begin an “urgency procedure” in response to the reports under Article 66 of the General Data Protection Regulation. This would allow the watchdog to order data processing to cease if it determines that there is “an urgent need to act in order to protect the rights and freedoms of data subjects.”
How the tech companies handle this issue will likely have a big impact on how much consumers continue to trust voice AI platforms. If both businesses and regulators make strong moves to protect privacy, it could usher in a new phase of increased adoption of these services. Whether the voice assistants can improve enough to reduce accidental activations is another question that remains unanswered.