YouTube intros new AI governance policies. AI helps enforce them.

“AI will introduce new risks and will require new approaches.” — Jennifer Flannery O’Connor and Emily Moxley, Vice Presidents, Product Management, YouTube

 

YouTube has introduced new policies governing how AI-produced videos are presented. In a blog post authored by company executives, two new actions are laid out:

  • Labeling: YouTube will label AI content as “altered or synthetic.” That label could appear in the description panel or, for very sensitive topics, on the video player itself.
  • Removal: This will happen when “a label alone may not be enough to mitigate the risk of harm.” That is an editorial gray area, of course, and the announcement offers one illustration: “For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers.”

Inevitably, making executive judgment calls on the world’s largest crowdsourced content site invites controversy. YouTube acknowledges that the new policies resulted from “continuous feedback from our community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them.”

In addition to new editorial policies and governance, YouTube will set up a user complaint system: “In the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process.”

The authors of this manifesto clarify that the ultimate decision rests with YouTube: “Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests.” Parody and satire, for example, could allow controversial AI content to remain. Public figures are vulnerable under this policy; YouTube says they face a “higher bar” for removal.

Interestingly, and one might say ironically, AI will power content moderation about AI … at least to some extent. AI can start a review process, which then gets kicked over to the 20,000-strong team of human reviewers for a final decision. “AI is continuously increasing both the speed and accuracy of our content moderation systems,” the announcement says.
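For the technically curious, that kind of human-in-the-loop triage can be sketched in a few lines: an automated classifier scores a video, and anything above a threshold is escalated to a human queue. This is a minimal illustration only; the Video class, triage function, and threshold are hypothetical, not YouTube’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    risk_score: float  # produced by an automated classifier, 0.0 to 1.0

def triage(video: Video, human_queue: list, flag_threshold: float = 0.7) -> str:
    """AI starts the review; humans make the final call on flagged videos."""
    if video.risk_score >= flag_threshold:
        human_queue.append(video)      # escalate to the human review team
        return "pending_human_review"
    return "auto_cleared"              # low risk: no human review triggered

# A borderline video is escalated; a benign one is cleared automatically.
queue: list = []
print(triage(Video("abc123", 0.91), queue))  # pending_human_review
print(triage(Video("xyz789", 0.12), queue))  # auto_cleared
```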

AI review of content is helpful in another way, beyond sheer scale: “When new threats emerge, our systems have relatively little context to understand and identify them at scale. But generative AI helps us rapidly expand the set of information our AI classifiers are trained on, meaning we’re able to identify and catch this content much more quickly.”
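What’s described here is, in essence, synthetic data augmentation: generated examples of an emerging threat are added to a classifier’s training set so it learns the new pattern quickly. Here is a minimal sketch of the general technique using scikit-learn; the generate_variants() helper and the toy data are hypothetical stand-ins, and none of this is YouTube’s actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny labeled set: 1 = violates policy, 0 = benign.
texts  = ["buy cheap followers now", "my vacation vlog", "a great recipe video"]
labels = [1, 0, 0]

def generate_variants(example: str) -> list:
    """Stand-in for a generative model paraphrasing a new threat pattern.
    In practice this would call an LLM; here the variants are hand-written."""
    return [example + " today only", "limited offer: " + example]

# Expand the training set with synthetic positives for the emerging threat.
new_threat = "buy cheap followers now"
synthetic = generate_variants(new_threat)
texts += synthetic
labels += [1] * len(synthetic)

# Retrain the classifier on the augmented data.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["buy cheap followers today only"]))  # should flag the variant
```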

 

“We’re still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI.”


Brad Hill