YouTube Will Soon Require "Altered or Synthetic" Content Disclosures

The household name in video is making AI-generated content easier to spot on the platform.

If you're using AI to lend an artificial hand in your YouTube videos — especially if those videos have realistic imagery — you’ll soon have to disclose that detail per the company’s new policies on AI content. YouTube visitors can also expect to see disclaimers on these videos, informing them that content is “altered or synthetic.”

The Google-owned video titan recently announced the new guidelines and a commitment to curbing harmful misinformation in a new blog post from Jennifer Flannery O’Connor and Emily Moxley, YouTube vice presidents of product management. 

“We have long-standing policies [its community guidelines] that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm,” the VPs wrote. “However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.”

Over the coming months, creators will be prompted to disclose whether their content contains realistically altered or synthetic material made using AI tools.

“This could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do,” Flannery O’Connor and Moxley wrote.

The distinction is important because these policies apply only to realistic content that someone could interpret as true. The disclosure doesn’t apply to all work made with AI (i.e., work that’s obviously synthetic or unrealistic). However, any synthetic media, labeled or not, could still be removed if it violates the community guidelines, for example by realistically depicting violence to shock or disgust viewers.

Creators who consistently fail to disclose that their videos were realistically altered or synthetically made could have their content pulled or their accounts suspended from the YouTube Partner Program, or face other penalties.

The blog post then details what the new in-video content disclosures will look like.

“We’ll inform viewers that content may be altered or synthetic in two ways,” it says. “A new label will be added to the description panel indicating that some of the content was altered or synthetic. And for certain types of content about sensitive topics [like elections, ongoing conflicts and public health crises, or public officials], we’ll apply a more prominent label to the video player.”

In September, YouTube announced its own generative AI creation tool called Dream Screen. It will allow users to create AI-generated video or image backgrounds via text prompts for YouTube Shorts. The new synthetic content rules will apply to that feature as well. 

“Content created by YouTube’s generative AI products and features will be clearly labeled as altered or synthetic,” the blog post reads. 

Using someone’s likeness in realistic content that’s unapproved, misleading, and potentially harmful — aka a deepfake — is a growing concern with the rise of generative AI. Deepfakes are already affecting people’s day-to-day lives, and they have the potential to do widespread harm as the U.S. heads into an election year.

YouTube’s latest policy also adds protections for people whose likenesses are used improperly.

“In the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process,” the VPs wrote. 

However, before removing content, the company will consider a number of factors such as “whether the content is parody or satire [the lines of which are blurred and will need defining], whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.” 

YouTube will also give its music partners the ability to request the removal of AI-generated music that mimics an artist's voice. The decision to approve a removal request will hinge on several factors, including whether the content serves news reporting, analysis or critique of the synthetic vocals. This new initiative is a first step in the company’s efforts to compensate music rights holders for AI-generated tunes.

Lastly, content moderation on the platform is getting an AI boost, according to the blog post. Content moderators for YouTube and other platforms like Meta are notoriously and unfortunately exposed to the worst humanity has to offer in the course of their duties. YouTube wants to use AI not only to reduce harmful content on the platform as a whole but also to limit what its human reviewers are exposed to.

“Generative AI helps us rapidly expand the set of information our AI classifiers are trained on, meaning we’re able to identify and catch this content much more quickly,” the post said. “Improved speed and accuracy of our systems also allows us to reduce the amount of harmful content human reviewers are exposed to.”

Consumers, companies and providers are already calling for more transparency in AI, and the importance of that transparency will only grow as AI tools become more powerful and widespread. YouTube’s efforts to improve transparency and accountability are laudable and will hopefully reduce the damaging impacts of deepfakes and misinformation.