Meta tightens Facebook rules on AI slop

Meta has announced new anti-impersonation tools and stricter originality rules as Facebook faces complaints over low-quality AI-generated content.
Meta Platforms has rolled out new tools to detect impersonation on Facebook and updated its creator guidelines to better define what counts as original content, as the company responds to growing criticism that the platform has become flooded with low-quality AI-generated posts.
The changes mark Meta’s latest effort to protect creators and preserve Facebook’s appeal as a platform for original publishing. The company said the move follows widespread complaints that spammy reposts, impersonator accounts, and so-called AI slop were weakening content quality and making it harder for original creators to stand out and earn money.
Meta said it began cracking down last year on unoriginal material, including repeated reuse of another person’s photos, videos, or text. That effort was aimed at elevating original creator content in Facebook feeds and pushing back against low-value posts that had hurt the platform’s reputation.
The company said those earlier steps produced measurable gains. Meta said views of original content and time spent watching original content on Facebook approximately doubled in the second half of 2025 from the same period a year earlier. It also said it removed 20 million impersonator accounts last year and recorded a 33% decline in impersonation reports involving large creators.
As part of the latest update, Meta said it is testing enhancements to its content protection tools. The tools let creators take action when reposted copies of their reels are detected across Meta's platforms, flagging the impersonated content through a central dashboard and requesting that it be addressed.
Meta said an upcoming update will simplify the process further by allowing creators to submit reports in one place. The company is betting that a faster and more centralized reporting system will make it easier for creators to protect their work before copied material spreads widely.
Still, the current system has limits. Meta said the tool is focused on matching duplicate content and does not yet address unauthorized use of a creator’s likeness. That leaves a gap in protection at a time when identity-based misuse and AI-generated impersonation remain major concerns across social platforms.
Meta is not alone in facing those challenges. The company said other technology platforms are also grappling with the impact of AI tools on their communities. This week, YouTube announced it would expand its AI deepfake detection tools to cover politicians, public figures, and journalists, underscoring broader industry pressure to respond to synthetic and misleading content.
Meta also used Friday’s announcement to sharpen Facebook’s definition of originality. Under the updated guidance, original content includes material filmed or produced directly by a creator, as well as reels that remix other content or use overlays to add something new, such as analysis, discussion, or fresh information.
By contrast, content that makes only minor edits to another creator’s work will be treated as unoriginal and deprioritized. Meta said that includes straightforward re-uploads and low-value changes such as adding borders or captions without meaningfully transforming the material.
The changes reflect a broader effort by Meta to draw a clearer line between creative reuse and simple duplication. For Facebook, that distinction could prove critical. If AI slop and recycled posts continue to overwhelm original voices, creators may have less reason to publish on the platform or build businesses there.
Meta’s latest update suggests the company sees content quality as central to Facebook’s future. By tightening enforcement against impersonation and setting firmer rules around originality, Meta is trying to reassure creators that Facebook remains a place where original work can still compete against AI slop.
