Meta has announced new enforcement measures targeting accounts that repeatedly share unoriginal content on Facebook, following a similar move from YouTube just a few days ago. The social media giant stated this week that in 2025 alone it has already removed approximately 10 million accounts for impersonating popular creators, and has taken action against 500,000 accounts engaged in spam tactics or fake engagement practices.
The new approach aims to restrict accounts that frequently repost others’ text, images, or videos without adding meaningful contributions. Such accounts will face reduced content distribution and temporary suspension from Facebook’s monetization programs. Meta also indicated that when duplicate videos are detected, they will be downranked so that the original creator’s version is prioritized.
While Meta emphasized that activities like reaction videos or trend participation remain unaffected, it warned that simply republishing others’ work without commentary or context could trigger penalties. The company is also trialing a feature that links duplicate content back to the original post, directing users to the source.
These updates coincide with growing criticism of Meta’s automated enforcement systems, which have reportedly resulted in wrongful account bans and left affected users with limited recourse. A petition with nearly 30,000 signatures calls on Meta to address these issues, though the company has yet to respond publicly.
The backdrop to this policy change is the rise of AI-generated content, often termed “AI slop,” which has led to a proliferation of low-quality videos stitched together from stock imagery, AI narration, and other recycled elements. While Meta did not explicitly reference AI-generated media in its policy update, its advice to avoid “stitching together clips” and focus on “authentic storytelling” suggests the company is factoring AI content into its enforcement considerations.
Meta said these measures will roll out gradually, giving creators time to adapt.