LinkedIn has formally expanded its user policy to explicitly discourage inauthentic behavior – such as engagement pods and automated commenting – by limiting visibility and reducing reach for posts featuring excessive or tool-generated activity. The platform now states: “If we detect excessive comment creation or use of an automation tool, we may limit the visibility of those comments.” This update marks a firm step toward addressing widespread concerns over fabricated engagement tactics – and while it’s a small change, it’s also an official one.
The change comes amid growing criticism that engagement pods – groups of coordinated users boosting each other’s content for algorithmic gain – and AI-driven commenting now distort genuine interaction. LinkedIn confirms it has been actively reducing the reach of such activity when detected.
While LinkedIn itself offers and uses AI tools – namely the writing assistant that never really caught on and the hiring tools that somewhat did – cracking down on this content only makes sense. More and more platforms are penalizing this kind of automation, forcing brands and apps to prioritize content quality, contextual relevance, and audience trust. Engagement pods may offer quick visibility, but platforms are pivoting toward rewarding substance over semblance. Less than a week ago, TikTok updated its guidelines to better handle AI content and the legal responsibility of the people who create it, a change that other platforms are likely to follow relatively soon.
Given the massive amount of AI content and black-hat SEO tactics out there, and the potential for misuse (whether for small things like engagement boosting or for genuine crimes and misinformation campaigns), it's likely that LinkedIn will refine its rules even further as it gets a better handle on these new trends and techniques.