The wisdom of the crowd is no longer organic. The crowd is fake.
Social media is not the digital commons we hoped it would become.
Instead, it’s a playground for manipulation, where artificial trends are being engineered to shape public opinion.
From PR firms and extremist groups to corporations and politicians, a range of actors use AI, SIM card-powered bots, and algorithmic exploitation to manufacture controversies and dominate narratives. And you can hire them on Upwork or Fiverr.
Search “growth marketing expert,” and you’ll find hundreds of people you can hire for as little as $20 to create fake social media trends.
Even more troubling is that social media platforms, fully aware of these tactics, turn a blind eye because the controversies increase user engagement and boost their profits.
Blake Lively and the Anatomy of a Smear Campaign
Blake Lively’s legal dispute with Justin Baldoni offers a clear example of how these tactics operate. Lively accused Baldoni of sexual harassment and retaliation, and her complaint alleges that a coordinated campaign to tarnish her reputation was already underway before she went public. Reports allege that Baldoni’s PR firm used AI-powered tools and fake accounts to paint her as “difficult” and unlikable, spreading selectively edited interview clips and planting stories that questioned her professionalism and character, all of it diverting attention from her allegations.
This approach, known as astroturfing, mimics grassroots movements by deploying fake social media accounts to create the illusion of widespread public sentiment. By amplifying negative posts and planting inflammatory hashtags, the campaign manipulated perceptions, making it harder for Lively’s allegations to gain traction.
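One telltale signal of this kind of astroturfing is near-identical text posted by many nominally independent accounts. The sketch below is purely illustrative — the post data, normalization rule, and three-account threshold are invented for the example, not taken from any platform’s actual detection system:

```python
from collections import defaultdict

def normalize(text):
    """Lowercase and strip punctuation so trivially edited copies match."""
    return " ".join(
        "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).split()
    )

def find_coordinated_posts(posts, min_accounts=3):
    """Group posts whose normalized text is identical; flag any message
    pushed by at least `min_accounts` distinct accounts."""
    groups = defaultdict(set)
    for account, text in posts:
        groups[normalize(text)].add(account)
    return {msg: accounts for msg, accounts in groups.items()
            if len(accounts) >= min_accounts}

# Hypothetical feed: three accounts push the same line with cosmetic edits.
posts = [
    ("acct_01", "She is SO difficult to work with!!"),
    ("acct_02", "she is so difficult to work with"),
    ("acct_03", "She is so difficult to work with."),
    ("acct_04", "Loved the movie, honestly."),
]
flagged = find_coordinated_posts(posts)
# the duplicated smear is flagged; the organic post is not
```

Real campaigns evade exact-match checks with paraphrasing, which is why researchers also look at timing, account age, and follower graphs — but the underlying idea is the same: independence should not produce identical output.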
The use of these underhanded tactics isn’t confined to Hollywood. From international conflict to corporate lobbying, manufactured trends shape public discourse across the globe. Here are some high-profile examples:
• Extremist Propaganda:
Hamas and other extremist groups use AI-driven bot farms and fake accounts to spread propaganda, incite violence, and rally supporters. During conflicts, hashtags supporting their narratives often trend globally, despite originating from coordinated bot networks. For instance, during the 2021 Israel-Gaza conflict, researchers identified thousands of fake accounts boosting hashtags like #FreePalestine and its counter-campaigns. These campaigns amplified division and escalated tensions online and offline.
• Political Elections:
In the 2016 U.S. Presidential Election, Russian operatives used thousands of fake social media accounts to push divisive narratives, influencing voter sentiment. Similarly, during Brazil’s 2018 election, bots were deployed en masse to spread disinformation about candidates, undermining democratic norms. These campaigns often rely on SIM card farms—physical racks of mobile phones activated with low-cost SIM cards—to bypass account verification measures and flood platforms with automated posts.
• Corporate Misdirection:
Oil and gas companies have long used fake accounts and bots to downplay the role of fossil fuels in climate change. A report in 2020 revealed coordinated campaigns spreading the narrative that individual actions (like reducing meat consumption) were more significant than systemic reforms in tackling climate change. These campaigns conveniently redirected blame away from industry practices.
• Big Pharma:
Pharmaceutical companies have been caught deploying similar tactics to influence public opinion about their products. For example, during the opioid crisis, Purdue Pharma’s representatives allegedly planted articles and social media posts promoting the safety of OxyContin, while downplaying its addictive risks. In recent years, anti-vaccine misinformation campaigns have also been traced back to coordinated efforts by actors seeking to profit from alternative remedies or political polarization.
Role of AI in Amplifying Manipulation
AI-powered bots are central to these campaigns. Unlike earlier, clumsier bots, modern AI tools can mimic human behavior convincingly. They generate realistic posts, engage in discussions, and even argue persuasively with real users. They are programmed to identify trending topics and infiltrate conversations, creating an outsized impact on public discourse.
For instance, during the COVID-19 pandemic, AI bots fueled debates around lockdowns and vaccines, spreading misinformation that complicated public health efforts. Platforms like Twitter and Facebook identified bot activity tied to state-sponsored campaigns from countries like Russia and China, aiming to sow discord in Western societies.
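Even convincing AI-generated text can leave a mechanical fingerprint in *when* it posts: humans post in irregular bursts, while scheduled bots fire at near-constant intervals. The heuristic below — coefficient of variation of the gaps between posts — is a toy illustration with invented timestamps and an arbitrary cutoff, not a production detector:

```python
from statistics import mean, pstdev

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between consecutive posts
    (timestamps in seconds). Values near 0 indicate metronome-like,
    bot-like timing; bursty human activity scores much higher."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mu = mean(gaps)
    return pstdev(gaps) / mu if mu else 0.0

bot_like = [0, 300, 600, 900, 1200]    # one post every 5 minutes, exactly
human_like = [0, 40, 940, 1000, 7200]  # a burst, a pause, another burst

print(interval_regularity(bot_like))    # 0.0 — perfectly regular
print(interval_regularity(human_like))  # well above 0.5 — irregular
```

Modern bot farms add random jitter precisely to defeat checks like this, which is why platforms combine many weak signals rather than relying on any single one.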
Why Social Media Platforms Allow This to Happen
Social media platforms are not unwitting victims—they are complicit enablers. Controversial content drives engagement, and engagement drives profits. Outrage and controversy are particularly effective at keeping users on platforms longer, clicking more ads, and generating more data for platforms to monetize.
While platforms occasionally crack down on bot accounts or misleading content, these efforts are often too little, too late. Consider the revelations from Facebook whistleblower Frances Haugen in 2021: the company’s internal research acknowledged that inflammatory content increased user engagement, yet leadership prioritized profits over meaningful reform.
Human Cost of Artificial Trends
The damage from these manufactured trends is profound:
• Erosion of Trust:
When fake narratives dominate, trust in institutions, public figures, and even facts erodes. For example, the anti-vaccine campaigns driven by bots undermined trust in healthcare systems, leading to preventable deaths during the pandemic.
• Polarization:
Manufactured controversies deepen societal divides. The political bot campaigns during the U.S. and Brazilian elections didn’t just influence outcomes—they left lasting scars on national unity.
• Reputational Harm:
For individuals like Blake Lively, these tactics can cause irreparable harm. A single viral narrative—no matter how false—can overshadow years of hard work, damaging careers and personal lives.
What Can Be Done?
Solving this issue requires collective action from governments, platforms, and users:
1. Regulate Platforms:
Governments must enforce stricter regulations requiring platforms to disclose the prevalence of bots and the workings of their algorithms. Transparency is key to rebuilding trust.
2. Strengthen Verification:
Platforms should invest in better user verification processes to deter SIM card farms and fake accounts. While this might inconvenience users initially, it would significantly reduce the prevalence of bot-driven manipulation.
3. Educate Users:
Public awareness campaigns can help users recognize manipulated trends and approach online narratives with greater skepticism.
4. Hold Offenders Accountable:
Whether it’s PR firms, corporations, or political operatives, those deploying these tactics should face consequences, including fines and public exposure.
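The verification point above can be made concrete. SIM card farms tend to register accounts in rapid succession from phone numbers that share a carrier prefix, so a simple velocity check on signups can surface the pattern. In this sketch the prefix length, one-hour window, and five-account threshold are all hypothetical choices for illustration:

```python
from collections import defaultdict

def flag_signup_bursts(signups, prefix_len=6, window=3600, max_per_window=5):
    """Flag phone-number prefixes that register more than `max_per_window`
    accounts within any `window`-second span. `signups` is a list of
    (phone_number, unix_timestamp) pairs."""
    by_prefix = defaultdict(list)
    for phone, ts in signups:
        by_prefix[phone[:prefix_len]].append(ts)
    flagged = set()
    for prefix, times in by_prefix.items():
        times.sort()
        for i, start in enumerate(times):
            # count signups falling inside the window opened by this one
            if sum(1 for t in times[i:] if t - start <= window) > max_per_window:
                flagged.add(prefix)
                break
    return flagged

# Hypothetical data: eight farm signups minutes apart on one prefix,
# plus two ordinary signups hours apart on another.
signups = [("+155512%04d" % i, 60 * i) for i in range(8)]
signups += [("+1447700900123", 0), ("+1447700911111", 7200)]
# only the farmed prefix is flagged
```

A check this crude would also catch legitimate bursts (a company onboarding staff, say), so in practice it would feed a review queue rather than trigger automatic bans.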
Reclaiming the Digital Commons
The manipulation of social media by AI, SIM card-driven bot networks, and bad actors threatens democracy, trust, and authentic interaction. Whether it’s a celebrity like Blake Lively fighting to protect her reputation or society grappling with divisive political narratives, the stakes are high. As long as social media platforms prioritize profit over integrity, these tactics will continue flourishing. It’s time for decisive action to restore honesty and accountability in the digital age.