The 2024 election cycle is proving far more treacherous than previous years for disinformation researchers. Individuals who dedicate their careers to combating online falsehoods and propaganda are facing unprecedented levels of harassment, legal challenges, and even death threats. This escalating hostility is chilling the very field responsible for safeguarding the integrity of the internet and the democratic process, leaving many researchers fearful for their safety and the future of their work.
Key Takeaways:
- Escalating Threats: Disinformation researchers are facing a surge in harassment, lawsuits, and even death threats, forcing many to alter their lives and conceal their activities.
- Legal Warfare: Powerful individuals and entities are using litigation to stifle research and silence critics, creating a chilling effect across the field.
- Data Access Restricted: Major tech platforms are limiting or charging exorbitant fees for data access, hindering researchers’ ability to effectively monitor and analyze misinformation.
- AI’s Amplifying Effect: The rise of AI is exacerbating the problem, creating more sophisticated and difficult-to-detect misinformation.
- The Fight Continues: Despite the challenges, researchers remain committed to their work, adapting their strategies and seeking support to continue combating disinformation.
The Rising Tide of Hostility
For the past decade, disinformation researchers like Nina Jankowicz, a vice president at the Centre for Information Resilience, have been crucial in identifying and exposing threats like Russian propaganda, COVID-19 conspiracy theories, and false voter fraud accusations. Their work has, in previous election cycles, earned praise from lawmakers and tech executives alike. However, the 2024 election has marked a dramatic shift. Jankowicz herself, after her brief stint on the White House’s Disinformation Governance Board, has become the target of government inquiries, lawsuits, and a relentless barrage of online harassment, including death threats. This is not an isolated case. Many researchers across the country share similar experiences. “I don’t want somebody who wishes harm to show up,” Jankowicz said, describing the changes she’s had to make to her daily life to protect herself and her family.
The Chilling Effect
The constant attacks, legal battles, and accompanying financial strain have created a palpable fear among researchers. Alex Abdo, litigation director of the Knight First Amendment Institute at Columbia University, calls this a “chill in the community,” highlighting the increasing risks inherent in the profession. Many researchers who spoke to CNBC for this article declined to be named, fearing further harassment and public scrutiny. The overwhelming consensus points to a far more dangerous environment than in previous election cycles. This dangerous environment can be traced back to the rise of conspiracy theories that claim internet platforms are trying to silence conservative voices. These theories, which began during Trump’s first presidential campaign, have only intensified in the intervening years.
The Impact on Research and Institutions
The attacks on disinformation researchers are not merely personal; they directly undermine the fight against online misinformation. The increased prevalence of misinformation, compounded by the rise of artificial intelligence, makes identifying and countering false narratives even more challenging. The situation is akin to removing police officers from the streets during a surge in crime.
Financial and Personnel Losses
The Stanford Internet Observatory (SIO), a leading research institution specializing in the study of online misinformation, faced multiple lawsuits in 2023 from conservative groups alleging collusion with the federal government to censor speech. The litigation has resulted in millions of dollars in legal costs and a significant downsizing of the institution. Jeff Hancock, SIO’s faculty director, described it as a “trust and safety winter” in which “those attacks take their toll.” Similarly, Google’s layoffs of several members of its trust and safety research unit in March and again this past month, just ahead of their scheduled speaking appearances at the Stanford Trust and Safety Research Conference, raise concerns about the sustainability of such work within large corporations. While Google says the layoffs stemmed from broader company-wide restructuring and shifting business priorities, the timing leaves researchers feeling vulnerable and uncertain.
Legal Battles and Subpoenas
Nina Jankowicz’s experience is not unique. Her appointment to the now-defunct Disinformation Governance Board brought a torrent of criticism from conservative media and Republican lawmakers. After the board was disbanded just four months after its launch, Jankowicz received a subpoena from the House Judiciary Committee as part of an investigation into alleged government collusion with researchers to censor conservative viewpoints. “I’m the face of that,” Jankowicz said. “It’s hard to deal with.” The pattern has repeated with other individuals, creating a deterrent effect on future research.
The Role of Tech Platforms and Powerful Actors
The hostility toward disinformation researchers extends beyond individual attacks. Major tech platforms are increasingly restricting access to data, making the researchers’ work far more difficult. Elon Musk’s acquisition of Twitter (now X) has been particularly problematic.
X’s Actions
X has not only put data access behind a paywall, charging researchers $42,000 per month and dramatically limiting the availability of data; under Musk, the company has also pursued a litigation strategy aimed at silencing critics and neutralizing researchers. X has filed several lawsuits against researchers and organizations that called out its failure to mitigate hate speech and misinformation. These include the high-profile lawsuit against Media Matters over a report showing that hateful content often appeared alongside advertisements placed by major companies such as Apple, IBM, and Disney; the companies subsequently paused their ad campaigns on the platform. X claims the organization used intentionally deceptive techniques to produce its report.
Congressional Investigations & Corporate Censorship
Further fueling the climate of hostility are congressional investigations led by far-right politicians, including House Judiciary Chairman Jim Jordan’s inquiry into supposed collusion between large advertisers and the nonprofit Global Alliance for Responsible Media (GARM). That inquiry contributed to the suspension of GARM’s operations, further weakening institutional safeguards against online misinformation and hate speech.
Changes in Policy and Data Restriction
Beyond legal challenges, several tech companies have implemented changes that significantly impair research capabilities. Meta’s shutdown of CrowdTangle, a popular tool researchers used to track misinformation narratives, and its replacement with a less functional tool, further impedes their work. Similar restrictions on data access at TikTok and YouTube likewise limit researchers’ ability to track misinformation and hate speech, suggesting a deliberate effort to curtail crucial analysis. YouTube’s decision to stop removing false claims about the 2020 election demonstrates a willingness to tolerate misinformation, and Meta’s policy change ahead of the 2022 midterms, which permitted political advertisements questioning the legitimacy of past elections, creates a still more challenging environment for those combating disinformation.
Looking Ahead: Resistance and Resilience
The challenges faced by disinformation researchers are profound, but the commitment to their work remains steadfast. Despite the personal risks and professional challenges, many continue their work, adapting their strategies and seeking support.
The dismissal of a lawsuit filed by X against the Center for Countering Digital Hate and a subsequent Supreme Court ruling affirming the White House’s authority to urge social media companies to remove misinformation offer some cause for optimism. New initiatives, like Jankowicz’s American Sunlight Project, focus on providing support and resources to researchers, ensuring that the fight against disinformation continues despite the increased risks and challenges.
“The uniting factor is that people are scared about publishing the sort of research that they were actively publishing around 2020,” Jankowicz said. “They don’t want to deal with threats; they certainly don’t want to deal with legal threats, and they’re worried about their positions.” The fight to protect the integrity of information and the democratic process is far from over, but the increasing risks faced by those on the front lines demand attention and support.