Meta’s Murky Profits: How Political Violence Fuels Facebook’s Advertising Revenue
The assassination attempt on Donald Trump in July 2024 sparked an immediate and unexpected surge in one corner of online commerce: commemorative merchandise. Within days, Facebook and Instagram were flooded with ads for coffee mugs, t-shirts, and even trading cards depicting a bloodied Trump, fist raised in defiance. Images captured by renowned photojournalists Doug Mills (New York Times) and Evan Vucci (Associated Press), showing Trump’s reaction to the shooting, became the centerpiece of this opportunistic marketing blitz. Even the Trump campaign joined the fray, offering its own line of survival-themed merchandise. The phenomenon underscores a troubling reality: Meta, the parent company of Facebook and Instagram, appears to profit significantly from moments of political violence and upheaval.
This isn’t an isolated incident. A collaborative investigation by CalMatters and The Markup reveals a disturbing pattern. Following the October 7th, 2023 attack on Israel and the subsequent conflict in Gaza, Meta saw a staggering increase in ad spending related to the conflict. Over the 15 weeks following the attack, organizations and individuals paid Meta between $14.8 million and $22.1 million for ads that were viewed between 1.5 billion and 1.7 billion times. That amounts to roughly a 450 percent increase over the same period a year earlier, when spending ranged from $2.4 million to $4 million.
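Because the Ad Library reports spending only as ranges, that 450 percent figure is necessarily an estimate. The short sketch below is a minimal illustration, not the outlets’ published methodology: it shows how an increase of that magnitude can be bracketed from the reported ranges, with the midpoint comparison being an assumption made here for clarity.

```python
# Illustrative arithmetic only: bracketing a percentage increase when spending
# is reported as ranges rather than exact figures. The dollar ranges are the
# ones cited by CalMatters and The Markup; the midpoint method is an assumption
# for illustration, not necessarily the outlets' methodology.

def pct_increase(before: float, after: float) -> float:
    """Percentage change from `before` to `after`."""
    return (after - before) / before * 100

prior_year = (2.4e6, 4.0e6)     # spend range, same 15-week period a year earlier
post_attack = (14.8e6, 22.1e6)  # spend range, 15 weeks after October 7, 2023

# Comparing the midpoints of the two ranges gives a figure in the same
# ballpark as the roughly 450 percent increase reported.
mid_before = sum(prior_year) / 2
mid_after = sum(post_attack) / 2
print(f"midpoint estimate: {pct_increase(mid_before, mid_after):.0f}% increase")

# The true change could fall anywhere between the most conservative pairing
# (high old spend, low new spend) and the most generous one.
low = pct_increase(prior_year[1], post_attack[0])
high = pct_increase(prior_year[0], post_attack[1])
print(f"possible range: {low:.0f}% to {high:.0f}% increase")
```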
The American Israel Public Affairs Committee (AIPAC), a prominent pro-Israel lobbying group, was a major driver of this surge. Its spending on Facebook and Instagram ads promoting Israel’s actions and pressing politicians for support rose by more than 300 percent in the six months following the October 7th attack. AIPAC spokesperson Marshall Wittmann described the effort as "encouraging pro-Israel Americans to stand with our democratic ally as it battles Iranian proxies in the aftermath of the barbaric Hamas attack of October 7th." The advertising landscape wasn’t monolithic, however; other campaigns, such as those run by Doctors Without Borders, highlighted the humanitarian crisis in Gaza, presenting a more nuanced picture of the conflict.
While the ads tied to the Trump assassination attempt represent a smaller portion of Meta’s overall revenue, the consistent pattern across these events points to a worrisome trend. Meta’s business model, heavily reliant on targeted advertising, draws no distinction between benign content and content directly related to, or even glorifying, acts of violence. The company builds profiles of its users, then lets businesses pay to target specific demographics with their ads. This algorithmic approach, while effective for marketers, fails to account for the ethical implications of profiting from tragedies and politically charged events.
Meta spokesperson Tracy Clayton argues that the company doesn’t profit from political violence, suggesting that advertisers often pull back spending during turbulent times. While Clayton points to lower ad spending "correlating with the start of the conflict" in both the Middle East and Ukraine, the data presented by CalMatters and The Markup paints a different picture. The sharp increases in spending after the October 7th attack on Israel and after the attempted assassination of Trump directly contradict that assertion, and the claim that "advertisers responding to current events are nothing new" sidesteps the ethical concerns raised by the financial incentives Meta’s platform creates.
Meta’s response also highlights the transparency limitations of its advertising data. While the company’s Ad Library offers public access to ad records, it reports spending and impressions only as broad ranges, making precise financial calculations difficult. Further, the ad review process, though described as rigorous, clearly falls short on content related to violence, especially when that content is coupled with political messaging.
The sheer volume of ads related to the Trump assassination attempt is alarming. A simple search of Meta’s Ad Library for ads mentioning "assassination" turned up a wide range of promotional material, from ads referencing Trump directly to countless others making the connection implicitly. Some were blatant in their exploitation of the event, suggesting a conspiracy or promoting pro-Trump sentiment tied heavily to the shooting itself. Others tapped into the rising tensions, promoting firearms and self-reliance with thinly veiled references to the attempt. Phrases like "First they jail him, now they try to end him" and "the assassination attempt was their Plan B" illustrate the inflammatory tone of some of these ads, while calls for self-reliance that used the attempt as a backdrop show how readily the tragedy was exploited for commercial gain.
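Searches like this can be run through the Ad Library’s public web interface, and a rough equivalent can be scripted against Meta’s Ad Library API. The snippet below is a minimal sketch, assuming the documented ads_archive Graph API endpoint and a valid Ad Library access token; available fields and required parameters vary by API version, and this is offered only to show how spend and impression ranges surface in the data, not as the methodology behind the figures reported here.

```python
# Minimal sketch of a keyword search against Meta's Ad Library API.
# Assumes the documented ads_archive Graph API endpoint and a valid access
# token; available fields and required parameters vary by API version.
import requests

ACCESS_TOKEN = "YOUR_AD_LIBRARY_ACCESS_TOKEN"  # placeholder, not a real token
API_URL = "https://graph.facebook.com/v19.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "search_terms": "assassination",      # keyword used in the search described above
    "ad_reached_countries": '["US"]',
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    # Spend and impressions come back as lower/upper bounds, not exact values,
    # which is why totals can only ever be estimated as ranges.
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,spend,impressions",
    "limit": 100,
}

response = requests.get(API_URL, params=params, timeout=30)
response.raise_for_status()

for ad in response.json().get("data", []):
    spend = ad.get("spend") or {}
    print(ad.get("page_name"),
          f"spend ${spend.get('lower_bound', '?')}-${spend.get('upper_bound', '?')}")
```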
Many of these ads did not appear to explicitly violate Meta’s policies, notably its ban on showing weapons while alleging "election-related corruption." Yet their very existence places Meta in an ethically precarious position, raising questions about whether the platform’s algorithms and moderation processes are sufficient to prevent the monetization of harmful content. That CEO Mark Zuckerberg himself, in an interview after the attempt, described the moment as "one of the most badass things I’ve ever seen in my life" only deepens the perception of a disconnect between company policy and the reality of Meta’s profit-driven model.
The issue extends beyond individual events. The Tech Transparency Project has extensively documented how organizations like the National Rifle Association (NRA) have increased ad spending on Meta and Google platforms after mass shootings, a consistent pattern of leveraging tragedy for political messaging and commercial gain. Meta’s own commissioned reviews, such as the 2018 assessment of its role in inciting violence in Myanmar, confirm its repeated failures to effectively moderate content and prevent the spread of harmful material. Amnesty International researcher Alia Al Ghussain correctly points out the potential for even greater harm in non-English-speaking communities, where Meta’s content moderation resources are significantly less robust. Recent tests have also found that Meta’s systems consistently fail to flag ads explicitly calling for violence against Palestinians, a clear violation of the company’s own stated policies.
The case of Red First, a clothing company selling pro-Trump merchandise, illustrates how an event like the attempted assassination can be turned into a full-blown marketing strategy. Although its ads might seem benign compared to others, slogans like "he will overcome," "fight fight fight," "legends never die," and "shooting makes me stronger" clearly tied the company’s wares to the assassination attempt, letting it capitalize on the heightened political climate for greater financial returns while avoiding explicit mention of the act itself.
In conclusion, the evidence suggests that Meta’s platform, despite its stated policies against inciting violence, consistently fails to prevent the monetization of politically charged content related to violence. This raises serious ethical questions about whether Meta should profit from such events, and whether its current moderation and enforcement processes are up to the task. The company’s reliance on targeted advertising, combined with algorithmic limitations and potential gaps in international content moderation, creates fertile ground for the exploitation of tragedy for commercial gain. Deeper investigation and more robust regulatory oversight are needed to determine how to hold tech companies accountable for the harmful consequences of their business practices.