The Role of Social Media Platforms in Disaster Response: A Call for Change

During the Lahaina wildfires, the slow response from government and media led to an overwhelming spread of misinformation on social media. Independent outlets like 808 Viral quickly disseminated critical updates, but faced challenges from bots and foreign campaigns. Calls for improved platform protocols are vital to manage communication during future crises.

Disaster Disinformation in Hawaii

As the admin for 808 Viral, I unexpectedly found myself on the front lines of communication during the Lahaina wildfire, and what I went through was unlike anything I had ever experienced before.

As the community, government and traditional media scrambled to get their footing, people turned to pages like 808 Viral—the leading independent entertainment platform in Hawaii—for immediate updates and information. The benefit of being independent is the ability to quickly receive and share content unencumbered by red tape. When the national media began reporting, the entire world turned its eyes to Maui, and that is when the disinformation began flooding the feeds.

It was obvious from the very start that the algorithm prioritized disaster profiteers, clout chasers, scammers, conspiracy theorists, and foreign actors over verified information. By the time first responders, security experts, and officials with experience from Katrina tried to warn us about the looming challenges, the damage was already done. Everyone in Hawaii was fighting to get vital communication out while battling the disinformation that was blocking victims from getting the help they needed. We formed volunteer chat groups to identify and report damaging content, but NOT ONE THING WAS REMOVED.

The Problem: Bots, Impostors, and Misinformation Overwhelmed Crisis Communication

During the Maui wildfires, social media platforms' moderation efforts failed to keep up, or to act at all. We pleaded with the platforms to prioritize content relevant to the affected areas, because accurate, vital information was being drowned out by misinformation spreading fear and confusion faster than the fire itself. This pattern isn't new. What happened during Katrina and Lahaina happened again during Hurricane Helene: the same misinformation and fraudulent campaigns flooded social feeds.

In fact, I was able to debunk the Maui laser disinformation only because I happened to recognize the same imagery used in other Directed Energy Weapon conspiracies.

The consequences of this were severe: people became suspicious of legitimate aid, unsure whether to trust the very organizations trying to help them. Those ripples of distrust remain, leaving a lasting impact on affected communities.

Foreign influence campaigns found these moments ripe for exploitation. Russian- and Chinese-linked accounts were highly visible during the Maui wildfire response, amplifying disinformation and sowing distrust in government institutions. Political opportunists latch onto these crises, manipulating narratives to divide the public and undermine coordinated recovery efforts.

Everyone I talked to, from mainstream media to first responders, said the same thing: “We have never seen anything like this before.”

This is an urgent call for platforms to implement the following:

1. Fake Account Detection and Removal – take immediate action to improve the detection and removal of fake accounts. Bots and scam accounts are still rampant on YouTube, Instagram, TikTok, and Facebook, eroding trust and making it harder to find credible information during emergencies.

2. Content Prioritization During Disasters – amplify posts from trusted emergency services, local government agencies, and verified accounts. Critical updates need to rise above other content so people can quickly access the information they need. Geo-targeted algorithms already exist and can be leveraged to ensure local updates reach the right audiences in real time.

3. Identity Verification for Trusted Sources – Meta should verify and prioritize accounts in emergency zones ahead of time, ensuring that official agencies, relief organizations, and community leaders can cut through the noise. Clear disaster response protocols would allow emergency posts to bypass regular algorithms, ensuring they are seen and shared widely.

4. Real-Time Disaster Protocols – these platforms must recognize the stakes and implement formal disaster response protocols to manage communication during crises. Lives depend on it. Emergency information can’t compete with the noise from scammers, bots, and disinformation campaigns.

5. Consequences for Spreading Harmful Disinformation – The case of Alex Jones is an example of why spreading disinformation must carry legal repercussions. Jones profited from propagating false narratives about the Sandy Hook shooting, misled millions, and inflicted real harm on the victims’ families. When individuals or entities profit from “rage baiting” and conspiracies, the consequences must extend beyond demonetization. Permanently disabling their channels can help curb the financial incentives, but legal and regulatory measures send a clear message that spreading dangerous disinformation will not be tolerated. To protect vulnerable communities from the psychological and emotional damage caused by harmful content, content creators should be held to ethical standards. With influence comes great responsibility.

The Time to Act is Now

The chaos that unfolded during the Maui wildfires serves as a warning: The platforms need to be better equipped to serve the public during disasters. When critical updates are lost among a flood of misinformation, people suffer. With hurricanes, wildfires, and other natural disasters becoming more frequent, the need for reliable, real-time information will only grow.

Meta has the technology to make these changes and stop enabling disaster profiteers and foreign actors who exploit these situations for personal gain or political agendas.

Politicians and Disinformation: A Widening Threat Beyond Crises

In recent elections, we’ve seen that disinformation isn’t limited to disaster responses—politicians and campaign strategists are now adopting the same manipulative tactics. Instead of working to unify and protect communities, they often use social media to amplify divisive narratives, push unverified claims, and even fund disinformation efforts that undermine democratic processes. Politicians leverage bot accounts, echo conspiracy theories, and stoke public fear to sway public opinion, often distorting the reality of their opponents’ platforms or pushing exaggerated crises to gain votes.

The consequences are profound. Just as in disaster scenarios, this flood of misleading information confuses voters, disrupts informed decision-making, and reduces trust in legitimate sources. By amplifying fear and misinformation, politicians are undermining democracy in ways strikingly similar to disaster profiteers.

Preventing Campaign Interference: A Call for Social Media Accountability

To protect both public safety and democratic integrity, platforms must address the misuse of algorithms and prioritize transparency, especially during election cycles. Implementing real-time disaster and election protocols will help ensure that accurate, trusted information isn’t drowned out by manipulation. As social media becomes an even more dominant source of information, its role in both crisis response and election integrity must be reassessed.

The clock is ticking. In the next crisis, will platform leaders step up, or will misinformation cost lives?
