Twitter has updated its content moderation policy to restrict what information may be shared in times of crisis.

The Big Tech company released a blog post Thursday introducing its "crisis misinformation policy," a new set of guidelines regulating what sorts of information can be shared during chaotic events such as armed conflicts, natural disasters, and other crises.

"Today, we're introducing our crisis misinformation policy — a global policy that will guide our efforts to elevate credible, authoritative information, and will help to ensure viral misinformation isn't amplified or recommended by us during crises," wrote Twitter Head of Safety Yoel Roth. "In times of crisis, misleading information can undermine public trust and cause further harm to already vulnerable communities. Alongside our existing work to make reliable information more accessible during crisis events, this new approach will help to slow the spread by us of the most visible, misleading content, particularly that which could lead to severe harms."

The new policy is an attempt to counter viral information that can spread on the platform in dangerous situations even when it has not been verified. The company said it would rely on public information from "credible" sources to determine the validity of a claim, including humanitarian groups, news outlets, conflict monitoring services, and open-source intelligence investigators. If Twitter determines that a post is based on misinformation, Roth said, the company will stop amplifying or recommending it and will add warning notices that appear before a user clicks through to it.

The social platform pointed to its recent efforts to limit the spread of Russian propaganda as an example of what the new policy might look like in practice. Russian propaganda saw a 30% drop in its reach on Twitter after the company stopped recommending or promoting it, Roth told reporters in a call.

"We believe that we'll see similar effects in this context, but we're studying it closely, and we're going to share data about this as we learn more," Roth said. This new approach would not remove the content in most cases. Twitter would only remove the content "in the most severe cases where the potential to cause harm is the greatest."

When asked what sorts of events would fit within Twitter's category of "crisis," Roth said the platform would start with international conflicts — specifically in Ethiopia, Afghanistan, and India. The company also said it eventually hopes to apply the policy to situations involving mass shootings or natural disasters.

It is unclear how this policy may be affected by Elon Musk's involvement in the company's future. Musk announced Friday that he was temporarily putting his deal for Twitter on hold over the company's estimate of the number of spam bots on the platform, and the crisis policy could change if Musk ultimately completes the acquisition.