In Myanmar, the military led a widespread, covert disinformation campaign on Facebook to fuel ethnic tension and genocide. For tech companies, this should be a clear warning that a proactive, rather than reactive, approach is necessary to combat misuse of their services.
A New York Times investigation revealed how Myanmar’s military used social media not against hostile foreign countries but against its own people. These findings, which followed a U.N. report on Facebook’s role in spreading the hate that fueled genocide in Myanmar, raised hard questions for the social media giant. It now turns out that several of the accounts were run by military officials under false names.
Facebook has previously uncovered disinformation campaigns from countries like Russia and Iran aimed at stirring up chaos abroad; in Myanmar, those same tactics were turned inward to fuel domestic ethnic violence.
To stir up hate, military officials created Facebook accounts posing as regular users. From these accounts, which together had millions of followers, they spread false stories, including one about the rape of a Buddhist woman by a Muslim man, and others pitting Islam against Buddhism. The operation also spread rumors and fake satire about the country’s civilian leader, Daw Aung San Suu Kyi.
That hate, spread deliberately by government officials, turned into real acts of violence: murders, rapes, and the destruction of entire communities, leaving a trail of dead and creating a refugee crisis as Myanmar’s Rohingya minority fled.
The campaign was no simple undertaking: the military established secret bases where operatives using fake accounts and names targeted posts critical of the military, tracked social media trends, and posted incendiary comments.
Although Facebook had previously deactivated the accounts of top military leaders, the covert operations that spurred the violence remained beneath the company's radar. Those accounts have now been deleted as well, but as many have pointed out, the company's actions came too late.
On Monday, the company posted a statement explaining:
“Today, we removed 13 Pages and 10 accounts for engaging in coordinated inauthentic behavior on Facebook in Myanmar. As part of our ongoing investigations into this type of behavior in Myanmar we discovered that these seemingly independent entertainment, beauty and informational Pages were linked to the Myanmar military. This kind of behavior is not allowed on Facebook under our misrepresentation policy because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.”
After a U.N. report implicated Facebook in the genocide earlier this year, the company acknowledged that it had been slow to act.
That acknowledgment, however, leaves open plenty of questions about just what social media platforms should do when faced with such a situation.
One option would be not to operate in countries like Myanmar at all. This reasoning was behind Google’s initial decision to pull out of China, a move it now seems to be reconsidering as it develops a censored search engine to meet Chinese Communist Party requirements.
Facebook, however, has not cut service in Myanmar. Instead, it has opted to employ more content reviewers, block specific accounts, and continue to actively monitor activity on the platform. It is also hiring a product policy director for human rights.
Although imperfect, this approach is the right one: Facebook should keep services available while ramping up efforts to monitor content. After all, if Facebook withdrew, other platforms would emerge to meet demand, and those platforms, rather than merely being manipulated by the military, might well be run by it.
Facebook and other companies must not wait to relearn the lessons of Myanmar with each new market they enter and grow to dominate. Instead, they should have a well-thought-out plan from the beginning to meet the specific challenges of each political landscape.