Facebook is losing the battle to combat hate speech in Myanmar despite hiring an external team of 60 reviewers to police Burmese content on the platform, a Reuters and UC Berkeley School of Law investigation has revealed.
The probe found more than 1,000 examples of posts, comments and graphic imagery targeting the Rohingya people with hate speech that in some instances incited violence against the Muslim minority in the country’s western Rakhine State.
It illustrates the complex technical challenges that Facebook must overcome to eradicate the spread of hate speech and abuse on a platform that has more than 2.2 billion users speaking hundreds of different languages around the world.
The Rohingya have been persecuted by the Myanmar military and hardline nationalists in recent years, forcing some 700,000 to flee into neighbouring Bangladesh. The humanitarian crisis has seen thousands killed and villages burnt to the ground in what has been widely condemned as genocide.
In April, Facebook CEO Mark Zuckerberg promised the platform would tackle the problem by hiring Burmese-speaking monitors, after several years of ineffective attempts to police content without native speakers.
“We have a responsibility to fight abuse on our products. This is especially true in countries like Myanmar where many people are using the internet for the first time, and Facebook can be used to spread hate and incite violence,” a Facebook spokesperson tells AdNews.
"It's a problem we were too slow to spot — and why we're now working hard to ensure we're doing all we can to prevent the spread of misinformation and hate." See Facebook's full 'Update on Myanmar'.
One of the examples the Reuters investigation found.
Horrific examples
The Reuters/UC Berkeley School of Law investigation revealed that Facebook is struggling to stem the tide of abusive posts against the Rohingya and Muslims in the largely Buddhist country.
The examples uncovered in the investigation are alarming. One post read: “May the terrorist dog kalars fall fast and die horrible deaths”, while another, disguised as an ad for a restaurant serving Rohingya food, declared: “We must fight them the way Hitler did the Jews, damn kalars!”
The investigation also found posts in which Rohingya were called Muslim dogs, maggots and rapists, and others calling for them to be fed to pigs or killed.
AdNews spent an hour on Facebook to see if it could find similar cases.
In two Burmese-language groups, there were dozens of derogatory references to “Muslim dogs” and “Muslim f***ers”, as well as a crude post depicting a Muslim woman with male genitalia and an image of a large pig in the back of a cart captioned as a body. Another post said “Indians” were “going to be destroyed”.
It didn't take AdNews long to dig up its own examples.
Facebook's response: 'We're getting better'
Shortly after the Reuters investigation was published, Facebook product manager Sara Su admitted the company had been “too slow” to prevent misinformation and hate on the platform, and that it had a responsibility to fight the abuse.
“The rate at which bad content is reported in Burmese, whether it’s hate speech or misinformation, is low. This is due to challenges with our reporting tools, technical issues with font display and a lack of familiarity with our policies,” Su said.
“So, we’re investing heavily in artificial intelligence that can proactively flag posts that break our rules.”
Su said Facebook had proactively identified 52% of the content removed for hate speech in Myanmar, up from 13% in the final quarter of 2017.
“As recently as last week, we proactively identified posts that indicated a threat of credible violence in Myanmar. We removed the posts and flagged them to civil society groups to ensure that they were aware of potential violence,” Su added.
“It has also become clear that in Myanmar, false news can be used to incite violence, especially when coupled with ethnic and religious tensions. We have updated our credible violence policies to account for this, removing misinformation that has the potential to contribute to imminent violence or physical harm.”
A crisis six years in the making
Facebook has been used by hardline nationalists, often extremist Buddhist monk groups such as Ma Ba Tha, to attack the Rohingya for at least five years.
It is by far the most powerful and popular form of media in Myanmar, with 18 million active users, roughly the same number as in Spain. Facebook is widely relied upon as a source of news, information and communication in a country where the wider internet ecosystem remains undeveloped and much of the media is state controlled.
In April, the chair of the UN’s Independent International Fact-Finding Mission on Myanmar, Marzuki Darusman, said Facebook was playing a “determining role” in the Rohingya crisis, while UN Myanmar investigator Yanghee Lee said it had “turned into a beast, and not what it originally intended”.
Nationalists have previously used videos of Muslim girls practising martial arts to encourage Buddhists to be prepared for conflict. Image courtesy of No Hate Speech Project.
Facebook’s response in the past has been described as slow and inadequate by local activists who have monitored the situation for several years.
Reviews often took more than 48 hours after a post was reported, far too slow to curtail threats of violence.
Facebook said it has banned individuals and groups that spread hate speech, including the hardline monks Wirathu, Thuseitta and Parmaukkha, as well as Ma Ba Tha and the Buddha Dhamma Prahita Foundation.
The Reuters investigation revealed that Facebook monitors hate speech in Myanmar through a contractor in Malaysia under the codename “Honey Badger”.
The investigation interviewed former monitors based in Southeast Asia who said the operation was not large enough in scale. Facebook plans to increase the number of reviewers to 100.
The spread of hate speech and Facebook posts inciting violence has been blamed for deadly attacks on minorities in Sri Lanka, Cambodia, Indonesia, Mexico, India, Cameroon, the Central African Republic and other developing countries with ethnic flashpoints and weak independent media.
"In the last year we have established a team of product, policy and operations experts to roll out better reporting tools, a new policy to tackle misinformation that has the potential to contribute to offline harm, faster response times on reported content, and improved proactive detection of hate speech," a Facebook spokesperson tells AdNews.
"This is some of the most important work being done at Facebook, and there is more we need to do.”