In a review posted to Facebook’s internal message board last year regarding ways the company identifies abuses on its site, one employee reported “significant gaps” in certain countries at risk of real-world violence, especially Myanmar and Ethiopia.
The company designates countries “at-risk” based on variables including unrest, ethnic violence, the number of users, and existing laws, two former staffers told Reuters. The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.
But languages spoken outside the United States, Canada, and Europe have been a stumbling block for Facebook’s automated content moderation, the documents provided to the government by Haugen show. The company lacks AI systems to detect abusive posts in a number of languages used on its platform.
In an undated document, which a person familiar with the disclosures said was from 2021, Facebook employees also shared examples of “fear-mongering, anti-Muslim narratives” spread on the site in India, including calls to oust the large minority Muslim population there. “Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,” the document said.
Three former Facebook employees who worked in the company’s Asia Pacific and Middle East and North Africa offices in the past five years told Reuters they believed content moderation in their regions had not been a priority for Facebook management. These people said leadership did not understand the issues and did not devote enough staff and resources.