Facebook employees have warned for years that as the company raced to become a global service it was failing to police abusive content in countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents viewed by Reuters.
In a review posted to Facebook's internal message board last year regarding ways the company identifies abuses on its site, one employee reported "significant gaps" in certain countries at risk of real-world violence, especially Myanmar and Ethiopia. The company designates countries "at-risk" based on variables including unrest, ethnic violence, the number of users and existing laws, two former staffers told Reuters. The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.
Facebook has long touted the importance of its artificial-intelligence systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine-learning systems can detect such content with varying levels of accuracy. Facebook spokesperson Jones said the company now has proactive detection technology to detect hate speech in Oromo and Amharic and has hired more people with "language, country and topic expertise," including people who have worked in Myanmar and Ethiopia.
Facebook's Jones acknowledged that Arabic language content moderation "presents an enormous set of challenges." She said Facebook has made investments in staff over the last two years but recognizes "we still have more work to do."