Meta: Facebook and Instagram parent company hails new AI system to detect harmful content

It follows months in which the company has been criticised for allegedly prioritising its own profit over the wellbeing of its users - something the firm has denied.

Image: Meta has hailed a new AI system it says can detect more harmful content

Meta, the parent company of Facebook, Instagram and WhatsApp, has announced a new AI system called a "Few-Shot Learner" that can be used to detect harmful content without months of training.

Among the many criticisms raised by whistleblower Frances Haugen was the claim that Meta's automated tools for flagging harmful content on Facebook and Instagram to human moderators were not effective enough.

Video: Whistleblower claims Facebook 'enabling' genocide

Now, Meta says it has developed a new type of AI technology called a Few-Shot Learner (FSL) that is capable of adapting to recognise evolving types of harmful content "within weeks instead of months".

The months needed to train algorithms to recognise each new kind of content have been a major impediment to moderation on platforms such as Instagram and Facebook.

The scale of user-generated content on these platforms - many millions of posts per day - is more than any realistic number of human moderators could analyse.

According to Meta, the value of the FSL is that it allows the company to enforce new moderation policies without first training the algorithm on thousands - if not millions - of example posts that humans have already reviewed and labelled as content that should be blocked.
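Meta has not released the FSL code, but the underlying idea - scoring a post against a newly worded policy with little or no task-specific labelled data - can be sketched with an off-the-shelf zero-shot model. The snippet below is a minimal illustration using Hugging Face's publicly available facebook/bart-large-mnli entailment model as a stand-in; the policy labels and flagging threshold are assumptions made for the example, not Meta's actual categories.

```python
# Minimal sketch of few-shot-style moderation - an illustration only,
# not Meta's FSL. An off-the-shelf zero-shot entailment model scores a
# post against newly worded policy labels with no task-specific training.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Example headline taken from Meta's own illustration of the problem.
post = "Vaccine or DNA changer?"

# Hypothetical policy labels phrased in natural language (not Meta's).
labels = ["vaccine misinformation", "incitement to violence", "benign"]

result = classifier(post, candidate_labels=labels)
top_label, top_score = result["labels"][0], result["scores"][0]

# Hypothetical confidence threshold for routing a post to human review.
if top_label != "benign" and top_score > 0.7:
    print(f"Flag for review: {top_label} ({top_score:.2f})")
else:
    print(f"No action: {top_label} ({top_score:.2f})")
```

Because the model judges whether the post fits each label written in plain language, a newly worded policy can be applied without retraining from scratch - the "few-shot" in the name refers to needing only a handful of labelled examples, or none at all, for a new task.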

"We've tested FSL on a few relatively new, real-world integrity problems," the company said.

"For example, one recent task was to identify content that shares misleading or sensationalised information in away that likely discourages COVID-19 vaccinations."

As an example, it showed a post with the headline "Vaccine or DNA changer?", which it suggested would not be detected as sensational by traditional AI systems that analyse the meaning of sentences.

"In another, separate task, the new AI system improved an existing classifier that flags content that comes close to inciting violence," it added, with the example post featuring an image conatining the question: "Does that guy need all of his teeth?"

Meta said traditional systems may miss these types of inflammatory posts because they are unusual - references to DNA alteration, or to teeth as an implied threat of violence, were not something the systems had seen before.

Video: Sky News investigates abuse in custody in Myanmar

Meta says the FSL can work in more than 100 languages - something which has also been a challenge for its content moderators.
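Meta has not said how FSL achieves this multilingual coverage. One common approach, assumed purely for illustration here, is to swap in a cross-lingual model so that the same English policy labels can score posts written in other languages - the sketch below uses the publicly available joeddav/xlm-roberta-large-xnli model and an invented Spanish example post.

```python
# Illustrative multilingual variant of the sketch above - an assumption
# about how cross-language scoring can work, not Meta's FSL.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

# English policy labels can score a non-English post because the
# underlying model was pretrained across roughly 100 languages.
post = "¿Vacuna o alterador de ADN?"  # Spanish rendering of the example
labels = ["vaccine misinformation", "benign"]

print(classifier(post, candidate_labels=labels))
```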

The company previously admitted failing to tackle inflammatory posts from the Myanmar military targeting the country's minority Rohingya Muslim population.

In March 2018, a UN investigator said Facebook had been used to incite violence and racial hatred against the Rohingya - violence the UN called a "textbook example of ethnic cleansing".

On Wednesday, the social media platform banned a further set of accounts, groups and pages connected to businesses linked to the Myanmar military, which now controls the country.