Facebook Now Uses AI for Harmful Content Detection

Social media giant Facebook is shifting to artificial intelligence (AI) and other digital strategies to detect "harmful content" much faster than before. But despite its continued push against harmful content, Facebook maintains a no-takedown policy on false news and information as long as the material does not violate its community standards.

“We use it, in particular, to problem-solve whether or not a post or an account, or a page, or a group violates our community standards,” said Chris Palow, Facebook’s Community Integrity Engineer.

Harmful content includes terrorism, hate speech, sexual exploitation, illegal drugs, self-injury, bullying, suicide, harassment, and all forms of violence.

Facebook’s AI will send flagged harmful content to human reviewers for final assessment.

Facebook employs 15,000 reviewers who focus on monitoring harmful content that requires more urgent action. With this number of reviewers and the help of AI, faster responses are expected. (MLC)
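The workflow the article describes — an AI model flags likely violations and routes them to human reviewers, with the most urgent content handled first — can be sketched in a few lines. This is an illustrative toy example only, not Facebook's actual system; the category names, severity ranks, and confidence threshold below are all assumptions made for the sketch.

```python
# Illustrative sketch (NOT Facebook's real pipeline): a model assigns each
# post a confidence score for a violation category; posts above a threshold
# are flagged and queued for human review, most severe categories first.
import heapq

# Hypothetical severity ranking for triage (higher = more urgent).
SEVERITY = {"terrorism": 3, "sexual_exploitation": 3, "suicide": 2,
            "hate_speech": 2, "bullying": 1, "spam": 0}

FLAG_THRESHOLD = 0.8  # assumed confidence cutoff for sending to reviewers


def triage(posts):
    """Return flagged post IDs ordered by severity, then model confidence."""
    queue = []
    for post_id, category, score in posts:
        if score >= FLAG_THRESHOLD:
            # heapq is a min-heap, so negate keys to pop highest priority first
            heapq.heappush(queue, (-SEVERITY.get(category, 0), -score, post_id))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]


posts = [
    ("p1", "bullying", 0.85),
    ("p2", "terrorism", 0.92),
    ("p3", "spam", 0.40),        # below threshold: never reaches reviewers
    ("p4", "hate_speech", 0.88),
]
print(triage(posts))  # most severe flagged post ("p2") comes out first
```

In a real system the scores would come from trained classifiers and the queue would feed a review tool, but the shape is the same: automated detection narrows the stream so that the 15,000 human reviewers see only the content that most needs their judgment.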
