Technology
Facebook Now Uses AI for Harmful Content Detection
Social media giant Facebook is shifting to artificial intelligence (AI) and other digital strategies to detect “harmful content” much faster than before. But despite its ongoing efforts against “harmful content,” Facebook still maintains its policy of not taking down false news and information as long as it doesn’t violate community standards.
“We use it, in particular, to problem-solve whether or not a post or an account, or a page, or a group violates our community standards,” said Chris Palow, Facebook’s Community Integrity Engineer.
Harmful content includes terrorism, hate speech, sexual exploitation, illegal drugs, self-injury, bullying, suicide, harassment, and all forms of violence.
Facebook’s AI will send flagged harmful content to human reviewers for final assessment.
Facebook has 15,000 reviewers focused on monitoring harmful content that requires more urgent action. With this number of reviewers and the help of AI, faster responses are expected. (MLC)