But keeps no-takedown policy on fake news that doesn't violate community standards
MANILA - Facebook has shifted its strategy in the use of artificial intelligence and machine learning technologies so it can weed out “harmful content” from its platforms at a much faster pace, but is still keeping a no-takedown policy on “false news” that doesn’t violate its community standards.
In an online briefing on Tuesday, Facebook officials explained how they are using AI to track down and remove harmful content on the company's platforms, including Facebook, Instagram, WhatsApp, and Oculus.
Chris Palow, Facebook’s Community Integrity Engineer, said the social media giant has long been using AI to police its social networks.
“We use it, in particular, to problem-solve whether or not a post or an account, or a Page, or a Group violates our community standards,” Palow said.
Palow said Facebook’s AI can even tell if a brownie being sold on their platforms contains illegal drugs, by assessing the context behind the content.
Facebook counts content that promotes or includes sexual exploitation, violence, hate speech, terrorism, illegal drugs, suicide and self-injury, and bullying and harassment as violations of its standards.
Using fake accounts and engaging in “inauthentic behavior” also violate community standards. These include misrepresentation, artificially boosting the popularity of content, and concealing a Page’s purpose by misleading users about its ownership or control, among others.
Facebook’s earlier strategy was to use AI to scan potentially harmful posts, which were then sent chronologically to human reviewers. This was Facebook’s way of proactively policing its platforms, complementing the complaints sent by users about certain posts.
Ryan Barnes, Product Manager of Community Integrity at Facebook, said this helped them weed out around 95 percent of harmful content from their platforms before users even reported it.
But chronologically sending these posts to human reviewers also presented problems.
“Not all harmful content is equal. We want to ensure we’re getting to effectively the worst of the worst, and making sure we’re prioritizing real-world imminent harm above all,” Barnes said.
Now, the company also uses AI to prioritize potentially harmful content based on its virality, severity, and likelihood of violating Facebook policies.
Harmful posts that could go viral are prioritized for review and action.
“Potentially violating content that’s quickly being shared is given a greater weight than content that is not being shared or viewed,” Barnes said.
Posts that feature real-world harm are also ranked higher for review.
“We want to prioritize that, such as suicide, self-injury, child exploitation, terrorism. This, prioritized over other areas such as spam.”
Facebook’s AI can also identify content that has “signals similar to other content that violate its policies,” which is also prioritized, Barnes said.
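Facebook has not published how these signals are combined, but the prioritization described above — weighting a flagged post by severity of real-world harm, virality, and the model's confidence that it violates policy — can be sketched as a simple scoring function. The sketch below is purely illustrative; all names, fields, and weights are hypothetical assumptions, not Facebook's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A flagged post with signals an AI classifier might attach (hypothetical)."""
    post_id: str
    virality: float        # how quickly it is being shared/viewed, normalized 0..1
    severity: float        # real-world-harm severity, e.g. terrorism near 1.0, spam near 0
    violation_prob: float  # classifier confidence (0..1) that the post violates policy

def priority_score(post: Post) -> float:
    """Hypothetical weighting: severity of real-world harm dominates,
    then virality, then model confidence."""
    return 0.5 * post.severity + 0.3 * post.virality + 0.2 * post.violation_prob

def review_queue(posts: list[Post]) -> list[Post]:
    """Sort flagged posts so the 'worst of the worst' reach human reviewers first,
    replacing the older chronological (first-flagged, first-reviewed) order."""
    return sorted(posts, key=priority_score, reverse=True)
```

Under any such scheme, a fast-spreading post depicting real-world harm outranks likely spam even when the spam classifier is more confident, which matches Barnes’ description of prioritizing “imminent harm above all.”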
Facebook said this shift in strategy has allowed its 15,000 reviewers to focus on content that requires more urgent action, and has resulted in faster responses to the most harmful reports.
The social media giant, however, is not going to use its AI or its content reviewers to remove “false news” unless the misinformation presents imminent harm.
“The system is really designed for enforcing our community standards,” Barnes said.
She said that unless misinformation is harmful to someone’s health or physical well-being, the post is not taken down.
Tessa Lyons, Facebook’s News Feed Product Manager focused on false news, earlier said the company is instead trying to reduce the distribution of false news and inform people by giving them more context on the posts they see.
Facebook takes action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution, Lyons said.
Pages that spread false news are also not allowed to have ads on their content.
Facebook said it has also partnered with third-party fact-checkers to review and rate the accuracy of articles and posts on Facebook. Posts tagged as false get ranked lower in News Feed.
“On average, this cuts future views by more than 80 percent,” Lyons said.
“If a fact-checker has rated a story as false, we’ll let people who try to share the story know there’s more reporting on the subject. We’ll also notify people who previously shared the story on Facebook,” Lyons said.
In the Philippines, Facebook has taken down thousands of accounts for inauthentic behavior.
In 2018, Facebook removed 95 pages and 39 accounts for violating its policies.
The pages removed included names like Duterte Media, Duterte sa Pagbabago BUKAS, DDS, Duterte Phenomenon, DU30 Trending News, Hot Babes, News Media Trends, Bossing Vic, Pilipinas Daily News, Like and Win, Manang Imee, and Karlo ang Probinsiyano, Facebook said.
Facebook said the pages and accounts encouraged people to "visit low-quality websites that contain little substantive content and are full of disruptive ads."
Last year, Facebook also took down 67 pages, 68 accounts, 40 groups, and 25 Instagram accounts for “coordinated inauthentic behavior” and for using fake accounts to mislead people about the origin of content posted on the pages.
The pages were operated by Nic Gabunada, who handled President Rodrigo Duterte’s social media campaign in 2016.
Some of the pages removed include "Bong Go Supporters," "Duterte Warriors," "Pinulungang Binisaya," "Trending Now," and "Kuya Sonny Angara."
Last September, Facebook also removed several accounts belonging to two networks -- one based in China, the other purportedly with "links" to individuals associated with the Philippine military and police -- for violating its policies.
The removal of the pro-government pages and accounts irked President Duterte.