Yolanda Redrup* says that, in its attempt to regain users’ trust, Facebook has for the first time revealed the community standards it uses to determine which content is inappropriate.
Facebook has revealed for the first time its community standards policies and introduced an appeals process for content removal decisions.
The move is part of an overall shift by the social media giant to become more transparent, as it continues to fight to win back user trust in the wake of the Cambridge Analytica scandal.
Last week’s announcement came a day before Facebook was due to release its first-quarter results, when chief executive Mark Zuckerberg would be forced to answer analyst questions for the first time since the revelation that 87 million Facebook profiles were improperly shared with political consulting and research firm Cambridge Analytica and used by the Trump campaign in the lead-up to the last US election.
The community standards policies provide insight into which posts Facebook deems inappropriate.
Previously, the social media giant had provided only a high-level overview of its policies, without giving enough information for users to accurately determine which posts would or would not be removed.
Facebook US-based public policy manager Jessica Leinwand told The Australian Financial Review the social network had been working on making these policies public since September last year.
“The goal is to make it so that the public understands exactly how we regulate our policies,” she said.
“We’ve received feedback from our community for some time now saying that in a lot of cases they didn’t understand why certain content was removed and thought we’d made a mistake, so we’ve been figuring out a way to do this given our scale.”
As part of its decision to introduce appeals against content removal decisions, Facebook is doubling the number of people who work on safety issues from 10,000 to 20,000 by the end of this year.
While it bolsters its workforce, it is launching the appeals process for certain categories, including bullying, violence, nudity, sexual activity and hate speech; the process will then be extended to other categories.
The company uses automation to detect content on the social network that is in violation of its community standards, but when a complaint is made it gets sent to a reviewer.
“When you create a platform for all voices, some voices can be objectionable to others and we try to create policies that scale globally and apply to all users equally and can be enforced consistently,” Ms Leinwand said.
“The goal is to increase transparency that fosters our own accountability and also creates a dialogue in an effort to solicit feedback.”
* Yolanda Redrup writes on technology for The Australian Financial Review in Melbourne. She tweets at @YolandaRedrup.
This article first appeared at www.afr.com