In a marked shift from its earlier stance, Facebook now says that it will flag all “newsworthy” posts from politicians that break its rules, including those from US President Donald Trump. Earlier, CEO Mark Zuckerberg had claimed that he would let political figures speak freely and count on voters to judge truthfulness. “In a democracy, I don’t think it’s right for private companies to censor politicians or the news,” he had said earlier this month.
This is a point of ongoing debate. In late May, Twitter had clashed with Donald Trump, posting clarifications on, and then withholding, some of the US president’s tweets. Zuckerberg, appearing on Fox News, had directly criticised Twitter CEO Jack Dorsey and his platform, saying that privately-owned digital platforms should not act as the “arbiters of truth”. “We have a different policy than, I think, Twitter on this,” Zuckerberg said. “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online,” he added. “Private companies, especially these platform companies, shouldn’t be in the position of doing that.”
Dorsey, without naming Zuckerberg or Facebook, tweeted in reply: “We will continue to point out incorrect or disputed information about elections globally. This does not make us an ‘arbiter of truth.’ Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves.”
What is Facebook changing now?
“The policies we’re implementing today are designed to address the reality of the challenges our country is facing and how they’re showing up across our community,” Zuckerberg wrote on his Facebook page announcing the changes. Zuckerberg said the social network is taking additional steps to counter election-related misinformation as the US elections draw closer. In particular, the social network will begin adding new labels to all posts about voting that direct users to authoritative information from state and local election officials. Facebook is also banning false claims intended to discourage voting, such as stories about federal agents checking legal status at polling places.
The company also said it is increasing its enforcement capacity to remove false claims about local polling conditions in the 72 hours before the US elections.
Why did Facebook make the changes?
Facebook’s stock dropped more than eight per cent, erasing roughly $50 billion from its market valuation, after Unilever, the European consumer-goods company behind brands such as Ben & Jerry’s and Dove, announced it would boycott Facebook ads through the end of the year over the amount of hate speech and divisive rhetoric on the platform.
Later, Coca-Cola and Verizon announced that they too were joining the boycott.
Unilever said it took the step to protest the amount of hate speech online, and that the polarised atmosphere in the United States ahead of November’s presidential election placed a responsibility on brands to act. Shares of both Facebook and Twitter fell roughly seven per cent following Unilever’s announcement.
The company, which is based in the Netherlands and Britain, joins a raft of other advertisers pulling back from online platforms. Facebook, in particular, has been the target of an escalating movement to withhold advertising dollars to pressure it to do more to prevent racist and violent content from being shared on its platform.
“We have decided that starting now through at least the end of the year, we will not run brand advertising in social media newsfeed platforms Facebook, Instagram and Twitter in the US,” Unilever said. “Continuing to advertise on these platforms at this time would not add value to people and society.”
Sarah Personette, vice president of global client solutions at Twitter, said the company’s mission is to serve the public conversation and ensure Twitter is a place where people can make human connections, seek and receive authentic and credible information, and express themselves freely and safely.
Facebook and the thorny issue of political ads
With a burgeoning fake news industry and targeted advertisements being used for nefarious purposes globally, the three social media giants, Facebook, Twitter and Google, have constantly been on the back foot. In the 2016 US general elections, Facebook drew flak for allegedly allowing Russian bots to run a massive ‘disinformation campaign’. By the 2019 Indian general elections, all three platforms had learnt their lesson, keeping a close watch on political advertisements and publishing detailed transparency reports on political ad spending, broken down party-wise.
However, fact-checking can often have political consequences. Conservatives and the right wing have often expressed grievances over what they call “algorithmic bias”, a “liberal bent” to the platforms, and the “suppression of right-wing voices”.
In June 2019, Carlos Maza, a popular host at Vox Media, put up a Twitter thread accusing YouTube of inaction against conservative figures who routinely harassed him online over his homosexuality and race. This triggered a firestorm and a period dubbed the ‘adpocalypse’ on YouTube, during which, independent creators complained, half their content was demonetised. Right-wing activists claimed this in turn led to a routine suppression of conservative voices on the platform. The episode also fed into a raft of proposed legislation in the US seeking to regulate Silicon Valley: the Biased Algorithm Deterrence Act of 2019 and the Ending Support for Internet Censorship Act both sought to mandate political neutrality from social media platforms.
Facebook’s ‘Supreme Court’ for fact checks
Facebook had earlier named 20 members to an “oversight board”, a quasi-independent panel intended to rule on difficult content issues, such as whether Facebook or Instagram posts constitute hate speech. It will be empowered to make binding rulings on whether posts or ads violate the company’s standards; any other findings it makes will be considered ‘guidance’ by Facebook.
Facebook cannot remove members or staff of the board, which is supported by a $130 million irrevocable trust fund. The board’s members were named by Facebook and hail from a broad swath of regions around the world. They include Tawakkol Karman, a Nobel Peace Prize laureate from Yemen, Alan Rusbridger, the former editor-in-chief of British newspaper The Guardian, and Helle Thorning-Schmidt, the former prime minister of Denmark.
According to Facebook, the Oversight Board members have lived in more than 27 countries, and speak at least 29 languages among them. “For the first time, an independent body will make final and binding decisions on what stays up and what is removed,” Thorning-Schmidt said. “This is a big deal; we are basically building a new model for platform governance.”
But critics call the oversight board a bid by Facebook to forestall regulation, or even an eventual breakup.
-Inputs from agencies