Misleading Takedowns: Facebook needs to be a lot more transparent when it comes to banning pages, groups

Earlier this week, Facebook deleted over 700 pages in India for “coordinated inauthentic behaviour” and “spam”. Most of these pages were related to the Congress and the BJP. The action, evidently taken with an eye on the upcoming elections, targeted what Facebook described as deliberate attempts to mislead. Facebook’s role in cleaning up mala fide actions antithetical to electoral democracy needs to be understood better. The Internet has made coordinated action easier, thanks to the anonymity it allows, and social media has brought new dimensions to the social engineering of thought and action.

Digital virality has transformed the propagation of news and views. Law enforcement agencies have had to turn to social media companies ever so often to restore ‘order’. In 2012, when there was a mass exodus of the North-Eastern community from Bengaluru following rumours of violence against them, social media companies were asked to cooperate with the government in screening websites that spread those rumours. Similarly, after its platform was used to fuel mob lynchings, WhatsApp made some design changes following government directions.

However, resolving misinformation or ‘coordinated action to mislead’ on digital platforms has not always been straightforward. Anti-caste activists have pointed out that even as Facebook unfairly removes posts of those who speak out against caste oppression, it neither acts against those who use casteist slurs nor tackles trolls who target women in public life.

In the case at hand, Facebook took suo motu action to identify and delete the Pages. Ostensibly, the company acted in accordance with its own policies against spam. Facebook says it took the Pages down for behaviour, not content. This behaviour, according to Facebook, includes deceiving people about the identities, locations or motivations of those running the Pages. By Facebook’s categorisation, therefore, the deletion of the Pages of political parties is distinct from the case of the lynchings, or from what happened in Myanmar, where Facebook was used to spread hate speech and rumours. This claim of policing behaviour rather than content needs some unpacking.

Facebook as a public sphere infrastructure
Appearing to cause trouble in elections does not augur well for Facebook’s reputation, and so it has a business interest in maintaining, or appearing to maintain, fairness. And while Facebook’s actions might be legitimate under its own policies, we must not uncritically accept its self-appointed role as the custodian of Indian democracy. The platforms that Facebook controls, including WhatsApp, have now become an infrastructural service provided to countries around the world. Yet Facebook is technically not accountable to the people of these countries for how it manages this infrastructure.

Facebook is a company incorporated in the US, and as such is accountable only to the US government. The implication of the infrastructural nature of its service is that, as a platform which co-creates a public sphere encompassing not just ordinary people but also their political representatives and the lobbies that make and break political power, no individual or political party can afford to ignore it. Even if Facebook faces flak, as it did after the infamous Cambridge Analytica scandal, any reputational damage it suffers is, in real terms, pretty much contained, thanks to its monopoly power. Facebook could, therefore, act in allegedly fair and noble ways, or not; it could choose to censor that which is inauthentic, or not.

In addition to this lack of accountability to the Indian electorate, Facebook’s actions also lack transparency. We do not know whether Facebook is telling the truth about why it took down these Pages, who controlled them, and whether their behaviour was really inauthentic and coordinated. More importantly, we also do not know which other Pages were allowed to stay on despite exhibiting coordinated inauthentic behaviour. We cannot know, because Facebook controls how much data it releases about these actions.

These glaring lacunae in the accountability and transparency of the infrastructure scaffolding the public sphere become all the more worrisome because they directly impact Indian democracy. The issue is not limited to elections, as one can easily imagine other activity on Facebook that endangers Indian democracy: coordinated, vicious hate speech, for example.

Who must decide authenticity?
In the digital age, social media platforms are co-implicated in how society’s democratic contours evolve. When platform companies deem certain activities authentic and others inauthentic, their decisions raise the question of who gets to determine what action is good for democracy. Facebook’s acts of selective commission and omission are part of a pick-and-mix game of applying whatever shifting standards best suit its business, in utter disregard of local laws. Its fine line between content and behaviour may amount to nothing significant for social good, while completely ignoring democratic consensus on the matter.

In the Kathua rape case, the Delhi High Court issued notices to Facebook, Google, Twitter and YouTube for disclosing the identity of the victim, in contravention of the law. In the wake of the Christchurch shootings, the Privacy Commissioner of New Zealand asked Facebook to provide the police with the account details of everyone who had shared the gunman’s video, which was an “egregious” breach of the victims’ privacy. Facebook refused to hand over the names, and its Global Policy VP, Monika Bickert, held that the company was following the law, presumably US law.

Meanwhile, Facebook is trying to fix the issue of contentious content by attempting to institute an External Oversight Board. But as currently planned, such a Board will neither be democratically elected nor have more than advisory power over Facebook. What countries need from Facebook is transparency, and democratic oversight of that transparency. Rules on intermediary liability address these concerns only partially.

Civil society activists in India are now asking the Election Commission to demand transparency, both from the platforms and from candidates. We should not, however, limit our demands for transparency to ad spending alone. Public disclosure of the decisions social media platforms make in allowing, promoting, and limiting content must be mandatory. Platforms make trade-offs between free speech on the one hand and safety, anti-racism, and the countering of anti-democratic speech on the other, and these trade-offs are often made algorithmically. Transparency will help citizens understand what choices were made and demand different choices, if they so desire.

The changes we need in intermediary liability rules are ones that would make Facebook reveal data on such actions. In December 2018, the French Parliament passed a law that, inter alia, requires digital platforms exceeding a certain number of hits a day to publish their algorithms. Facebook must be obliged to explain the basis of its decisions, not merely carry them out as a matter of discretion.

The solution to Facebook’s lack of accountability will, of course, take a much longer path. The Indian Parliament does not have the same ability to summon Facebook to a hearing as the US Congress does. Nevertheless, we must demand democratic oversight over digital platforms like Facebook, in the case of elections through the Election Commission of India, or through a Parliamentary Standing Committee comprising members from the ruling and opposition parties.

This article was first published in Firstpost.
