As part of IT for Change’s ‘Recognize-Resist-Remedy’ project, supported by IDRC (Canada), the World Wide Web Foundation, and the Ford Foundation, and in collaboration with InternetLab, we co-organized a roundtable to catalyze a productive debate around a central question: What new imaginaries of social media governance will be adequate to eradicate the unfreedoms arising from misogyny in the online communications agora?
On 19-20 April 2022, we brought together 20 academics, lawyers, digital rights activists, and scholar-practitioners committed to feminist politics to collectively reflect on, discuss, and debate these questions, weaving a rich tapestry of perspectives on future directions for social media governance.
This curated compendium of essays from the roundtable participants offers glimpses of the enriching discussions, which covered:
Part One: Gendered Political Violence
- The manner in which political contexts enframe the gendered terms of access and appearance on digital platforms; the role of platforms in perpetuating hate, disinformation, and gender-based violence, and the implications for their profitability.
- Negotiating free speech thresholds in the techno-spatial dimensions of the digital: Self-effacement as a strategy for surviving internet abuse adopted by Brazilian women activists.
- Employed (in)visibility: Conceptual connections between disinformation, political violence, and gender identity, drawing on the experiences of African Muslim women navigating policing and protection on the internet.
- Resisting online gender-based violence: A complex field of compromises, conciliations and challenges in surviving technological and gendered surveillance.
Part Two: Contextual Content Governance
- Rethinking intermediary liability and differentiated governance approaches by type and size of platform towards building alternative platform protocols.
- Who assigns the "community" in "community guidelines and standards"?
- Unpacking platform "duty of care", including studies of transparency reports, explainability of content moderation decisions, and mapping response to user complaints.
Part Three: Community-led Moderation Systems
- Automated content moderation: Designing for human-in-the-loop systems that support machine learning tools to detect online gendered violence.
- Recommendations for international law and global benchmarking of standards to hold transnational platforms culpable for promoting gendered violence.
- Algorithmic accountability: Studying algorithmic responses to instances of violence and unpacking how market interests of platforms shape investment trajectories in algorithmic tools for sexist hate speech detection in non-dominant languages.
- State sovereignty and women’s human rights: Can there be models of a regulated platform economy that foster a healthy public sphere?
Part Four: Legal Responses to Online Gendered Violence
- Interrogating conceptions of sexist hate speech and misogyny in the law, including possible gaps where this theme is absent from legislation.
- Addressing the responsibility gap: Constitutional-legal principles to address sexism/misogyny, procedural law, and institutional arrangements for securing safe digital environments.
- Deploying an anti-carceral and intersectional lens: Implications of criminal law reform for addressing the circulation of misogynistic digital content.
- Content governance regimes: Implications for freedom and censorship.
Part Five: Racialized Techno-political Organization of the Internet
- Algorithmic bias as structural reinforcement of socio-racial inequalities and sexual-class hierarchies: Digital ethnography of the Kenyan experience of the law and sexist internet cultures.
- Feminist theories of justice with regard to online misogyny: How can digital rights be culturally grounded?
- Digital infrastructural and data divide: Misogynist-racist violence is not an aberration but a symptom of internet coloniality.