More than wor(l)ds: Can AI effectively monitor online harms?
Part 1 - Lead Organizer
Organization / Affiliation (Please state "Individual" if appropriate)
The Sentinel Project
Global Project Coordinator - Hatebase
Economy of Residence
Primary Stakeholder Group
Part 2 - Session Proposal
Your proposal is for
Main Conference (Day 1-3)
More than wor(l)ds: Can AI effectively monitor online harms?
Track(s) (can select more than one)
Where do you plan to organize your session?
Virtual / online
Specific Issues for Discussion
Hate speech (HS) is a human problem, and it will always require human engagement to teach machines its relevance. It is also a highly context-dependent phenomenon, making it imperative to engage with experts across different regional and cultural contexts. How can we collaborate sustainably to keep pace with the constantly evolving nature of online HS? Can the monitoring of HS meaningfully support offline efforts to prevent violence and build societal cohesion? Artificial Intelligence (AI) is now widely used by platforms to find, categorize and remove harmful online content at scale. The hope is that AI systems will work robustly and efficiently to detect hate speech and harassment and take them down before they reach targeted individuals, communities and bystanders. In practice, however, AI systems are beset with serious methodological, technical, and ethical challenges. In this session, we bring together human rights experts and practitioners who research and develop AI-based hate detection systems, in an effort to formulate a rights-respecting approach to tackling hate. Our hope is that bridging the gap between these communities will help drive new initiatives and outlooks, ultimately leading to better and more responsible ways of tackling online abuse, balanced against protecting freedom of expression.
Describe the Relevance of Your Session to APrIGF
Digital technologies have brought a myriad of benefits for society, transforming how people connect and communicate with each other. However, they have also enabled harmful and abusive behaviours to reach large audiences. Marginalised and vulnerable communities are often disproportionately targeted by hate, compounding other social inequalities and injustices. These harms exacerbate social tensions and contribute to the division and breakdown of social bonds. Global tragedies demonstrate the potential for online hate to spill over into real-world violence. The main outcome is for participants to leave with a clear understanding of the complexities of online hate; the difficulties of defining, finding and challenging it; and the limitations (but also potential) of AI to 'solve' this problem. We will address the risk of online harms and scrutinise the need for human rights to be actively integrated into how online spaces are governed, moderated and managed. This session has direct relevance at a time when thought leaders, politicians, regulators and policymakers are struggling with how to understand, monitor, and address the toxic effects of HS. We adopt a multi-stakeholder approach, reflecting the need for both social and computational voices to be heard in developing feasible and effective solutions.
Methodology / Agenda
Rough agenda:

Introduction: 5 minutes

Part 1: Challenges of regulating hate speech using AI (25 minutes)
- Discussion (20 minutes)
  - Question 1: What are the key dimensions that social media firms should report on in order to ensure clearer communication of policies, such as content guidelines and enforcement, to users?
  - Question 2: What should we do with online hate? Is the answer just to ban people?
  - Question 3: What role does AI have to play in tackling online hate?
- Q&A + activity (5 minutes)

Part 2: The ethical challenges and conflicts in Asia-Pacific (25 minutes)
- Discussion (20 minutes)
  - Question 1: Who should be responsible for the development and enforcement of policies to restrict hate speech and incitement to violence online, and how should these be applied?
  - Question 2: How do we protect freedom of speech whilst still protecting people from hate?
  - Question 3: How can we crowdsource hate speech lexicons with appropriate linguistic, cultural, and contextual knowledge?
- Q&A + activity (7 minutes)

Closing remarks: 2 minutes

Remote participants will be able to pose questions to subject matter experts and other participants during the session through Slido. We will also use polls, shared documents and activity-based tools such as Miro/Mural boards to enhance participation.
Please provide 3 subject matter tags that best describe your session.
#MitigatingOnlineHate #OnlineHarms #ModeratingHateSpeech
Moderators & Speakers Info (Please complete where possible)
| Role | Name | Organization | Designation | Economy of Residence | Stakeholder Group | Gender | Status of Confirmation |
|---|---|---|---|---|---|---|---|
| Moderator (Primary) | Safra Anver | Watch Dog | Co-Founder | Sri Lanka | Private Sector | Female | Confirmed |
| Speaker 1 | Ayesha S Abduljalil | ISOC Manama | Member | Bahrain | Youth / Students | Female | Confirmed |
| Speaker 2 | Raashi Saxena | The Sentinel Project | Global Manager | India | Civil Society | Female | Confirmed |
| Speaker 3 | Maija Lyytinen | UNESCO South Asia | Program Lead: Prevention of Violent Extremism and Hate Speech | Finland | Intergovernmental Organization | Female | Invited |
| Speaker 4 | Shah Zaidur Rahman | EyHost Ltd | Head of Business Strategy & Development | Bangladesh | Technical Community | Male | Confirmed |
Please explain the rationale for choosing each of the above contributors to the session.
Our proposed speakers come from diverse geographical backgrounds, with attention to gender balance, youth representation and a range of skill sets.

- Safra Anver previously ran WatchDog and brings a strong background in local fact-checking, identifying misinformation and verifying information against trusted news sources.
- Ayesha Abduljalil is a professional data analyst and graduate diploma student who has done extensive research on the ethical implications of Artificial Intelligence.
- Raashi Saxena from The Sentinel Project will discuss how Hatebase was built to help companies, government agencies, NGOs and research organizations moderate online conversations and potentially use hate speech as a predictor of regional violence. She will introduce participants to the Citizen Linguist Lab, which crowdsources contributions with appropriate linguistic, cultural, and contextual knowledge.
- Shah Zaidur Rahman will address the technical challenges associated with current HS moderation and the need for effective automation, and highlight best practices observed in the community.

1) Age: The contributors range from youth representatives (early-stage professionals) and graduate diploma students to civil society leaders and subject matter experts.
2) Geographic area: The speakers come from four distinct countries: Finland (Europe), Bahrain (Asia), India (Asia), and Bangladesh (Asia). The session will draw on the speakers' linguistic expertise as they explore current trends in effective online moderation of hate speech.
3) Organisers: The organizing team comprises youth members from civil society and the private sector, hailing from India (Asia) and Sri Lanka (Asia).
4) Stakeholder group: The speakers represent five different stakeholder groups: intergovernmental organizations, civil society, the private sector, the technical community, and youth.
One of the speakers, Maija Lyytinen, will highlight UNESCO's mitigation efforts from a South Asian context and the work it has been doing with local on-the-ground partners. We strongly believe that diversity in stakeholder groups is necessary to approach the issue from all possible sides. The session will create a direct conversation between five key stakeholder groups (private sector, civil society, technical community, youth and intergovernmental organizations) that all work to tackle online abuse but are rarely brought into contact, in order to establish a shared understanding of challenges and solutions. In particular, we anticipate the articulation of a global, human rights-based critique of data science research practices in this domain, helping to formulate constructive ways to better shape the use of AI to tackle online harms.