
Personalized AI Moderation Tools Aim to Tackle Subtle and Overt Ableism on Social Media Platforms


People with disabilities face significant levels of online harassment, from microaggressions to slurs, yet social media platforms often fail to address these issues adequately. Most existing tools simply hide harmful content, which does not fully meet users' needs. New research from Cornell University sheds light on this problem and proposes alternatives. The study, titled "Ignorance is not Bliss: Designing Personalized Moderation to Address Ableist Hate on Social Media," was conducted by researchers Shiri Azenkot, Aditya Vashistha, Sharon Heung, and Lucy Jiang, who held interviews and focus groups with social media users with disabilities to explore their preferences for content moderation tools. Participants tested AI-powered systems that label and present ableist content in different ways, with the systems distinguishing content by the type of ableist language rather than its intensity.

A key finding was that users preferred moderation systems that categorized and summarized hate speech by specific type of ableism, such as associating disability with inability or promoting eugenicist ideas. Participants found this approach more transparent and trustworthy, and felt it gave them greater agency and control over their online environment. According to Heung, subtle forms of ableism can be more harmful and longer-lasting than overtly aggressive language, underscoring the importance of context in content moderation.

Participants also expressed deep skepticism about platforms' commitment to addressing disability hate speech, a distrust rooted in past experiences of reports being ignored or handled poorly. They also worried that current AI models might mislabel neutral sentences containing disability-related terms as toxic, leading to frustration and eventual disuse of the moderation tools. The researchers therefore emphasize ongoing collaboration with the disability community to refine these models so they accurately detect and appropriately handle ableist content.

Azenkot and her team advocate for more context-aware AI tools that understand the nuances of language and community norms. For instance, they propose content warnings that alert users to ableist content while explaining why it was flagged, a form of transparency that can reduce the emotional and psychological harm of encountering hate speech. They also recommend features that let users undo and correct filtering errors, along with "allowlists" that exempt trusted accounts from filtering. Such measures give users more control over their online environment and protect them without cutting off their access to information.

The study was set to be presented at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI '25) in Yokohama, Japan, from April 26 to May 1, 2025. It underscores the need for social media platforms to prioritize the concerns and input of users with disabilities when designing and implementing content moderation tools. Industry observers have praised the research for its comprehensive approach to understanding the experiences of disabled users on social media, and the findings are seen as a significant step toward more inclusive and effective AI content moderation systems.
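To make those recommendations concrete, here is a minimal sketch, in Python, of how such a personalized moderation layer could be structured: flagged posts are grouped by type of ableism, each user chooses per category whether posts are shown, shown behind a warning, or hidden, trusted accounts can be allowlisted, and hidden posts can be restored. The category names, the keyword-based classifier, and every identifier below are illustrative assumptions, not the system the Cornell team built; a real deployment would rely on the context-aware models the researchers call for.

```python
# Illustrative sketch of a personalized moderation layer along the lines the
# study describes. All names and the toy classifier are placeholders.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class AbleismType(Enum):
    """Illustrative categories; the study's taxonomy may differ."""
    DISABILITY_AS_INABILITY = "associates disability with inability"
    EUGENICIST = "promotes eugenicist ideas"


class Action(Enum):
    SHOW = "show"   # leave the post visible
    WARN = "warn"   # show a content warning that explains the flag
    HIDE = "hide"   # remove the post from the feed


@dataclass
class Post:
    author: str
    text: str


@dataclass
class ModerationResult:
    post: Post
    action: Action
    category: Optional[AbleismType] = None
    explanation: str = ""


@dataclass
class UserModerationSettings:
    # Per-category preference gives users control over how each kind of
    # ableist content is presented, rather than a single on/off filter.
    preferences: dict[AbleismType, Action] = field(
        default_factory=lambda: {t: Action.WARN for t in AbleismType}
    )
    # Posts from allowlisted accounts bypass filtering entirely.
    allowlist: set[str] = field(default_factory=set)
    # Record of hidden posts so mistaken filtering can be undone.
    undo_log: list[ModerationResult] = field(default_factory=list)


def classify(post: Post) -> Optional[AbleismType]:
    """Placeholder classifier; a real system would use a context-aware model."""
    text = post.text.lower()
    if "incapable" in text or "can't do anything" in text:
        return AbleismType.DISABILITY_AS_INABILITY
    if "shouldn't exist" in text or "shouldn't be born" in text:
        return AbleismType.EUGENICIST
    return None


def moderate(post: Post, settings: UserModerationSettings) -> ModerationResult:
    if post.author in settings.allowlist:
        return ModerationResult(post, Action.SHOW, explanation="author is allowlisted")

    category = classify(post)
    if category is None:
        return ModerationResult(post, Action.SHOW)

    action = settings.preferences[category]
    # Surface *why* the post was flagged, so the decision stays transparent.
    result = ModerationResult(
        post, action, category,
        explanation=f"flagged because it {category.value}",
    )
    if action is Action.HIDE:
        settings.undo_log.append(result)  # lets the user review and restore it
    return result


def undo_last_hide(settings: UserModerationSettings) -> Optional[Post]:
    """Restore the most recently hidden post, correcting a filtering error."""
    return settings.undo_log.pop().post if settings.undo_log else None
```

In this sketch, the per-category preferences mirror the participants' preference for moderation organized by type of ableism rather than intensity, while the explanation attached to each decision keeps the filtering transparent instead of silently removing content.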
Companies such as Intel, known for their commitments to accessibility and inclusive design, have already begun exploring similar technologies, which could lead to broader adoption and improved online environments for all users. The study not only highlights gaps in current moderation practices but also offers actionable recommendations for platform developers. By fostering closer collaboration with the disability community, social media companies can build more robust, user-centric AI tools that improve online safety and reduce the burden of ableist harassment on vulnerable users.
