Personalized AI tools can combat ableism online

People with disabilities experience high levels of harassment online, including microaggressions and slurs. However, social media platforms frequently fail to address reports of disability-based harassment and offer only limited tools that simply hide hateful content.

New Cornell research reveals that social media users with disabilities prefer more personalized content moderation powered by AI systems that not only hide harmful content but also summarize or categorize it by the specific type of hate expressed.

“Our work showed that indicating the type of content – whether it associates disability with inability, promotes eugenicist ideas, and so on – supported transparency and trust, and increased user agency,” said the paper’s co-author, Shiri Azenkot, associate professor at Cornell Tech. She is also an associate professor at the Jacobs Technion-Cornell Institute and at the Cornell Ann S. Bowers College of Computing and Information Science.

The researchers will present their work, “Ignorance is not Bliss: Designing Personalized Moderation to Address Ableist Hate on Social Media,” April 28 at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI ’25), held April 26-May 1 in Yokohama, Japan.

The study’s co-authors are Aditya Vashistha, assistant professor of information science in the Bowers College of Computing and Information Science; Ph.D. student Sharon Heung of Cornell Tech; and Lucy Jiang, M.S. ’24, now a Ph.D. student at the University of Washington.

Researchers conducted interviews and focus groups with social media users with disabilities. The participants tried out different designs for AI content moderation systems, which varied in how they labeled and presented speech that is ableist, or discriminatory toward disabled people.

The study found that social media users with disabilities strongly preferred moderation systems that labeled content by the “type” of ableist language over a sensitivity slider that hid content based on the perceived “intensity” of the hate.

“This distinction is crucial: The most overt or extreme examples of ableist language aren’t always the most harmful or triggering,” Heung said. “As highlighted by our participants, sometimes subtler or more insidious forms of ableism can cause deeper, more lasting harm.”
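To make that distinction concrete, here is a minimal sketch, in Python, of the two designs as the article describes them: an intensity slider that hides posts above a toxicity threshold, and a type-based filter that labels content by category. The post data, scores and category names are hypothetical illustrations, not the study’s actual prototypes.

```python
# Hedged sketch: contrasts an intensity slider with type-based labeling.
# All data and category names below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    toxicity: float           # score from a hypothetical classifier, 0.0-1.0
    ableism_types: list[str]  # e.g. ["associates disability with inability"]

def slider_moderation(post: Post, sensitivity: float) -> str | None:
    """Intensity-based design: hide anything scored above the user's slider."""
    return None if post.toxicity >= sensitivity else post.text

def type_based_moderation(post: Post, hidden_types: set[str]) -> str:
    """Type-based design: label *why* a post was flagged and put it behind a
    warning, but only for categories the user has chosen to filter."""
    flagged = [t for t in post.ableism_types if t in hidden_types]
    if flagged:
        return f"[Content warning: {', '.join(flagged)}] (tap to view)"
    return post.text

post = Post(
    text="Example post containing a subtle ableist trope.",
    toxicity=0.35,  # low score: a slider set at 0.7 shows it unfiltered
    ableism_types=["associates disability with inability"],
)

print(slider_moderation(post, sensitivity=0.7))   # shown as-is, despite the trope
print(type_based_moderation(post, {"associates disability with inability"}))
```

The sketch illustrates the participants’ point: a subtle trope can score well below an intensity threshold and slip through a slider, while a type-based design still surfaces and explains it.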

The study also highlighted a recurring belief among participants: Social media platforms do not take disability hate speech and ableism seriously. Specifically, participants expressed distrust in AI-based moderation due to past negative experiences and the subjective nature of what constitutes “ableism.”

“Models may flag a neutral sentence as more toxic simply because it includes a disability-related term,” Heung said. “More work needs to be done with the disability community to ensure the accuracy of LLMs and to ensure that these tools are usable in practice.”

In their paper, the researchers advocate for the development of more accurate, “context-aware” tools for detecting and addressing ableist content. AI programs known as large language models (LLMs) could offer a promising solution for social media platforms overwhelmed by the volume of content uploaded daily. However, collaboration with the disability community is necessary to ensure these tools work as intended.
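As one rough illustration of what such a tool might look like, the sketch below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library as a stand-in for the LLM-based, context-aware detection the paper envisions. The category labels are assumptions made for illustration, not the study’s taxonomy, and any real system would need to be built and evaluated with the disability community.

```python
# Hedged sketch: category-level detection instead of a single toxicity score.
# The labels below are illustrative, not the taxonomy used in the study.

from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

ABLEISM_CATEGORIES = [
    "associates disability with inability",
    "promotes eugenicist ideas",
    "uses a disability slur",
    "neutral mention of disability",
]

def categorize(post_text: str) -> dict[str, float]:
    """Return a score per category rather than a single 'toxic / not toxic'
    flag, so an interface can explain *what kind* of ableism was detected."""
    result = classifier(post_text, candidate_labels=ABLEISM_CATEGORIES,
                        multi_label=True)
    return dict(zip(result["labels"], result["scores"]))

print(categorize("I can't believe he got the job despite his disability."))
```

Reporting scores per category, rather than one opaque flag, is what would let an interface explain its decisions in the way participants asked for.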

“If the goal is to reduce the emotional or psychological harm caused by encountering hate, then the design of these tools must reflect what users themselves find harmful or distressing, not what automated systems or external evaluators deem as most toxic,” said Heung, a Ph.D. student in the field of information science who led this work.

The researchers also recommend that platforms add the ability to undo and correct filtering errors, as well as “allowlists” that exempt trusted accounts from filtering.

Overall, the researchers recommend that AI filters move away from current designs that remove hateful content without explaining why, and toward customizable designs, such as content warnings for ableism, that promote online safety.
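A minimal sketch of how those recommendations might fit together in a user-facing filter follows; the data structures and function names are hypothetical, not the paper’s implementation.

```python
# Hedged sketch: allowlist of trusted accounts, content warnings instead of
# silent removal, and a log that lets the user review and undo decisions.
# All names and structures are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ModerationSettings:
    hidden_types: set[str] = field(default_factory=set)  # categories to warn about
    allowlist: set[str] = field(default_factory=set)      # accounts never filtered

@dataclass
class FilterDecision:
    author: str
    text: str
    reason: str | None  # None means the post was shown unmodified
    undone: bool = False

def moderate(author: str, text: str, types: list[str],
             settings: ModerationSettings, log: list[FilterDecision]) -> str:
    """Apply the user's settings, record the decision, and explain any warning."""
    flagged = [t for t in types if t in settings.hidden_types]
    if author in settings.allowlist or not flagged:
        log.append(FilterDecision(author, text, reason=None))
        return text
    reason = ", ".join(flagged)
    log.append(FilterDecision(author, text, reason=reason))
    return f"[Hidden behind a content warning: {reason}] (tap to view)"

def undo(decision: FilterDecision) -> str:
    """Let the user reverse a filtering error and see the original post."""
    decision.undone = True
    return decision.text
```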

“Social media platforms can adopt this approach to moderation for all kinds of hateful content, not just ableism,” Azenkot said.

Grace Stanley is a staff writer-editor for Cornell Tech.

Media Contact

Becka Bowyer
