Do LLMs authentically represent affective experiences of people with disabilities on social media?
Marco Bombieri; Marco Rospocher
2025-01-01
Abstract
This paper investigates how Large Language Models (LLMs) represent the affective experiences of individuals with disabilities on social media. We simulate posts using LLMs and compare them to authentic user-generated content in English, collected from disability-related subreddits, focusing on sentiment, emotion, and indicators of depression. Our analysis reveals that LLMs tend to produce overly positive and idealized portrayals, often failing to capture the complexity and nuance of disabled individuals' emotional expressions. These misrepresentations underscore broader concerns about the limitations of LLMs in authentically reflecting the lived experiences of marginalized communities.
| File | Access | Licence | Size | Format |
|---|---|---|---|---|
| 8_main_long.pdf | Open access | Creative Commons | 366.2 kB | Adobe PDF |
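
The record does not include the paper's analysis code. As a rough illustration of the kind of comparison the abstract describes, the sketch below scores two small hand-written lists of posts with NLTK's VADER sentiment analyser; the tool choice, the helper `mean_compound`, and the example posts are assumptions for illustration only, not the authors' method or data.

```python
# Minimal sketch: compare average sentiment of authentic vs. LLM-simulated posts.
# VADER is used here only as a stand-in scorer; the paper's actual tools,
# datasets, and metrics are not specified in this record.
from statistics import mean

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()


def mean_compound(posts):
    """Average VADER compound score (-1 = very negative, +1 = very positive)."""
    return mean(sia.polarity_scores(p)["compound"] for p in posts)


# Hypothetical toy examples, not data from the paper.
authentic_posts = [
    "Some days the chronic pain wins and I can't get out of bed.",
    "Finally found an accessible gym, but the commute is exhausting.",
]
simulated_posts = [
    "Every day is a new opportunity to overcome challenges and inspire others!",
    "Grateful for my journey; my disability has only made me stronger.",
]

print("authentic mean sentiment:", round(mean_compound(authentic_posts), 3))
print("simulated mean sentiment:", round(mean_compound(simulated_posts), 3))
```

A noticeably higher average score for the simulated posts would be consistent with the overly positive, idealized portrayals reported in the abstract.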