AI Emotion Extraction Research
Background: Existing literature has found that social media algorithms tend to promote content that elicits negative reactions. At the same time, increasingly popular AI image generation models rely heavily on social media content for training data. This raises a question: are AI image models being trained on data that is generally more negative in nature? To explore this question, I'm co-authoring a paper under the mentorship of Dr. Cody Buntain (See: Context).
Summary: Our research has two objectives. First, to design and
assess various empirical techniques to identify emotions in images, such as
image captioning and model fine-tuning. Second, to determine whether
AI-generated images evoke emotions consistent with the prompts used to produce
them. For example, given a prompt such as “A flowing waterfall on a lush
mountainside,” would an AI-generated image evoke feelings like contentment,
joy, and excitement, or might it have some internal emotional bias?
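One of the techniques mentioned above, image captioning, can be sketched as a two-stage pipeline: caption the image, then classify the caption's emotion with a text model. The sketch below uses toy stand-in functions in place of real models (e.g., a BLIP-style captioner and a text emotion classifier); the function names and keyword heuristic are illustrative assumptions, not the paper's actual implementation.

```python
def emotion_from_image(image, caption_fn, classify_fn):
    """Two-stage pipeline: describe the image in text, then
    classify the emotion of that description."""
    caption = caption_fn(image)
    return classify_fn(caption)

# Toy stand-ins for real models -- purely illustrative:
def toy_caption(image):
    # A real captioner would generate this from pixels.
    return "a waterfall on a green mountainside"

def toy_classify(text):
    # A real classifier would be a trained text-emotion model.
    positive = {"waterfall", "green", "sunny"}
    hits = sum(word in positive for word in text.split())
    return "joy" if hits >= 2 else "neutral"

print(emotion_from_image("img.png", toy_caption, toy_classify))  # joy
```

Swapping the stand-ins for real captioning and emotion-classification models gives the full pipeline without changing the surrounding logic.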
Results: We found that AI image generation models have a small yet significant bias towards NEGATIVE emotion. On balance, images generated by AI models tend to be more negative than the prompts used to produce them. This finding has notable social implications, particularly as AI-generated images are increasingly used in social media and journalism.
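One way such a bias can be tested is a paired comparison: score the negativity of each prompt and of its generated image, then ask whether the image scores are systematically higher. The sketch below runs a one-sided sign test on hypothetical negativity scores; the scores and the test choice are illustrative assumptions, not the paper's actual method or data.

```python
from math import comb

def sign_test_one_sided(pairs):
    """One-sided sign test: are image negativity scores
    systematically higher than their prompts' scores?"""
    diffs = [img - prompt for prompt, img in pairs]
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    n_pos = sum(1 for d in nonzero if d > 0)
    # P(X >= n_pos) under Binomial(n, 0.5), the no-bias null
    p = sum(comb(n, k) for k in range(n_pos, n + 1)) / 2 ** n
    return n_pos, n, p

# Hypothetical negativity scores in [0, 1] as (prompt, image) pairs:
pairs = [(0.10, 0.22), (0.05, 0.11), (0.30, 0.41),
         (0.20, 0.28), (0.15, 0.33), (0.08, 0.19)]
n_pos, n, p = sign_test_one_sided(pairs)
print(n_pos, n, p)  # 6 of 6 images scored more negative; p = 1/64
```

A real analysis would use many more pairs and a more powerful paired test, but the structure of the comparison is the same.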
This research also has psychological relevance. While examining, as a technical question, how different models and datasets classify emotions, we found that current image-emotion datasets are misaligned with leading psychological emotion frameworks. We thus call for interdisciplinary studies and datasets that better incorporate psychological insights.
Publication: Our paper has been submitted to the 2025 ACM Web Science Conference and is available as an arXiv preprint.