AI Emotion Extraction Research
With the advent of social media, research has found that content eliciting negative reactions tends to be promoted more heavily by social media algorithms. At the same time, increasingly powerful AI image generation models rely on social media content as a source of training data. This raises the question: Are AI image models being trained on data that is generally more negative in nature?
To explore this question, I'm co-authoring a paper with Dr. Cody Buntain, a professor at the University of Maryland, College Park. Our work has two primary objectives.
The first is to implement and assess various techniques for identifying the emotions evoked by images, such as image captioning and custom-trained AI models. The second is to determine whether AI-generated images evoke emotions consistent with their prompts, or whether some underlying bias carries over from the models' training.
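To give a concrete flavor of the captioning-based route, here is a minimal sketch that captions an image and then classifies the caption's emotion. The two-step caption-then-classify setup and the specific Hugging Face checkpoints (Salesforce/blip-image-captioning-base and j-hartmann/emotion-english-distilroberta-base) are illustrative assumptions for this example, not necessarily what the paper uses.

```python
from transformers import pipeline

# Step 1: caption the image; Step 2: classify the caption's emotion.
# These checkpoints are illustrative public models, not necessarily the
# ones used in the paper.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
emotion_clf = pipeline("text-classification",
                       model="j-hartmann/emotion-english-distilroberta-base",
                       top_k=None)  # return a score for every emotion label

def emotions_from_image(image_path):
    """Caption an image, then rank the emotions expressed by the caption."""
    caption = captioner(image_path)[0]["generated_text"]
    result = emotion_clf(caption)
    # Depending on the transformers version, scores may come back nested.
    scores = result[0] if isinstance(result[0], list) else result
    return caption, sorted(scores, key=lambda s: s["score"], reverse=True)

caption, scores = emotions_from_image("example.jpg")  # hypothetical input file
print(caption)
for s in scores:
    print(f"{s['label']}: {s['score']:.3f}")
```

The appeal of this route is that it leans on mature text emotion classifiers; its drawback is that any emotional cue not captured in the caption is lost, which is part of why we also assess custom-trained image models.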
For this project, I personally trained the AI models and collected the data, working in Python. A key aspect of this research is balancing the technical nuances of AI classification with psychological theories of emotion. For example, we seek to identify whether images can evoke multiple emotions at once and how different models' classifications align with established psychological frameworks. In answering these questions, we hope to lay the foundation for a broader inquiry into the evocative nature of both the images these models produce and the data they are trained on, and into the implications for domains such as social media and journalism.
Our research is available as an arXiv preprint.