Academic Research

Through opportunities at my school and the University of Maryland, I have had the privilege of conducting and publishing academic research in two key domains: K-12 educational equity and AI emotion extraction.

K-12 Educational Equity Research

Certain subgroups of students tend to be underrepresented in K-12 "gifted" programs for a variety of reasons, including biased teacher nominations and culturally insensitive standardized testing. Having witnessed this lack of minority representation myself as a gifted-and-talented (G/T) student, I was interested in identifying potential policy solutions.

In this literature review, I explore the causes of and potential solutions to this underrepresentation. I also interview professors and high school educators to gain a holistic understanding of the complexities surrounding identification of underrepresented students. Ultimately, I identify three key solutions: universal screening, local norming, and frontloading.

Publication: This paper has been accepted by the Journal of Student Research and is pending publication. It is also available here.


Context: I conducted my research at the University of Maryland, College Park under the guidance of Dr. Cody Buntain. My research was affiliated with the Human-Computer Interaction Lab and the Institute for Trustworthy AI in Law & Society (TRAILS). Our project began in June 2024. I found this opportunity by independently reaching out to Dr. Buntain, whose work on AI in society and membership in TRAILS closely aligned with my research interests. After corresponding via email, we met virtually, and he offered to mentor me through a research project.

AI Emotion Extraction Research

Background: Existing literature has found that social media algorithms tend to promote content that elicits negative reactions. Meanwhile, as AI image generation models grow increasingly popular, they rely heavily on social media content for training data. This raises a question: are AI image models being trained on data that is generally more negative in nature? To explore this question, I am co-authoring a paper under the mentorship of Dr. Cody Buntain (see Context above).

Summary: Our research has two objectives. First, to design and assess various empirical techniques for identifying emotions in images, such as image captioning and model fine-tuning. Second, to determine whether AI-generated images evoke emotions consistent with the prompts used to produce them. For example, given a prompt such as “A flowing waterfall on a lush mountainside,” would an AI-generated image evoke feelings like contentment, joy, and excitement, or might the model carry some internal emotional bias?

Results: We found that AI image generation models have a small yet significant bias toward negative emotion: on balance, images generated by AI models tend to be more negative than the prompts that produced them. This finding has significant social implications, particularly as AI-generated images are increasingly used on social media and in journalism.
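To make the "small yet significant" claim concrete: with paired prompt/image emotion labels, one standard way to test for a systematic shift is McNemar's test on the discordant pairs. The sketch below is purely illustrative; the counts are invented, and this is not necessarily the exact test used in our paper.

```python
# Illustrative significance check for a paired negativity shift, using
# McNemar's test on hypothetical counts (not our paper's actual data).
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table over (prompt negative?, image negative?) pairs:
# rows = prompt (no, yes), cols = image (no, yes).
table = [
    [700, 180],  # prompt not negative: image not negative / negative
    [120, 200],  # prompt negative:     image not negative / negative
]

# The test compares the discordant cells (180 vs 120): more
# prompt-positive/image-negative pairs than the reverse would indicate
# a negativity shift from prompt to image.
result = mcnemar(table, exact=False, correction=True)
print(f"statistic={result.statistic:.2f}, p-value={result.pvalue:.4f}")
```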

This research also has psychological relevance. As a technical question, we examined how different models and datasets classify emotions, and we found that current image-emotion datasets are misaligned with leading psychological emotion frameworks. We thus call for interdisciplinary studies and datasets that better incorporate psychological insights.

Publication: Our paper has been submitted to the 2025 ACM Web Science Conference and is available as an arXiv preprint.

Personal Contributions: The first phase of our study required the development of emotion identification techniques. I used existing libraries to evaluate our dataset with zero-shot classification techniques. I then preprocessed our datasets and used them to fine-tune and evaluate several transformer-based image classifiers that detect one of eight discrete emotions. Finally, I used ChatGPT and BLIP-2 to develop a captioning-based pipeline for emotion recognition.
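As a rough illustration of the zero-shot step, the sketch below scores an image against natural-language emotion labels with CLIP via the Hugging Face transformers pipeline. The checkpoint, the Mikels-style eight-emotion label set, and the label phrasing are illustrative assumptions, not necessarily the exact choices from our paper.

```python
# A minimal sketch of zero-shot image emotion classification, assuming
# the eight Mikels emotion categories common in image-emotion datasets.
from transformers import pipeline

EMOTIONS = [
    "amusement", "awe", "contentment", "excitement",  # positive
    "anger", "disgust", "fear", "sadness",            # negative
]

# CLIP compares the image against each label phrase without any
# task-specific training.
classifier = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

scores = classifier(
    "example.jpg",  # hypothetical path to an image from the dataset
    candidate_labels=[f"a photo evoking {e}" for e in EMOTIONS],
)
print(scores[0])  # highest-scoring emotion label with its probability
```

Wrapping each label in a caption-like template ("a photo evoking ...") tends to work better with CLIP than bare emotion words, since CLIP was trained on image-caption pairs.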

The second phase of our study required cross-modal analysis between textual prompts and AI-generated images. I identified a dataset (DiffusionDB) that met the study's requirements, then used existing text classifiers and the previously fine-tuned image classifier to evaluate emotional salience in the dataset's prompts and images. For both phases, I independently wrote the Python code and collected the data.
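The comparison itself can be sketched in a few lines, again with off-the-shelf stand-ins: a public text emotion classifier for the prompts and zero-shot CLIP in place of our fine-tuned image classifier. The specific checkpoints, the shared seven-label space, and the sample pair are assumptions for illustration.

```python
# A hedged sketch of the prompt-vs-image emotion comparison. The text
# model "j-hartmann/emotion-english-distilroberta-base" is a public
# stand-in whose seven labels define the shared label space; zero-shot
# CLIP approximates the fine-tuned image classifier described above.
from transformers import pipeline

LABELS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]
NEGATIVE = {"anger", "disgust", "fear", "sadness"}

text_clf = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)
image_clf = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

# Hypothetical (prompt, generated-image) pairs, e.g. drawn from DiffusionDB.
pairs = [
    ("a flowing waterfall on a lush mountainside", "waterfall_gen.png"),
]

neg_prompts = neg_images = 0
for prompt, image_path in pairs:
    prompt_label = text_clf(prompt, truncation=True)[0]["label"]
    image_label = image_clf(image_path, candidate_labels=LABELS)[0]["label"]
    neg_prompts += prompt_label in NEGATIVE
    neg_images += image_label in NEGATIVE

# If generated images skew more negative than the prompts that produced
# them, the second rate will be higher.
print(f"negative prompts: {neg_prompts}/{len(pairs)}, "
      f"negative images: {neg_images}/{len(pairs)}")
```

Using one label space for both modalities is the key design constraint here; otherwise, the prompt and image predictions would not be directly comparable.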

During the writing stage, I drafted the entire paper and abstract, and Dr. Buntain then provided edits, feedback, and supplemental data. I was first author on our paper. Overall, my contributions spanned the literature review (identifying relevant existing models and papers), data collection, writing, and modifications to the research design.