🔬

MY FURI
EXPERIENCE

Research during Fall 2024
at DMML with Prof. Liu

Generative AI has transformed human-computer interaction, but inherent biases, hallucinations, and misinformation remain significant barriers to safe and reliable adoption. In March 2024, with guidance from Prof. Trowbridge through FSE 150, I contacted Prof. Huan Liu and his team at ASU's Data Mining and Machine Learning Lab (DMML) to pursue research in this area.

My FURI project focused on the ambiguity problem in Generative AI models. Human language is inherently ambiguous, and this poses a significant challenge to Large Language Models (LLMs): they often struggle with the uncertainties of human communication, leading to inaccuracies, hallucinations, and biased responses. I began with a literature review to understand the current state of the art, and I attended a talk by Stanford Professor Edward Chang, hosted by Prof. Liu, to learn more about ongoing work in the field. I then labelled a dataset of 1000+ ambiguous questions and used it to benchmark how existing approaches to Generative AI handle ambiguous questions. With the help of Prof. Liu and Dr. Amrita Bhattacharjee, I developed techniques to improve performance on ambiguous questions, including task-specific fine-tuning and chain-of-thought prompting. Our results supported the hypothesis that simple, training-free, token-level disambiguation methods can effectively improve LLM performance on ambiguous question answering tasks. In December 2024, we published a paper at the IEEE BigData Conference, and it has earned 2+ citations as of February 2025.
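To give a concrete sense of what a training-free, prompt-based disambiguation step can look like, here is a minimal Python sketch: it first asks the model to rewrite an ambiguous question so that vague words or references are made explicit, then answers the rewritten question, and compares that with answering the original question directly. The model name, prompts, and helper functions here are illustrative assumptions for this page, not the exact pipeline from the paper.

```python
# Hypothetical sketch of training-free, prompt-based disambiguation for
# ambiguous question answering. Model name, prompts, and function names
# are illustrative assumptions, not the paper's exact pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model choice

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def answer_directly(question: str) -> str:
    """Baseline: answer the question as-is, ambiguity included."""
    return ask(f"Answer concisely: {question}")

def answer_with_disambiguation(question: str) -> str:
    """Training-free variant: make ambiguous terms explicit, then answer."""
    rewritten = ask(
        "Rewrite the following question so that any ambiguous words or "
        f"references are made explicit, without changing its intent:\n{question}"
    )
    return ask(f"Answer concisely: {rewritten}")

if __name__ == "__main__":
    q = "When did he win his first title?"  # ambiguous: who is "he"?
    print("Direct:        ", answer_directly(q))
    print("Disambiguated: ", answer_with_disambiguation(q))
```

Comparing the direct and disambiguated answers over a labelled set of ambiguous questions mirrors the kind of evaluation described in the paper, where off-the-shelf and few-shot performance is measured with and without explicit disambiguation.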

Research Poster

From our published paper (Keluskar et al., IEEE BigData 2024): "Ambiguity in natural language poses significant challenges to Large Language Models (LLMs) used for open-domain question answering. LLMs often struggle with the inherent uncertainties of human communication, leading to misinterpretations, miscommunications, hallucinations, and biased responses. This significantly weakens their ability to be used for tasks like fact-checking, question answering, feature extraction, and sentiment analysis. Using open-domain question answering as a test case, we compare off-the-shelf and few-shot LLM performance, focusing on measuring the impact of explicit disambiguation strategies. We demonstrate how simple, training-free, token-level disambiguation methods may be effectively used to improve LLM performance for ambiguous question answering tasks. We empirically show our findings and discuss best practices and broader impacts regarding ambiguity in LLMs."

📄

RESEARCH
ABSTRACT

💭

REFLECTION

Working under Prof. Liu provided a rigorous foundation in AI safety and the responsible development of generative systems. Beyond the technical training, FURI integrated me into the DMML research community through lab meetings, dissertation defenses, and collaborative discussions that shaped my understanding of academic research culture.

This project shaped my professional direction and has already opened doors in internship interviews and job applications. I'm grateful for the mentorship and excited to continue working at the intersection of AI and Security through GCSP.

My GCSP theme is Security. This FURI project directly addresses AI security: when LLMs produce confident but incorrect answers to ambiguous questions, the downstream consequences include misinformation propagation, unreliable fact-checking, and compromised decision-making in safety-critical applications. By developing training-free disambiguation methods, this research contributes to making generative AI systems more trustworthy and reliable. The experience also deepened my understanding of current advancements at the intersection of AI and security, preparing me for continued work in this space.

🔒

RELATION
TO MY
THEME

Want to connect?
My inbox is always open!

contact@aryankeluskar.com
aryankeluskar.com