🔬 MY FURI
Generative AI has transformed human-computer interaction, but
inherent biases, hallucinations, and misinformation remain
significant barriers to safe and reliable adoption. In
March 2024, with guidance from Prof. Trowbridge through
FSE 150, I contacted Prof. Huan Liu and his team at ASU's
Data Mining and Machine Learning Lab (DMML) to pursue
research in this area.
📄 RESEARCH

From our published paper (Keluskar et al., IEEE BigData 2024): "Ambiguity in natural language poses significant challenges to Large Language Models (LLMs) used for open-domain question answering. LLMs often struggle with the inherent uncertainties of human communication, leading to misinterpretations, miscommunications, hallucinations, and biased responses. This significantly weakens their ability to be used for tasks like fact-checking, question answering, feature extraction, and sentiment analysis. Using open-domain question answering as a test case, we compare off-the-shelf and few-shot LLM performance, focusing on measuring the impact of explicit disambiguation strategies. We demonstrate how simple, training-free, token-level disambiguation methods may be effectively used to improve LLM performance for ambiguous question answering tasks. We empirically show our findings and discuss best practices and broader impacts regarding ambiguity in LLMs."
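At a high level, the training-free disambiguation the abstract describes can be pictured as a prompt-based rewriting step placed in front of the question-answering call. The sketch below is only an illustration of that idea, not the paper's implementation: the openai Python client, the model name, and the prompt wording are all assumptions made for the example.

# Illustrative sketch of a training-free disambiguation step before answering.
# Assumes the `openai` Python package; the model name and prompts are
# placeholders, not the ones used in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def disambiguate(question: str) -> str:
    """Ask the model to rewrite an ambiguous question into an explicit one."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Rewrite the user's question so it has a single, "
                        "unambiguous interpretation. Return only the rewrite."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

def answer(question: str) -> str:
    """Answer the (already disambiguated) question with a plain QA prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "Answer the question concisely."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    ambiguous = "When did he win the championship?"
    clarified = disambiguate(ambiguous)
    print("Rewritten question:", clarified)
    print("Answer:", answer(clarified))

In this kind of setup, the ambiguous question is first rewritten into one explicit interpretation and only then answered, which is one simple way a pre-answering disambiguation step might be wired together without any model training.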
💭 REFLECTION
Working under Prof. Liu provided a rigorous foundation in
AI safety and the responsible development of generative systems.
Beyond the technical training, FURI integrated me into the DMML
research community through lab meetings, dissertation defenses,
and collaborative discussions that shaped my understanding of
academic research culture.
🔒 RELATION

My GCSP theme is Security. This FURI project directly addresses AI security: when LLMs produce confident but incorrect answers to ambiguous questions, the downstream consequences include misinformation propagation, unreliable fact-checking, and compromised decision-making in safety-critical applications. By developing training-free disambiguation methods, this research contributes to making generative AI systems more trustworthy and reliable. The experience also deepened my understanding of current advancements at the intersection of AI and security, preparing me for continued work in this space.
Want to connect?
contact@aryankeluskar.com