MARCH 17, 2025 — Researchers at the UTSA School of Data Science are working to make artificial intelligence (AI) answer medical questions more reliably. Their project, Reducing Hallucination of LLMs for Causality-Integrated Personal Medical History Question Answering, received a $35,000 grant in 2024 through the Collaborative Seed Funding Grant program, sponsored by the School of Data Science (SDS) and the Open Cloud Institute.
The research team is led by Ke Yang, an assistant professor of computer science, along with Anthony Rios, an assistant professor of information systems and cybersecurity, and Yuexia Zhang, an assistant professor of management science and statistics. All three are SDS Core Faculty members.
Their work focuses on reducing AI hallucinations, a term used to describe when AI confidently provides false or misleading information. Large language models (LLMs), such as ChatGPT, generate responses based on patterns in data, but they do not have real-world understanding. This means they can sometimes sound convincing while being completely wrong. The issue is made worse when AI learns from flawed, outdated or biased information.
Some AI mistakes are harmless — such as misidentifying Toronto as Canada’s capital — but others, particularly in health care, can be much more serious. A recent Harvard study found that many people preferred ChatGPT’s medical advice over responses from doctors, highlighting the need for more reliable AI in medicine.
Incorrect AI-generated medical advice could lead to misdiagnosed conditions or incorrect treatment recommendations. This makes it essential to improve accuracy.
“We found that there’s not much work targeting AI hallucinations in specialized fields, especially those with high risks such as health care, finance and employment,” Yang said.
Ke Yang discussed her research on AI hallucinations and large language models at the 2024 Los Datos Conference.
To improve AI accuracy, UTSA researchers are working on ways to give AI better context before it generates a response.
“We observed that AI sometimes gives incorrect answers because it doesn’t have enough background information on medical questions,” Yang said. “To address this, we proposed to extract previously known knowledge about diagnoses and practices to help AI think more logically. We then structured this information in a way that allows AI to process it more effectively using another AI system.”
The team is developing an AI model that can fact-check itself. To achieve this, they created a Causal Knowledge Graph (CKG), a structured representation that organizes cause-and-effect information drawn from trusted medical sources. This structure helps AI recognize connections between medical concepts, allowing it to provide more accurate and reliable answers.
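To make the idea concrete, here is a minimal sketch of what a causal knowledge graph can look like: directed cause-to-effect edges between medical concepts, which can then be traversed to surface related conditions. The triples below are illustrative placeholders, not data from the UTSA team's actual graph.

```python
# Minimal causal knowledge graph sketch: directed cause -> effect edges.
# The medical triples here are hypothetical examples for illustration only.
from collections import defaultdict

causal_triples = [
    ("smoking", "causes", "lung damage"),
    ("lung damage", "causes", "shortness of breath"),
    ("high blood sugar", "causes", "nerve damage"),
]

# Adjacency list: concept -> set of direct effects
ckg = defaultdict(set)
for cause, _, effect in causal_triples:
    ckg[cause].add(effect)

def downstream_effects(concept, graph):
    """Collect every effect reachable from a concept (transitive closure)."""
    seen, stack = set(), [concept]
    while stack:
        node = stack.pop()
        for effect in graph.get(node, ()):
            if effect not in seen:
                seen.add(effect)
                stack.append(effect)
    return seen

# All effects reachable from "smoking" in this toy graph
effects = downstream_effects("smoking", ckg)
```

A traversal like this is what lets a system connect a question about one concept to knowledge recorded about its causes or consequences, rather than relying on the language model's pattern-matching alone.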
By integrating this external data with the user’s original question, the model gains a better understanding of the context of the user’s question, making its answers more accurate. If successful, the team expects its model to generate hallucination-free answers, even in areas where the AI has received little to no prior training.
One challenge with using external data sources is ensuring AI pulls only the most relevant context for each question. To solve this, the team is also developing a system that filters information more effectively using subgraphs — smaller, targeted sections of data. These act like an index, helping the AI focus only on the most useful information instead of searching through everything it has learned.
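The filtering idea described above can be sketched as a simple neighborhood extraction: starting from the concepts mentioned in a question, keep only the nodes within a few hops and discard the rest. The graph, concept names, and hop limit below are assumptions for illustration, not the team's implementation.

```python
# Illustrative subgraph retrieval: keep only the slice of a knowledge
# graph near the question's concepts. All names here are hypothetical.
ckg = {
    "smoking": {"lung damage"},
    "lung damage": {"shortness of breath"},
    "high blood sugar": {"nerve damage"},
    "nerve damage": {"numbness"},
}

def question_subgraph(graph, seed_concepts, hops=2):
    """Return the part of the graph within `hops` steps of the seeds."""
    keep, frontier = set(seed_concepts), set(seed_concepts)
    for _ in range(hops):
        frontier = {e for n in frontier for e in graph.get(n, ())}
        keep |= frontier
    # Keep only edges whose endpoints both survived the filter
    return {n: graph[n] & keep for n in keep if n in graph}

# For a question about smoking, the unrelated "high blood sugar"
# branch is filtered out before anything is passed to the model.
sub = question_subgraph(ckg, ["smoking"], hops=2)
```

Handing the model this focused slice, instead of the whole graph, is what plays the role of an index: the irrelevant branches never compete for the model's attention.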
Beyond improving AI-generated medical answers, the team wants to create a benchmark database — a collection of hallucination-free question-and-answer pairs that could serve as a standard for other AI researchers. This resource would serve as a testing tool, allowing developers to evaluate AI models against verified data and improve overall performance across various applications.
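A benchmark of verified question-answer pairs could be used to score a model along these lines: compare the model's answers against the reference answers and report the fraction that match. The pairs and the scoring rule (normalized exact match) below are illustrative assumptions, not the team's benchmark.

```python
# Hedged sketch of benchmark-based evaluation with hypothetical QA pairs.
benchmark = [
    {"question": "What is the capital of Canada?", "answer": "Ottawa"},
    {"question": "Which vitamin does sunlight help the body produce?", "answer": "Vitamin D"},
]

def normalize(text):
    """Lowercase and collapse whitespace so trivial differences don't count."""
    return " ".join(text.lower().split())

def accuracy(model_answers, benchmark):
    """Fraction of model answers matching the verified reference answers."""
    hits = sum(
        normalize(given) == normalize(item["answer"])
        for given, item in zip(model_answers, benchmark)
    )
    return hits / len(benchmark)

# One correct answer ("ottawa") and one wrong one ("Vitamin C")
score = accuracy(["ottawa", "Vitamin C"], benchmark)
```

Real benchmarks typically use more forgiving scoring than exact match, but the principle is the same: a fixed set of verified answers gives every model the same yardstick.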
“Our project helps create open-source tools for researchers and develop new AI solutions that improve reliability in high-risk fields like health care,” Yang said.
Yang believes this research could extend beyond health care, improving AI accuracy in various fields.
“This work has the potential to bolster trust in AI and encourage people to use it for a variety of important applications,” she said.
By reducing hallucinations and improving reliability, UTSA researchers are making AI a safer and more trustworthy tool, particularly in areas where accuracy is critical.
— Christopher Reichert
UTSA Today is produced by University Communications and Marketing, the official news source of The University of Texas at San Antonio. Send your feedback to news@utsa.edu.