Tips for using AI to find cancer information
July 10, 2025
Medically Reviewed | Last reviewed by Caroline Chung, M.D., and Shawn Stapleton, Ph.D., on July 10, 2025
Artificial Intelligence (AI) is changing the way many of us find information online. Only a few years ago, a web search would pull up pages of results for you to comb through. But today, the same search may pull up an AI overview summarizing the results right at the top of the page. Or maybe you ask your AI assistant a question, and within seconds, paragraphs of answers fill your screen. Handy, right?
While searching online may feel as robust as ever, AI is still a relatively new tool. This means there are times when AI gets it wrong, perhaps recommending an incorrect phone number or pulling up the results of a since-debunked study.
So, how should we navigate these challenges, especially when searching for medical information and topics such as cancer treatment?
We asked MD Anderson experts how to find reputable medical information online in the ever-expanding world of AI.
Key takeaways:
- Confirm health information you learn online with your care team.
- Verify information from AI tools by visiting source websites and referencing other trusted websites.
- Be cautious about sharing original content or medical information that hasn't been de-identified with AI tools.
- Use AI as a starting point for your online research, not your only source of information.
Don't assume everything you read online is correct
If it seems like AI has all the answers, that's by design.
"AI gives you this beautifully phrased definitive answer. It makes it more likely that you psychologically feel like you could actually trust it," says Caroline Chung, M.D., MD Anderson's chief data & analytics officer.
But that doesn't mean you should take AI information as absolute truth. Text-based AI tools, like ChatGPT, use large language models (LLMs) that essentially work by making predictions, not guarantees.
"When you ask it a question, it is essentially looking at the highest probability next set of words — or tokens, in technical speak — that are related to the conversation so far. The algorithm also is fine-tuned using human feedback to align predicted responses with what people find helpful and accurate. This helps reduce mistakes, but does not eliminate them," says Shawn Stapleton, Ph.D., director of Data Impact & Governance at MD Anderson.
AI is trained using large amounts of data — both correct and incorrect — from across the internet, including websites, open-source articles, textbooks and more. But knowing what content was used and when it was last updated is another story, making it difficult to understand what the models have truly "learned," Stapleton says.
So, what does that mean for AI users?
"All outputs from AI models should be evaluated critically and taken with a grain of salt — or sometimes a handful of salt," Chung says. "It's good for people to explore and try out this emerging technology. At the same time, you have to be aware that this is still emerging technology, and, in parallel, we are all learning to effectively and safely use this technology while it is actively advancing and maturing."
Use AI tools that provide sources — and visit their websites directly
While AI might seem like a time saver at first, it's important to verify information — especially medical information — no matter how confidently it is given to you.
"Trust, but verify," Stapleton says.
One way to do this? Use AI tools that provide sources. Then, visit the source website directly to confirm the information is correct and current.
"If you're using a tool that doesn't provide its sources of data, it's probably not the best tool to use. If it does provide sources, click on the links and follow them. That's a really great strategy," Stapleton says.
He notes that while you can ask AI tools to provide sources after you ask a question, those sources aren't always accurate.
"In some cases, the sources that are provided may be inaccurate and even completely made up," he says.
Another way Stapleton recommends verifying AI information is a good old-fashioned web search, this time without AI assistance. This allows you to fact-check AI information against reputable sources, such as published literature or health care, educational or governmental resources.
RELATED: How to find trustworthy cancer information online
Talk to your care team about what you learn
If you use AI for medical information or to learn more about your cancer treatment, speak to your care team to confirm it is correct. Cancer treatment looks different for each person. Your care team can help you make sense of the information you are seeing and ensure it applies to you.
"When it comes to medical information, especially when it comes to cancer, making sure that you are checking with your doctor and care team to get validated information is really important," Chung says. "That's true for any searches on the internet, with or without using AI."
RELATED: 3 things to know about AI and cancer care
Our experts also caution against using general-purpose LLM tools to self-diagnose. While AI tools can sometimes correctly diagnose common illnesses, they aren't always right — especially when it comes to a topic as individualized and complex as cancer.
"While it can be correct some of the time, it may be very wrong other times," Chung says. "And when it comes to cancer and cancer-related topics, there is less literature and more complexity, increasing the risk that incorrect information may be communicated."
While Stapleton notes that AI can help you distill complex topics into simpler terms, he emphasizes that your care team should be your go-to for individualized medical information like your own diagnosis and treatment plan.
"LLMs, like ChatGPT, are not approved medical technologies," he says.
How you ask a question can influence the answer
Another obstacle specific to using LLMs for medical information? Phrasing. How you ask a question is really important and can even alter the answer you get.
Take, for example, the question 'Should I eat rocks?' Sounds like a funny thing to search online, right? And, of course, AI will respond by telling you not to eat rocks. But rephrase that question as 'How many rocks should I eat a day?' and AI may presume you intend to eat rocks and try to give you an answer, Chung says.
You probably wouldn't pour yourself a bowl of rocks for breakfast, even if AI told you otherwise. But this example illustrates the problem with taking AI information as absolute truth. These consequences can be especially serious when health is involved.
The AI program you use may also influence the data AI provides. This means two people could both ask AI the same question and get different answers, Stapleton notes.
AI also considers the tone in which you phrase a question. "LLMs can also mirror your intent. If you ask with optimism, you will likely get an uplifting answer. If you're asking with negativity, it may reflect that as well," Stapleton says.
While AI is sometimes touted as an emotionally intelligent companion or placeholder for a health care professional, it's important to remember that the only information AI knows about you is the information you tell it.
"It's unable to consider your mental state, your history or any other personal circumstances when it is providing a response," Stapleton says.
More detailed questions may prompt more detailed responses
When you ask AI a question, getting specific and providing details may result in a more tailored response.
Asking follow-up questions can also help you gain deeper context on a topic. Here are a few examples of questions Stapleton notes may help broaden your understanding of medical information:
- Can you show me websites and publications that support this information?
- How recent is this information?
- Is this information consistent with guidelines from the National Cancer Institute or other leading cancer organizations?
Be cautious with your personal information
Asking a detailed question shouldn't mean sharing private medical information or uploading medical records, however.
Before sharing information with AI, Chung recommends asking yourself if you'd be comfortable telling a stranger the same thing.
"If you put up your medical information without de-identifying it, it's as if you're meeting 1,000 people or 1 million people on the street, and they all get to see," she says.
Likewise, there is no guarantee that information you share with AI will be private. So, use caution before sharing any original material or information you wouldn't want a stranger to see — let alone millions of strangers.
Don't use AI as your only source of information
Another good rule of thumb? Think of AI as a stranger giving you advice, Chung says. While they may make good points for you to consider, that single conversation probably wouldn't be the only research you'd do while trying to make an important decision.
"If you're asking finance questions to some random person who looks successful on the street, would you go and remortgage your house to invest in that? Probably not, right?" she says.
The bottom line? Treat AI as one source of information, not your only one.
Request an appointment at MD Anderson online or call 1-877-632-6789.