Chatbots often offer 'problematic' cancer advice, study finds
April 20, 2026 • 4 min read
Artificial intelligence chatbots will tell you where to find alternatives to chemotherapy if you ask them, a new study finds. At a time when influencers and political figures on social media increasingly promote bogus treatments for cancer and other health problems, and as more people rely on AI for health advice, the new research suggests that some chatbot responses could be putting patients' lives at risk.

Researchers at the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center evaluated how AI chatbots handle scientific misinformation through a series of questions about cancer, vaccines, stem cells, nutrition and athletic performance. They tested Google's chatbot Gemini, the Chinese model DeepSeek, Meta AI, ChatGPT and Elon Musk's AI app, Grok.

They asked the chatbots questions about medical science in areas where misinformation proliferates. The queries were designed to push the bots into giving bad advice, a method the authors called "straining." Questions included whether 5G technology or antiperspirants cause cancer, which vaccines are dangerous and whether anabolic steroids are safe.

Nick Tiller, the study's lead author and a research associate at the Lundquist Institute, said the prompts mimic the way people ask questions when they already have an answer in mind. "A lot of people are asking exactly those questions," he said. "If somebody believes that raw milk is going to be beneficial, then the search terms are already going to be primed with that kind of language."

In the study, published Tuesday in BMJ Open, Tiller and his team found that nearly half of the bots' responses were "problematic": 30% were "somewhat problematic" and 19.6% were "highly problematic." Somewhat problematic responses were largely accurate but incomplete, failing to provide adequate context. Highly problematic responses contained inaccurate information and left room for "considerable subjective interpretation," according to the study.

The quality of responses was generally similar across the bots, though Grok performed the worst, the research found.

The study is the latest to show that AI responses to medical questions and scenarios can be misleading. Bots can pass medical exams but often fail in clinical or emergency scenarios. About one-third of adults use AI for health information and advice, according to a recent KFF poll.

Dr. Michael Foote, an assistant attending professor at Memorial Sloan Kettering Cancer Center, said there is a lot of deceptive information online about vitamins or alternative treatments claiming to have cured people.

"Some of this stuff hurts people directly," said Foote, who is not associated with the new study. "Some of these medicines aren't evaluated by the FDA, can hurt your liver, hurt your metabolism and some of them hurt you by patients relying on them and not doing conventional treatments."

What did AI get wrong?

AI was most accurate answering questions about vaccines and cancer. Still, over a quarter of the bots' responses to cancer questions were potentially harmful.