Why AI is a dubious source of nutrition and health information (January 2024)
Carrie Dennett, MPH, RDN
It’s common to paint certain forms of technology as either good or bad, but in most cases it’s not about the technology itself, it’s about how it’s used. Take artificial intelligence, or AI. If you’ve ever asked Alexa a simple question only to get “Sorry, I don’t know that”—or maybe an answer to a totally different question—as a response, you may have scoffed at the idea of artificial “intelligence.” But while Alexa, Siri and other AI applications might legitimately make our lives better, or at least easier, other aspects of AI are concerning. One of those is the spread of misinformation about nutrition and health.
This was highlighted somewhat dramatically a few months ago when the National Eating Disorders Association’s (NEDA) AI-powered helpline chatbot “Tessa” began dispensing weight loss advice to people seeking help for eating disorders. One of Tessa’s developers said the weight loss advice the chatbot gave wasn’t part of the program her team worked on, and she doesn’t know how it got into Tessa’s repertoire.
Around the time of this backlash, I saw that AI-generated meal plans were trending, so I decided to look into it. I asked three AI sources for meal plans: the top result for the search query “AI meal planner,” Bard (Google’s AI experiment) and ChatGPT (the widely used AI chatbot). I requested three types of 7-day meal plans: vegan, low-FODMAP (for people with irritable bowel syndrome, aka IBS) and gluten-free. My goal was to assess usefulness, practicality and, most importantly, accuracy. The “AI meal planner” asked me my age, preferred cuisines, whether I had any food allergies, and what type of diet I wanted to follow. Bard and ChatGPT did not ask, and none of the three provided actual recipes.
Alarmingly, the “AI meal planner” gave me a vegan meal plan that included eggs and a lot of cheese, plus many meals with almost no protein (spaghetti with tomato sauce; pancakes with syrup; garden salad with vinaigrette). Bard and ChatGPT were a bit better, offering simple plans that included more specifics about primary ingredients (such as Bard’s “salad with quinoa, vegetables and a vinaigrette dressing”) and snacks. They also included more beans, lentils, nuts and tofu than the “AI meal planner,” although some meals were still low in protein, and some days were a bit low in fruits and vegetables.
The “AI meal planner’s” low-FODMAP diet was an epic failure. It included several high-FODMAP foods, sometimes all in the same meal. ChatGPT’s low-FODMAP meal plan specified lactose-free milk and yogurt (lactose is a FODMAP) but didn’t specify how much to eat of a few foods that are low-FODMAP only in smaller serving sizes. Bard’s plan was in the middle: It didn’t include any high-FODMAP foods, but it was so vague (“chicken or fish with roasted vegetables”) that you would have to know on your own which vegetables are low-FODMAP. Both Bard and ChatGPT included the disclaimer that you should consult with a registered dietitian. The gluten-free meal plans were similarly vague, largely leaving users to their own devices.
Basically, these plans were as impersonal as any you would find in a book or magazine. The lack of detail made some of the plans unhealthy for people with celiac disease or non-celiac gluten sensitivity, or for those who have IBS. And none of the tools asked any questions about pre-existing health conditions, including a history of eating disorders, which is especially important to consider in the context of a meal plan that excludes a number of foods or food groups.
It’s notable that ChatGPT includes this disclaimer: “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.” But people are looking to AI for advice and accurate information. Bad meal plans are bad enough, but what happens when AI hallucinates?
Hallucination is when AI provides a confident, plausible-sounding response that is based on…nothing. ChatGPT has made up fake legal cases, fake song lyrics, fake book titles, fake quotes, fake citations and fake news articles. There’s already a lot of questionable human-generated information about nutrition and health online, so what happens if AI starts hallucinating about the benefits of vitamin D, how to eat to reduce heart disease risk, or how to recover from an eating disorder? How would you know the article was AI-generated, and how would you know if it’s a hallucination?
(A note to dietitian bloggers who accept guest posts: have a plan either for screening out AI-generated submissions or for vetting the accuracy of the information they contain.)
The bottom line is that there’s not a lot of transparency about whether information online is AI-generated or human-generated. And poor-quality AI-generated self-published books on a variety of topics, including medicine and self-help, have been showing up on Amazon. That’s why it’s more important than ever to seek information from trusted, verifiable sources.
References:
Wells K. An eating disorders chatbot offered dieting advice, raising fears about AI in health. NPR. June 9, 2023. https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea
Hoover A. An Eating Disorder Chatbot Is Suspended for Giving Harmful Advice. Wired. June 1, 2023. https://www.wired.com/story/tessa-chatbot-suspended/
Weise K and Metz C. When A.I. Chatbots Hallucinate. The New York Times. May 1, 2023. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html
Hallucination (artificial intelligence). Wikipedia. https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Kugel S and Hiltner S. A New Frontier for Travel Scammers: A.I.-Generated Guidebooks. The New York Times. August 5, 2023. https://www.nytimes.com/2023/08/05/travel/amazon-guidebooks-artificial-intelligence.html