Public Skepticism Grows Over AI Chatbots as Health Information Sources

by Krystal

In this edition of KFF’s bi-weekly Monitor, we delve into the growing concerns surrounding the reliability of AI chatbots as sources of health information. We analyze public opinion based on recent KFF surveys, examine examples of AI-generated election misinformation, and share our experiences querying AI chatbots on health-related topics. We also discuss the challenges in safeguarding against inaccuracies in AI-generated content.

AI Chatbots and Public Trust: A Widening Gap

With the increasing integration of artificial intelligence into consumer-facing platforms, the latest KFF Health Misinformation Tracking Poll reveals a significant divide in public trust regarding AI’s role in health information. The survey found that two-thirds of adults have interacted with AI in some form. However, when it comes to the accuracy of information provided by AI chatbots, the public remains skeptical. Over half of the respondents (56%), including many who use AI regularly, expressed doubts about their ability to distinguish between accurate and misleading information generated by these tools.

Health Information: Trust in AI Chatbots Remains Low

The KFF poll highlights that while some consumers trust AI for everyday questions about cooking and technology, trust diminishes sharply when it comes to health information. Only 29% of adults believe AI chatbots can provide reliable health advice, and even fewer (19%) trust them with political information. This skepticism reflects broader concerns about AI’s potential to spread misinformation, especially in the critical area of health.

Evolving AI Chatbots: Improvements and Persistent Challenges

AI chatbots are continuously evolving to address concerns about accuracy and reliability. Recent updates have focused on improving the ability of these models to cross-reference information from multiple reliable sources and detect inconsistencies. However, the effectiveness of these updates varies among different AI platforms.

To gauge the progress of these AI tools, we conducted a series of tests using three well-known AI chatbots: ChatGPT, Google Gemini (formerly Google Bard), and Microsoft Copilot (formerly Bing Chat). Our findings reveal that while these chatbots have become more assertive in addressing false claims, they still differ significantly in how they handle complex health issues.
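For readers who want to run a comparable test themselves, the sketch below shows one way such a comparison might be automated. It is our illustration, not KFF’s methodology: KFF’s queries were entered through each chatbot’s consumer interface, and the prompt, model name, and placeholder functions here are assumptions for demonstration only.

```python
# Illustrative harness for posing the same health question to several
# chatbots and collecting the answers side by side. Assumptions are
# flagged in comments; only the OpenAI call reflects a real public SDK.

from openai import OpenAI  # pip install openai (v1.x)

PROMPT = "Is ivermectin an effective treatment for COVID-19?"

def ask_chatgpt(prompt: str) -> str:
    """Query ChatGPT via the OpenAI API (requires OPENAI_API_KEY)."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whichever is current
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_gemini(prompt: str) -> str:
    """Placeholder: swap in a call to Google's Gemini API, or paste a
    transcript captured from the consumer interface."""
    raise NotImplementedError("no Gemini access configured in this sketch")

def ask_copilot(prompt: str) -> str:
    """Placeholder: Microsoft Copilot lacks a general public API, so
    responses would typically be collected by hand."""
    raise NotImplementedError("Copilot responses collected manually")

if __name__ == "__main__":
    for name, ask in [("ChatGPT", ask_chatgpt),
                      ("Gemini", ask_gemini),
                      ("Copilot", ask_copilot)]:
        try:
            print(f"--- {name} ---\n{ask(PROMPT)}\n")
        except NotImplementedError as note:
            print(f"--- {name} --- (skipped: {note})\n")
```

Asking each system the same fixed prompt, and repeating the exercise over time, is what makes drift such as ChatGPT’s 2024 shift toward more decisive answers observable.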

For instance, when asked about the use of ivermectin as a COVID-19 treatment, a claim unsupported by clinical evidence, ChatGPT initially offered a cautious response, acknowledging ongoing debate rather than refuting the claim outright. In contrast, Google Gemini and Microsoft Copilot were more direct in labeling the claim as false. By 2024, ChatGPT had become more decisive, yet it continued to hedge on certain statements, particularly those related to firearms.

Challenges in Source Citation and Transparency

A critical issue in AI-generated content is the inconsistency in how these chatbots cite sources. While some chatbots reference scientific evidence, the lack of specific citations or the provision of inaccurate details undermines their credibility. For example, ChatGPT often mentioned scientific consensus without citing particular studies, whereas Google Gemini and Microsoft Copilot provided more concrete references, though not always accurately or with direct links to original research.
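To make the distinction concrete, here is a minimal heuristic, our own sketch rather than anything the chatbot vendors provide, for flagging whether a response contains checkable citations: it simply scans the text for URLs and DOI strings, which vague appeals to unnamed "scientific consensus" lack.

```python
# Minimal heuristic sketch: flag whether a chatbot response contains
# concrete, checkable citations (URLs or DOIs) versus only a vague
# appeal to consensus. The regex patterns are deliberate simplifications.
import re

URL_RE = re.compile(r"https?://\S+")
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")

def citation_signals(response: str) -> dict:
    """Return rough evidence of sourcing in a chatbot response."""
    return {
        "urls": URL_RE.findall(response),
        "dois": DOI_RE.findall(response),
        "vague_consensus": "scientific consensus" in response.lower(),
    }

# A response that names no study scores empty on both lists:
print(citation_signals("The scientific consensus supports vaccination."))
# {'urls': [], 'dois': [], 'vague_consensus': True}
```

A link is, of course, no guarantee of accuracy; as noted above, even the more concrete references from Gemini and Copilot were not always correct.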

Over time, the chatbots also changed their approach to citing public health institutions. Initially, ChatGPT was cautious, rarely mentioning specific agencies unless the topic was directly related to COVID-19. By 2024, it had expanded its references to cover a broader range of health topics. Conversely, Google Gemini began to generalize its references, while Microsoft Copilot consistently cited authoritative sources throughout the period.

The Bottom Line: Proceed with Caution

Our observations, though limited, suggest that while AI chatbots can offer a quick and convenient starting point for health information, they are not infallible. Users should be wary of the potential for misleading or incomplete information and are advised to cross-check chatbot responses with multiple trusted sources. As AI technology continues to evolve, staying informed about system updates is crucial, as these can significantly alter the accuracy of the information provided.

AI Chatbots and Election Misinformation: A Growing Threat

The World Economic Forum’s 2024 Global Risks Report identified AI-fueled misinformation as a major threat to global stability. The potential for AI chatbots to spread election-related disinformation was starkly demonstrated in a New York Times investigation, which showed how easily these tools could be manipulated to generate biased and misleading content. This capability poses significant risks, particularly in the context of the upcoming U.S. elections, where the spread of false information could have serious consequences.

In a related development, five Secretaries of State issued an open letter to Elon Musk, urging immediate changes to the AI chatbot Grok after it disseminated false information about Kamala Harris’s eligibility for the 2024 presidential ballot. The letter emphasized the need for accurate election information and called for directing users to trusted resources.

AI in Disinformation Campaigns: The Case of the Paris Olympics

AI has also been implicated in a Russian disinformation campaign targeting the 2024 Paris Olympics. The campaign used AI-generated content to spread false narratives, including a viral video that painted Paris as a city in decline. The campaign further amplified false claims about Algerian boxer Imane Khelif, reflecting the growing use of AI in global disinformation efforts.

Research Updates: Safeguards and Health Disinformation

A recent study published in BMJ highlighted inconsistent safeguards in AI chatbots, particularly in preventing the generation of health disinformation. The study found significant variability in how different models handled complex health queries, and it raised concerns about how transparent AI developers are regarding the safeguards they have put in place.

In another study, ChatGPT’s latest updates were shown to improve the accuracy of responses to vaccine-related myths. However, the research also emphasized the need for caution, as even the most advanced AI models are not immune to errors and should be supplemented by expert advice.

Conclusion

As AI continues to permeate various aspects of daily life, including health and politics, it is essential to approach these tools with a critical eye. While they offer convenience and speed, the risks of misinformation and the evolving nature of AI demand that users remain vigilant and informed.
