It’s well known that modern AI-based chatbots, such as Microsoft Copilot, still have significant limitations, and their recommendations shouldn’t be fully trusted. A study by researchers from Germany and Belgium confirmed these concerns by evaluating the bot’s ability to provide medical advice. The study found that Copilot’s responses were scientifically accurate in only slightly more than half of the cases, and some of its advice could even pose serious health risks or, in extreme cases, prove fatal.
Study Methodology and Findings
The researchers put to Copilot ten of the questions most frequently searched by U.S. residents about a range of popular medications, collecting a total of 500 responses, which were then analyzed for accuracy, completeness, and potential harm. Unfortunately, the results were less than encouraging, notes NIXSolutions.
The study reported, “As for accuracy, the AI’s responses did not correspond to established medical knowledge in 24% of cases, and 3% of the responses were completely incorrect. Only 54% of responses aligned with scientific consensus. In terms of potential harm, 42% of AI responses could lead to moderate or mild harm, and 22% could cause serious harm or even death. Only about a third (36%) of the responses were judged to be harmless.”
Caution Needed When Using AI for Medical Information
This research highlights the risks of relying on AI chatbots like Copilot for medical advice at this stage of their development. Although these tools may improve over time, they currently cannot replace the expertise of healthcare professionals. For accurate medical guidance, consulting a qualified doctor remains the safest and most reliable option.
We’ll keep you updated as these technologies evolve, but for now, it’s crucial to treat AI recommendations with caution, especially when health is concerned.