A recent study revealed that search engines and AI-powered chatbots do not always provide reliable information about medicines.
Writing in the specialist journal BMJ Quality & Safety, the researchers reported that the answers were frequently inaccurate or incomplete, and often difficult to understand. They therefore urged caution in relying on such information and called for clear warnings to be displayed to users.
The study’s lead author, Varam Andrikyan of the Institute of Experimental and Clinical Pharmacology and Toxicology at the University of Erlangen in Germany, said: “The main result of our study is that the quality of chatbot answers is not yet sufficient for safe use by users… In our opinion, it is necessary to clearly indicate that chatbots cannot replace professional advice.”
The study’s starting point was an experiment in how patients access information about prescribed medications published on the internet. In April 2023, the researchers asked the Bing chatbot, developed by Microsoft, the ten most common questions about each of the 50 most frequently prescribed medications in the United States, including questions about how to take them, their side effects, and contraindications to their use.
According to Andrikyan, the chatbot generally answered the questions with a high degree of completeness and accuracy, but some answers fell short. “This poses a danger to patients,” he added, “because as non-medical specialists, they cannot personally evaluate the accuracy and completeness of the answers generated by artificial intelligence.”
Andrikyan noted that although AI-powered search engines with integrated chatbot functions have advanced rapidly since the study was conducted last year, the improvements are not yet sufficient, and risks to patient safety remain for the time being.