Researchers at the University of Westminster, in collaboration with University College London (UCL), have found that patients are more likely to discuss highly stigmatising health conditions, such as sexually transmitted infections (STIs), with an Artificial Intelligence (AI) chatbot than with a healthcare professional such as a General Practitioner (GP).

The findings suggest that for health conditions perceived as highly severe, such as cancer, people would be less likely to use AI algorithms. However, patients were more inclined to discuss highly stigmatising conditions with AI chatbots than with a GP.

Chatbots and virtual voice assistants are increasingly common in primary care, despite limited evidence for their feasibility and effectiveness. The researchers aimed to assess how the perceived stigma and severity of various health issues are associated with the acceptability of three sources of health information and consultation: an automated chatbot, a GP, or a combination of both.

Although healthcare professionals are perceived as the most desirable sources of health information, the new research shows that chatbots may be useful for sensitive health issues where disclosing personal information is challenging. The researchers identified limitations of AI technology in healthcare settings that require input from patients, and highlighted the role of hesitancy towards AI based on the perceived severity and stigma of various health conditions.

The researchers suggest that policymakers and digital service designers therefore need to recognise the limitations of health chatbots. Future research on the use of AI in healthcare should also establish the set of health topics most suitable for chatbot-led interventions in primary healthcare services.

Dr Tom Nadarzynski, lead author of the study from the University of Westminster, said: “Many AI developers need to assess whether their AI-based healthcare tools, such as symptom checkers or risk calculators, are acceptable interventions. Our research finds that patients value the opinion of healthcare professionals; therefore, implementation of AI in healthcare may not be suitable in all cases, especially for serious illnesses.”

Read the full paper in the SAGE journal Digital Health.

Press and media enquiries

Contact us at:

[email protected]