Chatbot technology still has a way to go in healthcare


While chatbots are gaining interest among developers, providers and payers alike, the technology isn’t as advanced as marketers may claim.

A recent analysis of 78 health-related chatbots by researchers at Johns Hopkins University found only a few use machine learning and natural language processing approaches, despite marketing claims. The researchers, who published their findings in npj Digital Medicine, say chatbots in healthcare are at a nascent stage of development and require further research before broader adoption.

“A lot of the bots we reviewed followed a pre-determined algorithm. They guide you systematically through a process. They aren’t at the level of automation where they can read the user’s language, understand the intent and respond to it based on the question,” said Smisha Agarwal, one of the lead authors of the study and an assistant professor in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health.

Most of the apps reviewed by Agarwal’s team used a fixed input method of dialogue interaction, and 88% had finite-state dialogue management, meaning there is only so much the bot can say to a patient. Only a few apps allowed the user to write a few sentences and then receive a relevant response.
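To illustrate what finite-state dialogue management with fixed inputs looks like in practice, here is a minimal, hypothetical Python sketch. It is not code from any app in the study; the states, prompts and options are invented for illustration. The bot can only say what its flowchart already contains and only recognizes replies it was pre-programmed to expect.

```python
# Hypothetical sketch of finite-state dialogue management: each state has a
# fixed prompt and a fixed set of allowed replies, so the bot can only say
# (and understand) what the flowchart already contains.
STATES = {
    "start": {
        "prompt": "Are you checking symptoms or booking an appointment?",
        "options": {"symptoms": "ask_symptom", "appointment": "book"},
    },
    "ask_symptom": {
        "prompt": "Do you have a fever? (yes/no)",
        "options": {"yes": "advise_call", "no": "advise_rest"},
    },
    "book": {"prompt": "A scheduler will contact you.", "options": {}},
    "advise_call": {"prompt": "Please call your provider.", "options": {}},
    "advise_rest": {"prompt": "Rest and monitor your symptoms.", "options": {}},
}

def run():
    state = "start"
    while True:
        node = STATES[state]
        print(node["prompt"])
        if not node["options"]:  # terminal state: nothing left to say
            break
        reply = input("> ").strip().lower()
        # Fixed-input matching: anything outside the predefined options simply
        # repeats the prompt -- there is no intent recognition or free text.
        state = node["options"].get(reply, state)

if __name__ == "__main__":
    run()
```

A bot built this way cannot interpret a free-text answer like “I’ve felt feverish since Tuesday”; that is the gap the researchers say separates most current apps from true natural language understanding.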

Chatbot technology has advanced in computation time, cost of data storage and analysis, and algorithmic complexity, said Melanie Laffin, senior machine learning specialist at the tech consultancy Booz Allen Hamilton. But the technology has yet to develop when it comes to context.

“Chatbots struggle with contextual understanding and are best suited for conversations with a narrow scope, for example, simple question/answer and information retrieval,” Laffin said. “Conversations that are moderately complex can often be difficult for chatbots to reason through, leaving them unable to resolve issues.”

Clinical usage

While most of the focus remains on administrative duties, an increasing number of chatbot solutions are being used for clinical purposes, particularly for mental health and in primary care. Only six of the apps reviewed by the Johns Hopkins team had a therapeutic framework underpinning their approach.

“There is a vast amount of untapped potential. They’re not using patient background information to personalize health information. Whether you’re a 40-year-old man with hypertension or a 22-year-old female, you’re taking the same pathway in the app. You’re being guided through the same process, but if you’re in your 40s with chronic disease compared to someone in their 20s, the pathway could be quite personalized,” Agarwal said.
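As a hypothetical illustration of the kind of personalization Agarwal describes, a chatbot could branch its pathway on basic patient background rather than sending every user through the same flow. The fields, thresholds and pathway labels below are assumptions made for the sketch, not details from the study or any product.

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    # Hypothetical background fields a chatbot could use to tailor its pathway.
    age: int
    chronic_conditions: list = field(default_factory=list)

def select_pathway(patient: Patient) -> str:
    """Pick a dialogue pathway from patient background (illustrative rules only)."""
    if patient.chronic_conditions and patient.age >= 40:
        return "chronic-care check-in: medication adherence, blood pressure log"
    if patient.chronic_conditions:
        return "condition-specific education and follow-up reminders"
    return "general wellness questions"

# A 40-year-old with hypertension and a 22-year-old would get different flows.
print(select_pathway(Patient(age=40, chronic_conditions=["hypertension"])))
print(select_pathway(Patient(age=22)))
```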

As more companies emerge with chatbot mental health solutions, the lack of regulation poses a risk to end users, Agarwal said. “Anything can be marketed to the patient; there isn’t a way to actually assess their robustness,” she said.

Many developers of mental health chatbots try to reduce liability with website disclaimers stating the app should not be used for clinical diagnosis, said Craig Klugman, a professor of bioethics at DePaul University who frequently researches the ethics of healthcare technology.

“If you look in the fine print, they’ll say do not use this for medical purposes. It’s entertainment. We’re not diagnosing or treating anyone,” Klugman said. “If they are diagnosing and treating patients, they need to have a licensed provider behind it.”     

There are also privacy concerns related to clinical usage of chatbots. Agarwal said only 10 of the 78 apps reviewed by her team complied with health information privacy regulations, which is particularly noteworthy when they’re interacting with vulnerable patient populations, such as those with mental illness.


A matter of trust 

Chatbots have seen an uptick in popularity over the past two years as healthcare providers increasingly use them to screen potential COVID-19 patients. But an analysis by researchers at Indiana University, published in the Journal of the American Medical Informatics Association, found users don’t trust chatbots as much as humans performing the same tasks.

“All things being equal, people didn’t quite trust the AI as much. They didn’t think it was as capable,” said Alan Dennis, professor of information systems at Indiana University’s Kelley School of Business and lead author on the study. “The biggest thing providers can do is bolster the perception of trust and be clearer on how the chatbot was developed, who is standing by the recommendations and how it was tested.”

Dennis said his team found similar results when it researched chatbots for mental health screenings. He said people screening for mental health purposes want both information and emotional support.

“People seeking help for mental health and perhaps other stigmatized diseases need emotional support, which you can’t get from a chatbot. You can’t get a chatbot to feel sorry for you or empathize. You can program it in, but at the end of the day, people will know a chatbot doesn’t feel bad for you,” Dennis said.

Look to the data 

Cybil Roehrenbeck, a partner at the law firm Hogan Lovells who specializes in AI-related health policy, said healthcare systems are likely using the technology as assisted AI rather than as fully autonomous software. “In that situation, you have a clinician who is overseeing that and using the information as they see fit,” she said. That makes the technology less risky than fully autonomous AI systems.

Any AI used in a clinical setting should have its algorithms rigorously validated and compared to non-AI services, she added. With anything involving AI, it comes down to data, Laffin said. Many organizations struggle with data organization and governance, she said, which undermines the effectiveness of any AI project.

“To ensure the chatbot is effective, you need relevant and accurately labeled data as well as an explicitly defined scope for the chatbot’s knowledge base,” Laffin said. “Additionally, the chatbot should have integration with other systems in order to accurately provide information to the user, which can be challenging given authentication needs. Ultimately, the better the data, the more effective the chatbot will be.” 

If the technology develops to better incorporate personal information, Agarwal is bullish on the future of chatbots. She said the technology will be critical in helping patients address medical issues that are stigmatized and therefore difficult to discuss in person, such as HIV or other sexually transmitted diseases. “I think there is a lot of room for growth,” Agarwal said.

Dennis is optimistic about the potential use of chatbots, but he said they should be limited to administrative and business-related tasks until more developments are made. 

“Look at the stuff that the primary care providers don’t really want to do and see if you can ease their burden a bit by taking the more mundane busy work so that you can free them up to do what they really signed up to do, which is care for patients,” Dennis said. 
