Adoption of AI, machine learning faces challenges in healthcare

Artificial intelligence is becoming a ubiquitous part of our everyday lives. It is being used to drive cars, power smart devices, create art and improve healthcare. Given AI’s potential, healthcare leaders increasingly face the task of building strong AI units and teams within their organizations.

This is not a trivial task, as it requires a level of understanding of the technology that many leaders do not yet possess, given its novelty and rapid evolution. Competent AI teams must address a wide range of important issues, such as patient safety, fairness, governance, explainability, reproducibility, data drift, clinical workflows and decision support, as well as the technical details of the algorithms themselves. Let me highlight one example of the challenges that healthcare leaders and the AI teams they assemble must think about if AI is going to revolutionize healthcare.

A common type of AI is machine learning, which can be used to identify patterns in electronic health record data and predict clinical outcomes. The “learning” part refers to the adaptive process of finding mathematical functions (models) that produce actionable predictions. A model is often evaluated by making predictions on new data, and its quality is commonly assessed using measures of predictive accuracy. While this makes sense from a mathematical point of view, it does not mimic how we as humans solve problems and make decisions.
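
To make this concrete, here is a minimal sketch of that single-objective workflow, using scikit-learn with synthetic features standing in for electronic health record data (the dataset, model choice and split are illustrative assumptions, not a prescription):

```python
# Conventional single-objective workflow: fit a model on one portion
# of the data, then judge it by a single accuracy number on the rest.
# Synthetic features stand in for real electronic health record data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model is scored on data it has never seen, but only on accuracy.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Everything about the model’s behavior is reduced to that one number, which is exactly the habit the rest of this piece questions.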

Consider the process of buying a car. The key part of this process is deciding which car to buy. We consider the make and model along with other objectives such as size, color, style, engine type, horsepower, range, efficiency, reliability and, of course, price. We rarely consider only one feature, and usually don’t get everything we want. The consideration of multiple objectives is not unique to buying a car. We go through this same process for many life decisions such as selecting a university, a political candidate, a job, etc. These tasks are not easy, but we seem to be wired to make decisions in this manner. So why is it that machine learning typically focuses on only one objective?

One possible answer to this question is that machine learning models are typically developed by AI experts who may not fully understand healthcare. Consider the goal of identifying new drug targets from machine learning models that use genetic information to predict disease risk. The hope is that the model will point to genes whose protein products could be developed into new drugs. However, as with buying a car, there are other important factors. For example, only about 10% of proteins have chemical properties that make them accessible to small-molecule drug candidates. This protein “druggability” information could be used to assess the value or utility of a model in addition to its predictive accuracy. This moves beyond model performance to include model utility and actionability.
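
The sketch below is purely illustrative, with made-up gene names, model scores and druggability flags; it shows how ranking candidate targets on predictive signal alone can disagree with a ranking that also weighs whether the protein is druggable:

```python
# Hypothetical gene-level results: each tuple is a candidate target,
# the strength of the model's predictive signal for it, and whether
# its protein is accessible to a small-molecule drug. None of these
# values are real data.
candidates = [
    ("GENE_A", 0.92, False),
    ("GENE_B", 0.78, True),
    ("GENE_C", 0.65, True),
    ("GENE_D", 0.60, False),
]

# Ranking on predictive signal alone puts GENE_A first, even though
# its protein may never be reachable by a small-molecule drug.
by_signal = sorted(candidates, key=lambda c: c[1], reverse=True)

# Weighing druggability as a second objective changes the answer.
actionable = [c for c in candidates if c[2]]
best = max(actionable, key=lambda c: c[1])

print("top target by signal alone:", by_signal[0][0])  # GENE_A
print("top actionable target:", best[0])               # GENE_B
```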

How do we teach machine learning algorithms to choose models the way humans buy cars? The good news is that numerous multi-objective methods for machine learning have been developed; they are just rarely used in healthcare or other domains. An intuitive approach is something called Pareto optimization, in which multiple machine learning models are generated and evaluated using two or more quality criteria, such as accuracy and complexity. The goal is to identify the subset of models that optimally balances the tradeoffs among all the criteria. This approach more closely mimics the car-buying process.
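
Here is a minimal sketch of Pareto selection over a family of candidate models, assuming two criteria: held-out accuracy (to be maximized) and model complexity, measured here by a decision tree’s node count (to be minimized). The data, model family and complexity measure are all illustrative choices:

```python
# Pareto selection over candidate models evaluated on two criteria:
# held-out accuracy (higher is better) and node count (lower is better).
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Generate a family of candidate models by varying one hyperparameter.
candidates = []
for depth in range(1, 11):
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, tree.predict(X_te))
    candidates.append((acc, tree.tree_.node_count, depth))

def dominates(b, a):
    """True if model b is at least as good as a on both criteria
    and strictly better on at least one."""
    return b[0] >= a[0] and b[1] <= a[1] and (b[0] > a[0] or b[1] < a[1])

# The Pareto front: models that no other candidate dominates.
front = [a for a in candidates if not any(dominates(b, a) for b in candidates)]
for acc, nodes, depth in sorted(front):
    print(f"depth={depth:2d}  nodes={nodes:3d}  accuracy={acc:.3f}")
```

No model on the printed front can be improved on one criterion without giving ground on the other; choosing among them is the same kind of tradeoff a car buyer weighs.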

Machine learning for improving health is different from other application domains. Models need to do more than predict with good accuracy. They need to be transparent, unbiased, explainable, trustworthy, useful and actionable. They need to teach us something. They need to be good for patients. They need to reduce healthcare costs. No single objective can capture all of that.

An important next step with clinical AI is for computer scientists and informaticians to continue working closely with clinicians to identify the right set of objectives to maximize the healthcare impact of machine learning models. This will require engagement of the human side of AI in addition to the algorithmic side. Healthcare leaders play a critical role in assembling AI teams because they understand the necessary health outcome objectives, they commit the resources, and they can promote a diverse and collaborative culture needed for success. Healthcare has unique challenges and requires an AI strategy tailored to the complexities of patient care and institutional goals.
