Does AI need ‘Nutrition Facts’?

A longer package-insert template would also leave room for information useful to the different people involved in the procurement process, such as clinicians, IT staff and other stakeholders.

That said, Shah isn’t convinced yet that a pharmaceutical-style label is the best approach. It’s just one idea he wants to see the healthcare industry consider.

There's the potential for information overload: a label with dozens of variables could make it difficult for providers to sift through and identify what they actually need.

“How many patients and doctors read that thin leaflet that comes with every prescription?” Shah said. “It’s fine to report all of this … but what are we going to do with it?”

At Stanford Health Care, Shah said, teams are setting up a virtual testing environment so researchers can run AI algorithms on historical medical data held by the health system. That way, the organization can assess whether a tool works as expected with its own patients and get a sense of whether a deployment would pay off.

It’s basically a “try before you buy” scenario, Shah said of the process, which he calls a virtual model deployment.
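In practice, a virtual model deployment of this kind amounts to retrospective validation: scoring a candidate model on the health system's own historical records and comparing its predictions against known outcomes. As a minimal sketch, assuming a vendor model that exposes a scikit-learn-style predict_proba() and a local cohort table (both hypothetical details, not from the article):

```python
# A minimal sketch of a "try before you buy" retrospective evaluation.
# Assumptions (not from the article): the vendor's model exposes a
# scikit-learn-style predict_proba(), and the health system holds a
# cohort table with feature columns plus each patient's observed outcome.
import pandas as pd
from sklearn.metrics import average_precision_score, roc_auc_score

def virtual_deployment_report(candidate_model, cohort: pd.DataFrame,
                              feature_cols: list[str],
                              outcome_col: str) -> dict:
    """Score a candidate model on historical data and summarize
    how well it would have performed on this system's patients."""
    scores = candidate_model.predict_proba(cohort[feature_cols])[:, 1]
    outcomes = cohort[outcome_col]
    return {
        "n_patients": len(cohort),
        "outcome_prevalence": float(outcomes.mean()),
        "auroc": float(roc_auc_score(outcomes, scores)),
        "auprc": float(average_precision_score(outcomes, scores)),
    }
```

A procurement committee could then weigh these locally measured numbers against whatever performance the vendor claims before approving a live rollout.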

The role of the hospital

Dr. Atul Butte, chief data scientist at University of California Health, said he’d suggest hospitals take a page from how they already think about prescription drugs.

Today, hospitals typically have a group—such as a pharmacy and therapeutics committee—that oversees the organization’s drug formulary, taking on tasks like medication use evaluations, adverse drug event monitoring and medication safety efforts. That approach could work for AI algorithms, too.

Butte suggested hospitals set up a committee focused on algorithmic stewardship to oversee the inventory of AI algorithms deployed at the organization. Such committees—composed of stakeholders like chief information officers, chief medical informatics officers, medical specialists and staff dedicated to health equity—would determine whether to adopt new algorithms and routinely monitor algorithms’ performance.

Just as accrediting bodies such as the Joint Commission require medication-use evaluations for hospital accreditation, a comparable process could be established for evaluating algorithm use.

“Instead of inventing—or re-inventing—a whole review process, why not borrow what we already do?” Butte said.

That stewardship process could go hand in hand with a pharmaceutical-style label that explains the subpopulations an algorithm was trained on, outcomes from clinical trials and even which equipment and software the tool pairs well with, such as whether a developer has tested its image-analysis software only on X-rays from certain vendors' machines.

“I can see a complicated label being needed some day,” Butte said. “It’s going to have to be sophisticated.”
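One way to keep such a label from becoming another unread leaflet is to publish it in a machine-readable form that a stewardship committee can file, query and audit. Purely as a sketch, with entirely hypothetical field names and values rather than any established schema:

```python
# Hypothetical machine-readable "package insert" for an algorithm.
# Every field name and value here is illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class AlgorithmLabel:
    name: str
    intended_use: str                     # the clinical question it answers
    training_subpopulations: list[str]    # who the model was trained on
    validated_outcomes: dict[str, float]  # metrics from trials/validation
    compatible_equipment: list[str]       # e.g., imaging vendors it was tested on
    contraindications: list[str] = field(default_factory=list)

example = AlgorithmLabel(
    name="chest-xray-triage",
    intended_use="flag suspected pneumothorax on frontal chest X-rays",
    training_subpopulations=["adults 18-90 at a single academic center"],
    validated_outcomes={"auroc": 0.91},
    compatible_equipment=["Vendor A digital radiography units"],
    contraindications=["pediatric patients", "portable bedside films"],
)
```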

That’s part of the role hospitals can take on to ensure clinicians are given high-quality AI tools that they know how to use as part of patient care.

Regulators, AI developers, hospital governance committees and clinician users all have a shared responsibility to monitor AI and check that it’s working as expected, said Suchi Saria, professor and director of the Machine Learning and Healthcare Lab at Johns Hopkins University and CEO of Bayesian Health, a company that develops clinical decision-support AI.

Hospitals and AI vendors should empower clinicians to report when an AI recommendation differs from their own judgment, so the organization can assess whether there's a reason for the disagreement, she said.
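One lightweight way to support that feedback loop, sketched here with hypothetical field names, is to log every case where a clinician overrides the model so the disagreements can be reviewed in aggregate:

```python
# Hypothetical override-report record; field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OverrideReport:
    algorithm: str
    case_id: str                # de-identified case reference
    ai_recommendation: str
    clinician_decision: str
    stated_reason: str          # free text from the clinician
    reported_at: datetime

report = OverrideReport(
    algorithm="sepsis-early-warning",
    case_id="case-0042",
    ai_recommendation="escalate to sepsis pathway",
    clinician_decision="continue observation",
    stated_reason="elevated lactate explained by recent seizure",
    reported_at=datetime.now(timezone.utc),
)
```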

“They each have a role to play in making sure there’s end-to-end oversight,” Saria said, including measuring performance over time. “It’s not on one body alone.”
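Measuring performance over time is, mechanically, a drift-monitoring job: periodically recomputing a deployed model's metrics on recent cases and flagging any slide below the performance claimed at deployment. A minimal sketch, assuming scored cases and observed outcomes are batched by month (an assumption, not a detail from the article):

```python
# A minimal sketch of ongoing performance monitoring for a deployed model.
# Assumption (not from the article): each prediction's score and eventual
# observed outcome are logged and grouped into monthly batches.
from sklearn.metrics import roc_auc_score

def check_for_drift(monthly_batches: dict[str, tuple[list[int], list[float]]],
                    baseline_auroc: float,
                    tolerance: float = 0.05) -> list[str]:
    """Return the months in which AUROC fell more than `tolerance`
    below the performance reported at deployment time."""
    flagged = []
    for month, (outcomes, scores) in sorted(monthly_batches.items()):
        auroc = roc_auc_score(outcomes, scores)
        if auroc < baseline_auroc - tolerance:
            flagged.append(month)
    return flagged
```

A flagged month wouldn't prove the model is broken, but it gives the stewardship committee, the vendor and clinicians a concrete trigger for review.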
