
Increasing the Transparency and Trustworthiness of AI in Health Care

Kathryn Marchesini, Jeff Smith, and Jordan Everson | April 13, 2023

This is part five of a blog series on predictive models, artificial intelligence (AI), and machine learning (ML) in health. We encourage readers to (re)visit the four previous blog posts for important context for what follows.

Through a series of blog posts over the last year, we’ve described our understanding of the current and potential uses of predictive models and machine learning algorithms in health care, and the role that ONC can play in shaping their development and use. In this post, we will connect the dots between our understanding of this landscape and our proposed rule titled Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, or HTI-1.

Revising ONC’s existing decision support certification criterion to include AI, ML, and other predictive decision support

In our first two posts, we described foundational trends and important history related to the use of information technology (IT), particularly software, to aid decision-making in health care. In our first post, we noted that the health care sector is at a nascent stage in which predictive models, especially those driven by machine learning, are increasingly used to inform numerous facets of health care. We also noted that certified health IT, particularly electronic health records (EHRs), increasingly serves as both a data source and delivery mechanism for the output of predictive models, many times as recommendations to clinicians and other users of certified health IT.

In the next post, we dove further into ONC’s role in advancing the development and use of technology for decision support by describing the history of our clinical decision support (CDS) certification criterion and our existing requirements that certified health IT support “evidence-based decision support interventions” and “linked referential CDS,” and provide users with “source attribute” information, such as CDS bibliography information. This approach has led to a dynamic and flourishing landscape of decision support technologies, varied in purpose and scope, ranging from patient safety and clinical management to administrative and documentation functions. Our approach to decision support interventions (DSIs) in the proposed rule reflects these observations:

ONC Proposal in HTI-1: Given the centrality of certified health IT to these emerging technologies, and our existing requirements for CDS, we propose to define “Predictive Decision Support Interventions,” or predictive DSIs, as part of a revised version of our CDS certification criterion. This definition would encompass many of the kinds of predictive models we see emerging across health care, including those created by other parties, such as health systems or tech firms, and deployed through a developer’s certified health IT.

Importantly, our proposal would not require that certified health IT support predictive DSIs; rather, we propose that health IT modules that enable or interface with a predictive DSI take specific steps for their users.

We also propose to leverage our existing requirements for source attributes to ensure that users know when data relevant to health equity, such as race, ethnicity, and social determinants of health, are used in DSIs. Our proposal also includes a new functionality to enable user “feedback loops” on the performance of these DSIs.
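The proposed rule describes the feedback loop functionality, not how developers must build it. As a purely hypothetical sketch (the record structure, field names, and the “sepsis-risk-model” example below are our own illustration, not requirements from the proposal), a health IT module might capture user feedback on a predictive DSI’s output as a simple structured record:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DSIFeedback:
    """One user feedback event on a predictive DSI's output (illustrative only)."""
    intervention_id: str       # which predictive DSI produced the output
    user_id: str               # the clinician or other user giving feedback
    recommendation_shown: str  # what the DSI recommended
    action_taken: str          # e.g., "accepted", "overridden", "ignored"
    comment: str = ""          # optional free-text rationale
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A deploying organization could aggregate these records to monitor how often
# a DSI's recommendations are accepted or overridden in practice.
feedback_log: list[DSIFeedback] = [
    DSIFeedback(
        intervention_id="sepsis-risk-model",  # hypothetical DSI identifier
        user_id="clinician-042",
        recommendation_shown="Elevated sepsis risk; consider lactate draw",
        action_taken="overridden",
        comment="Risk explained by recent surgery",
    )
]
```

However a developer implements it, the point is the same: users’ real-world experience with a DSI becomes structured information that can feed back into evaluating and improving it.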

Promoting consistent and routine electronic access to technical and performance information on predictive decision support

In our third and fourth posts, we took a more critical look at the ways predictive DSIs could cause harm. Some of these risks include the potential that predictive DSIs:

  • Reproduce or amplify implicit and structural biases of society, health, and health care delivery;
  • Bake in existing, inexplicable differences in health care and health outcomes; and
  • Make recommendations to users that are ineffective or unsafe.

During a Health Information Technology Advisory Committee public hearing on the concept of health equity by design last year, one presenter noted that clinicians have unmet needs for information and transparency, and that until these needs are met, clinicians are unlikely to use ML-driven tools or risk misapplying them to their patients. For example, panelists noted that clinicians need to know that an AI product has been evaluated in their setting of care, that the technology was trained on data that reflects their practice population, and that the product will be continuously monitored. We also heard that ML-driven technology has recreated or exacerbated systemic inequalities, such as those tied to an individual’s lack of access to quality health insurance and quality care, and has the potential to do so at a larger scale. The proposed rule’s approach to increasing algorithmic transparency for users aims to help address the concerns we’ve heard related to the risks of algorithms in EHRs:

ONC Proposal in HTI-1: To address a wide array of potential risks, we propose to require that health IT modules that enable or interface with a predictive DSI provide users with information across four categories of source attributes: the intended use; development; evaluation of validity, fairness, and effectiveness; and ongoing monitoring and use of predictive DSIs (technical and performance information), including additional specific information relevant to fairness. We intend for the information provided about source attributes to enable potential users to determine whether a predictive DSI is fair, appropriate, valid, effective, and safe, or FAVES.

While numerous interested parties use various terms to describe what we refer to as FAVES, we believe that each concept within FAVES – and related concepts not included in our catchy acronym, like reliable or secure – describes a facet of a trustworthy or “high quality” algorithm.
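To make the four categories concrete, here is a deliberately simplified, hypothetical sketch of what source attribute information for a single predictive DSI could look like when organized for a FAVES review. All field names and example values below are ours, chosen for illustration; the proposed rule itself defines the actual source attributes:

```python
# Hypothetical source attributes for one predictive DSI, grouped by the four
# proposed categories. Every name and value here is illustrative only.
source_attributes = {
    "intended_use": {
        "purpose": "Identify adult inpatients at elevated risk of sepsis",
        "intended_users": ["physicians", "nurses"],
        "out_of_scope_uses": ["pediatric patients"],
    },
    "development": {
        "training_data": "EHR data from three hospitals, 2018-2021",
        "demographic_inputs": ["race", "ethnicity", "sex",
                               "social determinants of health"],
    },
    "evaluation": {
        "external_validation_performed": True,
        "fairness_assessment": "performance reported by race and ethnicity subgroup",
        "effectiveness_summary": "discrimination and calibration on held-out data",
    },
    "ongoing_monitoring": {
        "monitoring_cadence": "quarterly",
        "checks": ["calibration drift", "subgroup performance drift"],
    },
}

# A user weighing FAVES could scan each category before relying on the DSI.
for category, details in source_attributes.items():
    print(category, "->", details)
```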

Promoting transparent risk management for and governance of predictive decision support

We also noted in the fourth post that information on the development and validation of models alone proved insufficient to prevent harm from the use of predictive models in the financial services industry, and that additional oversight of organizational competencies (i.e., governance) to manage risks related to predictive DSIs was also necessary to avoid harm. The examples from other industries, as well as mounting patient, provider, and industry concerns about AI governance, informed the parts of the proposed rule related to governance and data management:

ONC Proposal in HTI-1: Leaning on experiences from the financial services industry, we propose to require developers of certified health IT to perform, document, and make publicly available information about intervention risk management practices related to risk analysis, risk mitigation, and governance. We identify several categories of risk, closely tied to the recently released NIST AI Risk Management Framework, including risks related to validity, reliability, privacy, and security. We also propose that developers describe their governance and data management practices, given that many risks are affected by the extent, quality, source, and representativeness of the data used to develop predictive models.
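As with source attributes, the proposal describes what must be documented and disclosed, not a specific format. A hypothetical sketch of such documentation (the structure and entries below are our own, loosely echoing the risk categories named above) might look like this:

```python
# Hypothetical intervention risk management record for one predictive DSI.
# The three top-level sections mirror the proposal's risk analysis, risk
# mitigation, and governance practices; everything else is illustrative.
risk_management_record = {
    "risk_analysis": [
        {"category": "validity",
         "risk": "Model underperforms on the local patient population",
         "likelihood": "medium", "impact": "high"},
        {"category": "privacy",
         "risk": "Re-identification of individuals in training data",
         "likelihood": "low", "impact": "high"},
    ],
    "risk_mitigation": [
        {"category": "validity",
         "practice": "Site-level validation before deployment"},
        {"category": "privacy",
         "practice": "De-identification audit of training data"},
    ],
    "governance": {
        "accountable_body": "developer's AI oversight committee",
        "review_cadence": "semiannual",
        "public_summary": "published so users and patients can review practices",
    },
}
```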

ONC’s role in advancing transparent and trustworthy predictive technology in health care

There is tremendous value in predictive technologies in health care. Together, our proposals are meant to optimize the use of high-quality algorithms in health care. We believe that these proposed requirements would improve transparency, promote trustworthiness, and incentivize the development and wider use of FAVES predictive DSIs to inform decisions across a range of use cases in the health care industry, including clinical, administrative, and public health. The resulting information transparency would enable the deployment of these technologies in safer, more appropriate, and more equitable ways.

Be sure to check out the HTI-1 proposed rule and forthcoming DSI fact sheet to understand the details of our proposals. The proposed rule is open for public comment from April 18, 2023, through June 20, 2023.