From the inception of 'artificial intelligence' in the 1950s to its current zenith in 2024, the journey has been nothing short of transformative. AI emerges as both a beacon of innovation and a harbinger of complex challenges, reshaping industries and societies alike. The Alliance for Artificial Intelligence in Healthcare (AAIH) explains, “Large-language models (e.g., GPT) and generative image models have been in development for many years but have now reached sufficient maturity that their future ubiquity seems all but assured.” The AAIH suggests that innovation around AI and machine learning (ML) will drive “healthcare to be more human-centric through combining automation, ML, and human (and human-derived) data.”
91% of healthcare leaders surveyed by Sermo see AI and machine learning integration as important or critically important to their organization’s growth and success.
The clinical use cases for AI seem almost infinite: better risk prediction for disease stratification, faster clinical trials, improved disease diagnoses, and more sophisticated wearables to capture vital health information, just to name a few. An article published by the American Hospital Association (AHA) states that one of the top clinical applications of AI is in clinical decision-making: "AI algorithms analyze a vast amount of patient data to assist medical professionals in making more informed decisions about care — outperforming traditional tools like the Modified Early Warning Score (MEWS)."
The AHA also suggests that AI holds significant opportunities in diagnostics and imaging. The technology’s benefit comes in its ability to perceive and process large amounts of both structured and unstructured data, giving clinicians more insightful data for more accurate diagnostics and more effective, timely clinical decision-making. Enhanced patient safety is yet another top application of AI, according to the AHA, due to better error detection, patient stratification, and drug delivery.
Clinical documentation, now being transformed by AI, remains one of the most significant administrative burdens contributing to clinician burnout. The integration of electronic health records (EHRs) has shifted the patient-clinician dynamic, diverting clinicians' attention from direct patient care to extensive data entry. This shift, and the growing volume of required documentation, not only diminishes the patient experience but also pushes charting into evenings and weekends, encroaching on a clinician's personal time.
AI, along with technologies like machine learning (ML), natural language processing (NLP), and robotic process automation (RPA), streamlines workflows such as documentation and medical coding. For instance, when exam room encounters are recorded and transcribed—with patient consent—large language models (LLMs) interpret the transcriptions and feed the data directly into the EHR. Clinicians then review, edit, and approve the notes, reducing manual data entry from hours to minutes. This efficiency affords clinicians more time for direct patient interaction.
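The transcribe-draft-review flow described above can be sketched in a few lines of Python. This is an illustrative toy only: `build_draft_note`, `approve`, and the `DraftNote` structure are invented names, and a real pipeline would call an LLM to structure the transcript and an EHR integration to store the result.

```python
# Hypothetical sketch of an ambient-documentation workflow.
# All names here are invented for illustration; no real EHR or LLM API is shown.
from dataclasses import dataclass


@dataclass
class DraftNote:
    encounter_id: str
    text: str
    status: str = "pending_review"  # nothing is final until a clinician approves


def build_draft_note(encounter_id: str, transcript: str) -> DraftNote:
    """Turn a consented exam-room transcript into a draft note.

    In practice an LLM would summarize the transcript into structured
    sections; here we simply wrap the text to show the flow of data.
    """
    return DraftNote(encounter_id=encounter_id, text=transcript.strip())


def approve(note: DraftNote) -> DraftNote:
    """Clinician reviews and edits; approval releases the note to the EHR."""
    note.status = "approved"
    return note


note = build_draft_note("enc-001", "Patient reports mild headache for two days.")
note = approve(note)
print(note.status)  # approved
```

The key design point is the `pending_review` status: the model drafts, but the clinician remains the final author of the record.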
AI- and ML-generated documentation can determine the appropriate diagnosis codes, assign them to the encounter based on the documented data, and automatically submit them for billing, significantly reducing the administrative burden on clinicians.
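A minimal sketch of the suggest-then-bill step might look like the following. This is a toy lookup, not real medical coding: production systems use trained models against the full ICD-10 code set, and the two codes shown (I10, E11.9) are simply well-known examples.

```python
# Toy illustration of code suggestion from note text.
# Real coding engines use NLP models and complete ICD-10 terminology,
# not a hand-written keyword table like this one.
SUGGESTED_CODES = {
    "hypertension": "I10",       # ICD-10: essential (primary) hypertension
    "type 2 diabetes": "E11.9",  # ICD-10: type 2 diabetes without complications
}


def suggest_codes(note_text: str) -> list[str]:
    """Return candidate diagnosis codes found in a clinical note."""
    text = note_text.lower()
    return [code for term, code in SUGGESTED_CODES.items() if term in text]


codes = suggest_codes("Follow-up for hypertension; stable on lisinopril.")
print(codes)  # ['I10']
```

As with note drafting, suggested codes would still pass through human review before submission to billing.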
With over 3,300 healthcare AI startups in the U.S. and a projected global market value of $187 billion by 2030, ensuring safety, privacy, and transparency in AI adoption is paramount. The White House's 2023 Executive Order and subsequent government-wide policy prioritize governing AI development for improved health outcomes while safeguarding security and privacy. That is a good place to start. The overarching concerns of AI in healthcare include critical issues like data security, patient privacy, and ethical considerations. One of the most significant concerns is how the large amount of patient data required to enable AI is collected and stored. This is especially problematic because that data typically exists in multiple silos across the continuum, meaning it must be aggregated from numerous sources before being used. AI solutions therefore require access to a multitude of systems to capture the data they need, which creates significant vulnerabilities for healthcare organizations that are developing and using AI models.
Another challenge is how, when, and by whom patient data is used. Research published by the National Library of Medicine warns, "Beyond the possibility for general abuses of power, AI poses a novel challenge because the algorithms often require access to large quantities of patient data and may use the data in different ways over time." The article suggests that without the appropriate privacy protections and penalties in place, organizations may decide to "monetize the data or otherwise gain from them." While HIPAA protections are in place today, there are gray areas around what data should be protected under HIPAA and what should not. For example, a roundtable conducted by HHS highlights inconsistencies: "Patient-generated data, such as data collected from mobile applications and wearable devices, can also contain sensitive information about individuals ranging from fertility treatments to mental health conditions. However, there are relatively few legal guidelines that protect this emerging data type from misuse." The broad application of AI in novel ways in healthcare requires policies and guidelines that offer clarity and stringent penalties. This is an area where any level of ambiguity is unacceptable.
“The sheer volume of data, the ability to re-identify previously de-identified data, and the challenge of navigating through the regulatory landscape make AI a unique risk in healthcare security and privacy.” (Health IT Security)
To address the risks of using AI in healthcare, providers and payers need a new way of sharing data that ensures privacy and security while resolving data integrity issues.
Today, the primary model for collecting data and training AI solutions is centralized data aggregation, which carries significant privacy and security risks. This approach requires considerable safeguards during model training and solution deployment, and it provides limited transparency into how the data is used to train the models or how the generated data is eventually shared with or used by others.
A decentralized network enables a fundamentally different way of exchanging and using data.
Another benefit of a decentralized network is significantly improved data security: data never needs to be sent outside the network to be aggregated. It remains within the system, under the control of the data originator, keeping patient data safe.
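The core idea can be sketched as "the query travels to the data, and only the answer travels back." The sketch below is a simplified, hypothetical model (participant names and record fields are invented); real decentralized networks add authentication, permissioning, and auditing on top of this pattern.

```python
# Simplified sketch of decentralized querying: raw records never leave
# the participant that owns them; only computed answers are shared.
# Names and fields are invented for illustration.


class Participant:
    def __init__(self, name, records):
        self.name = name
        self._records = records  # raw data stays under the originator's control

    def answer(self, predicate):
        """Run the query locally and return only an aggregate count."""
        return sum(1 for record in self._records if predicate(record))


network = [
    Participant("payer_a", [{"age": 67}, {"age": 54}]),
    Participant("provider_b", [{"age": 71}]),
]

# Each participant answers locally; only the counts are combined.
over_65 = sum(p.answer(lambda r: r["age"] > 65) for p in network)
print(over_65)  # 2
```

Contrast this with centralized aggregation, where all three records would first be copied into a shared repository, creating a single high-value target.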
AI has the potential to greatly simplify healthcare by enhancing patient experiences, improving outcomes, and reducing inefficiencies. However, realizing these benefits necessitates a safe and integrity-focused data network. The Avaneer Health Network™ offers a secure, decentralized platform to support permissioned access to data through purpose-built solutions. One solution already in use by payers and providers is Avaneer Coverage Direct™.
Avaneer Coverage Direct delivers up-to-date health insurance coverage information and remediates coverage misalignment through a peer-to-peer network of payers and providers. It evaluates patient/member coverage information whenever changes are made and proactively identifies when a participant is missing coverage, holds incorrect coverage details, or has an inaccurate view of coverage primacy. The information can be made instantly available in the source system if desired. Payers and providers benefit from improved transparency and reduced costs, while patients/members benefit from fewer delays in care and fewer surprise bills.
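Conceptually, detecting coverage misalignment means comparing two views of the same member's coverage and flagging the fields that disagree. The sketch below illustrates that comparison only; the field names are invented and do not reflect the actual Avaneer Coverage Direct data model.

```python
# Conceptual sketch of coverage-mismatch detection between a provider's
# and a payer's view of the same member. Field names are hypothetical.


def find_mismatches(provider_view: dict, payer_view: dict) -> list[str]:
    """Return the coverage fields where the two views disagree."""
    return [
        field
        for field in payer_view
        if provider_view.get(field) != payer_view[field]
    ]


provider = {"plan_id": "PPO-123", "primacy": "primary", "active": True}
payer = {"plan_id": "PPO-123", "primacy": "secondary", "active": True}

print(find_mismatches(provider, payer))  # ['primacy']
```

Here the two parties disagree on coverage primacy, the kind of discrepancy that, left uncaught, leads to claim denials and surprise bills.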
Learn how to keep control of your data while leveraging future AI solutions.