Next-Generation Physicians Are Using Augmented Intelligence: Is the Law Ready?

What if a physician working alone at night in a rural hospital could summon a tireless “Dr. House” for every difficult case: a trained medical diagnostician who is always awake, ever ready, and rarely hallucinates?

Interactive artificial intelligence (AI) diagnostic models are rapidly evolving beyond ChatGPT and traditional “black box” systems that opaquely analyze radiology scans or lab values, toward higher-order, transparent language models capable of intelligently explaining and diagnosing complex illnesses. Researchers at Harvard Medical School recently developed an AI system named “Dr. CaBot” that will eventually function as a digital peer capable of generating differential diagnoses and detailed reasoning processes. As medical schools from Harvard to the University of Miami train tomorrow’s physicians to problem-solve using science, clinical judgment, pattern recognition, and logic, educators are embracing a novel resource to strengthen their students’ skills. The American Medical Association (AMA) uses the phrase “augmented intelligence” to conceptualize AI’s assistive role, emphasizing that these tools enhance human intelligence rather than replace it.

Technology and medicine are moving quickly, and the legal field has yet to catch up; innovation, in many cases, has spread faster than stare decisis. While attorneys await new rules, advancements in AI and machine learning pose greater risks and rewards for the healthcare sector than for most other applications, rivaled perhaps only by the defense industry.

Evolving Liability Frameworks 

As patients navigate an increasingly automated health care ecosystem, where many insurance determinations are made by algorithms and 66% of clinicians integrate Artificial Intelligence/Machine Learning (AI/ML) tools, new questions around liability and standards of care will emerge. When harm results, does the law look to the software developer who wrote the code, the healthcare system that deployed it, or the physician who ultimately incorporated the technology into their clinical decision making? Is the use of assistive AI any different from, say, orthostatic vital signs in the hands of a skilled practitioner, where the outcome turns on whether the readings were interpreted correctly or incorrectly?

The incorporation of advanced AI diagnostics into patient care has created a patchwork of legal and regulatory challenges across the nation. Currently, the FDA classifies AI/ML technologies in healthcare settings under its “Software as a Medical Device” (SaMD) guidance in an attempt to bring AI tools within medical device and products liability regulations. However, a framework designed for static medical devices that may suffer manufacturing, design, or warning defects was not built for a moving target such as an AI tool that can learn and evolve over time.

The SaMD classification gives AI/ML diagnostic tools a form of FDA preemption that complicates malpractice and products liability claims under state law. For example, when a legacy device, such as an insulin pump or glucometer for a diabetic patient, receives FDA clearance under 21 U.S.C. § 360k, the manufacturer may introduce a new product to the market, subject to certain risk-mitigation measures. In Dickson v. Dexcom, for example, a “Class II: De Novo” authorization shielded the manufacturer from tort liability when a continuous glucose monitor failed to warn a patient of hypoglycemia, which led to a motor vehicle accident. Many AI diagnostic tools are entering the market under this same “device” classification, making it critical for doctors and administrators to understand the regulatory landscape and potential exposure before deployment.

Duty to Disclose in Clinical Practice 

In addition to understanding state and federal liability frameworks, there is growing discussion around disclosure and transparency related to the use of AI in diagnostic processes. Because the use of AI/ML is closely associated with protected health information (PHI) and broader risks, California, Colorado, and Utah have enacted laws that mandate disclosure in clinical treatment. For providers, and the attorneys who represent them, this is often a state-specific analysis: Texas law requires providers to disclose AI use in clinical care, whereas Nevada prohibits providers from utilizing AI systems in behavioral health contexts.

Where state law is silent on the issue, physicians should remain diligent in obtaining valid informed consent regarding the use of AI in clinical settings, as state medical boards ultimately hold physicians accountable for disclosures and outcomes related to the integration of novel tools into diagnosis and treatment plans.

Regardless of jurisdiction, research shows that patients value connection with physicians, and when visiting a healthcare practice, they expect to consult with a doctor. Few people expect their provider to sidebar with ChatGPT, or even a purpose-built OpenAI language model that can rule out hundreds of mystery illnesses sans implicit bias, although augmented intelligence may ultimately solve that problem. Moreover, when harm occurs, current medical malpractice remedies were built around the assumption of human negligence rather than errors arising from machine learning misinformation.

Moving Forward

Legal scholars stand at the nexus of healthcare liability and AI/ML diagnostics, where case law has yet to be written. Can plaintiffs’ attorneys establish vicarious or joint and several liability when claims involve both an AI developer and a health system? What remedy exists when a physician outsources clinical judgment to a trained language model or fails to scrutinize its results? And, as a net benefit, will the predictive powers of AI diagnostic models decrease both primary-care-to-specialist wait times and the risk of human error?

It appears that emerging physicians have embraced the next “possibility model” in medicine, and the health law community must respond by establishing guidance that addresses outstanding questions related to liability, reliability, governance, consent, and privacy. Perhaps tomorrow’s attorneys can ask AI for guidance.


Author’s Note: Some healthcare providers and policymakers now prefer the term “misinformation” over “AI hallucination” in an effort to avoid stigmatizing mental health conditions.
