
OpenAI to Launch ChatGPT “Health” Amidst Shifting AI Regulatory Schemes Surrounding Privacy

On January 7, 2026, OpenAI announced plans to launch ChatGPT Health (“Health”), a new model that will allow users to connect their health records and wellness applications to the chatbot. Every week, hundreds of millions of people use ChatGPT to inquire about health and wellness. OpenAI has set out the privacy protections and controls it intends to implement in handling highly personal and sensitive information, including data encryption, data isolation, user options to delete chats from its system, and a commitment that inputs to Health will not be used to train its foundational model. Like its existing system, Health will use a large language model (LLM) to serve users by chatting about health, reviewing medical records, summarizing visits, and providing nutrition advice, among other functions.

Executive actions have shifted toward limiting AI regulations, attempting to maintain the United States as a global leader in AI innovation and encouraging industries to adopt automation. In December 2025, President Trump issued Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” which seeks to deter state regulations from creating a patchwork of regulatory regimes and instead promote national consistency. This action alone does not prevent state-level AI or privacy laws; however, it does establish a task force to challenge them. The EO followed a previous action that removed Biden-era regulations placed on AI, classifying them as a hindrance to innovation and free markets.

The Food and Drug Administration (“FDA”) regulates AI health technology, classifying certain developments as software as a medical device (SaMD) under the Federal Food, Drug, and Cosmetic Act (“FD&C Act”). On January 6, 2026, the FDA issued guidance on its oversight of AI devices, clarifying that low-risk products used for general wellness are not regulated as medical devices. Software that is “unrelated to the diagnosis, cure, mitigation, prevention, or treatment of a disease or condition” is not a medical device under the FD&C Act. The FDA explicitly classified such software programs as general wellness products, likely placing Health in a regulation-exempt status under the FD&C Act.

Systems that function solely to transfer, store, convert, format, and display medical device data are characterized as Medical Device Data Systems (MDDS) and are subject to the FD&C Act. However, the FDA has also clarified that Non-Device-MDDS with software functions that store patient data, convert digitally generated data, or display previously stored patient data are exempt from regulation so long as they do not analyze or interpret data. This distinction produces uncertainty for Health’s classification, because Health’s functions go beyond storing and displaying records to interacting with and interpreting the data users input.

The Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule requires covered entities and business associates to properly handle protected health information (“PHI”). Users submitting medical records to Health would not render OpenAI a covered entity or business associate, leaving its status as a consumer health product outside of HIPAA’s regulatory scope. Data sharing of the kind Health envisions across Apple Health, MyFitnessPal, and other applications falls outside the HIPAA framework if the data is disclosed for purposes other than treatment, payment, or healthcare operations, or otherwise requires authorization under the Privacy Rule, 45 C.F.R. § 164.508.

The Federal Trade Commission (“FTC”) may serve as a backstop amid these regulatory rollbacks. The FTC regulates healthcare privacy in part through data breach notification requirements. Compliance is enforced through the Health Breach Notification Rule (“HBNR”), which requires vendors of personal health records to notify the FTC and consumers if a data breach occurs. A vendor under the HBNR is any non-HIPAA entity or business associate that “offers or maintains a personal health record.” It is uncertain whether Health will be subject to regulation under this category, or any other, despite its handling of users’ personal health record uploads. As an alternative method of accountability, the FTC may bring enforcement actions, such as the recent action settled with Flo Health Inc. for sharing users’ health data with Facebook, Google, and others without user consent.

As the regulatory landscape surrounding Health actively evolves, it is uncertain how privacy concerns will be handled. Federal agencies and the executive branch are giving developers broad autonomy over privacy practices as AI integrates into healthcare, leaving much of the accountability to be exercised through litigation or after-the-fact FTC action.

How America’s AI Action Plan Could Affect Brain-Computer Interfaces

On January 23, 2025, President Donald Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which sought to revoke existing AI policies and directives that act as barriers to American AI innovation. The federal government’s push for AI development may accelerate the availability of neurotechnologies that incorporate AI, while reducing regulatory oversight and consumer protections. 

Pursuant to the Executive Order, the White House released a comprehensive policy strategy entitled “Winning the Race: America’s AI Action Plan” in July of 2025. The policy includes a recommendation to remove red tape by “review[ing] all Federal Trade Commission (FTC) investigations . . . to ensure that they do not advance theories of liability that unduly burden AI innovation.” The policy also encourages the country to “establish regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools . . . enabled by regulatory agencies such as the Food and Drug Administration (FDA).” Implementing these recommendations may directly affect the development of neurotechnologies.

Brain-computer interfaces (BCIs) are neurotechnologies that allow for communication between the human brain and external output, such as a computer, mobile device, or prosthetic device. They are subject to FDA and FTC oversight, depending on their intended use. The primary FDA department responsible for regulating medical neurotechnologies is the Division of Neurological and Physical Medicine Devices (DNPMD). For direct-to-consumer technologies, the FTC oversees consumer protection and privacy. The Management of Individuals’ Neural Data Act of 2025 (the MIND Act) is proposed legislation that would direct the FTC to study how neural data are currently governed. 

Neuralink and Merge Labs are two companies eager to incorporate AI into their BCI technologies. Neuralink, headed by Elon Musk, produces a BCI that is implanted in the brain near neurons of interest. Electrodes within the BCI then detect electrical signals from those neurons and decode the information they carry. The goals of Neuralink are to “restore autonomy to individuals with unmet medical needs today, and to unlock superhuman capabilities across many people in the future.” The company aims to eventually connect brain neural networks to artificially intelligent networks outside the brain.

One month before the release of America’s AI Action Plan, Neuralink received FDA breakthrough device designation to restore communication for individuals with speech impairment. Musk has also benefited from his relationship with the Trump Administration and his former position as the leader of the Department of Government Efficiency (DOGE), which has significantly reduced the federal civil service. In February of 2025, DOGE reportedly fired the FDA employees responsible for overseeing Neuralink. Ten months later, Neuralink hired the former director of the FDA office responsible for regulating the company to lead its medical affairs division. 

Sam Altman, the CEO of OpenAI, has recently partnered with the Trump Administration on the Stargate Project, which “intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States.” Co-founded by Altman, Merge Labs is researching a new approach that could combine gene therapy with an ultrasound device to create non-invasive BCIs. The company has a long-term mission of “bridging biological and artificial intelligence to maximize human ability, agency, and experience.” OpenAI is the largest investor in Merge Labs, which has raised $252 million of funding. Along with funding, OpenAI announced that it will collaborate with Merge Labs to accelerate progress, stating that “BCIs will create a natural, human-centered way for anyone to seamlessly interact with AI.” 

Merge Labs has not yet submitted technology for FDA approval. It is unclear how the technology would be classified, especially because the company’s mission is not explicitly related to medical uses and the technology aims to be non-invasive. With the release of America’s AI Action Plan, it is also unclear whether consumers can rely on the FTC for privacy protections regarding this technology. Removing red tape and enabling AI adoption may open the door to faster development and distribution of these life-changing technologies for people with disabilities. However, brain surgery and gene therapy that incorporate AI are potentially permanent medical procedures that could put Americans at risk of long-lasting health impacts and privacy invasions.

Next-Generation Physicians Are Using Augmented Intelligence: Is the Law Ready?

What if a physician working alone at night in a rural hospital could summon a tireless “Dr. House” with every difficult case: a trained medical diagnostician that is always awake, ever ready, and rarely hallucinates?

Interactive artificial intelligence (AI) diagnostic models are rapidly evolving beyond ChatGPT and traditional “black box” systems that opaquely analyze radiology scans or lab values, toward higher-order, transparent language models capable of intelligent explanation and diagnosis of complex illnesses. Researchers at Harvard Medical School recently developed an AI system named “Dr. CaBot” that will eventually function as a digital peer capable of generating differential diagnoses and detailed reasoning processes. As medical schools from Harvard to the University of Miami train tomorrow’s physicians to problem-solve using science, clinical judgment, pattern recognition, and logic, educators are embracing a novel resource to strengthen their students’ skills. The American Medical Association (AMA) uses the phrase “augmented intelligence” to conceptualize AI’s assistive role, emphasizing that these tools enhance human intelligence rather than replace it.

Technology and medicine are moving quickly, and the legal field has yet to catch up; innovation, in many cases, has spread faster than stare decisis. While attorneys await new rules, advancements in AI and machine learning pose greater risks and rewards for the healthcare sector than for most other applications, rivaled only by those facing the defense industry.

Evolving Liability Frameworks 

As patients navigate an increasingly automated healthcare ecosystem, where many insurance determinations are made by algorithms and 66% of clinicians integrate Artificial Intelligence/Machine Learning (AI/ML) tools, new questions around liability and standards of care will emerge. When harm occurs as a result, does the law look to the software developer who wrote the code, the healthcare system that deployed it, or the physician who ultimately incorporated the technology into their clinical decision making? Is the use of assistive AI any different from orthostatic vital signs in the hands of a skilled practitioner who interprets the readings correctly versus incorrectly?

The incorporation of advanced AI diagnostics into patient care has created a patchwork of legal and regulatory challenges across the nation. Currently, the FDA classifies AI/ML technologies in healthcare settings under “Software as a Medical Device (SaMD)” guidance in an attempt to bring AI tools under medical device and products liability regulations. However, a framework intended for static medical devices that may suffer manufacturing, design, or warning defects was not created for a quickly moving target such as an AI tool that can learn and evolve over time.

The SaMD classification gives AI/ML diagnostic tools a form of FDA preemption that complicates malpractice and products liability claims under state law. For example, when a legacy device, such as an insulin pump or glucometer for a diabetic patient, receives FDA clearance under 21 U.S.C. § 360k, the manufacturer may introduce a new product to the market, subject to certain risk-mitigation measures. In Dickson v. Dexcom, a “Class II: De Novo” authorization shielded the manufacturer from tort liability when a continuous glucose monitor failed to warn a patient of hypoglycemia, which led to a motor vehicle accident. Many AI diagnostic tools are entering the market under this same “device” classification, making it critical for doctors and administrators to understand the regulatory landscape and potential exposure before deployment.

Duty to Disclose in Clinical Practice 

In addition to understanding state and federal liability frameworks, there is growing discussion around disclosure and transparency related to the use of AI in diagnostic processes. Because the use of AI/ML is closely associated with protected health information (PHI) and broader risks, California, Colorado, and Utah have created laws that mandate disclosure in clinical treatment. For providers, and the attorneys who represent them, this is often a state-specific discussion: Texas law requires providers to disclose AI use in clinical care, whereas Nevada prohibits providers from utilizing AI systems in behavioral health contexts.

Where state law is silent on the issue, physicians should remain vigilant in obtaining valid informed consent regarding the use of AI in clinical settings, as state medical boards ultimately hold physicians accountable for disclosures and outcomes related to the integration of novel tools into diagnosis and treatment plans.

Regardless of jurisdiction, research shows that patients value connection with physicians, and when visiting a healthcare practice, they expect to consult with a doctor. Few people expect their provider to sidebar with ChatGPT, or even with a purpose-built OpenAI language model that can rule out hundreds of mystery illnesses sans implicit bias, although augmented intelligence may ultimately solve that problem. Similarly, when harm occurs, current medical malpractice remedies were built around the assumption of human negligence rather than errors arising from machine-learning misinformation.

Moving Forward

Legal scholars stand at the nexus of healthcare liability and AI/ML diagnostics, where the case law has yet to be written. Can plaintiffs’ attorneys establish vicarious or joint and several liability when claims involve an AI developer and a health system? What remedy exists when a physician outsources clinical judgment to a trained language model or fails to scrutinize its results? As a net benefit, will the predictive powers of AI diagnostic models decrease both primary-care-to-specialist wait times and the risk of human error?

It appears that emerging physicians have embraced the next “possibility model” in medicine—and the health law community must respond by establishing guidance to address outstanding questions related to liability, reliability, governance, consent, and privacy. Perhaps tomorrow’s attorneys can ask AI for guidance.

Author’s Note: Some healthcare providers and policymakers now prefer the term “misinformation” over “AI hallucination” in an effort to avoid stigmatizing mental health conditions.

The AI Doctor Will See You Now—But Is It Regulated?

In early 2025, two-thirds of doctors reported using artificial intelligence (AI) for a wide range of purposes, including “documentation of billing codes, medical charts, and visit notes; generating discharge instructions, care plans, and progress notes; providing translation services; supporting diagnostic decisions; and more.” Although the healthcare sector was initially hesitant to adopt AI, it has since accelerated its integration efforts and now implements AI technologies at twice the rate observed in other economic sectors. The escalating costs of healthcare have prompted the increased adoption of artificial intelligence, aimed at enhancing operational efficiency, optimizing resource utilization, and ultimately reducing expenditures.

AI in healthcare extends beyond addressing administrative inefficiencies, as regulator-approved applications, classified as Software as a Medical Device (SaMD), are already showing clinical promise; for example, one AI algorithm used in a U.S. mammography study improved breast cancer detection rates by 9.4% and reduced false positives by 5.7%. Ongoing research is exploring the efficacy of SaMD across fields such as dermatology, radiology, psychiatry, and personalized medicine, where AI’s capacity to process large datasets and continuously learn enhances diagnostic accuracy and enables more individualized treatment approaches.

Although artificial intelligence presents considerable potential for advancing the healthcare sector, it simultaneously generates substantial uncertainties, given that technological developments outpace the formulation and implementation of regulatory frameworks. According to Professor Dr. Heinz-Uwe Dettling, Partner at Ernst & Young Law GmbH and EY GSA Life Sciences Law Lead, this issue is often described as the “locked versus adaptive” AI challenge; regulatory efforts are necessary, but the current regulations were not designed to keep up with the rapid pace of technological advancements like those seen in artificial intelligence.

In addition to ongoing uncertainties surrounding regulatory frameworks, AI remains inherently imperfect. A study conducted by Rutgers University demonstrated that AI algorithms can inadvertently perpetuate erroneous assumptions, largely because they rely on datasets that may result in broad generalizations about people of color. Furthermore, these algorithms often neglect essential social determinants of health, such as transportation accessibility, the cost of nutritious food, and variable work schedules, which play a critical role in influencing patients’ capacity to comply with treatment regimens requiring frequent medical appointments, physical activity, and other health-related interventions.

Concerns regarding the implementation of artificial intelligence in healthcare have prompted regulators, legislators, and healthcare practitioners to call for the development of more comprehensive regulations and guidelines within this dynamically evolving sector. A thorough understanding of the biases inherent in traditional medical education and among healthcare professionals is essential, requiring developers to have both domain-specific knowledge and technical expertise. Additionally, implementing more rigorous processes to review data inputs is crucial to preventing biases in algorithms that may exacerbate healthcare disparities.

Because AI touches every part of the healthcare system, it is essential to have cross-agency coordination as well as regulations at the state and federal levels. Currently, multiple federal agencies regulate AI in healthcare, including the FDA, the Department of Health and Human Services (HHS), and the Centers for Medicare and Medicaid Services (CMS). In addition, states have enacted legislation designed to ensure that artificial intelligence remains a tool, not a replacement, in the doctor’s office. These state-level regulations require “healthtech” companies to embed compliance measures from the earliest stages of product development, including conducting thorough audits and employing geofencing technologies to navigate the patchwork of differing state laws effectively. By prioritizing proactive compliance and transparent practices, companies can not only mitigate legal risks but also build greater public trust, thereby enabling smoother adoption and competitive advantage in an increasingly regulated and scrutinized market.