Category: Blog

Full Coverage or False Promises? Inside Health Care Sharing Ministries

The front page of Samaritan Ministries’ website touts a reliable, Christian way to cover healthcare costs: a “… Biblical, non-insurance approach to health care,” if you will. Users are given the impression that, like health insurers, organizations like Samaritan will cover their basic healthcare costs, such as primary care, childbirth, prescription medicines, or emergency care. At first glance, the list of tiered plans with monthly prices, set against a backdrop of pastels, makes the website seem no different from any other health insurance company’s, bearing a striking resemblance to the sites of companies such as Aetna or United Healthcare. What this carefully curated image won’t reveal is a growing string of lawsuits, unpaid bills, and heartache.

Health Care Sharing Ministries (HCSMs) are defined as “…a form of health coverage in which members, who typically share a religious belief, make monthly payments to cover expenses of other members.” An August 2018 report from the Commonwealth Fund revealed that HCSMs do not include basic protections under the Affordable Care Act. These organizations do not offer coverage for pre-existing conditions, can charge higher rates based on health status, and may exclude essential health benefits. They may also impose dollar caps on health care services and fail to cap members’ out-of-pocket costs.

The lack of monitoring of HCSM activities has been drawing increasing attention from regulators and lawmakers. Most notably, in 2024 the co-founders of Missouri-based Medical Cost Sharing, an HCSM, pleaded guilty to an $8 million wire fraud conspiracy in the Western District of Missouri. James McGinnis and Craig Anthony Reynolds collected $8 million in revenue and used only 3.1% of that amount to pay health care claims. Similarly, in 2023, Trinity, an Atlanta-based health care sharing ministry run by a company called Aliera, declared bankruptcy, leaving behind $660 million in unpaid medical claims. Aliera is suing former CEO Shelley Steele for loans she received from the company and never repaid, including one loan of over $6 million. Liberty HealthShare, based in Ohio, used $140 million of the $300 million it received in member fees to fund a boutique airline, a marijuana farm, real estate purchases, and carpet stores, all while its health sharing subsidiaries Cost Sharing Solutions and Medical Cost Solutions LLC went bankrupt.

In response to the ongoing cases of fraud and the lack of regulation, states like Oregon and Washington are stepping up to protect consumers from incurring crippling debt and hardship. In 2025, Oregon Democrats introduced HB 2268, which would require any individual or organization marketing or selling a health care cost sharing arrangement to register with the Oregon Director of the Department of Consumer and Business Services. The Washington State Office of the Insurance Commissioner (OIC), led by Commissioner Patty Kuderer, fined ClearShare Health $275,000 in 2025 for selling insurance plans, disguised as “memberships,” without the OIC’s permission, and for using only $54,201 of the $524,095 it collected in fees between 2022 and 2024 to pay its members’ medical expenses.

Unfortunately, with the extreme cost of healthcare, severe cuts to Medicare and Medicaid, and widespread misinformation, consumers may continue turning to health care sharing ministries to help cover the cost of healthcare. In spite of the widely documented cases of fraud, both the previous and current Trump Administrations have shown support for health care sharing ministries. The current Administration plans to ensure tax parity for health care sharing ministries as part of its larger plan to lower healthcare costs by deregulating the health insurance industry, Medicare, Medicaid, and other federal programs. These changes would affect the more than 1.5 million Americans who are members of a health care sharing ministry.

Consumers looking to save on health insurance, or who do not believe that health insurance or federal programs align with their values, should think twice before purchasing a membership from a health care sharing ministry. With the federal government’s support of health care sharing ministries, it is up to state regulators to protect consumers. Increased supervision of these ministries, and penalties for those engaged in fraudulent practices, are long overdue. An industry that has promised those in need so much has instead left thousands with even more debt, grief, and regret than before.

Dietary Supplement Labels: Divided Opinions on the Relaxation of Regulations 

Vitamins, probiotics, minerals, and botanicals are among the many dietary supplements used by approximately 75% of Americans to support their diets and maintain their health. Although often found in the same store aisle as drugs, supplements are not regulated by the FDA in the same way. The Dietary Supplement Health and Education Act of 1994 (DSHEA) defines dietary supplements as a category of foods regulated by the FDA. The Act also established labeling requirements, including rules on the placement and content of disclaimers and nutrition labels. The required disclosures remind consumers that supplements are not reviewed by the FDA for safety or effectiveness before they are sold. Recently, the issue has been how much a manufacturer must disclose on a label, and specifically how many disclaimers are required to appear on each “panel” of a supplement label.

In a class action suit filed against Amazon in 2023, the plaintiffs, a group of consumers, claimed that Amazon promotes and sells products that lack mandatory disclaimers on their labels, making them dangerous, defective, and illegal. The plaintiffs alleged that Amazon advertised purported benefits of certain dietary supplements not approved by the FDA without providing the required disclaimers. While the case remains ongoing, Amazon recently filed a motion to pause the suit, claiming it hinges on a regulation that the FDA announced is under revision. 

On December 11, 2025, the FDA released a letter responding to requests to amend label regulation 21 C.F.R. § 101.93(d), which governs the placement of disclaimers. The current regulation, which implements DSHEA, provides that claims for supplements may be made if they are accompanied by a disclaimer in bold type that reads: “This statement has not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease.” The current rules require that the disclaimer appear on each panel of a product label where a claim is made. The FDA’s December letter asserted that, based on its initial review, revising the regulation to remove the each-panel requirement would “be consistent with section 403(r)(6)(C) of the FD&C Act while reducing label clutter and unnecessary costs.” The letter also acknowledged that the FDA has rarely enforced this requirement and stated that the agency is therefore likely to propose an amendment. There is no timeline for when the rule change might take effect; however, the letter states that the FDA will not enforce the existing rule while it is under review.

The regulations would still require the disclaimer to appear at least once on the bottle; however, many consumers and critics in the medical field believe the amendment would weaken an already deficient warning system. A study covering 2004 through 2013 found that consumers filed more than 15,000 reports of supplement-linked health problems with the FDA’s central reporting system. Supplements claiming to help with weight loss, sexual function, energy, and muscle building have been among those found to contain potentially harmful undisclosed ingredients such as prescription pharmaceuticals and steroids. Public health advocacy organizations and consumers have called for reforms to the supplement regulation process, including proposals for mandatory product listing, FDA standards, and premarket review.

Some supplement retailers advertise that their products are voluntarily self-regulated under industry-wide initiatives that set standards meant to complement or exceed government regulations. Programs run by groups like the Council for Responsible Nutrition and the Consumer Healthcare Products Association can promote product safety and fill gaps in government regulation. Although they offer certain benefits, these programs remain limited by their lack of enforcement power and their voluntary nature.

The letter proposing the relaxation of disclaimer requirements is a step in the wrong direction for advocates who have been fighting for heightened regulation. The plaintiffs in the Amazon case say the letter should not stop their suit, because their claims include many other disclaimer violations beyond the “each panel” rule. If the amendment is adopted, however, many believe it will be the first step toward dangerously weak warnings on supplement labels.

OpenAI to Launch ChatGPT “Health” Amidst Shifting AI Regulatory Schemes Surrounding Privacy

On January 7, 2026, OpenAI announced plans to launch ChatGPT Health (“Health”), a new model that will allow users to connect their health records and wellness applications to the chatbot. Every week, hundreds of millions of people use ChatGPT to inquire about health and wellness. OpenAI has set out the privacy protections and controls it intends to implement in handling highly personal and sensitive information, including data encryption, data isolation, user options to delete chats from its system, and a commitment not to use Health inputs to train the foundation model. Similar to its existing system, Health will use a large language model (LLM) to serve users in chatting about health, reviewing medical records, summarizing visits, and providing nutrition advice, among other functions.

Executive actions have shifted toward limiting AI regulation, attempting to maintain the United States as a global leader in AI innovation and encouraging industries to adopt automation. In December 2025, President Trump issued Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” seeking to prevent state regulations from creating a patchwork of regulatory regimes and to promote national consistency instead. This action alone does not prevent state-level AI or privacy laws; however, it does establish a task force to challenge them. The EO followed a previous action that removed Biden-era regulations on AI, classifying them as a hindrance to innovation and free markets.

The Food and Drug Administration (“FDA”) regulates AI health technology, classifying certain products as software as a medical device (SaMD) under the Federal Food, Drug, and Cosmetic Act (“FD&C Act”). On January 6, 2026, the FDA provided guidance on its oversight of AI devices, clarifying that low-risk products used for general wellness will not be regulated as medical devices. Software that is “unrelated to the diagnosis, cure, mitigation, prevention, or treatment of a disease or condition” is not a medical device under the FD&C Act. The FDA explicitly classified certain software programs as general wellness products, likely putting Health into a regulation-exempt status under the FD&C Act.

Systems that function solely to transfer, store, convert, format, and display medical device data are characterized as Medical Device Data Systems (MDDS) and are subject to the FD&C Act. However, the FDA has also clarified that non-device MDDS with software functions that store patient data, convert digitally generated data, or display previously stored patient data are exempt from regulation so long as they do not analyze or interpret that data. This distinction creates uncertainty about Health’s classification because of the functional interaction between the data users input and the chatbot’s responses.

The Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule requires covered entities and business associates to properly handle protected health information (“PHI”). Users submitting medical records to Health would not render OpenAI a covered entity or business associate, leaving it, as a consumer health product, outside of HIPAA’s regulatory scope. Data sharing of the kind Health proposes across Apple Health, MyFitnessPal, and other applications falls outside the HIPAA framework when data are disclosed for purposes other than treatment, payment, or healthcare operations, or when disclosure would otherwise require authorization under the Privacy Rule, 45 C.F.R. § 164.508.

The Federal Trade Commission (“FTC”) may serve as a backstop against these regulatory rollbacks. The FTC regulates healthcare privacy in part through breach notification: under the Health Breach Notification Rule (“HBNR”), vendors of personal health records must notify the FTC and consumers if a data breach occurs. A vendor under the HBNR is any non-HIPAA entity or business associate that “offers or maintains a personal health record.” It is uncertain whether Health will be subject to regulation under this category, or any other, despite its handling of users’ personal health record uploads. Litigation offers an alternative method of accountability; Flo Health Inc., for example, recently settled a class action over sharing users’ health data with Facebook, Google, and others without consent.

As the regulatory landscape surrounding Health continues to evolve, it is uncertain how privacy concerns will be handled. Federal agencies and the executive branch are giving developers broad autonomy over privacy practices as AI is integrated into healthcare, leaving much of the accountability to be exercised through litigation or after-the-fact FTC enforcement.

How America’s AI Action Plan Could Affect Brain-Computer Interfaces

On January 23, 2025, President Donald Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which sought to revoke existing AI policies and directives that act as barriers to American AI innovation. The federal government’s push for AI development may accelerate the availability of neurotechnologies that incorporate AI, while reducing regulatory oversight and consumer protections. 

Pursuant to the Executive Order, the White House released a comprehensive policy strategy entitled “Winning the Race: America’s AI Action Plan” in July of 2025. The policy includes a recommendation to remove red tape by “review[ing] all Federal Trade Commission (FTC) investigations . . . to ensure that they do not advance theories of liability that unduly burden AI innovation.” The policy also encourages the country to “establish regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools . . . enabled by regulatory agencies such as the Food and Drug Administration (FDA).” Implementing these recommendations may directly affect the development of neurotechnologies.

Brain-computer interfaces (BCIs) are neurotechnologies that allow for communication between the human brain and external output, such as a computer, mobile device, or prosthetic device. They are subject to FDA and FTC oversight, depending on their intended use. The primary FDA department responsible for regulating medical neurotechnologies is the Division of Neurological and Physical Medicine Devices (DNPMD). For direct-to-consumer technologies, the FTC oversees consumer protection and privacy. The Management of Individuals’ Neural Data Act of 2025 (the MIND Act) is proposed legislation that would direct the FTC to study how neural data are currently governed. 

Neuralink and Merge Labs are companies that are eager to incorporate AI in their BCI technologies. Neuralink, headed by Elon Musk, produces a BCI that is implanted in the brain near neurons of interest. Electrodes within the BCI then detect electrical signals from neurons and decode information. The goals of Neuralink are to “restore autonomy to individuals with unmet medical needs today, and to unlock superhuman capabilities across many people in the future.” The company aims to eventually connect brain neural networks to artificially intelligent networks outside the brain.

One month before the release of America’s AI Action Plan, Neuralink received FDA breakthrough device designation to restore communication for individuals with speech impairment. Musk has also benefited from his relationship with the Trump Administration and his former position as the leader of the Department of Government Efficiency (DOGE), which has significantly reduced the federal civil service. In February of 2025, DOGE reportedly fired the FDA employees responsible for overseeing Neuralink. Ten months later, Neuralink hired the former director of the FDA office responsible for regulating the company to lead its medical affairs division. 

Sam Altman, the CEO of OpenAI, has recently partnered with the Trump Administration on the Stargate Project, which “intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States.” Co-founded by Altman, Merge Labs is researching a new approach that could combine gene therapy with an ultrasound device to create non-invasive BCIs. The company has a long-term mission of “bridging biological and artificial intelligence to maximize human ability, agency, and experience.” OpenAI is the largest investor in Merge Labs, which has raised $252 million of funding. Along with funding, OpenAI announced that it will collaborate with Merge Labs to accelerate progress, stating that “BCIs will create a natural, human-centered way for anyone to seamlessly interact with AI.” 

Merge Labs has not yet submitted technology for FDA approval. It is unclear how the technology would be classified, especially because the company’s mission is not explicitly related to medical uses and the technology aims to be non-invasive. With the release of America’s AI Action Plan, it is also unclear whether consumers can rely on the FTC for privacy protections regarding this technology. Removing red tape and enabling AI adoption may open the door to faster development and distribution of these life-changing technologies for people with disabilities. However, brain surgery and gene therapy that incorporate AI are potentially permanent medical procedures that could put Americans at risk of long-lasting health impacts and privacy invasions.

Next-Generation Physicians Are Using Augmented Intelligence: Is the Law Ready?

What if a physician working alone at night in a rural hospital could summon a tireless “Dr. House” with every difficult case: a trained medical diagnostician who is always awake, ever ready, and rarely hallucinates?

Interactive artificial intelligence (AI) diagnostic models are rapidly evolving beyond ChatGPT and traditional “black box” systems that opaquely analyze radiology scans or lab values, toward higher-order, transparent language models capable of intelligently explaining and diagnosing complex illnesses. Researchers at Harvard Medical School recently developed an AI system named “Dr. CaBot” that will eventually function as a digital peer capable of generating differential diagnoses and detailed reasoning. As medical schools from Harvard to the University of Miami train tomorrow’s physicians to problem-solve using science, clinical judgment, pattern recognition, and logic, educators are embracing a novel resource to strengthen their students’ skills. The American Medical Association (AMA) uses the phrase “augmented intelligence” to conceptualize AI’s assistive role, emphasizing that these tools enhance human intelligence rather than replace it.

Technology and medicine are moving quickly, and the legal field has yet to catch up; in many cases, innovation has spread faster than stare decisis. While attorneys await new rules, advancements in AI and machine learning pose greater risks and rewards for the healthcare sector than for most other applications, rivaled perhaps only by the defense industry.

Evolving Liability Frameworks 

As patients navigate an increasingly automated health care ecosystem, in which many insurance determinations are made by algorithms and 66% of clinicians integrate Artificial Intelligence/Machine Learning (AI/ML) tools, new questions around liability and standards of care will emerge. When harm occurs, does the law look to the software developer who wrote the code, the healthcare system that deployed it, or the physician who ultimately incorporated the technology into their clinical decision making? Is assistive AI any different from orthostatic vital signs, a tool whose value turns on whether a skilled practitioner interprets the readings correctly or incorrectly?

The incorporation of advanced AI diagnostics into patient care has created a patchwork of legal and regulatory challenges across the nation. Currently, the FDA classifies AI/ML technologies in healthcare settings under “Software as a Medical Device (SaMD)” guidance in an attempt to bring AI tools under medical device and products liability regulations. However, a framework designed for static medical devices that may suffer manufacturing, design, or warning defects was not created for a quickly moving target like an AI tool, which can learn and evolve over time.

The SaMD classification gives AI/ML diagnostic tools a form of FDA preemption that complicates malpractice and products liability claims under state law. For example, when a legacy device, such as an insulin pump or glucometer for a diabetic patient, receives FDA clearance under 21 U.S.C. § 360k, the manufacturer may introduce a new product to the market, subject to certain risk-mitigation measures. In Dickson v. Dexcom, a “Class II: De Novo” authorization shielded the manufacturer from tort liability when a continuous glucose monitor failed to warn a patient of hypoglycemia, which led to a motor vehicle accident. Many AI diagnostic tools are entering the market under this same “device” classification, making it critical for doctors and administrators to understand the regulatory landscape and potential exposure before deployment.

Duty to Disclose in Clinical Practice 

In addition to understanding state and federal liability frameworks, there is growing discussion around disclosure and transparency in the use of AI in diagnostic processes. Because the use of AI/ML is closely associated with protected health information (PHI) and broader risks, California, Colorado, and Utah have enacted laws that mandate disclosure in clinical treatment. For providers, and the attorneys who represent them, this is often a state-specific discussion: Texas law requires providers to disclose AI use in clinical care, whereas Nevada prohibits providers from utilizing AI systems in behavioral health contexts.

Where state law is silent on the issue, physicians should remain vigilant around efforts to obtain valid informed consent regarding use of AI in clinical settings, as state medical boards ultimately hold physicians accountable for disclosures and outcomes related to the integration of novel tools into diagnosis and treatment plans.

Regardless of jurisdiction, research shows that patients value connection with physicians, and when visiting a healthcare practice, they expect to consult with a doctor. Few people expect their provider to sidebar with ChatGPT or even a purpose-built OpenAI language model that can rule out hundreds of mystery illnesses sans implicit bias—although Augmented Intelligence may ultimately solve the problem. Similarly, when harm occurs, current medical malpractice remedies were built around the assumption of human negligence instead of errors arising from machine learning misinformation.

Moving Forward

Legal scholars stand at the nexus of healthcare liability and AI/ML diagnostics where case law is yet to be written. Can plaintiffs’ attorneys establish vicarious or joint and several liability when claims involve an AI developer and a health system? What remedy exists when a physician outsources clinical judgment to a trained language model or fails to scrutinize results? As a net benefit, will the predictive powers of AI diagnostic models decrease both primary care-to-specialist patient wait times, and the risk of human error?

It appears that emerging physicians have embraced the next “possibility model” in medicine—and the health law community must respond by establishing guidance to address outstanding questions related to liability, reliability, governance, consent, and privacy. Perhaps tomorrow’s attorneys can ask AI for guidance.

Author’s Note: Some healthcare providers and policymakers now prefer the term “misinformation” over “AI hallucination” in an effort to avoid stigmatizing mental health conditions.

340B Rebates: Essential for Under-Resourced Hospitals or a Hindrance to Pharmaceutical Companies?

Earlier this month, the United States Court of Appeals for the First Circuit granted a Rule 42 motion for voluntary dismissal of a case filed by the American Hospital Association (AHA) against Robert F. Kennedy Jr. and the Health Resources and Services Administration (HRSA) regarding the implementation of a new pilot 340B rebate program. The 340B program provides substantial discounts on outpatient drugs to covered entities fitting into six categories: disproportionate share hospitals, children’s hospitals and cancer hospitals exempt from the Medicare prospective payment system, sole community hospitals, rural referral centers, and critical access hospitals. Once deemed eligible for the program, these entities receive significant discounts on a substantial range of outpatient medications. The program has enabled under-resourced hospitals serving vulnerable populations to provide comprehensive outpatient medication options without imposing a significant financial burden on providers or patients. The 340B program has expanded significantly, from approximately 389 covered entities at its inception in 1992 to 5,085 in 2022.

The usefulness of the 340B drug pricing program has long been debated by healthcare providers and drug manufacturers. Covered entities have argued that these discounted drug prices are essential for poorly resourced healthcare centers to provide adequate care to vulnerable Americans. Drug manufacturers have raised concerns about duplicate discounts, in which discounts are provided through both 340B and Medicaid, as well as about oversight and transparency regarding who truly saves money through the program.

The proposed pilot sought to address some of the concerns held by drug companies. It would have required 340B providers to assume the full cost of 10 commonly used drugs, primarily for diabetes and chronic heart conditions, and later submit claims data to the drug manufacturers for potential 340B pricing. The pilot was met with resistance from several providers, resulting in legal action by the American Hospital Association.

In American Hospital Association et al. v. Kennedy et al., the AHA and multiple covered entities filed suit against Secretary Kennedy and HRSA in the United States District Court for the District of Maine, alleging that the proposed pilot program violates the Administrative Procedure Act and would impose unnecessary administrative and financial burdens on already under-resourced hospitals and healthcare providers. The plaintiffs’ complaint alleged that the more than 1,000 comments submitted during the mandatory comment period were largely ignored by Secretary Kennedy and HRSA, thereby violating the Administrative Procedure Act’s public comment requirements.

Several pharmaceutical companies submitted motions to intervene in support of HRSA’s efforts, citing issues with duplicate discounts under Medicaid and 340B, as well as concerns about program integrity and transparency. Secretary Kennedy and HRSA, in their response to the AHA’s complaint, argued that the pilot program and its comment period did not violate the Administrative Procedure Act and that the plaintiffs had failed to show irreparable harm. The District Court of Maine granted the plaintiffs’ motion for a preliminary injunction, which the defendants appealed to the United States Court of Appeals for the First Circuit. The case was ultimately dismissed, and the Department of Health and Human Services withdrew the proposed pilot, indicating that it may restart the administrative process for a similar program in the future.

340B reform remains an important topic of conversation as the program continues to expand across the nation. While this most recent attempt at reform did not come to fruition, more attempts seem likely to be on the horizon, and changes to the program may be forthcoming.