Author: Sarah Jie

Health Care Privacy Concerns Around Mental Health Apps

Between 2019 and 2021, the market for mental health and wellness apps such as Calm, Headspace, and BetterHelp grew by 54.6%. In 2020 alone, the top 100 wellness apps were installed over 1.2 billion times. This growth can be attributed to the COVID-19 pandemic: lockdowns and the rise of remote work led more people to turn to mobile apps to pursue their health and fitness goals from home. Studies have also shown that individuals with existing mental health problems reported worsening symptoms during the pandemic. In this context, the convenience of mental health apps has enabled users to connect with therapists and mental health coaches from the comfort of their homes. Today, an increasing number of physicians recommend that their patients use mental health apps to reduce stress and improve sleep habits, with apps such as Calm allowing patients to access content through their health insurance plans.

But as mental health apps continue to grow in popularity, researchers have begun raising privacy concerns about the collection of user data. For instance, in 2023 the Federal Trade Commission (FTC) issued a proposed order banning BetterHelp, an online counseling service, from sharing sensitive information about users' mental health challenges for advertising purposes. The order also required the app to pay $7.8 million in consumer refunds. Given these developments, it is useful to consider both the current privacy concerns and the policy solutions available to address them. One of the main issues is the sheer volume of data collected by mental health apps. Data collection can include geolocation information, such as clinic visits, and purchase history for healthcare services. Mental health apps may also collect other sensitive information, such as a user's full name, date of birth, or home address, as well as health data on a user's mood, sleep habits, or symptoms. In the case of BetterHelp, the app's privacy policy now states that the company is not paid for user data, but that it may share visitor data with third-party advertisers such as Meta, TikTok, Snap, and Reddit.

This leads to the question of what improvements can be made to protect the sensitive data of users on mental health apps and what solutions are available to developers and policymakers. Currently, HIPAA regulations do not apply to most mental health apps unless a covered entity, a business associate relationship, and the disclosure of electronic protected health information (ePHI) are all involved. However, there are various steps that can be taken to safeguard user data. One study created a list of actions that stakeholders could take to ensure data privacy, including conducting a privacy impact assessment and writing readable privacy policies in accessible language. De-identification is another option available to developers, which can involve removing specific identifiers such as names and addresses from user data. In addition, apps that do not fall under HIPAA regulations must now follow the FTC's Health Breach Notification Rule, which the agency moved to update in June 2023 and which requires apps providing health care services or supplies to notify consumers, the FTC, and often the media of data breaches. Overall, both new and established developers of mental health apps can take these steps to better protect sensitive data and build trust among users in the digital space.
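To make the de-identification idea concrete, the sketch below shows one simple approach: dropping direct identifier fields and scrubbing email-like strings from free text. The field names and rules here are hypothetical illustrations, not an actual app's schema; real de-identification (for example, HIPAA's Safe Harbor method, which covers 18 categories of identifiers) is considerably more involved.

```python
import re

# Hypothetical set of direct identifiers to strip from a user record.
# A production system would cover many more categories (phone numbers,
# geolocation, device IDs, and so on).
DIRECT_IDENTIFIERS = {"full_name", "date_of_birth", "home_address", "email"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed
    and email-like strings redacted from free-text values."""
    cleaned = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if isinstance(value, str):
            # Redact anything that looks like an email address in free text.
            value = re.sub(r"\S+@\S+", "[REDACTED]", value)
        cleaned[key] = value
    return cleaned

record = {
    "full_name": "Jane Doe",
    "date_of_birth": "1990-01-01",
    "mood_score": 7,
    "journal_entry": "Felt anxious today, emailed jane@example.com about it.",
}
# full_name and date_of_birth are dropped; the email address inside
# the journal entry is redacted, while mood_score is retained.
print(deidentify(record))
```

Even a sketch like this illustrates the trade-off researchers point to: the less identifying data an app retains, the less it can leak or share, but the harder it becomes to personalize services.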

ChatGPT, MD? Artificial Intelligence in Healthcare 

ChatGPT is a natural language processing tool created by OpenAI, a research company specializing in artificial intelligence tools such as DALL-E 2, an AI art generator. Since its launch in November 2022, ChatGPT has become one of the fastest-growing apps in recent memory. By January 2023, the chatbot had reached 100 million active users, with an average of 13 million visitors per day. Available for both desktop and mobile devices, ChatGPT employs a dialogue format similar to messaging apps, where users can type in prompts and ask questions on a variety of topics. Numerous articles offer tips on the best prompts to give ChatGPT, from drafting cover letters to solving complex math problems and editing videos. Given its ability to form human-like responses, some users have turned to ChatGPT for medical advice. Patients can ask general questions about health conditions and use AI tools for summaries or resources to prepare for medical visits. However, the popularity of ChatGPT and its accelerated development have led industries to question how artificial intelligence may affect patient care in the near future, from privacy concerns to clinical care. As a result, it is worth asking how AI tools such as ChatGPT may be used to improve the standards of quality healthcare, and what risks are involved.

Prior to ChatGPT, people frequently turned to the Internet to self-diagnose: studies have shown that between 67.5% and 81.5% of American adults have searched for health information online. While this is not a new phenomenon, the conversational nature of ChatGPT and the lack of regulation around artificial intelligence raise new ethical and moral questions about using AI tools for health advice. Generally, health experts have recommended against using ChatGPT for medical advice. However, doctors have reported that AI tools may help patients learn more about conditions such as COVID-19, including symptoms, causes, risk factors, and treatments, as well as the side effects of prescription drugs. Early studies have also indicated that AI models including ChatGPT could be used to screen patients for mental health conditions such as depression or anxiety and to help determine treatment options.

While ChatGPT has the potential to change the medical field in terms of diagnosing and treating various health conditions, it also raises new liability risks for health care providers. Physicians who rely on AI tools may be subject to greater liability for medical malpractice, as courts would likely find it unreasonable for health professionals to rely on AI-generated content. In addition, companies have noted that there is currently no way for physicians to use ChatGPT with patients' protected health information (PHI) while remaining compliant with HIPAA. For instance, a physician who uses ChatGPT to transcribe handwritten notes or recordings from a routine appointment may violate HIPAA regulations without knowing it.

Although the use of artificial intelligence in healthcare settings is fairly limited today, researchers have begun considering how AI systems can be built to improve the efficiency and effectiveness of medical services. This includes proposals for “human-centered AI” that employs a problem-solving approach to key issues in clinical care. One possible method is to train AI models by analyzing large amounts of data to look for certain health conditions in specific populations. In recent news, Stanford School of Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) announced the creation of RAISE-Health (Responsible AI for Safe and Equitable Health), an initiative to integrate AI into healthcare. This includes developing a platform for responsible use of AI systems by enhancing clinical services as well as educating patients and providers on best practices.   

As ChatGPT becomes an increasingly prominent aspect of medicine and other industries, it is important to consider how AI tools have the potential to streamline the work of healthcare systems while also cautioning physicians of the legal and ethical risks involved in its use.

The Rise of Ozempic & Prescription Weight Loss Drugs

Ozempic is an injectable drug used to lower blood sugar in adults with type 2 diabetes and to reduce the risk of heart disease. But over the past few months, Ozempic has been hailed as a "miracle drug" by celebrities, tech moguls, and social media influencers alike. In September 2022, an article in Variety described how celebrities across the industry used Ozempic injections to lose weight ahead of "major events," with the most enthusiastic users being non-diabetic. Nevertheless, Ozempic's rise in popularity is not limited to Hollywood. As of today, TikTok videos tagged #ozempic have been viewed more than 637.6 million times, a figure that will likely continue to climb. While many videos feature prominent influencers on the app, dozens more feature ordinary users documenting their experience taking Ozempic with hashtags such as #ozempicjourney.

The recent popularity surge of Ozempic also coincides with the Food and Drug Administration's (FDA) 2021 approval of another weight-loss drug, Wegovy. Like Ozempic, Wegovy is a semaglutide injection intended to support weight management in overweight or obese adults. By mimicking hormones that regulate food intake in the brain, Wegovy works as an appetite suppressant. The FDA's approval of Wegovy increased demand for the drug and led many to seek out similar weight-loss prescriptions. The rising demand for these drugs led to a shortage of Ozempic in the United Kingdom in May 2022, followed by another shortage in the United States this past February. An article in Time magazine found that over five million prescriptions for Ozempic and similar drugs were written in 2022 alone. And in a study published in the New England Journal of Medicine, researchers found that adults with obesity but no diabetes had weight reductions of 5% or more after taking Ozempic for several weeks.

Beyond its popularity on social media, Ozempic has also been known to have various side effects, including possible thyroid tumors, pancreatitis, kidney failure, and vision changes. One of the most documented side effects is referred to as “Ozempic face”, which can result in fat loss from the face and the appearance of sagging skin due to drastic weight loss. In addition, it is necessary to continue taking Ozempic indefinitely to maintain the weight loss. 

Currently, it is unclear how drugs such as Ozempic and Wegovy may affect people who are of normal weight or fall outside of the FDA's criteria. While doctors have warned about the dangers of off-label use, the shortage of available prescriptions has made it more difficult for people with diabetes or obesity to access these drugs. The rising popularity and recent shortages of Ozempic have also prompted a broader conversation about health and our relationship to food. In recent news, Weight Watchers announced its acquisition of Sequence, a telehealth company that connects patients with doctors who can prescribe Ozempic and other drugs such as Wegovy and Rybelsus. Going forward, it may be useful to consider how the prescription of weight-loss drugs can be improved to provide affordable access for those with certain medical conditions and to educate the public on their effects.

Baked Brie, Peanut Butter & Baby Formula: Explaining the Major Food Recalls of 2022

On September 30, 2022, the Food and Drug Administration ("FDA") issued a food safety alert after two kinds of cheese were linked to a listeria outbreak. The cheeses in question? Brie (including baked brie) and camembert cheeses made by Old Europe Cheese, Inc. The products were sold under 25 different brand names in major retail stores such as Safeway, Whole Foods, and Trader Joe's, both nationwide and in Mexico.

The recalled products span a wide variety of cheeses, ranging from double crème wedges to cranberry baked brie. It is, however, far from the first grocery recall this year. A study conducted by Agruss Law Firm found that in 2022 alone, salmonella and listeria outbreaks led to 49 food and beverage recalls from the FDA, representing 37.4% of all food product recalls. This marks a considerable increase from 33.3% in 2021, with such outbreaks accounting for more than 45% of recalls this past year.

In addition, more than a fifth of grocery recalls this year came from peanut butter products contaminated with salmonella. On May 20, the FDA released a statement announcing the voluntary recall of nearly 50 Jif products for potential salmonella contamination. The recall also affected products distributed in international markets, including Canada, Thailand, and Honduras. An epidemiological review conducted by the Centers for Disease Control and Prevention ("CDC") examined a multistate outbreak of Salmonella Senftenberg infections. In a joint investigation with the FDA, the CDC found that of ten people interviewed, all ten had consumed peanut butter and nine had eaten Jif peanut butter products before becoming sick. In total, 21 people from 17 states fell ill and four were hospitalized. Using a process known as whole genome sequencing (WGS), the FDA discovered that the outbreak was linked to Jif products produced at the J.M. Smucker Company facility in Lexington, Kentucky. Jif then issued a report describing a machinery breach that allowed puddles of water to come into contact with peanut roasting equipment. Plant officials also discovered past incidents of salmonella at the facility, and a report obtained by Axios indicated that the issue may have been treated as a routine error or ignored by the company altogether.

However, food recalls this year weren't limited to cheese and peanut butter. The number of recalled products during the first quarter of 2022 was the highest figure in a decade, due largely to a major recall of baby formula. This past February, the FDA published a company announcement concerning a voluntary recall of powdered formulas produced at a plant owned by Abbott Nutrition. The announcement stated that four infants from three different states contracted Cronobacter sakazakii infections after consuming powdered formula, resulting in two deaths. The FDA's inspection of the plant revealed that key production areas, including the machinery and floor of the packaging room, tested positive for Cronobacter. And similar to the Jif plant, FDA inspectors discovered a water leak dripping from valves, leading to standing water on the floor and near the floor scrubber. The FDA also found that the company failed to identify the root causes of the Cronobacter complaints and that employees handling the infant formula did not sanitize their shoes or wear proper protective gear while working.

Many of the issues behind recent food recalls can be traced to supply chain problems and a lack of corporate oversight. While the passage of the Food Safety Modernization Act (FSMA) in 2011 gave the FDA greater authority to respond to food safety complaints, regulation has struggled to keep pace with the increasing industrialization and expansion of production facilities. However, an increase in food recalls does not necessarily mean an increase in food-borne illnesses or contamination. Because the FSMA allows companies to recall products as a precaution, the FDA can address potential health threats before they spread. As a result, it is important to keep in mind how these recalls can affect consumers of all ages and socioeconomic backgrounds, and to consider how the legal system can provide remedies to resolve issues in safety regulations and corporate oversight.