ChatGPT, MD? Artificial Intelligence in Healthcare 

ChatGPT is a natural language processing tool created by OpenAI, a research company specializing in artificial intelligence tools such as DALL-E 2, an AI art generator. Since its launch in November 2022, ChatGPT has become one of the fastest-growing apps in recent memory. By January 2023, the chatbot had reached 100 million active users, averaging 13 million visitors per day. Available on both desktop and mobile devices, ChatGPT uses a dialogue format similar to messaging apps, in which users type prompts and ask questions on a wide variety of topics. Numerous articles offer tips on the best prompts to give ChatGPT, from drafting cover letters to solving complex math problems and editing videos. Given its ability to produce human-like responses, some users have turned to ChatGPT for medical advice: patients can ask general questions about health conditions and use AI tools to generate summaries or resources in preparation for medical visits. However, the popularity of ChatGPT and its accelerated development have led the healthcare industry to question how artificial intelligence may affect patient care in the near future, from privacy to clinical decision-making. As a result, it is worth asking how AI tools such as ChatGPT might be used to improve the quality of healthcare, and what risks their use involves.

Prior to ChatGPT, people frequently turned to the Internet to self-diagnose. Studies have shown that between 67.5% and 81.5% of American adults have searched for health information online. While self-diagnosis is not a new phenomenon, the conversational nature of ChatGPT and the lack of regulation around artificial intelligence raise new ethical and moral questions about using AI tools this way. Generally, health experts have recommended against using ChatGPT for medical advice. However, doctors have reported that AI tools may help patients learn more about certain conditions such as COVID-19, including symptoms, causes, risk factors, and treatments, as well as the side effects of prescription drugs. Early studies have also indicated that AI models, including ChatGPT, could be used to screen patients for mental health conditions such as depression or anxiety and to help identify treatment options.

While ChatGPT has the potential to change how the medical field diagnoses and treats various health conditions, it also raises new liability risks for healthcare providers. Physicians who rely on AI tools may face greater exposure to medical malpractice claims, as courts would likely find it unreasonable for health professionals to rely on AI-generated content in clinical decision-making. In addition, companies have noted that there is currently no way for physicians to use ChatGPT with patients' protected health information (PHI) while remaining compliant with HIPAA. For instance, a physician who used ChatGPT to transcribe handwritten notes or recordings from a routine appointment could violate HIPAA regulations without knowing it.

Although the use of artificial intelligence in healthcare settings is fairly limited today, researchers have begun considering how AI systems can be built to improve the efficiency and effectiveness of medical services. This includes proposals for “human-centered AI” that takes a problem-solving approach to key issues in clinical care. One possible method is to train AI models on large amounts of data to look for certain health conditions in specific populations, as the sketch below illustrates. Recently, the Stanford School of Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) announced the creation of RAISE-Health (Responsible AI for Safe and Equitable Health), an initiative to integrate AI into healthcare. Its goals include developing a platform for the responsible use of AI systems, enhancing clinical services, and educating patients and providers on best practices.
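To make the population-screening idea above concrete, here is a minimal sketch in Python using scikit-learn. The features, synthetic data, and risk model are hypothetical stand-ins of my own, not any system described by Stanford or OpenAI; a real screening tool would require clinically validated data, regulatory review, and HIPAA-compliant handling of patient records.

```python
# Hypothetical sketch: train a model on de-identified patient attributes to
# flag individuals at elevated risk for a condition in a population.
# All data here is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=0)

# Hypothetical de-identified features: age, BMI, systolic BP, smoker flag.
n = 5000
X = np.column_stack([
    rng.normal(50, 15, n),   # age (years)
    rng.normal(27, 5, n),    # body mass index
    rng.normal(125, 18, n),  # systolic blood pressure (mmHg)
    rng.integers(0, 2, n),   # smoker (0/1)
])

# Synthetic outcome: risk rises with age, blood pressure, and smoking.
logits = 0.04 * (X[:, 0] - 50) + 0.03 * (X[:, 2] - 125) + 0.8 * X[:, 3] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank the held-out population by predicted risk so clinicians can review
# the highest-risk patients first; AUC gauges how well the ranking works.
risk = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, risk):.2f}")
```

Even in this simple form, the output is a risk ranking for clinician review rather than a diagnosis, keeping a human decision-maker in the loop, consistent with the cautions discussed above.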

As ChatGPT becomes increasingly prominent in medicine and other industries, it is important to consider how AI tools might streamline the work of healthcare systems while also cautioning physicians about the legal and ethical risks involved in their use.
