AI in Healthcare: Balancing the Benefits and Risks

The incredible speed at which artificial intelligence (AI) has been adopted in medicine has outpaced the regulatory environment that would typically provide guidance and rules for its use. As a result, providers who use AI may be left searching for answers on how to incorporate the technology safely and how to avoid legal risk. When is patient consent required? How should the use of this technology be documented?

As of this writing, no federal regulation exists to clearly answer these questions, and in the current political climate there is no indication that new regulations around AI will be created in the near future. A few states, notably Colorado and California, have been early leaders in passing legislation to regulate AI technology, but the viability of these laws is unknown and their application to the practice of medicine remains unclear. In summary, there is no clear regulatory language from governing bodies on the rules for using AI.

Given all this uncertainty, the bedrock of a safe AI strategy is simple: providers should always review the output of AI before it reaches the patient or becomes part of the medical record. Failure to review notes or communications drafted by AI presents substantial medico-legal risk.

AI’s Potential to Reduce Administrative Burdens
While the application of AI-enabled technology in medicine is broad, most practical concerns have recently focused on so-called “generative AI,” specifically its use in helping providers generate clinical notes and answer patient questions.

An American Medical Association article1 noted that surveyed physicians identified help with documentation as the most relevant use of AI technology. The top areas cited (by the percentage of surveyed physicians citing each) included:

  • 80% Billing codes, medical charts or visit notes
  • 72% Creation of discharge instructions, care plans and/or progress notes
  • 57% Generation of draft responses to patient portal messages

Virtual Scribing and Consent
AI has already been widely adopted as a virtual scribe during patient encounters, promising a substantial increase in provider efficiency and satisfaction. Use of this technology involves some form of recording of the patient encounter, so patient consent for the recording may be required in some circumstances to comply with state and local laws. Even where consent is not explicitly required, we recommend obtaining verbal consent from the patient before using an AI virtual scribe for the encounter. This consent may provide some protection for the provider against allegations of negligent consent (a lack of informed consent) and helps maintain trust in the provider-patient relationship.

The rules regarding documentation of this consent are unclear, but some form of contemporaneous documentation in the chart is best practice. We recommend against automatically adding a blanket statement to charts indicating that AI technology was used and that errors may be present as a result. Ensuring that documentation is accurate remains the provider’s sole responsibility. Disclaimer statements may create a false sense of security that errors are somehow not the provider’s responsibility, or may give patients and third parties the impression that the provider is detached from the chart notes and has not fully reviewed them.

Other Considerations
Many of the major electronic health record (EHR) vendors are rolling out generative AI solutions to aid in direct communication with patients. These tools fall along a spectrum: from those that act as little more than a glorified spelling and grammar check, to those that translate a response to a different reading level, all the way to de novo generation of a response to a patient question. Regulations regarding consent and notification for these applications of AI are even more opaque than those for virtual scribing.

Given all of the above, it remains imperative that every response generated or edited by AI is fully reviewed by a medical professional before it is sent to a patient. Several studies have already demonstrated that AI solutions may generate advice that, if followed, can directly lead to patient harm. Providers must remain vigilant to catch potential errors in these responses. As a best practice, AI-generated responses should be treated as a rough first draft and never sent to a patient without review and correction by a qualified provider, as the legal liability continues to rest with the medical entity sending the communication.

The use of generative AI in medicine offers the potential for significant relief from some of the more mundane and frustrating tasks associated with the practice of medicine, but it also brings a new set of risks, both to patient safety and to provider liability. Using these new tools safely requires all providers to review any AI-generated output before it is incorporated into the medical record or becomes visible to patients.

Risk Management Recommendations for AI
An article in The New England Journal of Medicine2 examined the use of AI in healthcare through a “liability risk” lens. It highlighted proactive recommendations that healthcare organizations and clinicians implementing AI tools should consider, including the following:

  • Adoption decisions and post-deployment monitoring should consider the risk level of AI tools. High-risk tools require substantial time and resources for safety monitoring, while lower-risk tools may need more generalized, lower-touch monitoring.
  • Healthcare organizations should apply lessons learned from older forms of decision support to AI applications. Courts may adopt similar modes of analysis for AI-related cases.
  • Organizations should resist lumping all AI applications together and instead reflect on the varying risks associated with different tools.
  • Informing patients when AI models are used in diagnostic or treatment decisions can reduce the risk of informed consent claims during litigation.
  • Following emerging guidelines for evaluating AI model safety can help minimize the human and financial cost of AI-informed medicine.

1 https://www.ama-assn.org/practice-management/digital/physiciansgreatest-use-ai-cutting-administrative-burdens
2 N Engl J Med 2024;390:271-278


The information provided herein does not, and is not intended to, constitute legal, medical, or other professional advice; instead, this information is for general informational purposes only. The specifics of each state’s laws and of each circumstance may impact its accuracy and applicability; therefore, the information should not be relied upon for medical, legal, or financial decisions, and you should consult an appropriate professional for advice specific to your situation.

Article originally published in 2Q25 Copiscope.


