Virtual Scribing: Using AI in Medical Practice

August 21, 2024

As we look at the proliferation of artificial intelligence (AI) in healthcare, one of the areas where it is having an immediate impact is virtual scribing. The following provides background information and best practices, and addresses some frequently asked questions (FAQs) from medical providers.

VIRTUAL SCRIBING AND CLINICAL DOCUMENTATION
The benefits of using AI tools to quickly and accurately generate a record of clinical interactions are obvious. However, there are many considerations to examine before implementing such tools.

Patients will need to be made aware of, and consent to, the recording devices used to generate the records. Providers will need to learn to “narrate their examinations” to populate the record. A policy of erasing the recording’s work product at regular, short intervals, combined with open access to the final record generated from that work product, will help allay patients’ fears about how their information was captured and what is going into their permanent medical record.
Providing the patient-access processes required by the 21st Century Cures Act will be even more important. One can also predict that patients’ awareness of the record, and their requests to edit, amend, or delete materials in it, will increase; providers and their staff will need to be cognizant of the HIPAA processes and documentation this requires.
Finally, and most importantly, given how AI works and its inherent ability to produce fluent but possibly inaccurate, misleading, or even harmful output, the provider will absolutely need to read and verify the content of the notes generated by AI. The old option of “dictated but not read” becomes, in this case, “AI generated but not read,” and will not be an accepted practice.

FAQs AROUND AI AND VIRTUAL SCRIBING

Is a HIPAA business associate agreement necessary? Yes. Since AI/scribe functionality will almost invariably involve Protected Health Information (PHI), a Business Associate Agreement (BAA) is required by the HIPAA Privacy and Security Rules.

Do patients need to consent to the remote AI scribing? Verbal or written? What are the elements of that consent? This depends entirely on state law. In “two-party” states, where consent is required from both parties to record a conversation, it is absolutely necessary to obtain patient consent. In “one-party” states, where only one party’s consent to recording is required, patient consent may not be a legal necessity, but it is still strongly encouraged. There are many ways to obtain this consent. It can be incorporated into a broader consent form that a patient signs at the onset of treatment, or it can be a standalone document. The current state of the industry seems to be that providers obtain verbal consent, which is then documented in the chart. That appears to be the minimum standard that most healthcare organizations are using.

How long does the recording last, and is it destroyed once the medical record is created? This is completely vendor-dependent and not covered by any particular regulation. One major industry leader currently states that it retains recordings for one full year, but we are seeing a trend toward shorter holding periods. The medical record created from the recording is subject to the usual rules and policies for medical record retention and accessibility.

Can you prevent the medical record that is created, and the professional interpretations in it, from being used to train or refine the AI? You would have to negotiate this with the product vendor. While technically negotiable, the current business model is so heavily dependent on using customer data to refine the model that vendors usually make granting this use a condition of using the product.

Who owns what? This is difficult to answer. Traditionally, the professional owned the record, the patient owned access to it, and the ancillary services used to generate the record owned nothing. Those concepts of ownership no longer really apply; in the current environment, it is unclear what “owning” any piece of data actually means.

Do you need to openly state on the record that it was generated by AI service ______ and reviewed by the professional? This is not currently covered by regulation, but it is strongly recommended as a best practice.

In the past, some providers included disclaimers about the possible irregularities of voice recognition (although our legal advice frowned upon that, as such a disclaimer provided no protection and simply acknowledged that you knew the output contained errors and did not take steps to fix them). Is the healthcare community using disclaimers like this? While the previous answer notes that being transparent that AI was used to generate the clinical record is a best practice, the community is (rightly) moving away from additional disclaimers beyond that. They offer no real protection and may actually increase potential liability. By signing a document, providers are responsible for its content, regardless of how that content was created. If some sort of disclaimer is still desired, a better practice would be to say something like: “This document was created in part by using voice recognition software and was reviewed by the author. If errors are present, please bring them to our attention.”

Information in this article is for general educational purposes and is not intended to establish practice guidelines or provide legal advice.

Article originally published in 3Q24 Copiscope.
