Clinical Quick Reference Monthly / April 2025 / Spring Seminar Series
ENTER THE DRAGON: MICROSOFT MERGES NUANCE CLOUD-BASED VOICE RECOGNITION AND DAX COPILOT
On March 3, Microsoft officially unveiled Dragon Copilot for healthcare professionals, promising the industry's first AI assistant able to automate tasks, streamlining documentation and information management.
By integrating and enhancing established features of Dragon Medical One and DAX Copilot, Dragon Copilot aims to support clinician productivity, elevate the patient experience, and generate financial impact.
What Is Dragon Medical One?
Dragon® Medical One was created for use by clinicians. This cloud-based speech recognition software allows users to quickly generate secure digital documents from anywhere and send them directly to any Windows device.
Cloud-based voice recognition software enables users to utilize voice recognition technology online, usually through a subscription model.
Some common industry examples include:
Nuance Dragon Medical One,
Philips SpeechLive,
and IBM Watson Speech-to-Text.
This technology promotes a range of clinician benefits such as:
accessibility from any location,
decreased hardware needs,
and possible cost savings.
What Role Does DAX Play?
DAX is an artificial intelligence assistant aimed at simplifying clinical documentation and improving the productivity of healthcare providers.
DAX stands for Dragon Ambient eXperience. It is a voice-activated product powered by AI. This piece of the Microsoft Cloud for Healthcare framework is fueled by Nuance's conversational and ambient AI, along with generative AI features.
DAX AI records dialogues between patients and clinicians and transforms them into detailed, specialty-specific notes. DAX Copilot offers seamless integration with electronic health records (EHRs), especially Epic.
This product can generate:
referral letters,
evidence summaries,
after-visit summaries,
and encounter summaries.
AI + HEALTH — WHAT IS AN ARTIFICIAL INTELLIGENCE COPILOT?
AI denotes the discipline and technology of creating “intelligent” devices that follow algorithms or a specific set of rules to replicate human thinking abilities.
AI systems can function in a deliberate, smart, and flexible way. A key advantage lies in their capacity to identify and understand patterns and connections in extensive multidimensional and multimodal datasets. In the healthcare field, AI technologies can distill a patient's complete medical history into a single score that signifies a probable diagnosis.
AI is not a single, all-encompassing technology, though. Rather, it merges various subfields (like machine learning and deep learning). Machine learning (ML) is the examination of algorithms that enable computer programs to enhance automatically through experience.
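As a toy illustration of that definition of machine learning, the sketch below shows a one-parameter model "improving through experience": it repeatedly adjusts a single weight to reduce its error on example data. The scenario, numbers, and function names are invented for illustration and have no connection to any Microsoft or Nuance product.

```python
# Minimal illustration of "improving through experience": a one-parameter
# model fits y = 2x by repeatedly adjusting its weight to reduce error.

def train(examples, lr=0.01, epochs=200):
    """Learn weight w for the model y = w * x via gradient descent."""
    w = 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y          # how wrong the current model is
            w -= lr * error * x        # nudge w to reduce the error
    return w

data = [(1, 2), (2, 4), (3, 6)]        # noiseless samples of y = 2x
w = train(data)
print(round(w, 2))                     # converges near 2.0
```

Each pass over the data is one unit of "experience"; the model's accuracy improves without any rule for `y = 2x` ever being programmed explicitly.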
An AI copilot is an assistant powered by AI that helps users with task completion. It automates workflows, enhancing process efficiency. It gathers insights from user interactions, adjusts to their requirements, and offers contextually appropriate recommendations.
AI copilots are present in numerous applications, such as code completion software, virtual writing aids, and enterprise system integrations.
AI Copilot Features:
Task Completion
Copilots can assist with various tasks, including creating code snippets, drafting documents, overseeing finances, and offering health coaching.
Process Automation
They can automate repetitive tasks and procedures, allowing users to concentrate on more intricate and imaginative assignments.
Contextual Recommendations
Copilots observe user actions and offer recommendations tailored to the present situation, enhancing their efficiency and assistance.
Ongoing Learning
They continuously learn and adjust to user requirements, enhancing their effectiveness over time.
Diverse Range of Uses
AI copilots can assist in software development, writing, personal finance, health and fitness, and enterprise systems.
Should Clinicians Be Concerned About Quality or Privacy Issues with the Rise of Ambient AI?
AI in healthcare can improve patient care and operational efficiencies, but it also brings up important issues regarding physician-patient privacy. AI applications typically necessitate substantial amounts of patient data, encompassing sensitive health information, thus rendering data security and privacy crucial.
Ambient AI utilizes technology and computing to identify, respond to, and create insights based solely on the presence of individuals. Instead of participants actively performing queries or initiating actions, these systems are automatically activated and carry out their tasks independently.
Devices can now go beyond merely sending notifications and tracking metrics, and could potentially integrate more seamlessly into individuals' lives. For instance, suppose a user has an out-patient procedure planned for Thursday and cannot drive home afterward. Utilizing AI and sophisticated edge computing, the user’s phone could identify this calendar event, autonomously arrange an Uber for drop-off and pick-up, and even automatically organize food delivery post-procedure, ensuring the user doesn't need to cook after arriving home.
The integration of AI into clinical workflows is advancing quickly as well. A key example is ambient dictation technology, now capable of quietly monitoring a conversation between a physician and a patient, automatically transcribing it into notes for the physician to review and finalize later.
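The ambient-dictation flow just described can be sketched at a very high level. All names below are hypothetical, and the transcription step is a stub standing in for a real speech-recognition service; the point is only the shape of the pipeline, with the physician's review as the final gate.

```python
# Hedged sketch of an ambient-dictation flow: audio is captured passively,
# transcribed, and turned into a draft note the physician must finalize.

def transcribe(audio_chunks):
    # Placeholder: a real system would call a speech-recognition model here.
    return " ".join(audio_chunks)

def draft_note(transcript):
    # The output is a draft only -- it requires clinician sign-off.
    return {"body": transcript, "status": "DRAFT: pending physician review"}

chunks = ["Patient reports mild headache for two days.",
          "No fever. Plan: hydration and rest."]
note = draft_note(transcribe(chunks))
print(note["status"])   # DRAFT: pending physician review
```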
Given the vast amount of very personal data being collected, a significant worry is the potential for data breaches, which can endanger patient confidentiality and lead to negative consequences such as job discrimination and increased long-term healthcare costs.
Additional issues have been highlighted by research into quality of care. A study published in the journal BMJ Quality & Safety found that AI-powered chatbots, including Microsoft's Bing chatbot, provided patients with answers that were often incomplete and inaccurate when responding to questions about medications.
Researchers observed that although many chatbot answers were complete, they were difficult to read and frequently lacked information or contained inaccuracies, potentially threatening patient and medication safety. These errors highlight the importance of consulting healthcare professionals for accurate medical advice.
Studies have also noted that the chatbot's inability to understand the underlying intention behind a patient's question was a major drawback.
How Does the Artificial Intelligence System Function to Accomplish All These Tasks?
Let’s review some operational basics:
Every AI system uses algorithms and data to perform tasks, mimicking human-like cognitive functions.
This process can be outlined in five key steps:
Input — Engineers, developers, and users gather the necessary data for the AI. This data can take various forms, such as text, images, or audio, and must be formatted in a way that algorithms can easily interpret.
Processing — The AI processes the collected data based on its programming, determining the appropriate actions to take or insights to offer. This step is akin to how the human brain evaluates information to make decisions or resolve issues based on the received input.
Outcomes — Following the analysis of the data, the AI predicts potential outcomes. This involves the AI assessing whether the data results in a “pass” or “fail,” based on its alignment with previously established patterns.
Adjustments — In cases of a failure, the AI can “learn” from the error, and the processing phase is revisited under modified conditions, such as refining the algorithm.
Assessment — The final step enables the AI to evaluate the data further, drawing inferences and making predictions.
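The five steps above can be sketched as a toy calibration loop. The threshold "model", the data, and the adjustment rule are all invented for illustration; no real AI system is this simple.

```python
# Toy walk-through of the five steps: gather input, process it, score the
# outcomes, adjust on failure, and assess the result.

def calibrate(samples, labels, threshold=0.3, rounds=5):
    errors = None
    for _ in range(rounds):
        # Processing: apply the current rule to each input value.
        predictions = [1 if s >= threshold else 0 for s in samples]
        # Outcomes: each prediction "passes" or "fails" against its label.
        errors = sum(p != y for p, y in zip(predictions, labels))
        if errors == 0:
            break  # Assessment: the rule now fits the known data.
        # Adjustments: "learn" from each failure by refining the rule.
        for p, y in zip(predictions, labels):
            if p == 1 and y == 0:
                threshold += 0.1   # too permissive: raise the bar
            elif p == 0 and y == 1:
                threshold -= 0.1   # too strict: lower the bar
    return threshold, errors

# Input: readings with known answers (1 = abnormal, 0 = normal).
threshold, errors = calibrate([0.2, 0.4, 0.7, 0.9], [0, 0, 1, 1])
print(errors)   # 0 once the rule matches every known answer
```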
Safeguarding Our Health Information
Key Takeaways
Compared with traditional telemedicine, artificial intelligence (AI) applications demand significantly larger data volumes, making data security even more vital.
It is also crucial to remember that data used in AI applications often must be transferred to one or more cloud servers or Graphics Processing Units (GPUs), introducing an additional data-processing stage where breaches may occur.
Healthcare facilities and provider offices require strong security measures to safeguard patients against unauthorized access and data breaches.
Patients must be made aware of how their data will be utilized and should have the chance to opt out of its use in AI applications. Moreover, patients should pursue care at hospitals and healthcare providers that follow stringent privacy and security guidelines, like those specified under HIPAA, to guarantee their information is managed properly.
Healthcare organizations must educate their personnel on AI privacy and security protocols and establish robust security measures, such as strong encryption, helping to safeguard patient data throughout storage, transmission, and processing. They must integrate industry best practices, including refraining from storing patient data in large language models (LLMs).
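As a hedged sketch of one such best practice, the snippet below strips a few obvious identifiers from a note before it could be sent to any external AI service. Real de-identification (e.g., HIPAA Safe Harbor) requires far more than regular expressions; these patterns and the sample note are illustrative only.

```python
import re

# Crude pattern-based redactor: masks a few obvious identifiers so they
# never reach an external service. Not a substitute for real de-identification.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Pt seen 4/3/2025, SSN 123-45-6789, callback 555-867-5309."
print(redact(note))   # Pt seen [DATE], SSN [SSN], callback [PHONE].
```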
Patients can likewise promote transparency and accountability regarding AI use in healthcare to guarantee that their privacy remains protected.