
For some time now, we’ve been watching how speech recognition-based AI tools can increase physician productivity, reduce burnout, and improve the quality of the patient experience.

In addition, health systems have looked to voice-enabled transcription to flag reimbursable conditions identified during diagnosis, while ensuring that no key health indicators are missed.

It is well known that one of the greatest burdens on caregivers is recording and annotating clinical encounters in electronic health record (EHR) systems; speech recognition is one of several tools available today that can alleviate the problem and reduce clinician workload.

Voice-enabled tools fall into the broad category of conversational AI along with chatbots and other productivity and automation tools. However, the maturity of these tools, especially in the clinical setting, is still a long way from the promise of this technology.

Users of leading speech recognition tools acknowledge that the technology can improve caregiver productivity. However, they also point out that ambient AI, meaning software that can understand conversations and provide clinical decision support in real time, is still in its early stages.

According to Dr. Stephanie Lahr, CIO and CMIO of Monument Health, speech recognition in clinical settings is complex, and capturing doctor-patient encounters accurately in speech recognition software is difficult.

Dr. Lahr noted that even with leading voice tool technology providers, “people behind the scenes” often interpret the conversation and differentiate clinical terms from the overall conversation.

BJ Moore, CIO of Providence and a user of speech recognition tools, poses the question: how can AI tools extract the necessary components from a doctor-patient encounter and add them to the EHR, while ignoring the rest of the chatter in the room?

Big Tech and Speech Recognition in Healthcare

Given the enormous potential of voice tools to improve productivity and transform the patient experience, big tech companies and startups alike are keen to expand speech recognition capabilities.

Amazon, Google, and Apple are all investing in consumer-facing voice applications. Microsoft's Cortana platform didn't make much of an impact in the market, so the company moved on, acquiring voice technology developer Nuance for nearly $20 billion in 2021. The move essentially means Microsoft is doubling down on healthcare.

Amazon is the only other major tech company offering voice capabilities in healthcare, having deployed the Alexa service in some medical institutions. However, Alexa operates in a non-clinical (or quasi-clinical, depending on how you look at it) environment. Amazon's recent announcements point to using voice assistants to keep patients in senior living communities and hospitals connected, informed, and entertained, much as consumers use Alexa today for general information.

While these solutions do not directly assist clinicians and caregivers with diagnosis and treatment, they still play an important role in the delivery of care. For example, voice assistants can handle patients' routine, non-medical needs, such as medication reminders, as if they had a medical attendant at home.

This brings us to Oracle, now a major new player in healthcare technology through its planned acquisition of Cerner. The press release announcing the deal mentions several times that speech recognition software is an important driver of future clinician productivity and workload reduction.

While Oracle isn't the first name hospitals and health systems think of when it comes to speech recognition technology, its intent to bring the technology to the Cerner platform to address clinical workloads signals the perceived opportunity for voice technology in healthcare. (Interestingly, Cerner currently partners with Nuance for its voice-enabled features.)

Ambient Clinical Computing Is Still in Its Early Stages

Ambient computing using speech and other conversational interfaces is an exciting field, and several startups are entering it.

However, progress towards smarter use of speech recognition in clinical decision support has been slow. As mentioned earlier, separating clinical terminology from other aspects of the conversation is a formidable challenge, which means that speech recognition technology is well suited for some specialties but not others.

Regardless of the pace of adoption, most providers see a reduction in clinician burnout from using it. Dictation through speech recognition software can be roughly three times faster than typing into a clinical system, potentially freeing up several hours a day for a typical caregiver who sees 20 to 30 patients.

We can only hope that as the technology gets better and requires less human involvement to review notes, we'll see higher adoption rates. Big tech's entry into speech promises to bring significant new investment, driving AI tools and intelligent automation that can encode encounter notes and extract quality measures from them.

An important consideration today is that voice-enabled automation and similar technologies can help providers cope with high demand and low staffing levels in healthcare, compounded by the "Great Resignation."

Allowing clinicians to focus their most demanding face-to-face work on the highest-acuity and most complex patients also means using technology to help patients who don't have high-acuity needs. We've seen virtual consultations be very effective in telemedicine and primary care, as well as in behavioral health and other specialties. As ambient technology improves, more use cases for speech will emerge.

This leaves us with the question of how patients respond to voice-enabled tools during their visit. Early indications are that most patients embrace ambient technology because it offers an opportunity to restore the intimacy with their providers that has been lost to onerous EHR documentation requirements.

However, issues surrounding data privacy and patient education about ambient technologies suggest that voice-enabled applications need to tread carefully.

On a broad level, the real potential of speech recognition technology lies in going beyond documentation to become an intelligent decision support tool, effectively listening for clinical indicators and proactively supporting clinical decision-making.

The level of integration between emerging technology tools and core clinical platforms such as EHRs is an important factor in increasing adoption. The fundamental challenges facing speech recognition in ambient computing today are the same as for general AI applications in healthcare settings.

As with all new technologies, voice solutions will be more likely to gain widespread adoption by addressing important and pressing issues in the delivery of care and by winning support from clinician owners and advocates within organizations.

There are many promising technologies that can impact healthcare today. However, clinicians and digital health leaders must recognize that no matter how good the technology, success will be difficult to achieve without organizational alignment and disciplined execution.

In addition to integrating with core clinical platforms such as EHRs, new technologies involve business process changes and require effective change management. Success requires alignment between technology vendors and the healthcare organization's internal stakeholders, along with an end-to-end view of the problem being solved.

This usually means paying close attention to understanding both stated and unstated requirements. When all these elements come together, the digital transformation of the healthcare industry can take huge leaps.

Paddy Padmanabhan is the author of Healthcare Digital Transformation: How Consumerism, Technology and Pandemic are Accelerating the Future. He is the founder and CEO of Damo Consulting.
