IT in Health Care: Voice Recognition Tools Make Rounds at Hospitals
The infamous doctor's scrawl may finally be on the way out. Voice technology is the latest tool health care providers are adopting to cut back on time-consuming manual processes, freeing clinicians to spend more time with patients and reducing costs. At Butler Memorial Hospital, voice-assisted technology has dramatically reduced the amount of time the Butler, Pa., hospital's team of intravenous (IV) nurses spends recording information in patients' charts and on other administrative tasks. And at the Cleveland Clinic's Fairview Hospital, doctors are using speech recognition to record notes in patients' e-medical records.

Butler recently completed a pilot project in which three IV nurses used Vocollect's AccuNurse hands-free, voice-assisted technology along with Boston Software Systems' workflow automation tools. The nurses cut the time they spent on phone calls and manual processes, including patient record documentation, by at least 75 percent. Now, Butler is rolling out the voice technology for its full IV team of four nurses and seven other clinicians to use for patient care throughout the facility.

The productivity boost from the voice-assisted tools also helps with the hospital's expansion plans, says Dr. Tom McGill, Butler's VP of quality and safety. Butler will soon add about 70 beds, growing from 235 beds now to more than 300, but it won't need to expand the IV nursing team because of the time savings from the voice-assisted technology, McGill says.

In the past, when a patient needed IV care, such as a change in the intravenous medication being administered, an IV nurse would be paged. The nurse would have to call the patient's nursing station or the doctor requesting the IV to obtain details. The nurse then would prioritize the request against all the existing IV orders. Once IV care was completed, nurses would record what they did in the patient's e-medical record.

With AccuNurse, which combines speech recognition and speech synthesis for charting and communication, Butler's IV nurses wear lightweight headsets and small pocket-sized wireless devices that let them hear personalized care instructions and other information about patients' IV needs. IV requests are entered into Butler's computer system, which sends them through the Vocollect system to the appropriate headset. IV nurses listen to details about new orders and use the system to prioritize them. When they finish caring for a patient, nurses record what they did in the patient's e-medical record using voice commands. "The nurses can document as they're walking to the next patient's room," says McGill. Once they finish with one patient, nurses say "next task" to obtain instructions for the next patient, McGill says. The system has shown itself capable of understanding different accents, he adds.

Butler is evaluating expanding the voice-assisted technology to other clinical areas, including surgery, where it could help ensure that surgical staff complete patient safety checklists. McGill wouldn't say how much Butler paid for the system, but he expects the ROI to be realized in 12 to 18 months. "It's very affordable," he notes.

Meanwhile, Dr. Fred Jorgenson, a faculty physician at Cleveland Clinic's Fairview Hospital, is using Nuance's Dragon Medical speech recognition technology to speak patient notes into the hospital's Epic EMR (electronic medical records) system. "I'm not a fast typist," Jorgenson says. "Many doctors over a certain age aren't.
If I had to type all the time, I'd be dead." And, at 13 cents to 17 cents per line, dictation transcription services are expensive. "In primary care, patient notes can be 30 to 40 lines. That adds up," he says. Fairview is saving about $2,000 to $3,000 a month that might otherwise have been spent on transcription, Jorgenson says, and it cost about $3,500 to get Dragon up and running. With transcription services, the turnaround time is 24 to 36 hours before information is available in the EMR; spoken notes are available immediately.

Jorgenson describes the accuracy of Dragon Medical's speech-to-text documentation as "very good," especially with medical terms and prescriptions. "It rarely gets medical words wrong," he says. "If you see a mistake, it's usually with 'he' or 'she,' and you can correct it when you see it."

Mount Carmel St. Ann's hospital in Columbus, Ohio, has been among the early wave of health care providers using electronic clinical systems bolstered with speech recognition capabilities. About seven years ago, emergency department doctors at Mount Carmel St. Ann's began having access to Dragon's speech recognition software, not long after an e-health record system from Allscripts was rolled out there. When the e-health record was first rolled out, without the voice capabilities, Mount Carmel St. Ann's doctors didn't necessarily see the kind of productivity boost they had been hoping for, in large part because they found themselves spending a lot of time typing notes, says Dr. Loren Leidheiser, chairman and director of emergency medicine at Mount Carmel St. Ann's emergency department. But as more of the hospital's ER doctors began incorporating speech recognition into their workflow, whether speaking notes into a lapel microphone or into a computer in the patient room or hallway, efficiency picked up tremendously, Leidheiser says.

Before using the Dragon software, the ER department spent about $500,000 annually on traditional dictation transcription for the care associated with the hospital's 60,000 to 70,000 patient visits a year at the time. That was cut "to zero," he says. The return on investment on the speech recognition, combined with the use of the e-health record system, came "within a year and a half," Leidheiser notes.

Leidheiser also makes use of time stuck in traffic to dictate notes that are later incorporated into patient records or turned into e-mails or letters. Using a Sony digital recorder, he can dictate a letter or note while in his car, then later plug the recorder into his desktop computer, where his spoken words are converted to text.

Speech recognition technology is also helping U.S. military doctors keep more detailed patient notes while cutting the time they spend typing on their computers. By 2011, the U.S. Department of Defense expects to have implemented its integrated, interoperable electronic medical record system, AHLTA, at more than 500 military medical facilities and hospitals worldwide. The system will be used for the care of more than 9 million active military personnel, retirees, and their dependents.
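The savings quoted above for Fairview and Mount Carmel St. Ann's lend themselves to a quick back-of-envelope check. The sketch below uses only the article's figures; the midpoint values (a 35-line note, $2,500 a month in savings) are illustrative assumptions, not numbers from the hospitals.

```python
# Back-of-envelope check of the cost figures quoted in the article.
# Midpoints (35 lines/note, $2,500/month) are illustrative assumptions.

# Fairview: ~$3,500 setup cost vs. $2,000-$3,000/month transcription savings.
setup_cost = 3_500
monthly_savings = (2_000 + 3_000) / 2
print(f"Fairview payback: ~{setup_cost / monthly_savings:.1f} months")

# A typical 30-40 line primary-care note at 13-17 cents per line:
note_lines = 35
for cents_per_line in (13, 17):
    cost = note_lines * cents_per_line / 100
    print(f"{note_lines}-line note at {cents_per_line} cents/line: ${cost:.2f}")

# Mount Carmel St. Ann's: $500,000/year across 60,000-70,000 visits.
annual_transcription = 500_000
for visits in (60_000, 70_000):
    per_visit = annual_transcription / visits
    print(f"Transcription cost per visit ({visits:,} visits): ${per_visit:.2f}")
```

On these assumptions Dragon pays for itself at Fairview in well under the 12-to-18-month ROI window McGill cites for Butler, and each ER visit at St. Ann's was carrying roughly $7 to $8 in dictation cost.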
Military doctors using the AHLTA system also have access to Dragon NaturallySpeaking Medical speech recognition technology from Nuance Communications' Dictaphone health care division, allowing doctors to speak notes into patient records as an alternative to typing and dictation. Over the last year, adoption of Dragon has doubled, with about 6,000 U.S. military doctors using the software at health care facilities of all military branches, including the Air Force, Army, Navy, and Marine Corps. The voice recognition software, used with the AHLTA e-health record system, is freeing doctors from several hours each week of typing their various patient notes into AHLTA.

Being able to speak notes into an e-health record at the patient's bedside, rather than staring at a computer screen and typing, also helps improve doctors' bedside manner and allows them to narrate more comprehensive notes, either while the patients are there or right after a visit. That cuts down on mistakes caused by memory lapses and boosts the level of detail included in a patient record, says Dr. Robert Bell Walker, European Regional Medical Command AHLTA consultant and a family practice physician for the military.

The voice capability "saves a lot of time and adds to the thoroughness of notes from a medical and legal aspect," says Dr. Craig Rohan, a U.S. Air Force pediatrician at Peterson Air Force Base in Colorado. The ability to speak notes directly into a patient's electronic chart is particularly helpful in complicated cases, where a patient's medical history is complex, he says. Text pops up on the computer screen immediately after words are spoken into the system, so doctors can check the accuracy, make changes, or add other details. And because spoken words are immediately turned into text, the medical record has "a better flow" in documenting patient visits.

Previously, "the notes that had been created by [entering] structured text into the AHLTA system looks more like a ransom note," says Walker, with information seemingly pasted together at random. Doctors can speak into microphones on their lapels to capture notes in tablet PCs during patient visits, or speak into headsets attached to desktop or wall-mounted computers. The storage requirement for voice notes is "small," especially compared with other records, such as medical images, says Walker.

By adding spoken notes to medical records, e-mails, and letters, "it's easier to tell the story," remarks Leidheiser.