Dear Director,
How are you using AI at work and when will we see meaningful use of it clinically?
You’re two hours into your shift, and you step into your next ED room already knowing more about the patient than you used to after ten minutes of chart biopsy. While you were finishing up your previous case, the AI-driven intake kiosk verified the patient’s ID, pulled in outside hospital records, updated the medication list, and produced a concise one-page summary, now waiting on your tablet.
As you walk into the room, the ambient AI quietly activates, capturing the conversation as you greet Mr. Smith, who explains his chest discomfort in his own words. On your screen, a real-time draft of the note forms—organized, accurate, and capturing nuances you used to type or dictate later in your shift, sometimes even a day or two later, when the details blurred with those of the other chest pain patients you saw. The system highlights a missed question about his recent travel and gently nudges you to broaden the differential to include a PE, based on his risk profile.
With a verbal confirmation, you approve a suggested clinical pathway and order set: EKG, high-sensitivity troponin, CXR, d-dimer, etc., and weight-based medication dosing, all checked for interactions before they ever hit the chart. The HEART score is calculated from the history obtained, the past medical history in the chart, and the AI EKG interpretation. By the end of the visit, you’ve concluded the patient is safe for follow-up this week with his cardiologist. When you’re ready, the AI automatically generates discharge instructions (or admission summaries) tailored to his condition and literacy level, arranges follow-up, and finalizes the documentation. A concise two-paragraph summary of the ED visit is sent electronically to the patient’s PCP and cardiologist, instead of a 32-page after-visit summary being faxed. Post-discharge monitoring follows up by text, checking symptoms, pain levels, and red flags to help avoid revisits.
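For readers curious what the system is doing under the hood, the HEART score is a published chest-pain risk score whose arithmetic is simple enough to sketch. Here is a minimal illustration in Python — the function name and inputs are my own for illustration; a real system would fill each component from structured chart data, the ambient transcript, and the AI EKG read:

```python
# Hypothetical sketch of a HEART score calculation.
# Each of the five components scores 0-2; the total ranges 0-10.
def heart_score(history, ecg, age, risk_factors, troponin_ratio):
    """history, ecg: 0-2 points as assessed from the note and EKG read;
    age: years; risk_factors: count of CAD risk factors;
    troponin_ratio: measured troponin / upper limit of normal."""
    age_pts = 0 if age < 45 else (1 if age < 65 else 2)
    risk_pts = 0 if risk_factors == 0 else (1 if risk_factors <= 2 else 2)
    trop_pts = 0 if troponin_ratio <= 1 else (1 if troponin_ratio <= 3 else 2)
    return history + ecg + age_pts + risk_pts + trop_pts

# Example: moderately suspicious history (1), normal EKG (0),
# age 52 (1), one risk factor (1), normal troponin (0) -> total 3.
score = heart_score(history=1, ecg=0, age=52, risk_factors=1,
                    troponin_ratio=0.5)
```

A total of 0-3 is traditionally considered low risk; the point of the vignette is that the AI tallies this from data it already has rather than asking the clinician to do the arithmetic.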
I’m really excited for the future and think our job will be different, and possibly improved, in just the next year or two, as AI allows us to practice medicine without so much of the clutter that strips the joy out of it and adds frustration.
AI Scribes
I’ve been following this topic for a couple of years and am very excited about what I’m seeing and hearing about the progress made with AI Scribes. There are numerous companies out there, and I think we’re trying to figure out which is best for emergency medicine while many hospitals are trying to figure out which system will work best across specialties.
At the very least right now, the AI Scribes are very good at capturing a history, which frees up the clinician to focus on the patient. I’m typically either taking notes or working hard to memorize the little details of the history, so reducing that cognitive load allows me to have better eye contact with the patient and be more attentive to them.
A colleague is using an AI Scribe that provides feedback to the doc on how the conversation went and makes recommendations that can improve patient satisfaction. Ultimately, the more you talk to the patient and explain things, the more the computer can capture and then create a functional medical decision-making note. And of course, at this point, be sure to get consent from the patient before using your AI Scribe.
We should see rapid improvements in medical decision making, in the ability to pull in recommended order sets, and in making recommendations about the diagnosis and discharge instructions.
Since half of us are on Epic, what they do matters. Historically, large companies that design tools for the whole system may not be as good as an ER-specific AI Scribe, but the costs may be better and more tools will be connected to it (like EKG interpretation). Cost is a huge component of the decision hospitals and groups face. As the technology becomes more mainstream, we should see software costs decrease. Epic is rolling out its ED and hospitalist versions later this spring.
Will AI scribes make us more productive? Maybe. I don’t proofread my dictations that closely, but with AI scribes we remain responsible for the content, and therefore for the “hallucinations” that can occur. Still, it should be easier to proofread a note than to create one. The current literature suggests that using an AI scribe can reduce on-shift documentation time by 25-33%. Perhaps more importantly, studies show that AI Scribes can reduce burnout.
It seems possible that AI documentation will lead to more RVUs per chart, as was the case with human scribes. There is limited data on the impact of AI Scribes, but the current data suggest that using them does lead to an increase in total RVU generation, likely from a small increase in productivity (pts/hr). It seems likely that prompts or reminders to document key features, such as critical care based on a diagnosis or an EKG interpretation, can lead to increased RVUs. For those who struggle with charting in real time or before they leave, it’s likely there will be less “pajama time” charting, as charts are closer to completion during the shift.
AI EMR Summary
We love the problem list in our EMR. Sarcasm doesn’t always translate, but of course a problem list isn’t what we need when we’re with a critical patient who has had multiple admissions over the past 6 months. What we need is a paragraph summarizing their cardiac history with dates and details, a paragraph summarizing the bacterial resistance from their last hospitalization, and a paragraph putting it all in context with their last ICU admission.
Ideally, our EMR generates a one-page summary that captures the key visits, admissions, surgeries, and other data points, including things like the EF on the CHF patient or the antibacterial resistance of the organism that caused the patient’s recent septic shock admission.
I saw the Epic version of this recently and can’t wait to have access to it. It looks back on the last 100 encounters and pulls key data that is sorted by problems. It’s not live yet in my system but I can’t wait, and honestly don’t know why it’s taken so long to design.
AI Radiology
My group reads our own plain images at night, and I can always sense a little anxiety when I’m recruiting docs who are completing residency. I have no doubt they read CTs better than I do, but they may not have the same comfort reading plain imaging. Fortunately, most patients who have fractures or CHF have the diagnosis made clinically before the image is even shot. However, we’ve all missed subtle extremity fractures or a small pneumothorax, and AI is a possible way to add an extra layer of security to our interpretation.
AI is already being used in radiology and is rapidly becoming a silent partner in emergency radiology, offering emergency physicians faster, more reliable access to critical findings. Many radiology groups now use AI algorithms that automatically screen incoming CTs and X-rays for time-sensitive conditions such as intracranial hemorrhage, pulmonary embolism, pneumothorax, or fractures and use this software to push those studies to the top of the radiologist’s worklist or send an instant alert, shortening the time to diagnosis and intervention. For emergency physicians, this means the sickest patients can be identified earlier, imaging can be prioritized more effectively, and diagnostic delays—especially during high-volume or overnight shifts—can be reduced.
Beyond triage, AI tools assist in image interpretation by highlighting subtle abnormalities, measuring lesion size, and comparing current studies to priors to detect progression or change. These systems also enhance image quality, enabling lower-dose CT scans and faster MRI sequences without compromising diagnostic accuracy. Importantly, AI does not replace radiologists; instead, it strengthens the collaboration between EM and radiology by adding another layer of safety and consistency, including automated processes that escalate imaging discrepancies and missed incidental findings to both the provider and the patient, via their preferred method of communication, to ensure appropriate follow-up care. For the emergency physician, AI offers a practical advantage: more rapid, more accurate imaging information at the bedside, supporting quicker decision-making in an environment where minutes matter.
AI EKG Reads
I’ve never counted the number of EKGs from triage that I review during a shift. During my last shift, I got handed 3 from triage at the same time. All of them were somewhat abnormal—one with an LBBB, one with an RBBB, and one with nonspecific changes. All three required going into the EMR to look at old studies. And this was just one of the dozen or more times the tech came to me to review an EKG.
We know we are interrupted frequently to review EKGs and that the computerized reading from the machine is not accurate, often overcalling or undercalling STEMIs or ischemia. But the same companies that make our EKG machines are working on AI solutions to improve the machines’ ability to interpret an EKG and perhaps even offer new insights into cardiac function.
I’m anything but a computer programmer, but this seems pretty easy—load in several of Amal Mattu’s lectures and examples, and a computer should be able to read an EKG as well as he does. And if Amal is reading our triage EKGs, then there shouldn’t be a need for us to review them in real time. Or perhaps the computer can identify the subset that requires a human overread, while being accurate enough to identify true normals and compare each EKG to previous ones.
Some AI companies are making progress on rhythm determination. Others are making progress on subtle ischemic changes. I heard of one app that an ER doctor said was extremely helpful and accurate at determining ST elevation, but my sense is there are some issues with false positives or confusion from artifact.
With that said, some of the exciting areas in development are potential algorithms to identify pulmonary hypertension, cardiac amyloidosis, and hyperkalemia. Another company is working to identify coronary anatomy, while another is looking at low ejection fractions based on the 12 lead EKG. That’s pretty interesting stuff.
Hospitals and Nursing
Hospitals and nursing are jumping in, too. AI is moving into their workflows, and in many ways, the operational impact may be even bigger there.
Working with our nursing partners, risk-stratification tools could reduce both under- and over-triage. High-risk patients can be identified earlier, not just from obvious vital sign abnormalities but from subtle patterns we often don’t recognize until hours later. AI-assisted models may help identify patients at risk for sepsis, stroke, ACS, and other high-acuity conditions — especially those likely to require hospitalization. With the overcrowding most of us experience, that early alert matters.
At home I’ve had a running conversation about this with my wife, an aerospace engineer. She’s long been surprised at how rudimentary our predictive analytics are — particularly the ones driving staffing models. In aviation, forecasting demand and risk is foundational. In emergency medicine, we still mostly react, and we should be able to do better.
AI gives us an opportunity to forecast arrivals, anticipate admission and boarding surges, and align staffing with expected volume. Earlier awareness means reserving beds sooner, mobilizing resources proactively, and blunting the overcrowding that erodes safety and morale.
Virtual nursing is another area worth watching. The goal isn’t to replace bedside nurses — it’s to extend them. Remote observation can identify early deterioration, monitor patient movement, and trigger earlier intervention. Done thoughtfully, this could reduce falls, ease cognitive load, support retention, and improve communication across the care team.
None of this replaces clinical judgment. But used well, it gives us something emergency medicine has always struggled with — operational visibility.
Conclusions:
I had a doc who didn’t use Dragon for the first year (or two) that we had it. He just wasn’t quite sure how it worked or whether he could trust it. Not every doc is tech savvy or wants to be an early adopter of technology. (A five-minute lesson with that doc and he felt like his life was changed.) As we transition to AI Scribes, not every one of my docs is interested. It will take time, experience, and education to get everyone comfortable with the technology and to trust that, in the long run, we will be more productive and probably provide better quality of care by incorporating AI into our workflow. As leaders, we need to be educated on what’s coming and look for opportunities to make bedside medicine better for our teams. Part of our job will be to get everyone on board with the technology so the entire department benefits.
Coming up in part 2 is a discussion about Large Language Models like ChatGPT and how physicians must continue to take ownership of the decision making when AI is used.