Have you been asked by a medical provider recently for consent to have an "AI" scribe record your visit? Us, too. And we have **thoughts**
https://buttondown.com/maiht3k/archive/why-you-should-refuse-to-let-your-doctor-record/
@emilymbender I would have thoughts, too. My privacy. There is never any guarantee, but with A.I. involved, privacy ending up in shreds is guaranteed.
Article on the unsanctioned use of AI in medical settings:
https://www.medpagetoday.com/opinion/second-opinions/119842?xid=nl_secondopinion
@emilymbender I always want to say no, but there’s an unspoken social pressure to just agree, especially when you’re lucky to have 30 seconds of time with your doctor.
@emilymbender No, I was told at the end of the consultation that everything was recorded and that 'it' was making a report now.
@emilymbender
After reading your post I had to rewatch the Ortho Artificial Intelligence short video by DGlaucomflecken 😅🤣 https://youtu.be/WgnWgIOer6s?is=62_p13To_penRg-S
@emilymbender
Got round to reading this article and it's provided food for thought.
Any medical practitioner will be enticed by a solution that appears to reduce workload, particularly if it reduces the interaction with crappy electronic record systems like Powerchart (owned by Larry Ellison's Oracle).
For me, dictating a clinic letter at the end of a specialist consultation is an opportunity to take stock of all the important features, sift through the information and highlight the points of importance for other practitioners involved (including future me).
I'm not sure AI will do that properly, even if it understands my accent.
@emilymbender yuuuuup... dental office. I politely declined and they were cool with it. But will it be that way next time?
@emilymbender ah yeah I remember the diagnosed gender dysphoria situation because some machine learning program heard "male man" when a woman described herself as mailman
@emilymbender Luckily not been asked that
@emilymbender Excellent post. I worked for many years in healthcare. I know firsthand the incredible pressures on providers to find the time they need to give high-quality care while completing all of their administrative tasks. So I get why these AI tools are attractive. I have consented to have providers use them in my care. But I won’t any longer. The problems you describe are serious & potentially dangerous. I appreciate the perspective that documenting is part of care.
@emilymbender Yes, and I certainly decline. Fortunately, I have a good relationship with my GP, so it hasn't been an issue so far.
@emilymbender @DevlinLeathercraft The orthopedic surgeon who will be taking care of my trigger thumb asked to record our last session. I can't remember whether I asked him if an AI was going to transcribe it, but I will next time.
@emilymbender These kinds of scribes have been commonplace in medical scenarios for a long time at this point
1. At my last vet visit, there was a small typed notice across the exam room from the person/animal seating area that said AI is now being used by the practice for all visits, and to assume that if staff are in the room, recording is happening.
2. At my last primary provider visit, I asked the medical assistant if AI was being used, and if she could opt me out. She agreed.
When the PA came into the exam room, her first words were that I needed to prioritize my questions/issues, as she would only be able to deal with two, since she would have to manually chart the whole visit.
(I had come hoping for a prescription refill and two referrals for specialist care.)
@emilymbender Agree. GPs are in short supply now; saying no to this means being viewed as difficult, maybe even ejected from the patient roster. So you can't really say no.
Also, in two visits where reports were prepped by specialists, there were errors from the AI transcription mishearing that I think a human would not have made (my age cited quite differently in different paragraphs, an operation recorded as having happened when I said I DID NOT have it, etc.). Correcting them required my time, my effort, and the doctor's disfavor 🫤
Yes, and of course said no.
But then I discovered they had used AI transcription when adding notes to my journal after the meetings, as the notes were full of obvious errors. So I needed to lecture them again about my right to have it kept off my medical record.
What makes this even worse is that they all know how badly it works, as the media frequently report complaints from the medical community about the horrific errors, as well as the inefficiency, this overhyped piece of crap creates.
@emilymbender no but yesterday, I did see a poster mentioning it at a local clinic
@emilymbender thankfully my therapist was like "yeah dude don't worry about it it's weird" but i still get an email alongside every 'upcoming appointment' email reminding me to sign the permission form
@emilymbender
One of my first jobs was providing tech support to doctors in a hospital setting. They were some of the most tech-illiterate folks I've ever encountered. They have no concept of operational security.
No doctor has ever asked me for permission to store any information about me in whatever systems they're using. For all I know they store it in plain text on an insecure S3 bucket.
@emilymbender no but in the agreement they ask us to sign periodically it said that they might use AI. So I said I wasn’t signing if they were going to. They asked the doc and she said no I don’t use AI transcription at all and I didn’t know that was in there!
@emilymbender
I have-- and refused!
@emilymbender It turns out some doctors, too, are noticing negative side effects of using AI for taking notes.
https://benngooch.substack.com/p/i-was-an-enthusiastic-early-adopter
@emilymbender Funnily enough, transcribing can preserve privacy without issues. Whisper.cpp runs decently well on phones, and can be run on servers that process patient records under the same security constraints. Could easily be run locally even.
Problem is, that's extremely hard to prove in the current "just slap a gear on it and call it steampunk" climate. I would definitely not trust a random provider.
And if they do "summarization", forget privacy.
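A minimal sketch of what fully local transcription can look like, assuming the open-source `openai-whisper` Python package (the reply above mentions whisper.cpp, but the principle is the same) and a hypothetical audio file stored on the provider's own machine:

```python
# Sketch only: on-device speech-to-text with the open-source "openai-whisper"
# package (pip install openai-whisper). The model weights are downloaded once
# and cached locally; at transcription time neither the audio nor the text
# leaves the machine.
import whisper

# Load a small local model (larger variants like "medium" trade speed for accuracy).
model = whisper.load_model("base")

# "visit_audio.wav" is a hypothetical recording kept on local storage.
result = model.transcribe("visit_audio.wav")

print(result["text"])  # raw transcript, produced entirely on-device
```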
@emilymbender psychiatry did it without informed consent. I am livid
@emilymbender for a doctor's perspective on the more profound side effects of “efficiency”
https://benngooch.substack.com/p/i-was-an-enthusiastic-early-adopter
“I felt myself becoming a passive observer in encounters where I had previously been an active architect. I felt my clinical memory, my narrative identity, and my sense of connection to my patients beginning to erode at the edges.”
@emilymbender Really helpful. Thanks for sharing.
I often reason about it this way. There are very few things like this where, if you opt out this time, you can’t opt in next time. On the other hand, there are LOTS of situations, this being a good example, where opting out after you opted in is substantially more effort (or even impossible).
Opting out by default is usually a safe thing to do. You can always opt in later if you change your mind.
@emilymbender
Evidence shows litigation decreases if Drs have scribes. A Dr isn't allowed to simply remember things in their defence. It's said "If it is not documented, it didn't happen," even if it did happen & recall can be verified.
The direct effects of litigation:
1: more litigation = higher insurance costs for the Dr & thus higher consult fees.
2: Drs who suffer psychological & emotional injury from spurious claims reduce or stop practice.
So there is high motivation for having a scribe.
@emilymbender I opted out at my physical therapist last week. They told me all their patients who work in tech have opted out.
yes, i declined. when she asked why, i said because
1. the companies used sites with CSAM and other abuses
2. it's spyware. each prompt acts like a honey-pot. since you are giving them the info, it by-passes HIPAA. in turn they get to use and sell that info however they please
3. as an antifascist activist, it puts my life in danger by giving companies run by fascists access to my whole medical history.
my MD was shocked. they had no idea about the spyware angle or CSAM
@emilymbender Yes, I was asked to sign a consent (stuck in with the other standard consents) authorizing the doctor’s practice to use an AI scribe. I left the room, went up to the front desk and told them I would not sign the consent under any circumstances. They looked a little surprised, but agreed to have one of the techs act as a scribe as normal. Glad I stood my ground - there is no way in Hell I would let a Doc use AI for anything medical related
@emilymbender I'm fortunate my GP doesn't even trust the national health database.
@emilymbender Usually I ask my doctor to turn that off.
@emilymbender
Completely agree! I have started having conversations with my patients about the fact that I will never use this technology and to warn them that some institutions/practices have decided they don’t even need patient consent. Aside from all of the privacy issues, it is literally a major part of my job to create detailed, accurate and nuanced notes.
@emilymbender I tried to subscribe to that newsletter and it said "this e-mail address can not be subscribed" Why does it not like my email address?
@emilymbender I do share a lot of AI skepticism, but physician perspective (I use it about 25-30% of visits), there are many highly speculative aspects of this take:
🧵 1/2
1) Point #1 is valid, however, the same data safety questions can be asked regarding other integrated systems. Like where is your EMR data stored, how does your radiology data integrate (reviewed in 3rd party software), etc.
2) Consent: valid concern, but the fullest version would be a long EULA-like text with a checkbox...
@P__X Your experience is your experience, but I am **appalled** at what you're saying about consent here. The fullest version would be too long, so we're not actually doing informed consent? No thank you.
@emilymbender "he fullest version would be too long, so we're not actually doing informed consent?"
No, that is not what is being said there. Unlike a blog post, I am restricted in space. I explicitly said that is is a valid concern. A basic research consent form is 8+ pages of legalese and I'm afraid that the future solution will be to add it as a checkbox for 30 pages of text at check-in that nobody reads and doesn't actually inform better. And again, my point #1.
@P__X You are not restricted in space -- you wrote a whole thread.
My point is: if patients do not know what they are consenting to, it is not consent. If it is not possible in the context of the visit to convey the detail, then we shouldn't do the thing.
I encourage you to read the rest of the replies to my post, including the quotes, to see the lack of consent and how that is landing.
Frankly, I'm surprised & **disappointed** by your eagerness to jump to conclusions and make biased inferences. Eg: "an AI scribe will change how physicians speak", but *character limits don't impact how ppl write here*. Sets how seriously I should take this.
My inference: you've had minimal input from actual providers familiar w/ the tech (points #4 and #7 were dead giveaways) or who have spent >10,000 hours writing notes (even #9 seems to be from a non-provider).
No thank you.
@emilymbender Agreed. For any points that were valid, none of them necessitate the use of LLMs. Never mind without consent. Disgusting.
@emilymbender Thanks for the write-up! How do you feel about human scribes? I've been saying "yes to human scribes, no to AI scribes" for a while now, but your list makes me realize a lot of the concerns apply there, too.
@emilymbender My provider always asks and always says that the data "doesn't leave our server." Are there really transcription/summarizing AIs that are completely local?
@emilymbender I finally said "no" the last time I went in... and I found out the nurse also wasn't really fond of it either.
@emilymbender I've spotted medically significant errors in the transcription during the session. It's galling.
@emilymbender I have been asked multiple times by medical providers this year if I would consent to AI transcription, and I have said no every time. I'm tempted to ask next time if they have information from an independent audit of the system performance and privacy policy
@emilymbender YES, multiple times, and with wording and timing of the "request" that amounted to coercion. Very little information given in response to questions about what tool exactly was being used, where my data would be sent, their privacy policies etc. Just "this makes it easier for me to help you."
Also recently at the emergency vet hospital, and again in a context that made it extremely difficult to refuse, lest you be seen as a difficult or problem client that creates extra work for the poor vets.
So glad you are writing about this.
@emilymbender I've been asked multiple times, and I always chicken out when they ask and say yes. I think it's the power imbalance, and I don't think it's fair.
Refuse! Don't do the doctor's job of taking notes, let alone support their choice to have AI transcribe notes for them. The accuracy rate is poor, but incorrect notes would still be used to treat you.
Yes, and by a couple of specialists who it has taken a lot of time/money to get in to see, so I've felt almost coerced into agreeing as I didn't want to risk them declining to see me so I consented against my better judgement.
Do you think there are other ways we can fight this outside of directly declining consent in these situations? One of the providers I see who has been using one for ~2 years now is the only person I've been able to see locally who can prescribe a tightly controlled medication, so I'm quite reluctant to risk the (tenuous) relationship I have established (I already don't feel like I can be completely open and honest with her, and fear what may happen if I decline the use of the scribe).
@emilymbender Great article. The privacy of my protected health information is my immediate concern. There are many ways it can and does leak. It's not the provider's fault; it is in the whole chain of info systems backing them up. Using an LLM multiplies the risk. Training data is EXTREMELY valuable. There is no way the recording or transcript are getting deleted. They're getting sold. The incentives are all wrong for ensuring HIPAA compliance.
@emilymbender Yes. I think this is becoming quite common. Medical groups are providing this service through enterprise-wide medical information systems. I was grateful to be asked. I’ll just add that doctors do not like lectures from patients.
@meltedcheese Nowhere am I suggesting that patients lecture their doctors, though?
This information can help patients make decisions for themselves about this. If consent is being requested (as it should be), then 'no' is a complete sentence.
@emilymbender Agreed. I’m sorry that I miscommunicated. => I am the one who “lectured” and only because AI is my area of deep expertise. If I can convince a doctor or two to at least ask the right questions and consult with other doctors before simply accepting the use of LLM technology, that’s a good thing. Patients should have the info, as you say, to make their own decisions.
@emilymbender THANK YOU for point 9. It drives me batty how poorly people (read: management) rate the importance of mental review, slower moments, and backburner thinking for improving or maintaining skills. Something something working out is 10% active work and 90% fueling/rest.
Not to mention that a rushed/stressed provider makes ME stressed and less likely to be candid or “be complicated”, which is not the point of a visit!
@emilymbender yes. I said no. That I was very much against it.
My doc did say she was going to use it, but I said no. She said it's easier for her. I said no. She no longer brings the thing in the room. My cardiologist, however, just put the device on the desk & I said no. He started to hem & haw, reached to turn it on, then said it wasn't working. I talked about lots of things, and how it felt like I had a basketball in my gut. The summary "he" wrote said that I play basketball for exercise. I'm 71 with health issues. He lied.
@emilymbender The booking system wouldn't let me proceed without providing consent. I'm allowed to opt-out by providing a written request; I've not (yet) done that.
@emilymbender My biggest concern is the potential for psychiatric violence. Inaccurate medical notes produced by these systems could very easily be used as evidence of psychosis or some other kind of psychopathology, leading to forced medical treatment. Having already experienced some of that system, it really worries me. I don’t let medical providers use these systems with me.
I just read (in a JAMA newsletter, I'll try to track it down -- it's not in my email or trash) about a doctor who has been an early adopter. He did it "right", going over the notes in the evening to clean up the transcription errors.
He found:
1) He could just focus on the patient, rather than the screen.
2) He got off track and was less focused, and spent more time with the patients without providing better information.
3) Most importantly, when someone came back 6 months later for a follow up, he realized that the notes were not that good. Accurate, but without insight -- they read like someone else had written them and did not help him recall what was going on.
@emilymbender there are signs at the doctor's office saying you can refuse, but when I did, I got a lecture on how this helps, and they acted like I had no clue what I was talking about. I mentioned I work in tech and it was dismissed. As I am in an area with few doctors accepting new patients at the moment... how do I really refuse?
My therapist asked for permission, I declined, and after my session we got into a long conversation about why. At least they were curious about it.
@emilymbender other than "AI is just fucking wrong a lot?"
@the_turtle You could read the post that I was linking to, or you could add to the deluge of mansplaining characteristic of the Fediverse.
You chose option #2.
@emilymbender still, AI is just fucking wrong a lot.
@the_turtle And this is still mansplaining.
@emilymbender@dair-community.social see that block button? FUCKING USE IT THEN, SELF-IMPORTANT SHINY LIGHTS ON A SCREEN!
Doctor at Kaiser did ask. I asked what happened if I refused, and they actually weren't able to tell me. So I assume they are recording everything regardless of what I say.
Not a doctor, but my parent's accountant/financial advisor has had AI transcribe the last couple of meetings.
@emilymbender I had an appointment last week (Kaiser Permanente) and my doctor asked if I was fine with being recorded and she explained that it made it easier for her to write up a report later.
No mention of AI, but that's not to say that was the real reason.
@netopwibby Oof -- so she asked if you were okay being recorded but did not provide info on what was going to happen to the recording?
@emilymbender Had a similar experience to @netopwibby's, with a cardiologist; I am in Canada. But at the last appointment she didn't seem to use it? Will try to ask her about it next time if I don't forget.
@emilymbender Aside from “taking notes,” nope. Didn't seem weird at the time so I didn't probe.
Yep. And I said no. She initially said not to worry because it's all deleted afterwards. I said that, no, it is not; that's not how LLMs work. All that data remains in there somewhere and can be hacked, plus I don't want anything about me used to train those things, on principle. She didn't argue.
@emilymbender I've noticed a lot of this use in veterinary medicine recently as well, just FYI.
@emilymbender And, now that I'm thinking about it, I have rarely seen any question put to the client about whether they consent or not. (This is partly from working in a clinic and partly as a client/observer at a couple of clinics.)