The AI-generated headshot on the right can get close to “Tatiana,” but it’s clearly not ME. “Close enough” might work for headshots, but would I trust AI to accurately represent my stance on CPR?

Several folks have sent me this article from JAMA, where the authors suggest using Artificial Intelligence to help make decisions for incapacitated patients by analyzing their digital footprint and comparing it to the “general population.” (https://lnkd.in/gqfJ62wk)

Here’s why I find this concerning:

1. Online Personas Are Often NOT Authentic: Research shows that even our spouses have only a 50/50 chance of choosing the care options we would select for ourselves. Our curated social media profiles are not true reflections of who we are, especially for those who must code-switch or mask aspects of their identity to navigate the world.

2. Media Influences Perceptions: At Koda Health, we've observed that many people initially opt for CPR based on media portrayals, despite the fact that only 15.9% of adults survive in-hospital CPR, with even lower rates for older adults (PMID: 20047786). Our data shows that 95% of patients overestimate their chances of survival by threefold, and most are unaware of the complications associated with the intervention. When patients are fully informed, they often make very different choices, such as refusing CPR or signing an Out-of-Hospital DNR. Training AI on population-level preferences influenced by media could lead to biased outcomes.

3. Digital Footprints Are Vulnerable: An estimated 300,000 Facebook accounts are hacked daily. Whether it’s a scammer hijacking your account or a content #algorithm influencing your online presence, your digital footprint can easily be compromised.

4. Training Data Introduces Bias: The authors acknowledge that marginalized communities have less access to healthcare and smaller digital footprints, meaning such an AI would predominantly represent white people. This isn’t just theoretical: when data is missing or limited, a biased narrative emerges. Here is one of MANY scientific papers that explore this issue (PMID: 31664201).

As a 2024 Rock Health Innovation Fellow, I’m excited to explore these and other issues affecting overlooked and underserved communities. Digital health is changing so quickly; the last thing we want to do is actively worsen disparities, especially in serious illness care.

My (current) position is that AI may have more immediate potential in assisting patients before they lose capacity. For example, if a patient is stuck on whether to opt for a feeding tube, AI might be able to report on what questions or considerations others in their position found most impactful to their decision-making.

What do you think?

#ArtificialIntelligence #digitalhealth #Healthcareinnovation #healthtech #AdvanceCarePlanning #patientadvocacy #algorithm #bias #acpforall
Well, to be perfectly honest, “close enough” doesn’t work for headshots either. 😁
Thank you for writing this post. ACP is a sacred journey. If I learned that AI had been the guide for my loved one's ACP, would I trust it to convey his/her wishes? Nope.
Wow, this is very insightful. I had no idea this was even a thing. This is why platforms like Koda and educating the public on end-of-life processes are critical: you don’t want AI making these important, life-changing decisions for you!
Very cool work! Excited to see where it goes!
Thank you for sharing such a thoughtful post, Tatiana Fofanova, Ph.D.! You raise several important concerns about using AI in healthcare, particularly in decision-making for incapacitated patients. I completely agree with your skepticism, especially regarding the authenticity of online personas and the potential for AI to misinterpret deeply personal healthcare preferences. I love your approach to exploring AI’s potential to assist patients before they lose capacity, where it could support them by surfacing questions or considerations relevant to their situation. This sounds like a safer and more thoughtful use of AI. AI certainly has potential, but as you said, we need to be incredibly careful. I look forward to seeing the impact you’ll have as a 2024 Rock Health Innovation Fellow. Thanks again for sparking this important discussion!