{"id":125420,"date":"2023-06-30T17:47:49","date_gmt":"2023-06-30T17:47:49","guid":{"rendered":"https:\/\/yogaesoteric.net\/?p=125420"},"modified":"2023-06-30T17:47:49","modified_gmt":"2023-06-30T17:47:49","slug":"doctors-may-soon-use-ai-to-diagnose-health-conditions-but-should-they","status":"publish","type":"post","link":"https:\/\/yogaesoteric.net\/en\/doctors-may-soon-use-ai-to-diagnose-health-conditions-but-should-they\/","title":{"rendered":"Doctors May Soon Use AI to Diagnose Health Conditions \u2014 But Should They?"},"content":{"rendered":"<p><em>More than 1,000 technology leaders signed an open letter urging that companies pause development on advanced artificial intelligence systems until \u201cwe are confident that their effects will be beneficial and their risks will be manageable.\u201d<\/em><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-125421\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s1-e1688147201909.jpg\" alt=\"\" width=\"560\" height=\"360\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s1-e1688147201909.jpg 567w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s1-e1688147201909-300x193.jpg 300w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s1-e1688147201909-210x136.jpg 210w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>What use could healthcare have for something that gives random information, can\u2019t keep a secret, doesn\u2019t really know anything, and, when speaking, simply fills in the next word based on what\u2019s come before?<\/p>\n<p>Lots, if that something is the newest form of artificial intelligence (AI), according to some of the biggest companies out there.<\/p>\n<p>Companies pushing the latest AI technology \u2014 known as \u201cgenerative AI\u201d \u2014 are piling on: Google and Microsoft want to bring types of so-called large language models to healthcare.<\/p>\n<p>Big firms that are familiar to folks in 
white coats \u2014 but maybe less so to the average Joe and Jane \u2014 are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren\u2019t far behind. The space is crowded with startups, too.<\/p>\n<p>The companies want their AI to take notes for physicians and give them second opinions \u2014 assuming they can keep the intelligence from \u201challucinating\u201d or, for that matter, divulging patients\u2019 private information.<\/p>\n<p>\u201c<em>There\u2019s something afoot that\u2019s pretty exciting<\/em>,\u201d said Eric Topol, director of the Scripps Research Translational Institute in San Diego. \u201c<em>Its capabilities will ultimately have a big impact.<\/em>\u201d<\/p>\n<p>Topol, like many other observers, wonders how many problems it might cause \u2014 like leaking patient data \u2014 and how often.<\/p>\n<p>\u201c<em>We\u2019re going to find out<\/em>,\u201d said Topol.<\/p>\n<p>The specter of such problems inspired more than 1,000 technology leaders to sign an open letter in March urging that companies pause development on advanced AI systems until \u201c<em>we are confident that their effects will be beneficial and their risks will be manageable<\/em>.\u201d<\/p>\n<p>Even so, some of them are sinking more money into AI ventures.<\/p>\n<p>The underlying technology relies on synthesizing huge chunks of text or other data \u2014 for example, some medical models rely on 2 million <a href=\"https:\/\/hai.stanford.edu\/news\/shaky-foundations-foundation-models-healthcare\">intensive care unit<\/a> notes from Beth Israel Deaconess Medical Center in Boston \u2014 to predict text that would follow a given query.<\/p>\n<p>The idea has been around for years, but the gold rush, and the marketing and media mania surrounding it, are more recent.<\/p>\n<p>The frenzy was kicked off in December 2022 by Microsoft-backed OpenAI and its flagship product, <em>ChatGPT<\/em>, which answers questions with authority and style. 
It can explain genetics in a sonnet, for example.<\/p>\n<p>OpenAI started as a research venture seeded by Silicon Valley elites like Sam Altman, Elon Musk and Reid Hoffman, has ridden the enthusiasm to investors\u2019 pockets.<\/p>\n<p>The venture has a complex, hybrid for-profit and nonprofit structure. But a new $10 billion round of funding from Microsoft has pushed the value of OpenAI to $29 billion, <em>The Wall Street Journal<\/em> <a href=\"https:\/\/www.wsj.com\/articles\/microsoft-says-it-plans-multibillion-dollar-investment-in-openai-11674483180\">reported<\/a>.<\/p>\n<p>Right now, the company is licensing its technology to companies like Microsoft and selling subscriptions to consumers. Other startups are considering selling AI transcription or other products to hospital systems or directly to patients.<\/p>\n<p>Hyperbolic quotes are everywhere. Former Treasury Secretary Larry Summers tweeted recently:<\/p>\n<p>\u201c<em>It\u2019s going to replace what doctors do \u2014 hearing symptoms and making diagnoses \u2014 before it changes what nurses do \u2014 helping patients get up and handle themselves in the hospital<\/em>.\u201d<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-125424\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s2-1.jpg\" alt=\"\" width=\"560\" height=\"374\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s2-1.jpg 727w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s2-1-300x200.jpg 300w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>But just weeks after OpenAI took another huge cash infusion, even Altman, its CEO, is wary of the fanfare.<\/p>\n<p>\u201c<em>The hype over these systems \u2014 even if everything we hope for is right long term \u2014 is totally out of control for the short term<\/em>,\u201d he said in <a href=\"https:\/\/www.nytimes.com\/2023\/03\/31\/technology\/sam-altman-open-ai-chatgpt.html\">an article<\/a> in <em>The New 
York Times<\/em>.<\/p>\n<p>Few in healthcare believe this latest form of AI is about to take their jobs (though some companies are experimenting \u2014 controversially \u2014 with chatbots that act as therapists or guides to care).<\/p>\n<p>Still, those who are bullish on the tech think it\u2019ll make some parts of their work much easier.<\/p>\n<p>Eric Arzubi, a psychiatrist in Billings, Montana, used to manage fellow psychiatrists for a hospital system. Time and again, he\u2019d get a list of providers who hadn\u2019t yet finished their notes \u2014 their summaries of a patient\u2019s condition and a plan for treatment.<\/p>\n<p>Writing these notes is one of the big stressors in the health system: In the aggregate, it\u2019s an administrative burden. But it\u2019s necessary to develop a record for future providers and, of course, insurers.<\/p>\n<p>\u201c<em>When people are way behind in documentation, that creates problems<\/em>,\u201d Arzubi said. \u201c<em>What occurs if the patient comes into the hospital and there\u2019s a note that hasn\u2019t been completed and we don\u2019t know what\u2019s been going on?<\/em>\u201d<\/p>\n<p>The new technology might help lighten those burdens.<\/p>\n<p>Arzubi is testing a service, called Nabla Copilot, that sits in on his part of virtual patient visits and then automatically summarizes them, organizing into a standard note format the complaint, the history of illness and a treatment plan.<\/p>\n<p>Results are solid after about 50 patients, he said: \u201c<em>It\u2019s 90% of the way there<\/em>.\u201d<\/p>\n<p>Copilot produces serviceable summaries that Arzubi typically edits. The summaries don\u2019t necessarily pick up on nonverbal cues or thoughts Arzubi might not want to vocalize.<\/p>\n<p>Still, he said, the gains are significant: He doesn\u2019t have to worry about taking notes and can instead focus on speaking with patients. 
And he saves time.<\/p>\n<p>\u201c<em>If I have a full patient day, where I might see 15 patients, I would say this saves me a good hour at the end of the day<\/em>,\u201d he said. (If the technology is adopted widely, he hopes hospitals won\u2019t take advantage of the saved time by simply scheduling more patients. \u201c<em>That\u2019s not fair<\/em>,\u201d he said.)<\/p>\n<p>Nabla Copilot isn\u2019t the only such service; Microsoft is trying out the same concept.<\/p>\n<p>At April\u2019s conference of the Healthcare Information and Management Systems Society \u2014 an industry confab where health techies swap ideas, make announcements and sell their wares \u2014 investment analysts from Evercore highlighted reducing administrative burden as a top possibility for the new technologies.<\/p>\n<p>But overall? They heard mixed reviews.<\/p>\n<p>And that view is common: Many technologists and doctors are ambivalent.<\/p>\n<p>For example, if you\u2019re stumped about a diagnosis, feeding patient data into one of these programs \u201c<em>can provide a second opinion, no question<\/em>,\u201d Topol said. 
\u201c<em>I\u2019m sure clinicians are doing it<\/em>.\u201d<\/p>\n<p>However, that runs into the current limitations of the technology.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-125427 aligncenter\" src=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s3-1-e1688147251909.jpg\" alt=\"\" width=\"560\" height=\"355\" srcset=\"https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s3-1-e1688147251909.jpg 1062w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s3-1-e1688147251909-300x190.jpg 300w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s3-1-e1688147251909-1024x650.jpg 1024w, https:\/\/yogaesoteric.net\/wp-content\/uploads\/2023\/06\/s3-1-e1688147251909-768x487.jpg 768w\" sizes=\"auto, (max-width: 560px) 100vw, 560px\" \/><\/p>\n<p>Joshua Tamayo-Sarver, a clinician and executive with the startup Inflect Health, fed fictionalized patient scenarios based on his own practice in an emergency department into one system to see how it would perform.<\/p>\n<p>It missed life-threatening conditions, he said. \u201c<em>That seems problematic<\/em>.\u201d<\/p>\n<p>The technology also tends to \u201c<em><a href=\"https:\/\/childrenshealthdefense.org\/defender\/artificial-intelligence-nuclear-catastrophe-cd\/\">hallucinate<\/a><\/em>\u201d \u2014 that is, make up information that sounds convincing. 
Formal studies have found a wide range of performance.<\/p>\n<p>One preliminary research paper examining <em>ChatGPT <\/em>and Google products using open-ended board examination questions from neurosurgery found a hallucination rate of 2%.<\/p>\n<p>A <a href=\"https:\/\/hai.stanford.edu\/news\/how-well-do-large-language-models-support-clinician-information-needs\">study<\/a> by Stanford researchers, examining the quality of AI responses to 64 clinical scenarios, found fabricated or hallucinated citations 6% of the time, co-author Nigam Shah told <em>KFF Health News<\/em>.<\/p>\n<p>Another <a href=\"https:\/\/www.medrxiv.org\/content\/10.1101\/2023.03.25.23285475v1?%253fcollection=\">preliminary paper<\/a> found, in complex cardiology cases, <em>ChatGPT<\/em> agreed with expert opinion half the time.<\/p>\n<p>Privacy is another concern. It\u2019s unclear whether the information fed into this type of AI-based system will stay inside.<\/p>\n<p>Enterprising users of <em>ChatGPT<\/em>, for example, have managed to get the technology to tell them the recipe for napalm, which can be used to make chemical bombs.<\/p>\n<p>In theory, the system has guardrails preventing private information from escaping.<\/p>\n<p>For example, when <em>KFF Health News<\/em> asked <em>ChatGPT<\/em> for its email address, the system refused to divulge that private information.<\/p>\n<p>But when told to role-play as a character and asked about the email address of the author of this article, it happily gave up the information. (It was indeed the author\u2019s correct email address in 2021 when <em>ChatGPT<\/em>\u2019s archive ends.)<\/p>\n<p>\u201c<em>I would not put patient data in<\/em>,\u201d said Shah, chief data scientist at Stanford Health Care. 
\u201c<em>We don\u2019t understand what occurs with these data once they hit OpenAI servers.<\/em>\u201d<\/p>\n<p>Tina Sui, a spokesperson for OpenAI, said that one \u201c<em>should never use our models to provide diagnostic or treatment services for serious medical conditions<\/em>.\u201d<\/p>\n<p>They are \u201c<em>not fine-tuned to provide medical information<\/em>,\u201d she said.<\/p>\n<p>With the explosion of new research, Topol said:<\/p>\n<p>\u201c<em>I don\u2019t think the medical community has a really good clue about what\u2019s about to take place<\/em>.\u201d<\/p>\n<p>&nbsp;<\/p>\n<p><strong>yogaesoteric<br \/>\nJune 30, 2023<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>More than 1,000 technology leaders signed an open letter urging that companies pause development on advanced artificial intelligence systems until \u201cwe are confident that their effects will be beneficial and their risks will be manageable.\u201d What use could healthcare have for something that gives random information, can\u2019t keep a secret, doesn\u2019t really know anything, and, 
[&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[1374],"tags":[1516],"class_list":["post-125420","post","type-post","status-publish","format-standard","hentry","category-the-threat-of-artificial-intelligence-3480-en","tag-article_of_the_week"],"_links":{"self":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts\/125420","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/comments?post=125420"}],"version-history":[{"count":1,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts\/125420\/revisions"}],"predecessor-version":[{"id":125430,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/posts\/125420\/revisions\/125430"}],"wp:attachment":[{"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/media?parent=125420"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/categories?post=125420"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/yogaesoteric.net\/en\/wp-json\/wp\/v2\/tags?post=125420"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}