Should we listen to Dr Chat?

 By Rachel Renton

Yesterday OpenAI announced that ChatGPT Health will be available to a limited number of users ahead of a wider release of the bold new feature. In its statement, the company encourages users to link their medical records and wellness apps, such as Apple Health, so that responses are personalised to their individual circumstances. 

OpenAI highlights that the new tab is not intended to make diagnoses or suggest treatment. Even so, the feature could still be dangerous, as there have been several cases of ChatGPT giving harmful health advice in the past. 

For example, a 60-year-old man was hospitalised with bromism (a chronic neuropsychiatric disorder caused by bromide poisoning) after following the AI's advice. He had read about the negative effects of sodium chloride (table salt), and ChatGPT suggested it could be swapped for bromide, likely referring to uses such as disinfecting swimming-pool water rather than ingestion. By the time he was hospitalised, the man was experiencing increasing paranoia and auditory and visual hallucinations, and he was placed on an involuntary psychiatric hold. 

In another case, 16-year-old Adam Raine confided in ChatGPT multiple times about his mental health in the lead-up to his suicide. His parents filed a wrongful-death lawsuit, claiming that the chatbot did not encourage him to seek help and kept engaging despite the concerning messages. According to the lawsuit, the final chat logs show that Adam wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me, I know what you're asking, and I won't look away from it." 

However, these incidents predate the launch of ChatGPT Health, and OpenAI has been working to make its advice as accurate as possible. The company has been developing the feature for more than two years, collaborating with 260 physicians who provided feedback on model outputs more than 600,000 times across 30 areas of focus, with the aim of producing the most accurate and safe medical answers. 

It's clear that the public trusts AI, with New York University (NYU) studies highlighting that patients find AI responses more empathetic than those of doctors. Another study, published in the International Journal of Medical Informatics, found that the newest model of ChatGPT outperformed GPs on 200 common medical questions in both accuracy and empathy. For people who feel their concerns are downplayed or ignored in medical appointments, this can look like a welcome alternative; for more complex issues, however, it can easily backfire, giving unsuitable and potentially life-threatening advice.  

Students on the City of Glasgow College campus were less convinced, with many saying they would not use the service.  

Nessie, 20, said: “No I haven’t, at all. I mean it’s very easy to be misled by information and articles saying, ‘oh this is great!’ but I don’t think people should be trusting it [AI] but it’s very easy to be misinformed about it.” 

Kayla, 22, said: “No. I don’t trust it, [but] it can be timesaving [for others].” 

While ChatGPT Health might seem like a positive advancement, it already looks like a recipe for more lawsuits over dangerous misinformation and unhelpful forms of health support. We should all be more mindful about sharing identifiable information on the platform, as these messages can be fed back into the system to train the chatbot, which could turn your data into a scenario or prompt shown to other users.