New Delhi, May 1: Although OpenAI’s ChatGPT can pass several medical exams, it falls short at assessing heart risk, a study published on Wednesday has found. The research, published in the journal PLOS ONE, showed that “it would be unwise to rely on it for some health assessments, such as whether a patient with chest pain needs to be hospitalised”.
ChatGPT’s predictions for patients with chest pain were “inconsistent”: given the same patient data, the model returned different heart risk levels, ranging from low to intermediate and occasionally high. That variation “can be dangerous”, said lead author Dr. Thomas Heston, a researcher with Washington State University’s Elson S. Floyd College of Medicine.
The generative AI system also failed to match the traditional methods physicians use to judge a patient’s cardiac risk. “ChatGPT was not acting in a consistent manner,” said Heston. Even so, he sees great potential for generative AI in healthcare, given further development. “It can be a useful tool, but I think the technology is going a lot faster than our understanding of it, so it’s critically important that we do a lot of research, especially in these high-stakes clinical situations.”
(The above story first appeared on LatestLY on May 01, 2024 06:27 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).