New Delhi, Dec 25 (PTI) A new set of internationally-agreed recommendations might help patients benefit better from AI-based medical innovations, such as by minimising the risk of bias, according to researchers.

Studies have shown that medical innovations based on artificial intelligence (AI) technologies can be biased -- they work well for some people and not for others -- suggesting that some individuals and communities may be "left behind", or may even be harmed.


The recommendations, published in The Lancet Digital Health journal and New England Journal of Medicine AI, are aimed at improving how datasets used to build AI health technologies are compiled and documented, in order to reduce the risk of potential AI bias.

"Data is like a mirror, providing a reflection of reality. And when distorted, data can magnify societal biases. But trying to fix the data to fix the problem is like wiping the mirror to remove a stain on your shirt," lead author Xiaoxuan Liu, an associate professor of AI and Digital Health Technologies at the University of Birmingham, UK, said.


"To create lasting change in health equity, we must focus on fixing the source, not just the reflection," Liu said.

Key recommendations include preparing summaries of datasets and presenting them in plain language, said the researchers, who formed the international initiative 'STANDING Together (STANdards for data Diversity, INclusivity and Generalisability)', involving more than 350 experts from 58 countries.

Known or expected sources of bias, error, or other factors that affect the dataset should also be identified, the authors said.

Further, the performance of an AI health technology should be evaluated and compared between contextualised groups of interest, along with the overall study population.

Uncertainties identified in AI performance should be managed through mitigation plans, with the clinical implications of these findings clearly stated, the authors said. Strategies to monitor, manage and reduce these risks should also be documented while implementing the technology, they added.

"We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation," they said.

"We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies that are safe and effective," they said.

(This is an unedited and auto-generated story from Syndicated News feed, LatestLY Staff may not have modified or edited the content body)