Testing generative AI in healthcare continues

by Kimberly Zhang / News / August 23, 2024

Artificial intelligence (AI) is advancing rapidly in healthcare, but testing these technologies is a complex process that takes far more than a single promising study. Devin Singh, a pediatric resident, saw firsthand the devastating consequences of long wait times in emergency departments, and that experience motivated him to explore how AI could help reduce those delays.

Singh and his colleagues developed AI models using data from the Hospital for Sick Children in Toronto. These models provide potential diagnoses and indicate which tests may be needed for patients. A study using retrospective data suggested that these models could expedite care for over 20% of emergency department visits, reducing wait times by nearly three hours for each person requiring medical tests.
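The kind of retrospective analysis described here can be approximated in a few lines: replay past visits, ask whether the model would have flagged the needed test at triage, and measure how much earlier that order would have come. The sketch below is a hypothetical Python illustration, not the study's actual methodology; the Visit fields and the toy numbers are assumptions.

```python
# Hypothetical sketch of a retrospective triage-model evaluation.
# Field names and data are illustrative, not the SickKids study's schema.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Visit:
    triage_time: datetime                # when the patient was triaged
    test_ordered_time: datetime | None   # when a clinician actually ordered the test (None = no test)
    model_flags_test: bool               # would the model have ordered the test at triage?

def estimate_impact(visits: list[Visit]) -> tuple[float, timedelta]:
    """Return the share of visits the model could expedite and the mean time saved for them."""
    waits = [
        v.test_ordered_time - v.triage_time
        for v in visits
        if v.model_flags_test and v.test_ordered_time is not None
    ]
    share = len(waits) / len(visits) if visits else 0.0
    avg_saved = sum(waits, timedelta()) / len(waits) if waits else timedelta()
    return share, avg_saved

# Toy example: two visits, one of which had its test ordered three hours after triage.
t0 = datetime(2024, 1, 1, 9, 0)
visits = [
    Visit(t0, t0 + timedelta(hours=3), True),   # model would have ordered this test at triage
    Visit(t0, None, False),                     # no test needed; model agrees
]
share, saved = estimate_impact(visits)
print(f"{share:.0%} of visits expedited, roughly {saved} sooner per expedited visit")
```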

However, the success of an AI algorithm in a study is only the first step in determining its real-world effectiveness. Proper testing of AI in medicine involves a multiphase process, but few developers publish the results of these analyses. Meanwhile, regulators like the FDA have approved hundreds of AI-powered medical devices for use in hospitals and clinics, often with less rigorous criteria than those for drugs.

Testing AI models in hospitals

Hospitals sometimes choose to test these devices themselves, but the process is challenging and depends on how well healthcare professionals interact with the algorithms. AI programs can be sensitive to differences between the populations they were trained on and those they aim to help.
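A common way to probe that sensitivity is to score the same model separately on data from the population it was trained on and from the population it is meant to serve, then compare. The following is a minimal, hypothetical sketch; the cohort names and records are made up.

```python
# Illustrative check for population shift: compare performance by cohort.
from collections import defaultdict

def accuracy_by_cohort(records):
    """records: iterable of (cohort, true_label, predicted_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for cohort, truth, pred in records:
        totals[cohort] += 1
        hits[cohort] += int(truth == pred)
    return {cohort: hits[cohort] / totals[cohort] for cohort in totals}

# Made-up predictions from the training hospital and a new deployment site.
records = [
    ("training_hospital", 1, 1), ("training_hospital", 0, 0), ("training_hospital", 1, 1),
    ("new_hospital", 1, 0), ("new_hospital", 0, 0), ("new_hospital", 1, 1),
]
for cohort, acc in accuracy_by_cohort(records).items():
    print(f"{cohort}: accuracy {acc:.2f}")
# A large gap between cohorts is a warning sign that the model may not transfer.
```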

It’s also unclear how best to inform patients about these technologies and obtain their consent for testing. Financial incentives complicate matters: health insurance programs may reimburse hospitals for using AI tools even when those tools don’t demonstrably improve patient care. If a product can be sold and reimbursed without trial evidence, AI companies have little commercial reason to invest in costly clinical trials.


Some institutions, like Amsterdam University Medical Center, have conducted their own tests on approved AI products to ensure their effectiveness. They found that the success of an algorithm can depend on how healthcare professionals respond to its alerts and recommendations. As AI continues to advance in healthcare, it is crucial to ensure that these technologies are safe, reliable, and used responsibly.
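One way to quantify that dependence, sketched below as a hypothetical example rather than Amsterdam UMC's actual protocol, is to log each alert, whether staff acted on it, and the subsequent outcome, then compare acceptance rates and outcomes.

```python
# Hypothetical audit of how clinicians respond to an alerting tool.
from dataclasses import dataclass

@dataclass
class Alert:
    acted_on: bool      # did the clinician follow the recommendation?
    good_outcome: bool  # simplified stand-in for the outcome the tool targets

def summarize(alerts: list[Alert]) -> dict:
    def outcome_rate(group):
        return sum(a.good_outcome for a in group) / len(group) if group else float("nan")

    acted = [a for a in alerts if a.acted_on]
    ignored = [a for a in alerts if not a.acted_on]
    return {
        "acceptance_rate": len(acted) / len(alerts) if alerts else float("nan"),
        "outcome_if_acted_on": outcome_rate(acted),
        "outcome_if_ignored": outcome_rate(ignored),
    }

# Made-up audit log: four alerts, two of which clinicians followed.
alerts = [Alert(True, True), Alert(True, True), Alert(False, False), Alert(False, True)]
print(summarize(alerts))
# If most alerts are ignored, even an accurate model will not change what happens to patients.
```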

Rigorous testing and evaluation of AI tools are essential to maintain trust and improve patient outcomes.

About The Author

Kimberly Zhang

Editor in Chief of Under30CEO. I have a passion for helping educate the next generation of leaders. MBA from Graduate School of Business. Former tech startup founder. Regular speaker at entrepreneurship conferences and events.
