Researchers find flaws in the algorithm used to identify atypical drug...
Can algorithms identify unusual drug orders or pharmacological profiles more accurately than humans? Not necessarily. A study co-authored by researchers from Université Laval and CHU Sainte-Justine in Montreal found that a machine learning model used to help pharmacists screen inpatient drug orders performed poorly on some of its tasks. It's a cautionary tale about the use of AI and machine learning in medicine, where unvalidated technologies have the potential to negatively affect outcomes.

Pharmacists review the active drug lists – that is, the pharmacological profiles – of the inpatients they care for. This review aims to identify drug-related problems, but most drug orders present no such issues. Publications from over a decade ago illustrated the potential of technology to help pharmacists streamline workflows like order review, and while recent research has explored the potential of AI in pharmacology, few studies have demonstrated its effectiveness.

The co-authors of this latest paper examined a model deployed at a tertiary maternal and child academic hospital between April 2020 and August 2020. The model was trained on a data set of 2,846,502 drug orders from 2005 to 2018, extracted from a pharmacy database and preprocessed into 1,063,173 profiles. Prior to data collection, the model was retrained every month on the latest ten years of data from the database to minimize drift, the loss of predictive power that occurs as the underlying data changes over time.

Pharmacists at the academic hospital rated drug orders in the database as "typical" or "atypical" before seeing the model's predictions. Each patient was included only once to minimize the risk of re-rating profiles the pharmacists had already seen. Orders were labeled atypical if, to the pharmacist's knowledge, they did not correspond to usual prescribing patterns, while a profile was considered atypical if at least one of its drug orders was marked atypical.

The model's profile predictions were made available to the pharmacists, who indicated whether or not they agreed with each prediction. In total, 25 pharmacists from seven departments of the academic hospital were shown 12,471 drug orders and 1,356 profiles, mainly from obstetrics and gynecology.

The researchers report that the model performed poorly on individual drug orders, achieving an F1 score of only 0.30 (F1 ranges from 0 to 1, with higher scores better). The model's profile predictions fared better, reaching a "satisfactory" F1 score of 0.59.
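For context, the F1 score is the harmonic mean of precision and recall over the positive (here, "atypical") class. A minimal sketch of the computation, using made-up counts rather than the study's actual confusion matrix:

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)  # fraction of flagged orders that were truly atypical
    recall = tp / (tp + fn)     # fraction of truly atypical orders that were flagged
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts chosen only to illustrate an F1 of 0.30:
# 30 true positives, 70 false positives, 70 false negatives.
print(round(f1_score(30, 70, 70), 2))  # 0.3
```

Because F1 ignores true negatives, it is a stricter measure than accuracy for this task, where the vast majority of orders are typical and a model could score high accuracy by never flagging anything.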

One reason could be a lack of representative data; research has shown that biased diagnostic algorithms can perpetuate inequalities. A team of scientists recently found that almost all eye disease data sets come from patients in North America, Europe, and China, meaning algorithms for diagnosing eye diseases are less accurate for racial groups from underrepresented countries. In another study, Stanford University researchers found that most U.S. data for studies on medical uses of AI came from California, New York, and Massachusetts.

Knowing this, the co-authors say they do not believe the model could be used as a standalone decision-support tool. However, they believe it could be combined with rule-based approaches to identify drug ordering issues irrespective of common practice. "Conceptually, it should be better to present pharmacists with a prediction for each order as it clearly identifies which prescription is atypical as opposed to profile predictions which only inform the pharmacist that something in the profile is atypical," they wrote. "Although [our] focus groups pointed to a lack of confidence in the order predictions, pharmacists were happy to use them as a safeguard to make sure they didn't miss any atypical orders. This leads us to believe that even a moderate improvement in the quality of these predictions could be beneficial in future work."
