Code Rot and Data Biases in Healthcare Apps

On code rot and data bias in health devices: https://apple.news/A8IvTES9NRPWijHhtnUG89Q

Vaughan was spurred into action when the continuous glucose monitor (CGM) function failed on a mobile app used by his daughter, who has had Type 1 diabetes her entire life. “Features were disappearing, critical alerts weren’t working, and notifications just stopped,” he said.

As a result, his nine-year-old daughter, who relied on the CGM alerts, had to fall back on her own instincts. The apps, which Vaughan had downloaded in 2016, were “completely useless” by the end of 2018. The Vaughans felt alone but suspected they weren’t; they took to the reviews on Google Play and the Apple App Store and discovered hundreds of patients and caregivers complaining about similar issues.

Code rot isn’t the only issue lurking in medical device software. A recent study out of Stanford University finds that the training data used for AI algorithms in medical devices are based on only a small sample of patients.
Most algorithms (71 percent) are trained on datasets from patients in just three geographic areas, California, Massachusetts, and New York, and “the majority of states have no represented patients whatsoever.” While the Stanford research didn’t expose bad outcomes from AI trained on these geographies, it raised questions about the validity of the algorithms for patients in other areas.
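
To make the finding concrete, here is a minimal sketch of the kind of geographic-representation audit the Stanford result implies; the file name and the `state` column are assumptions for illustration, not the study’s actual data or code.

```python
import csv
from collections import Counter


def state_coverage(path: str) -> None:
    """Summarize which U.S. states a patient-level training set draws from.

    Assumes a CSV with one row per patient and a 'state' column
    (a hypothetical schema, used only for illustration).
    """
    with open(path, newline="") as f:
        states = Counter(row["state"] for row in csv.DictReader(f))

    total = sum(states.values())
    top3 = states.most_common(3)
    top3_share = sum(count for _, count in top3) / total

    print(f"Patients: {total}")
    print(f"States represented: {len(states)} of 50")
    # A top-3 share near 71 percent, with many states absent entirely,
    # would mirror the imbalance the Stanford study reports.
    print(f"Top-3 state share: {top3_share:.0%} "
          f"({', '.join(state for state, _ in top3)})")


if __name__ == "__main__":
    state_coverage("training_patients.csv")
```

An audit like this wouldn’t show whether the resulting model performs badly elsewhere, only how narrow its training population is, which is exactly the question the study raises.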