Scientists marked the 1970s and 1990s as two distinct “AI winters,” when sunny forecasts for artificial intelligence gave way to gloomy pessimism as projects failed to live up to the hype. IBM sold its AI-powered Watson Health to a private equity firm earlier this year for what analysts describe as salvage value. Could this transaction signal a third AI winter?
Artificial intelligence has been with us longer than most people realize, reaching mass audiences with Rosey the robot on the 1960s TV show “The Jetsons.” That version of AI, the all-knowing maid who runs the household, is science fiction. In a healthcare environment, artificial intelligence is far more limited.
Healthcare AI is built to work in a narrow, task-specific way, closer to real-world feats such as a computer defeating a human chess champion. Chess is structured data with predefined rules for where pieces move, how they move, and when the game is won. Electronic medical records, on which healthcare AI relies, do not fit the neat confines of a chessboard.
The problem starts with collecting and reporting accurate patient data. MedStar Health sees shoddy electronic health record practices hurting doctors, nurses, and patients. The hospital system took its first steps to bring the issue to public attention in 2010, and the effort continues today. MedStar’s awareness campaign spoofs the acronym “EHR,” rereading it as “errors happen regularly” to make the mission plain.
In analyzing software from leading EHR vendors, MedStar found that data entry is often unintuitive and that displays make the information hard for clinicians to interpret. Patient record software often bears little relation to how doctors and nurses actually work, which leads to still more errors.
Examples of medical data errors appear in medical journals, news media, and court cases, ranging from faulty code that deletes critical information to patients whose recorded sex mysteriously changes. Because there is no formal reporting system, there is no definitive count of medical errors rooted in bad data. The high likelihood of that bad data being dumped into artificial intelligence applications derails the technology’s potential.
The development of artificial intelligence begins with training an algorithm to detect patterns. Data is entered, and once the sample is sufficiently large, the algorithm is tested to see whether it correctly identifies certain attributes of a patient. Despite the term “machine learning,” which implies an ever-evolving process, the technology is tested and deployed like traditional software: trained once, then frozen. If the underlying data is correct, a properly trained algorithm can automate functions and make doctors more efficient.
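The train-then-freeze cycle described above can be sketched with a toy, standard-library-only example. Everything here is invented for illustration: the "patients" are single numeric readings, and the made-up diagnostic cutoff of 140 stands in for a real clinical rule.

```python
import random

random.seed(0)

# Hypothetical toy data: each "patient" is one numeric reading,
# labeled 1 when an invented diagnostic cutoff of 140 applies.
def make_patients(n):
    return [(r, 1 if r > 140 else 0)
            for r in (random.uniform(80, 200) for _ in range(n))]

def train(patients):
    # "Training" here just searches for the cutoff that best separates
    # the labeled examples -- a stand-in for fitting a real model.
    best_t, best_acc = 80, 0.0
    for t in range(80, 201):
        acc = sum((r > t) == bool(y) for r, y in patients) / len(patients)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(patients, t):
    return sum((r > t) == bool(y) for r, y in patients) / len(patients)

train_set = make_patients(500)   # data entered during the training phase
test_set = make_patients(100)    # held-out sample used for testing

cutoff = train(train_set)            # after this point the model is frozen,
score = accuracy(test_set, cutoff)   # deployed like traditional software
```

Once `cutoff` is fixed, the "model" stops learning: new patients are scored against it, but nothing updates it, which is the point of the comparison to traditional software deployment.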
Take, for example, diagnosing medical conditions based on eye images. In one patient, the eye is healthy; in another, the eye shows signs of diabetic retinopathy. Images of healthy and “diseased” eyes are captured. When enough patient data is fed into the AI system, the algorithm will learn to identify patients with the disease.
Andrew Beam, a professor at Harvard University with a private-sector background in machine learning, presented a disturbing scenario of what could go wrong without anyone even knowing. Using the eye example above, suppose that as more patients are seen, more eye images are fed into the system, which is now integrated into the clinical workflow as an automated process. So far, so good. But suppose the new images include patients already treated for diabetic retinopathy. Those treated patients carry a small scar from the laser incision. Now the algorithm is tricked into looking for small scars instead of the disease itself.
Adding to the data confusion, doctors disagree among themselves about what thousands of patient data points really mean. Human intervention is needed to tell the algorithm what to look for, and those judgments are hard-coded as tags for the machine to read. Other concerns include EHR software updates, which can introduce errors, and vendor switches: a hospital that changes software vendors triggers what is called a data transfer as information moves from one system to another.
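A minimal sketch of what those hard-coded tags can look like, and why clinician disagreement matters before they reach the algorithm (file names, reader IDs, and the adjudication rule are all invented for illustration):

```python
from collections import Counter

# Hypothetical annotations: two clinicians tag the same eye images.
# The tag the algorithm will read cannot simply be hard-coded from
# one opinion when the readers disagree.
annotations = [
    {"image": "eye_001.png", "reader": "dr_a", "tag": "diabetic_retinopathy"},
    {"image": "eye_001.png", "reader": "dr_b", "tag": "healthy"},
    {"image": "eye_002.png", "reader": "dr_a", "tag": "healthy"},
    {"image": "eye_002.png", "reader": "dr_b", "tag": "healthy"},
]

def consensus(tags):
    # Accept a tag only when a strict majority of readers agree;
    # otherwise flag the case for human adjudication.
    top, count = Counter(tags).most_common(1)[0]
    return top if count > len(tags) / 2 else "needs_adjudication"

by_image = {}
for a in annotations:
    by_image.setdefault(a["image"], []).append(a["tag"])

labels = {image: consensus(tags) for image, tags in by_image.items()}
# eye_001.png is a tie between readers, so it is flagged rather than
# fed to the training pipeline with a guessed label.
```

The point is that the "ground truth" an algorithm trains on is itself a human artifact, and any pipeline needs an explicit rule for disagreement rather than silently picking one doctor's opinion.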
This is what happened at the MD Anderson Cancer Center, and it is the technical reason IBM’s first major Watson Health partnership ended. Ginni Rometty, then CEO of IBM, described the arrangement, announced in 2013, as the company’s “moonshot” in health care. MD Anderson said in a press release that it would use Watson Health in its mission to eradicate cancer. Two years later, the partnership failed: after the data transfer, both parties would have had to retrain the system to understand the data from the new software. It was the beginning of the end for IBM’s Watson Health.
Artificial intelligence in healthcare is only as good as its data. Accurate patient data management is neither science fiction nor a “moonshot,” but it is critical to the success of AI. The alternative is a promising health technology frozen in time.
Photo: MF3d, Getty Images