Ideally, such systems would improve the efficiency of the health care system.
In a paper published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks” – manipulations that can change the behavior of A.I. systems using tiny, carefully crafted pieces of digital data.
Samuel Finlayson, a researcher at Harvard Medical School and M.I.T. and one of the authors of the paper, warned that because so much money changes hands across the health care industry, stakeholders are already bilking the system by subtly changing billing codes and other data in computer systems that track health care visits.
Now, Mr. Finlayson and his colleagues have raised a similar alarm about medical A.I.: As regulators, insurance providers and billing companies begin using A.I. in their software systems, businesses could learn to game the underlying algorithms.
If regulators build A.I. systems to evaluate new technology, device makers could alter images and other data in an effort to trick the system into granting regulatory approval.
In their paper, the researchers demonstrated that changing a small number of pixels in an image of a benign skin lesion could trick a diagnostic A.I. system into identifying the lesion as malignant.
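The idea behind such an attack can be illustrated with a toy sketch. The snippet below is not the researchers' actual model or method; it uses a hypothetical linear classifier and a simple perturbation in the spirit of the well-known fast gradient sign method, nudging each pixel slightly in the direction that raises the "malignant" score.

```python
import numpy as np

# Toy illustration only: a hypothetical linear "classifier" scores an
# 8x8 image; a positive score means "malignant", negative means "benign".
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))  # made-up model weights

def predict(image):
    """Return the label the toy model assigns to the image."""
    return "malignant" if np.sum(weights * image) > 0 else "benign"

# Start from an image the model confidently calls benign.
image = -0.1 * np.sign(weights) + rng.normal(scale=0.01, size=(8, 8))

# Adversarial step: add a tiny amount to each pixel in the direction
# that increases the score. The change is imperceptibly small per pixel,
# but its effect accumulates across the whole image.
epsilon = 0.2
adversarial = image + epsilon * np.sign(weights)

print(predict(image), "->", predict(adversarial))  # benign -> malignant
```

The perturbed image looks essentially identical to a human, yet the model's label flips, which is the core vulnerability the paper describes.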
Once A.I. is deeply rooted in the health care system, the researchers argue, businesses will gradually adopt behavior that brings in the most money.