Mitigating Artificial Intelligence Risks in the COVID-19 Response

Artificial intelligence (A.I.) has been touted as the future of technology, and the coronavirus pandemic has spurred considerable investment in these methods. There are many upsides to using A.I. to help fight the pandemic, but there are also risks at every point in the process. This memo will introduce some of the potential risks artificial intelligence poses, and then outline steps regulators can take to mitigate those risks.

First, algorithms demand large volumes of high-quality data, and they are extremely susceptible to biases in that data. For example, if the data for a model were gathered in academic medical centers, the resulting A.I. systems would know less about, and less effectively treat, patients from populations that do not typically frequent academic medical centers (e.g., young, low-income individuals). Furthermore, even if A.I. systems are fed representative data, that data can still reflect existing inequities. African-American patients receive, on average, less treatment for pain than white patients. Consequently, an A.I. system learning from prior treatment records might learn to suggest lower doses of painkillers to African-American patients, even though that decision reflects systemic bias rather than biological reality.
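To make the representativeness problem concrete, the sketch below audits how closely the demographic mix of a training dataset matches a reference population. It is a hypothetical illustration: the column name, toy records, and reference shares are all assumptions for the example, not real figures.

```python
# Hypothetical audit of demographic representation in a training dataset.
# The column name ("age_group"), toy records, and reference shares are
# invented for illustration.
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Compare the share of each group in the data against a reference share."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        actual = observed.get(group, 0.0)
        rows.append({"group": group, "expected": expected,
                     "observed": round(actual, 3),
                     "gap": round(actual - expected, 3)})
    return pd.DataFrame(rows)

# Toy training records drawn mostly from older patients, as might happen
# if the data came only from academic medical centers.
records = pd.DataFrame({
    "age_group": ["65+", "65+", "40-64", "65+", "40-64", "18-39"],
})
# Assumed population shares the dataset should roughly match.
reference_shares = {"18-39": 0.35, "40-64": 0.40, "65+": 0.25}

print(representation_gap(records, "age_group", reference_shares))
```

A persistent negative gap for a group (here, younger patients) is an early warning that the resulting model will perform worse for that group.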

To mitigate these liabilities, governments should provide resources for data infrastructure and set uniform standards for electronic health records. Currently, due to the decentralized nature of American health care, large datasets are hard to consolidate across different insurance companies and hospital networks. A centralized, universal system of health records would let researchers access patient information from every potential venue of care. Moreover, the federal government, through categorical grants to the states, should provide technology support for data-gathering efforts in low-income and rural areas. This would ensure that the datasets used by A.I. developers are more representative of the American population.

The second risk is the difficulty of understanding and validating the patterns that A.I. algorithms identify. If not meticulously managed, these algorithms will go to extraordinary lengths to find convoluted patterns that fit only the training data and do not generalize to the real world. To illustrate, researchers recently tried to diagnose malignant moles with A.I. They noticed that in the training data, pictures of malignant moles frequently included rulers (placed to measure suspicious lesions), while pictures of benign moles did not. The model learned to associate rulers with malignancy rather than learning anything about the moles themselves, so it appeared highly accurate during development but failed in an actual health-care setting.
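The sketch below shows the mechanics of this failure mode on synthetic data. It is a hypothetical illustration, not the dermatology study itself: a spurious “ruler” flag tracks the label perfectly in the training set but not in the validation set, so the model’s apparent accuracy collapses out of sample.

```python
# Minimal sketch (synthetic data) of how a spurious correlation inflates
# training accuracy: the "ruler" feature matches the label in the training
# split but is random in the validation split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_split(n, ruler_matches_label):
    malignant = rng.integers(0, 2, n)                  # true label (0/1)
    lesion_signal = malignant + rng.normal(0, 1.5, n)  # weak genuine signal
    ruler = malignant if ruler_matches_label else rng.integers(0, 2, n)
    X = np.column_stack([lesion_signal, ruler])
    return X, malignant

X_train, y_train = make_split(500, ruler_matches_label=True)
X_valid, y_valid = make_split(500, ruler_matches_label=False)

model = LogisticRegression().fit(X_train, y_train)
print("train accuracy:     ", accuracy_score(y_train, model.predict(X_train)))
print("validation accuracy:", accuracy_score(y_valid, model.predict(X_valid)))
```

Because the model leans on the ruler flag, it looks nearly perfect on its own training data and mediocre on data where that shortcut no longer holds, which is exactly why external validation matters before deployment.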

To break open this “black box,” A.I. algorithms should be open-source, meaning anyone can view the original code used to make predictions. This crowd-sourcing approach borrows from existing research norms and would allow researchers to validate and reproduce models on their own. In addition, developers should build models that provide justifications for their predictions. For example, a COVID-19 risk model could use a heat-map approach, letting radiologists zoom into the areas of a CT scan that the model attends to when it makes a prediction. The model could then pair those highlighted regions with short snippets of text describing what it sees.
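One simple way to produce such a heat map is occlusion sensitivity: blank out one patch of the image at a time and record how much the prediction changes. The sketch below is a hypothetical illustration with a stand-in scoring function and a toy “scan”; it is not a real COVID-19 risk model.

```python
# Hypothetical occlusion-sensitivity heat map: slide a blank patch over a toy
# scan and record how much the (stand-in) risk score drops, so a reader can
# see which regions drive the prediction.
import numpy as np

def risk_score(scan: np.ndarray) -> float:
    """Stand-in model: responds to bright pixels in the lower-right quadrant."""
    h, w = scan.shape
    return float(scan[h // 2:, w // 2:].mean())

def occlusion_heatmap(scan: np.ndarray, patch: int = 8) -> np.ndarray:
    """Importance map: score drop when each patch is blanked out."""
    baseline = risk_score(scan)
    heat = np.zeros_like(scan, dtype=float)
    for i in range(0, scan.shape[0], patch):
        for j in range(0, scan.shape[1], patch):
            occluded = scan.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = baseline - risk_score(occluded)
    return heat

toy_scan = np.random.default_rng(1).random((32, 32))
heatmap = occlusion_heatmap(toy_scan)
print("most influential patch starts at:", np.unravel_index(heatmap.argmax(), heatmap.shape))
```

The same idea scales to real imaging models: regions whose removal changes the prediction most are the ones a clinician should scrutinize first.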

The final risk is in implementation. Machine learning is often advertised as a way to eliminate human bias, but in removing human judgment it also eliminates the most important bias humans bring to decision making: morality. As a result, implementations of these algorithms often have troubling consequences. In South Korea, neighbors of confirmed COVID-19 patients were given details of those patients’ travel and commute histories. Taiwan used cell phone data to monitor individuals who were ordered to stay in their homes, an approach being replicated in Israel and Italy. China is notorious for exploiting A.I. for surveillance.

Regulators must address this problem proactively by building on the existing health privacy framework established by HIPAA. Currently, patients’ health information is only “protected” when it is recorded or used by specific covered entities (e.g., insurance companies, hospitals, clearinghouses). This allows non-designated entities like social media platforms or wearable fitness trackers to collect sensitive health data without regulation. Lawmakers should make individually identifiable health data inherently protected, rather than a class protected only when used by certain entities. Lawmakers should also codify the permitted uses of such data. A heart rate monitor worn for fitness, for example, could be permitted to use its data in an A.I. model that predicts heart attack risk, but could not use the same data in an A.I. model that targets advertisements.
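As a rough illustration of what codified permitted uses might look like in practice, the sketch below maps data categories to an explicit allow-list of purposes and denies everything else by default. The categories and purposes are invented for the example.

```python
# Hypothetical sketch of codified permitted uses: each health-data category
# has an explicit allow-list of purposes, and any purpose not listed is
# denied by default. Categories and purposes are invented for illustration.
PERMITTED_USES = {
    "heart_rate": {"cardiac_risk_prediction", "clinical_care"},
    "location_history": {"contact_tracing"},
}

def use_allowed(data_category: str, purpose: str) -> bool:
    """Return True only if the purpose is explicitly permitted for the category."""
    return purpose in PERMITTED_USES.get(data_category, set())

print(use_allowed("heart_rate", "cardiac_risk_prediction"))  # True
print(use_allowed("heart_rate", "ad_targeting"))             # False: denied by default
```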

Ultimately, it is important to remember that A.I. is simply a tool. Like a hammer, it works well when applied to a specific task, such as driving a nail into a wall. But if the person wielding it swings wildly, they are likely to cause more harm than good. There are risks in every facet of artificial intelligence, but they can be mitigated by proactive regulation and careful development.
