Future Tense

Fever-Detecting Drones Don’t Work

Despite a lot of hype, using drones to get information about the spread of COVID-19 isn’t a great idea.


This article is part of Privacy in the Pandemic, a Future Tense series.

Since the pandemic began, authorities in New Delhi, Italy, Oman, Connecticut, and China have experimented with fever-finding drones as a means of mass COVID-19 screening. They claim the aircraft can be used to better understand the health of the population at large and even to identify potentially sick individuals, who can then be pulled aside for further diagnostic testing. In Italy, police forces are reportedly using drones to read the temperatures of people who are out and about during quarantine, while officials in India hope to use thermal-scanner-equipped drones to search for “temperature anomalies” in people on the ground. A Lithuanian drone pilot even used a thermal-scanning drone to read the temperature of a sick friend who didn’t own a thermometer.

Unfortunately, there’s almost no evidence that these fever-detecting drones actually work.

There are, broadly, two types of drone sensors that look for fevers: thermographic and computer-vision. The first type relies upon infrared (or thermographic) cameras to search for signs of elevated human skin temperatures. They detect the infrared radiation emitted by an object and then convert those readings into images, with warmer objects showing up as redder and cooler objects as bluer. In the drone world, thermographic cameras are used primarily for infrastructure inspection, though they do have other applications.

In medicine, thermographic cameras and scanners were first used for fever screening after the SARS outbreak in the early 2000s. Thermographic imagery can be used to estimate a person’s core body temperature from their skin surface temperature: People who read above a certain threshold can be pulled aside for more comprehensive screening. The scanners are now a fairly standard means of conducting mass screening at borders during disease outbreaks, although experts have long questioned their reliability, and the evidence that temperature screening is a poor way to identify the potentially sick keeps growing. Still, some people think that they can just stick a thermal scanner on a drone for the ultimate mass-fever-screening innovation.

But it’s not so simple. While fever-drone users and makers may claim that they can use their aircraft to identify possible COVID-19 symptoms, I could find no scientific studies or reports that proved this was true. In the absence of this evidence, we must rely upon what’s already known about reliability issues with thermal imaging. And the evidence doesn’t look good for drones.

First, there are issues with distance. Thermographic cameras must be able to see a certain number of pixels on a person’s face to get an accurate reading. Typically, cameras used for fever screening try to get a clear shot of the corner of a person’s eye. The camera must be very close to the subject to capture enough pixels of this tiny target.
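To see how quickly the pixel problem bites, consider a rough back-of-the-envelope calculation. (This is my own illustrative sketch using simple pinhole-camera geometry; the 640-pixel sensor width and 45-degree field of view are assumptions for the sake of the example, not the specs of any particular drone camera.)

```python
import math

def pixels_on_target(target_size_m, distance_m, h_res_px, hfov_deg):
    """Approximate number of pixels a camera puts across a small target.

    Simple pinhole-camera geometry: the scene covered by the sensor
    widens linearly with distance, so pixels on target shrink linearly.
    """
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)
    return h_res_px * (target_size_m / scene_width_m)

# Assumed specs: a 640-pixel-wide thermal sensor with a 45-degree
# horizontal field of view, aimed at the ~1 cm inner corner of the eye.
for distance in (1.0, 3.0, 10.0, 30.0):
    px = pixels_on_target(0.01, distance, 640, 45.0)
    print(f"{distance:5.1f} m -> {px:4.1f} pixels across the eye corner")
```

Under those assumptions, the eye corner spans about eight pixels at 1 meter, but it drops below a single pixel beyond about 8 meters: at realistic drone standoff distances, the very feature the measurement depends on simply isn’t resolved.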

FLIR, a leading manufacturer of thermal cameras, states in its literature that its cameras should “in general” be used no more than 1 to 1.6 meters away from the subject to ensure consistent temperature measurements. While the company does sell thermal cameras designed for drones, it does not market them for medical purposes. FLIR employee and drone expert Randall Warnas told me that this proximity is “not suitable” for drones and that he does not “suggest going down that road with the available tech today.” Nor can pilots simply fly their drones closer. For obvious safety reasons (no one wants a plastic propeller in the eyeball), drones usually fly much farther from people on the ground than a mere 1 to 1.6 meters. In many countries, including the United States, they’re even barred by law from flying directly over people.

Camera stabilization, winds, and outdoor temperature can also cause problems. A 2019 paper that tested the accuracy of drone thermographic camera readings for nonmedical purposes found that the camera in flight achieved a temperature reading accuracy (or uncertainty) of plus or minus 5 degrees Celsius, likely because of winds and temperature changes. That range is far too wide to detect the small differences that mark human fevers: The Food and Drug Administration recommends that fever-screening thermal devices be accurate to within plus or minus 0.5 degrees Celsius.
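To get a feel for what that gap means, here is a toy simulation. (It’s my own sketch, not drawn from the 2019 paper or the FDA: it assumes normally distributed measurement error, treats a “plus or minus” figure as roughly two standard deviations, and picks 38 degrees Celsius as the screening cutoff.)

```python
import random

random.seed(0)

FEVER_CUTOFF_C = 38.0  # assumed screening threshold
TRIALS = 100_000

def flag_rate(true_temp_c, error_sd_c):
    """Fraction of noisy readings that land at or above the fever cutoff."""
    hits = sum(
        true_temp_c + random.gauss(0.0, error_sd_c) >= FEVER_CUTOFF_C
        for _ in range(TRIALS)
    )
    return hits / TRIALS

# Treat "plus or minus x" as about two standard deviations of error.
for label, sd in (("FDA-grade, +/-0.5 C", 0.25), ("in-flight, +/-5 C", 2.5)):
    print(f"{label}: flags {flag_rate(36.8, sd):.1%} of healthy subjects, "
          f"{flag_rate(38.5, sd):.1%} of feverish ones")
```

Under those toy assumptions, the FDA-grade device flags essentially no healthy people and catches nearly every fever, while the in-flight device flags roughly a third of healthy passersby and misses more than 40 percent of genuine fevers. The noise doesn’t just blur the measurement; it erases the very distinction the screening exists to make.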

Furthermore, it’s unclear whether it’s even possible to accurately screen multiple people for fever at once with these methods, even with a camera sitting stationary on the ground. While the FDA has temporarily relaxed some of its regulations on thermal cameras during the pandemic, its most recent fever-screening guidelines clearly recommend that these technologies “be used to measure only one subject’s temperature at a time,” and so do the standards published by the International Organization for Standardization.

Taken together, these factors conspire to make thermographic “fever screening” drones a deeply dubious technology. You’d get basically the same results if you mounted a thermal camera on a pole next to the grocery store.

The second type of COVID-19 screening drone relies upon computer vision. The drone company Draganfly made headlines when it announced on April 21 that it would be collaborating with the Westport, Connecticut, police on an ambitious pilot program that would use its systems to detect not just elevated body temperature, but also human heart rate, breathing rate, proximity to others, and coughing. In a press release, the company claimed the drone system (which is not yet available commercially) could “accurately detect infectious conditions from a distance of 190 feet as well as measure social distancing for proactive public safety practices.” (Westport did not respond to my question as to whether the town paid for the drones or whether they were provided for free by Draganfly.)

The Westport project was, however, brief. On April 23, the police department announced that it was pulling out of the initial pilot program, based on negative feedback from the community and from the ACLU.

Draganfly is helmed by CEO Cameron Chell, a controversial tech entrepreneur whose résumé covers everything from drilling rigs to space cameras to a Catholic Bitcoin project in collaboration with Rick Santorum. It gets its symptom-finding technology from the Vital Intelligence Project, a new health care data services startup that has a research and commercialization agreement with the University of South Australia. Draganfly has been contracted through Vital Intelligence to help bring the technology—which was developed by University of South Australia researchers and the Australian Department of Defence Science and Technology Group—to market. Vital Intelligence’s “touchless health measurement” technology uses computer vision, artificial intelligence, and high-resolution digital cameras to try to monitor vital signs in humans (and zoo animals) at a distance. It analyzes video footage to pick up on subtle movements, like a rising chest as a person takes a breath, which can then be used to estimate heart and respiratory rates or to identify survivors after disasters.
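The core signal-processing idea behind such vital-sign estimation is simple enough to sketch, even though the published system is far more sophisticated. The toy example below is my own illustration, not Draganfly’s or the university’s code: it recovers a breathing rate from a one-dimensional motion trace, such as the average brightness of a chest region across video frames, by finding the dominant frequency with a Fourier transform.

```python
import numpy as np

def dominant_rate_bpm(motion_trace, fps, min_bpm=6.0, max_bpm=40.0):
    """Estimate a periodic rate (breaths per minute here) from a motion trace.

    motion_trace: 1-D sequence, e.g. the mean pixel intensity of a chest
    region in each frame; breathing shows up as a slow, regular oscillation.
    """
    x = np.asarray(motion_trace, dtype=float)
    x = x - x.mean()  # remove the constant offset before the FFT
    spectrum = np.abs(np.fft.rfft(x))
    freqs_bpm = np.fft.rfftfreq(x.size, d=1.0 / fps) * 60.0
    band = (freqs_bpm >= min_bpm) & (freqs_bpm <= max_bpm)  # plausible rates only
    return freqs_bpm[band][np.argmax(spectrum[band])]

# Synthetic 30-second clip at 30 fps: 16 breaths/min buried in camera noise.
rng = np.random.default_rng(0)
fps, seconds, true_bpm = 30, 30, 16
t = np.arange(fps * seconds) / fps
chest = np.sin(2 * np.pi * (true_bpm / 60.0) * t) + 0.5 * rng.standard_normal(t.size)
print(f"estimated rate: {dominant_rate_bpm(chest, fps):.0f} breaths per minute")
```

In a real deployment, the hard part is everything around this core: reliably isolating a stable chest or face region from a moving aerial camera, in wind, with subjects who won’t hold still.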

It’s indisputably promising research. But, per the papers I read from the University of South Australia research team, the technology has been tested only in controlled, experimental settings, which is not the impression you’d get if you paid attention solely to Draganfly’s sweeping promises about its accuracy. (I asked Draganfly if it had produced any research to support its claims, but it did not provide me with additional information.)

For instance, Chell said in an interview that the system is able to detect human physiological signs “between 50 and 60 meters” away from the subject, but the research paper he was referring to used a stationary digital camera, not one mounted on a moving drone. Although one phase of that experiment did achieve success in simultaneously reading vital signs in a group of six people of varying ages and skin tones with a drone camera, the drone hovered in place at a distance of just 3 meters from the subjects. The subjects were instructed in one scenario to talk, blink, and move their heads, but they were still asked not to move from their locations during the test. Javaan Chahl, the leader of the University of South Australia research team and co-author of the paper, told me in an email interview that the “fundamental limiting factor is that we need to be able to see the faces of the subjects for at least several cardiorespiratory cycles.”
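Chahl’s constraint is easy to translate into numbers. (The arithmetic below is mine, using typical resting rates and reading “several” as three to five cycles; none of these figures come from the paper.)

```python
def viewing_time_s(rate_per_min, cycles):
    """Seconds of continuous face visibility needed to observe N cycles."""
    return cycles * 60.0 / rate_per_min

# Assumed resting rates: about 15 breaths and 70 heartbeats per minute.
print(f"breathing:  {viewing_time_s(15, 3):.0f} to {viewing_time_s(15, 5):.0f} seconds")
print(f"heart rate: {viewing_time_s(70, 3):.1f} to {viewing_time_s(70, 5):.1f} seconds")
```

On those assumptions, a drone would need an unbroken, face-on view of each subject for roughly 12 to 20 seconds to read breathing, a long time for a hovering aircraft watching a moving crowd.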

These are conditions that will be hard to meet in a real-world setting, which will likely involve a moving drone camera trying to read the vital signs of groups of people walking from place to place, at distances substantially greater than just 3 meters. This screening technology—which Draganfly’s Chell has explicitly said is designed to collect “population samples”—will need to address these not inconsiderable technical challenges before it hits the market.

To this point, I asked Chahl whether Draganfly’s vital sign–monitoring drone technology has been validated in real-world settings that might more closely resemble busy public places. He told me it is “very hard to publish about uncontrolled/non-experimental settings,” although he thinks that the locations his research team chose for its studies (prior to the Draganfly partnership) were “a reasonable proof of concept—outdoors, in gardens, fields etc.” He also told me the team “will be doing larger studies both in Australia and the U.S.” and that he thinks newer drone technologies will be able to overcome the technical barriers at hand. Draganfly has yet to release papers, reports, or other materials that would allow independent assessment of whether its technology can really capture the population COVID-19 data it says it can. Two more pilot projects, according to Chell, are planned for the near future.

Perhaps all this comes off as an unfair critique of well-intended technologists who are just trying to innovate society out of global lockdown. But it matters whether technology does what its sellers say it does—and it matters even more when it affects our health and freedom.

Disaster often prompts companies and researchers to release largely untested technologies, concepts, and medical interventions into the wild, operating under the argument that there isn’t enough time to apply the usual amount of rigor and scrutiny to innovations that might help save lives. Disaster response experts refer to this as “humanitarian experimentation,” in which ambitious technologists test new innovations on vulnerable and disempowered people. Thanks to the pandemic, the entire world is now subject to this disquieting “what could go wrong” ethos.

While Chell claimed in an interview that the biometric data Draganfly’s drones collect is all “anonymous,” the company’s promotional videos for its technology are shot at very low altitudes and clearly show individuals’ features. That’s not anonymized population data; that’s data that its users, including police, could easily use to identify and detain people. Even if the company finds a way to better obscure individual information in its drone data, such that users can’t deanonymize it, that doesn’t mean the data is harmless. What happens if the drone observes (to an unclear degree of certainty) that a poorer neighborhood, or one dominated by a minority group, has higher rates of COVID-19 symptoms? Will those people collectively receive extra health support, or will they be met with extra surveillance and blame?

Ethical and civil rights issues like these are of enormous consequence in a future full of fever-detecting drones. “To diagnose or even partially diagnose people unwillingly feels wholly unethical to me at the moment, in our society and in the type of medicine we practice,” says Michael Mina, an assistant professor of epidemiology at the Harvard T.H. Chan School of Public Health. “What do you do with that information? … We don’t have a society that’s grappled with that in any way at scale.”

Early data already shows that black Americans are being arrested in disproportionately large numbers for social distancing violations. “What we have learned through long experience is if you have bogus tech that doesn’t work well and generates random and ambiguous results, that often ends up turning into racial and other discrimination,” says Jay Stanley, a senior policy analyst with the ACLU. He points to the example of how black Americans “fail” lie detector tests disproportionately often, likely due to bias on the part of the examiner.

Facial recognition algorithms and other computer-vision methods are well known to read people of color and women less accurately. In a recent interview with VentureBeat, Chell himself admitted that there were some “challenges” with Draganfly’s technology and that “darker skin tones and different types of lights and the rest of it can create some problems.”

There are a few things we can do to ensure that fever-detecting drones aren’t rolled out in unethical and resource-wasting ways. Authorities need to be absolutely certain that the new and impressive-sounding pandemic tech that they use (and pay taxpayer money for) actually works. If public agencies want to study unproven fever-drone technologies, then those tests should have no connection whatsoever to real-world enforcement activities. This research should be accompanied by transparency about how the authorities who buy these drones are interpreting the data that they collect, how they’re protecting people’s privacy, and what happens to individuals—and to groups of people—who are flagged. Finally, drone companies should be very wary of using language that makes COVID-19-finding drone technology sound more proven than it really is. Promoting these solutions to unknowing customers is a suspect practice in the best of times. It is downright unethical during a disaster. Drone sellers have long struggled to win public trust in their technologies, and rushing unproven systems to market during a disaster may make people even more suspicious of drones than they already are.

In our increasingly desperate present, spending time and money on untested tech like COVID-19-detecting drones is time and money that could be funneled toward more promising measures. Until there’s data that proves their worth in the real world, fever drones will be little more than wishful thinking, objects that make us feel more in control of the pandemic than we actually are.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.