A new study that infected willing participants with common cold and flu viruses provides the most rigorous evidence yet that wearable health monitors could predict infections, even before a person starts experiencing symptoms.

If the wearables can similarly predict infections in real-world conditions, the technology could add to existing disease surveillance and testing methods. But unresolved issues with standardizing wearables and testing them on diverse populations raise questions about their immediate utility.

The new study, published Wednesday in JAMA Network Open, took aim at a research problem that has plagued other efforts to study wearables as infection detectors: small sample size. In two previous studies that looked at wearable devices like Apple Watches and Fitbits, tens of thousands of enrolled individuals corresponded to around 50 cases of Covid-19. In these studies and similar ones, it wasn’t clear when infected people first contracted the virus, further constraining the possibility of making predictions.

Here, researchers recorded biometric data from young people before and after they were inoculated with H1N1 influenza and human rhinovirus. By comparing each participant to their uninfected baseline metrics, researchers detected infection with up to 92% accuracy and distinguished between mild and moderate disease with up to 90% accuracy.

“The beauty of the challenge study is that we know the time of exposure to the pathogen, which is not true in these real-world studies. That makes this study uniquely powerful,” said Jessilyn Dunn, a biomedical engineering professor at Duke University and the senior author of the new study.

The study participants were given an E4 wristband, made by the company Empatica, which records information on a wearer’s heart rate, skin temperature, movement, and electrodermal activity — a measure of electrical activity on the skin. Once exposed to the viruses, they reported daily symptoms and researchers quantified their viral shedding.

A machine learning algorithm then predicted the presence and severity of infection based on how each participant’s biometric data changed after exposure relative to their baseline. When data from both viruses were pooled, the algorithm correctly predicted infection just 12 hours after exposure with 78% accuracy, well before symptoms typically appeared (median onset was 48 hours for flu and 36 hours for rhinovirus). Accuracy rose to 92% for flu alone at 24 hours after exposure, and to 88% for rhinovirus at 36 hours, the median time symptoms began. The model also predicted severity of illness (moderate, mild, or asymptomatic/noninfected) with at least 80% accuracy 12 hours after viral exposure.
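For readers curious what “comparing each participant to their own baseline” can look like in practice, here is a minimal, hypothetical sketch, not the study’s published pipeline: it z-scores post-exposure wearable signals against each person’s pre-exposure readings and feeds simple summary features to an off-the-shelf classifier. The column names, feature choices, and model are all assumptions for illustration.

```python
# Illustrative sketch only -- NOT the study's actual pipeline.
# Assumes a pandas DataFrame `df` with one row per participant-hour, columns
# for E4-style signals (heart rate, skin temperature, electrodermal activity,
# movement), a `participant` ID, a `baseline` flag marking pre-exposure
# readings, and an `infected` label per participant.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

SIGNALS = ["heart_rate", "skin_temp", "eda", "movement"]

def baseline_deviation_features(df: pd.DataFrame) -> pd.DataFrame:
    """Express each post-exposure reading as a z-score against that
    participant's own pre-exposure baseline, then summarize the window."""
    rows = []
    for pid, group in df.groupby("participant"):
        base = group[group["baseline"]]
        post = group[~group["baseline"]]
        mu = base[SIGNALS].mean()
        sigma = base[SIGNALS].std().replace(0, 1.0)  # avoid divide-by-zero
        z = (post[SIGNALS] - mu) / sigma
        # Collapse the post-exposure window into simple summary statistics.
        feats = {f"{s}_mean_z": z[s].mean() for s in SIGNALS}
        feats.update({f"{s}_max_z": z[s].max() for s in SIGNALS})
        feats["infected"] = int(post["infected"].iloc[0])
        feats["participant"] = pid
        rows.append(feats)
    return pd.DataFrame(rows)

# Example usage, with a real DataFrame in place of `df`:
# features = baseline_deviation_features(df)
# X = features.drop(columns=["infected", "participant"])
# y = features["infected"]
# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# print(cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean())
```

The key design choice the study’s framing points to is the per-person normalization: deviations are measured against a participant’s own uninfected readings rather than a population average, which is what makes the known exposure time and pre-exposure baseline data so valuable.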

The study design was particularly useful for studying presymptomatic and asymptomatic cases, said Jennifer Radin, an epidemiologist at the Scripps Research Translational Institute who was not involved in the research. Being able to recognize these infections would be “a very useful public health tool” for a disease like Covid-19, she added.

In fact, the E4 wristband is already part of a Covid-19 detection system from Empatica that has been backed and deployed by the U.S. Army. The system, called Aura, couples the wearable with an algorithm and app to give users a daily risk assessment, and the company plans to apply for Food and Drug Administration approval to use the device to diagnose Covid-19, in parallel with an ongoing prospective clinical trial.

Still, the authors of the new research stressed that further work is needed to evaluate any wearable as a Covid-19 detector. Biometric data follow a circadian rhythm, making it difficult to tease out signals from noise without enough baseline readings, said Emilia Grzesiak, the first author of the study and a data scientist at Lawrence Livermore National Laboratory. Machine-learning algorithms will also have to be trained on samples that are representative of the devices’ target populations, she added: For instance, people tend to have elevated skin temperature when menstruating, which could result in false positives if not factored into an algorithm’s training.

Additionally, commercial wearables vary in performance: some use light-based optical sensors to collect biometric data, and these have been shown to perform worse on people with darker skin. Representation must also cross the “digital divide” and include people who have difficulty accessing and affording wearable technology, said Geoffrey Ginsburg, a study co-author and director of the Duke Center for Applied Genomics and Precision Medicine.

Even if approved and standardized, biometric sensors would not be a replacement for diagnostic tests, but rather a way to augment and prioritize their allocation. Ginsburg described acting on sensor readings as a way for a person to “stack the deck,” treating them as a sign to get a viral antigen or PCR test.

“I don’t think that this is going to be a standalone tool, but when we’re talking about resource allocation and triage, this can provide some very useful insight for how to go about prioritizing who should be tested and who should get what care when,” Dunn said.
