Eating garlic. Drinking bleach. Using hand dryers. None of those are proven cures for the coronavirus, but this kind of misinformation has been spreading online, in some places seemingly faster than the disease itself.
Internet giants like Facebook, Google, Twitter and TikTok have all pledged to promote fact-based information on the epidemic. And the World Health Organization says it is partnering with technology firms to push out authoritative data. It won’t be easy, experts say.
If you were online when news of the virus broke, you may have seen that … bat video. We won’t link to it here, but the video, of a Chinese woman eating a whole cooked bat, went, well, viral. Some snarky commentators suggested the video showed how the whole thing started.
The thing is, that video was filmed outside China, on the Pacific island of Palau. In 2016.
Internet companies say their fact-checkers are busy weeding out posts with false information on the cause of the disease. But mere humans are not enough for a problem of this scale.
“It’s clear that the flood of data around this is of a scale that we need the tools that technology can offer just to get our hands around it,” said Ben Oppenheim, senior director at the health data science company Metabiota.
One of those tools is artificial intelligence: using machines to process data and learn to recognize, over time, what’s important. AI algorithms can suss out which web pages tend to be accurate, which words are sensational or panicky, which online sources are deemed authoritative and which posts likely come from robots rather than humans.
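The companies don’t publish those algorithms, but one of the signals mentioned above — sensational or panicky wording — can be illustrated with a toy scorer. This is a minimal sketch, not any platform’s actual system; the word list, weights and threshold are invented for illustration.

```python
# Hypothetical sketch: score a post for sensational language, one of the
# signals the article says AI systems weigh. Everything here is a toy:
# real systems learn these features from data rather than hard-coding them.

SENSATIONAL_TERMS = {"cure", "miracle", "secret", "plot", "shocking"}

def sensationalism_score(text: str) -> float:
    """Fraction of words that match a (toy) sensational-term list."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SENSATIONAL_TERMS)
    return hits / len(words)

def flag_for_review(text: str, threshold: float = 0.15) -> bool:
    """Flag a post for human review if the score crosses a cutoff."""
    return sensationalism_score(text) >= threshold
```

In practice a learned classifier would combine many such signals — source reputation, bot-likeness, link patterns — rather than a single word list.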
One example: using AI to steer searches for information on vaccines.
“So that when people are searching for information on social media sites, [seeing] whether vaccines are safe or have side effects,” Oppenheim said, “they get routed to scientific authorities instead of purveyors of misinformation.”
This may sound easier than it actually is. Sure, there are clear cases of fake news, as in the suggestion that coronavirus can be “cured” with ultraviolet lamps or sesame oil. Or that the virus was created as part of a plot to thin out the world’s population. Or videos created before the virus was even detected.
“Those kinds of links are easy to take down through machine learning,” said Sarah Kreps, professor of government and technology fellow at Cornell University.
Still, there are risks to internet companies built on reputations for sharing the world’s information rather than censoring it. What if they take down information that turns out to be true? What if they block a legitimate expert?
“As soon as these companies do take more interventionist measures,” Kreps said, “they’re always raked over the coals by counterarguments that they’re limiting free speech.”
Kreps described one of Google’s strategies:
“One thing they have been known to do — that there was some sense they could do in this case — is essentially bury links on page four of a Google search, which — no one really goes beyond page two.”
We contacted Google, which pointed Marketplace to a company policy of de-emphasizing “lower quality” results.
“Lower quality or outright malicious results (such as disinformation or otherwise deceptive pages) are relegated to less visible positions,” the policy states.
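The “less visible positions” idea can be sketched as a re-ranking step: results flagged low-quality by an upstream classifier sink below the rest, while relative order is otherwise preserved. This is an illustrative sketch only — the field names are invented, and Google’s actual ranking is far more complex.

```python
# Hypothetical sketch of relegating low-quality results: sort by a
# (quality bucket, relevance) key so flagged items fall to the bottom.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float      # higher = more relevant to the query
    low_quality: bool     # set by an upstream misinformation classifier

def rerank(results: list[Result]) -> list[Result]:
    # False sorts before True, so trusted results come first;
    # within each bucket, higher relevance ranks higher.
    return sorted(results, key=lambda r: (r.low_quality, -r.relevance))
```

The design point: demotion, unlike outright removal, keeps the content reachable while making it much less likely to be seen — which is exactly the “page four” strategy Kreps describes.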
Artificial intelligence, used effectively, not only can identify false news; it can emphasize and push out authoritative science. At the website Buoy Health, which users can find by searching keywords associated with a disease and its symptoms, individuals can get answers about the coronavirus from an AI chatbot — answers based on science and protocols from the Centers for Disease Control and Prevention.
“It starts to piece together really important details from your story,” said Dr. Andrew Le, Buoy Health CEO. “Whether you’ve traveled to China recently. Whether you’ve been around someone who has been confirmed as having novel coronavirus. What symptoms do you have?”
The algorithm even suggests next steps for patients, or how to deal with insurance coverage. And if the outbreak takes significant hold in a big American city, Buoy Health’s AI will target people there.
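The kind of intake logic Le describes — combining travel history, contact history and symptoms into a suggested next step — can be sketched as a simple decision rule. This is a made-up illustration, not Buoy Health’s product or the CDC’s actual protocol; the categories and advice strings are invented.

```python
# Hypothetical triage sketch: fold the chatbot's three questions
# (travel, contact, symptoms) into a suggested next step.
# The rules and wording are invented for illustration only.

def triage(traveled_to_outbreak_area: bool,
           contact_with_confirmed_case: bool,
           has_fever: bool,
           has_cough: bool) -> str:
    exposed = traveled_to_outbreak_area or contact_with_confirmed_case
    symptomatic = has_fever or has_cough
    if exposed and symptomatic:
        return "call a healthcare provider before visiting"
    if exposed:
        return "self-monitor for symptoms"
    if symptomatic:
        return "symptoms without known exposure; follow standard care advice"
    return "no action suggested"
```

A production chatbot would weigh many more questions and route through clinically validated protocols, but the shape — narrowing from a story to a recommendation — is the same.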
“We would buy ad words in that area for searches consistent with coronavirus,” Le said. “Whether that be ‘coronavirus near me,’ ‘urgent care near me,’ ‘flu-like symptoms.'”
In the end, though, AI and public health experts said the use of this technology to fend off fake news is in its infancy. Many questions remain: What about people who don’t trust the CDC? Or the World Health Organization? Who is the best digital “influencer” for this epidemic?
“It’s not just going to be pure technological solutions,” Metabiota’s Oppenheim said. “It’ll be maybe experiments that are run to try to figure out which kinds of messages and which kinds of messengers are going to help settle people.”