Healthcare Perspectives 360
The Medical Misinformation Infodemic
Today we’re talking about the sociological drivers behind medical misinformation occurrence and spread.
During this episode you will:
- Hear the sociological science behind the spread of medical misinformation
- Learn about the scope of influence that social media companies have regarding medical misinformation
- Explore how medical misinformation may mutate and spread across multiple social media platforms
Coverys companies are a leading provider of medical professional liability insurance for medical practitioners and health systems. Coverys provides a full range of healthcare liability insurance options, advanced risk analytics, and best-in-class risk mitigation and education resources to help clients anticipate, identify, and manage risk to reduce errors and improve outcomes.
Med-IQ, a Coverys company, is a leading provider of clinical and risk management education, consulting services, and quality improvement solutions, empowering individuals at every level of the healthcare delivery system with the knowledge they need to continuously improve provider performance and patient outcomes.
The information provided through this activity is for educational purposes only. It is not intended and should not be construed as legal or medical advice. Opinions of the panelists are their own and do not reflect the opinions of Coverys or Med-IQ.
Music and lyrics: Nancy Burger and Scott Weber
[music]
Geri Amori, PhD, DFASHRM: Hello, everyone, and welcome to Healthcare Perspectives 360, a podcast dedicated to exploring contemporary healthcare issues from multiple perspectives. I’m Geri Amori, and today I’m joined by Michelle Mello, JD, PhD, a health law scholar at Stanford, John Robert Bautista, RN, MPH, PhD, a postdoctoral teaching fellow at University of Texas, Austin, focusing on health misinformation, and Brian Southwell, PhD, who is a scientist focusing on science misinformation and the public sphere at RTI International. Welcome.
Today, we’re talking about the sociological drivers behind medical misinformation occurrence and spread. So, we’ll begin. Robert, let’s start with you. Medical misinformation is everywhere. There are people who, despite millions of deaths, truly believe that COVID was a hoax. What is driving the intensity and impact of medical misinformation within social media?
John Robert Bautista, RN, MPH, PhD: Well, we have lots of factors to consider here. I think one of the most important ones is that, even before COVID, we knew that misinformation easily goes viral on social media. For instance, there is a study by a group of scientists from MIT, published in Science in 2018, wherein they found that misinformation in the form of false news spread faster than the truth, at least in the case of Twitter. And the problem is that false news spreads much faster simply because it’s much more novel. It triggers people’s emotions like fear, disgust, and surprise. So that’s one factor.
Another factor would be people’s media habits. For instance, a study from November 2021 by the Kaiser Family Foundation found that those who trust news from Newsmax, One America News, and Fox News hold many misconceptions about COVID-19. People’s media habits really affect how they perceive reality. And another aspect related to that is people’s political beliefs. For instance, there is a September 2022 report by the National Bureau of Economic Research wherein they found a link between political party affiliation and vaccination uptake.
Specifically, Republican-leaning counties (in Ohio and Florida, in the context of this research) had higher COVID-19 deaths than Democrat-leaning counties. And in relation to the Kaiser Family Foundation study that I mentioned earlier, belief in misinformation is much more common among unvaccinated adults and Republicans. So there are lots of factors at play, but those are some of the things that really drive the spread of misinformation on social media.
Amori: That’s a lot of things that are going on in our country, right? So, Brian, how do you think this all gets started? What is the genesis of this medical misinformation in the social media?
Brian Southwell, PhD: Yeah thanks, Geri. You know, I think it’s important to keep in mind actually that misinformation is not anything new. We’ve been dealing with deception and inaccuracy for just about as long as humans have been able to talk to one another. And even if you just look at the history of our media systems, there are lots of prominent examples that we can point to if we just think about medical misinformation, you know, as one arena. Look back at the, you know, the turn of the 20th century, the beginning of the 20th century.
And at that point in time, you know, there was a lot of hue and cry and concern over the fraudulent promotion of snake oil liniment. The whole phrase “snake oil salesman,” you know, was actually rooted in historical incidents. And that actually, in a lot of ways, led to the regulatory oversight that we currently have in this country with regard to medical product advertising. So we’ve been dealing with this for quite some time.
I think that, certainly in the last few years, we’ve seen some dramatic examples during the pandemic that’ve been problematic. And I think it’s also important to keep in mind that we currently live in an information environment, you know, where we are connected to each other through social media. So, any one piece of misinformation that you or I get a hold of and want to share, we can quickly do that with a thousand of our closest friends on social media. That’s different than before.
Amori: Yeah, that’s definitely different than before. I mean there was misinformation about the flu of 1918. There was misinformation about food production, and that’s where the FDA came from. So it’s always been there, but what you’re saying is it’s spreading more quickly. So it seems like social media has really played a big role in this. So, Michelle, social media companies currently seem to have largely unrestricted power to regulate the content on their platforms as they see fit. To what extent does our dearly held freedom—like the freedom of speech—support or not support the perpetuation of misinformation? Is it that it’s free speech, so I can say anything I want, and it can be all wrong? Can you address that?
Michelle Mello, JD, PhD: Well, you’re right that the constitutional protection for freedom of speech, our First Amendment, doesn’t apply to social media companies; it only applies to so-called “state actors,” government officials and agencies of the government. So from a legal perspective, they are pretty much free to allow whatever information they choose either to spread or to cease to appear on their platforms. They could suppress more speech, but the incentives really are not to.
Part of that is that they, you know, may truly believe that the more speech the better, and that’s not a deviant belief in the American polity; it’s a core tenet of the Supreme Court’s jurisprudence, as well. Part of it may be that they are not sure they know how to police misinformation effectively, and so they would prefer to focus on the worst cases and let gray areas kind of be heard in the marketplace of ideas. And I think part of it, we understand, is that just as a technical matter, they cannot effectively police 100% of speech. That is not possible for the number of humans they have with butts in chairs doing content moderation, and the algorithms that they use are pretty blunt instruments.
So, there’s a constellation of things for social media platforms that converge on them, you know, making efforts to get rid of the worst misinformation but not being very thorough about it. Now the First Amendment does, of course, apply to government, and that’s the reason why we don’t have laws that prohibit posting or spreading misinformation online. And that’s not universally true around the globe; there are countries who have chosen to use the law—even criminal law—to try to address this problem of misinformation by making it a crime to spread it. But we can’t do that.
You know, if we could, those kind of laws would really help social media platforms do their job by setting out, among other things, agreed-upon standards. You know, the government sets a standard for what constitutes misinformation, and that helps address some of the concerns about what they ought to be doing. But that’s just not going to happen. Some people are surprised to hear that the First Amendment actually protects misinformation in many forms from suppression by the government. There is a 2012 case that the Supreme Court heard called United States versus Alvarez in which it struck down a law that made it a criminal offense to lie about having received military medals.
There’s a guy who had claimed that he was a valorous soldier, and that upset a lot of people, and so some of those people got Congress to pass a law prohibiting that stolen valor, as it was called. And the Supreme Court refused to hold in that case that a statement’s falsity put it entirely outside the realm of First Amendment protection, so it might be protected. Now some kinds of false speech actually can be penalized by the government, and hopefully we all know that lying on government forms or in a court of law, impersonating a government official, committing commercial fraud—these are all forms of speech that can be regulated.
But it’s a pretty limited list. I think the government’s general feeling is that false statements can often be valuable in allowing people to challenge widely held beliefs without fear of repercussions, and that things could go pretty wrong if we gave the government a wider berth to regulate them. Where free speech intersects here is that there are very few legal controls on the spread of misinformation, and there are some good reasons for the law’s absence in this area, but it compounds the problem that social media platforms have in policing because it means that not only is it up to them to do the actual work of policing, they have to set the standards as well, and boy, that’s not easy.
Amori: Boy, you’ve really opened my eyes to a whole different perspective, Michelle. I never thought of it quite that way. I was just thinking: those people aren’t telling the truth; we need to squash it, you know. And boy, I guess we can’t; it’s not that easy. Well, that leads me back to Robert. Has misinformation affected the perception that healthcare professionals are experts? I mean we used to look to our doctors and nurses and trust them to at least be up to date on the science. And now, we hear things that, you know, don’t agree with that. Is that a good thing or a bad thing? Michelle just said that not having only the truth out there is sometimes possibly stimulating in some way. What do you think?
Bautista: Yeah, I mean, misinformation is fueled by the internet and social media, and those platforms gave people access to health information beyond healthcare providers alone. To be honest, we don’t have any published study to answer whether healthcare professionals are no longer considered experts. But what we do have is a 2022 Gallup poll, released in January 2023, wherein nurses, medical doctors, and pharmacists were rated as the top 3 professionals with the highest honesty and ethical standards, so that’s what we know.
Unfortunately, in a study that I did in 2020, healthcare professionals felt that patients recognize their clinical expertise but do not necessarily trust them in terms of providing medical advice, such as advice on vaccination. I think this is one of the concerns that healthcare professionals have. And there’s this recent study on COVID (11:06) published in 2022 wherein 72% of healthcare workers perceived misinformation to be negatively influencing both patients’ and physicians’ decisions to get vaccinated against COVID-19. And around 30% of healthcare workers think that misinformation is an urgent problem that needs to be resolved.
It’s also really important to note that healthcare professionals do think misinformation is the single most important factor influencing unvaccinated patients’ decisions not to get a COVID-19 vaccine. So, in general, misinformation may not necessarily reduce healthcare professionals’ status as experts, but people do not necessarily trust them now, perhaps because they have other people to trust—people who share the same worldview. And that is affected by their media habits, their political identity, and other factors. So that’s what I think about the status of healthcare professionals now as experts and as a trustworthy source of health information.
Amori: But we do know, psychologically, that people tend to believe people who are like themselves. It’s sort of like, if it works for my tribe—you know, this is what the people that I hang out with believe or think—then we tend to go along with it. Humans tend to do that. It’s kind of wired in deeply. But at the same time, we want them to trust the people we’ve set apart to be our experts, which are our healthcare providers, who are very tired right now.
But, you know, that also leads me to wonder, Michelle, about who decides, from a legal definition, what is and what isn’t medical misinformation. Very specifically, we’re thinking about that California law regarding the dissemination of false information to patients. I mean, it feels like a very slippery slope.
Mello: Yeah, I think you’re right about that. And I think that courts agree with you about it. However much of a proponent you might be of regulating this kind of speech in principle, when you start to sit down and think about well what would that look like, you run into some problems pretty quickly. One is that we might not all agree about how demonstrably false something needs to be in order to be considered misinformation and be restricted. Think about vaccine risks, for example. Now, there are some alleged links to health harms that have been persuasively disproven.
And then there are others that have simply not been studied. If I claim that a vaccine was the reason my hair fell out, is that false, or is it just not demonstrably true? And does that distinction make a difference as to whether I should be allowed to say it? A related problem is that for some kinds of claims, especially scientific ones, the knowledge base that bears on their truth or falsity changes over time. And so, something can be misinformation one day and not the next.
And then a third complication is that some people who disseminate false statements know that they are false, while others believe that they are true. So there’s questions about whether, when we define a legal standard, it should include what lawyers would call a “scienter requirement,” a requirement about state of mind. And we do that for some kinds of false speech that we do attach legal penalties to.
Now you asked about the California medical misinformation law. I can talk about that briefly. I should start by saying this law is currently under a preliminary injunction from a court. And so it is not operative in California at the moment, although it was passed. The bill’s called AB 2098, and it was effective in 2023. And that statute provided that it shall constitute unprofessional conduct for a physician or surgeon to disseminate misinformation or disinformation related to COVID-19, including false and misleading information regarding the nature and risks of the virus, its prevention and treatment, and the development, safety, and effectiveness of COVID-19 vaccines.
And then the law goes on to define misinformation as false information that is contradicted by contemporary scientific consensus contrary to the standard of care. And it defines disinformation as misinformation that is deliberately disseminated with malicious intent or an intent to mislead. Not surprisingly, this law was legally challenged. We have two decisions that reached opposite conclusions about how workable that standard was. Because the state hasn’t chosen to appeal, the one that effectively is controlling in California was the unfavorable one for the lawmakers.
And the judge in that case expressed several worries about how the law defined misinformation. One was that the phrase “contemporary scientific consensus” was problematic because there’s no established meaning for that phrase in the medical community, and it’s not clear what a “consensus” means or what “contemporary” means. Another problem the judge had was with the phrase “contrary to the standard of care.” Now, that actually does have an established legal meaning because that’s what we use to define medical malpractice/medical negligence. But the way the act referred to it was kind of incoherent.
And so overall, what was running through the opinion was this fear about chilling: that when you are vague about what the legal standard is for misinformation, not only do you get rid of the speech that you’re targeting, but you will chill all kinds of misinformation-adjacent speech that you didn’t intend to chill. Because people are worried about legal penalties—and boy, are doctors ever worried about legal penalties; that’s one thing we really know about them—the effect will go farther than you intended and farther than the First Amendment would allow. So the courts, in evaluating California’s legal standard, really agree with you about the slippery slope.
Amori: Okay. All right, well, that’s really interesting that even the law has a hard time with this. So, Brian, you live in the media world here. Imagine a piece of news that has an element of truth and a lot of misinformation. Then it seems to take off and grow into bigger misinformation. How does that mutation process happen? Can you tell us quickly?
Southwell: Yeah so, Geri, you know, here I actually want to go back to the study that Robert raised, because I think that study generated a lot of headlines around this notion that false information travels further and faster than the truth. And I think it’s really important for us not to misinterpret, you know, those study results, because there’s nothing magic about misinformation per se. It’s really important for us to keep in mind that misinformation, in order to be shared by people, has to resonate with their everyday lives, has to be part of everyday conversation.
And the folks that generate misinformation often have the advantage of being able to frame it to be as sensational and compelling as they want. They don’t have to play by the same rules as the folks putting together, you know, carefully peer-reviewed evidence. And so that’s really important to keep in mind—that basically misinformation has been framed to travel further and faster. There’s no magic to it. It’s not that people are somehow misled by the falsity of it; it’s really just information that’s engineered, you know, to travel. And I think that’s often what’s happening and what we see happen on social media when things mutate in that way.
The other quick thing to keep in mind is that, you know, often what people are sharing kind of saws the rough edges off of the original source. They might just be sharing a headline or a piece of information, and we know, kind of through the telephone game, that when things get 2 or 3 steps removed from their original source, they can sometimes, you know, mutate in that particular way, too. Nothing magic, but certainly a really important phenomenon to keep track of.
Amori: Thank you. That’s good information. All right. We only have a couple of minutes left, so I’d like you each, if you would, to take no more than a couple of minutes and share with our audience: what is the one thing you want them to take away from today’s conversation, from your perspective? So why don’t we start with you, Brian.
Southwell: Sure. I’ll just reiterate that point. I think it’s really important to keep in mind that there is nothing magic about misinformation even though it is a prevalent, you know, concern for us. In order for misinformation to travel, it needs to be attractive and compelling, you know, to people. And so, I think that’s important for us to keep in mind that many times misinformation is fulfilling some sort of need, some sort of function for people. It’s not just magically, you know, spreading. And I think that really will help us better understand misinformation.
Amori: Good, thank you. And Michelle.
Mello: I think from a legal perspective, my takeaway is that there are real and serious concerns about line drawing here when we think about trying to regulate misinformation. And those concerns are especially serious for scientific information because science is always evolving. But I don’t think these concerns are totally intractable because there are other areas where we do regulate scientific speech.
For example, the FDA says companies can’t make unfounded health claims about their products. State attorneys general will come after you for marketing, you know, sham cures and making false statements about what those products can do. There are ways of thinking about this problem. And although I think we have to be very careful, I don’t think we need to throw up our hands at the prospect of ever being able to regulate certain kinds of speech—for example, speech that makes demonstrably false statements.
Amori: Okay, thank you. And Robert, what is your takeaway for our audience today?
Bautista: COVID-19 will go down in history not only as a pandemic, just like other viruses before it, but also as an infodemic, wherein a strong spread of misinformation led to vaccine-preventable deaths. I think that’s the takeaway I would share.
Amori: Thank you. Thank you—all of you—this has been an amazing discussion. And I want to thank our panelists for joining us and participating. So I hope you, our audience, have found this valuable and interesting. Thank you for joining us, and we’ll see you again next time on Perspectives 360.
[music]