Perceptions of Misinformation on Social Media Related to COVID-19

Presenter(s)

Tammy Swenson Lepper and Heidi Hanson (student co-presenter)

Abstract

Misinformation on social media was widespread during the COVID-19 pandemic. To understand how a convenience sample of people in the United States perceived and defined misinformation, and how those definitions and perceptions relate to ethical issues, we conducted a survey, approved by the WSU IRB, in the spring of 2022. We asked participants to define misinformation in their own words and to indicate whether there should be consequences for social media organizations or individuals sharing misinformation. We also asked them which social media platform they believed had the most misinformation, along with questions about several other news-related issues. This mixed-methods study, combining the results of both open- and closed-ended items, provides a breadth of information about these issues.

Based on a content analysis of participants’ (N = 300) definitions of misinformation, we grouped their definitions into six categories: 1) intentionally lying about or exaggerating information, or information that is biased or pushes one side’s agenda; 2) either intentionally or unintentionally sharing information that isn’t true; 3) providing information that is based on opinion rather than on credible evidence, facts, or science, or that lacks credible sources; 4) information provided by a celebrity or famous person outside their area of expertise; 5) offering no definition, only examples, such as saying that misinformation is anything from the political parties or a politician; and 6) information that is perceived as fake or false by the audience even though it might be true. The first three definitions were the most common and share commonalities with scholars’ definitions of misinformation.

Using thematic analysis, we categorized responses to the question about whether social media companies should face consequences into four themes: 1) no, social media companies are private organizations, and they should not interfere with the postings of their users; 2) yes, social media companies should face consequences for posting blatant falsehoods; 3) maybe, but there is a tension between the free speech of users and social media companies and the consequences of misinformation for users; and 4) the companies (not the social media platforms themselves) that post misinformation on social media should be punished.

Participants (N = 260) believed that Facebook (n = 174, 67%) was the most common source of misinformation, followed by Twitter (n = 29, 11%), TikTok (n = 26, 10%), Instagram (n = 19, 7%), and other social media platforms (n = 12, 5%).

In sum, our study found that ethical tensions between free speech and protecting the welfare of others were prominent in early 2022 when people were thinking about misinformation. Missing from most participants’ discussions was the role of algorithms and troll farms, which may have a larger effect on the information people see than participants were aware of.

College

College of Liberal Arts

Department

Communication Studies

Campus

Winona

First Advisor/Mentor

Tammy Swenson Lepper

Location

Oak Rooms E/F - Kryzsko Commons

Start Date

4-18-2024 11:00 AM

End Date

4-18-2024 11:20 AM

Presentation Type

Oral Presentation

Format of Presentation or Performance

In-Person
