Helping Students Identify Hallucinations and Misinformation in Common Public Issues

Abstract

This presentation will demonstrate how I blend instruction in identifying misinformation, writing coherently, and detecting AI hallucinations into a single assignment. I'll explain how I build purposeful hallucinations into a report by drawing on current cultural or political topics in which misinformation factors heavily. I'll review a report for which I asked ChatGPT to write a CDC-style report on vaccine safety, then had ChatGPT write a similar report from the vantage point of Robert Kennedy, Jr. and the anti-vaxx movement. I then blended the two reports, planting small segments of anti-vaxx argument within the larger pro-vaccine report. Students were told the report was written by AI, and their task was to edit, expand, or verify the information. I'll detail how students generally identified the hallucinations, but did so by noticing incoherent shifts in style or examples that didn't match claims or main ideas. Students did not engage heavily with, or debunk, the health misinformation itself, such as the AI hallucination claiming that terrain theory, a debunked theory favored by RFK-style anti-vaxxers, was an adequate replacement for germ theory. Thus, the talk will demonstrate how to create hallucinations that students must identify as part of their AI education, but I'll also show how students relied more on coherence and style cues than on fact-checking to identify those hallucinations.

College

College of Liberal Arts

Department

English

Location

Kryzsko, Solarium, Winona State University, Winona, Minnesota; United States

Start Date

4-23-2026 9:00 AM

End Date

4-23-2026 12:00 PM

Presentation Type

Event
