Abstract
A student submits a paper that reads smoothly and confidently. With generative AI, that polish no longer proves learning. Higher education now needs course designs that keep student thinking visible and assessable.
This literature review synthesizes 39 empirical studies (2024 through early 2026) on generative AI in higher education. It treats 22 learner-facing classroom studies as design cases and codes task type, permitted AI use, instructor guardrails, and measured outcomes. It uses the remaining studies to situate adoption, policy, and academic integrity risks.
Across the evidence, outcomes follow the design more than the tool. AI supports learning when it expands guided practice and feedback while students still retrieve, explain, and justify their work. Unrestricted use can weaken durable understanding when it replaces retrieval practice and explanation. Studies of AI-generated feedback show stronger results when students revise against clear criteria and document what they accepted, rejected, and changed. Adoption remains uneven, which raises the value of shared routines, disclosure norms, and lightweight process evidence.
Based on these patterns, the review offers four practical design patterns for teaching practitioners: draft-then-audit writing, verification as a graded step, product-plus-process grading, and critique-based “teach the model” tasks. Together, these patterns help educators integrate generative AI while protecting learning, integrity, and instructional clarity.
Recommended Citation
Charles, Jeanne (2026) "Empowering Educators with Generative AI in Higher Education: Design Patterns for Visible Learning," Essays in Education: Vol. 32: Iss. 1, Article 9.
Available at: https://openriver.winona.edu/eie/vol32/iss1/9
Primary Author Bio Sketch
Jeanne Charles is a doctoral student in educational technology at Boise State University. Her research explores how artificial intelligence can enhance learning and strengthen performance across education and business settings.
