Allison Raymundo1, Nicole DiVittorio2, and Heather Heiman3
1BS, Medical student, University of Illinois College of Medicine, Chicago, United States
2MEd, Educational Program Evaluation Coordinator, University of Illinois College of Medicine, Chicago, United States
3MD, Senior Associate Dean for Medical Education, Department of Medicine and Medical Education, University of Illinois College of Medicine, Chicago, United States
ABSTRACT
Background: Student evaluations are a critical tool for improving medical education, yet low response rates limit their utility. While QR codes have been used in various educational contexts, their role in promoting continuous student engagement with curriculum evaluation, particularly in the preclinical setting, remains underexplored.
Methods: We conducted a cohort comparison study to evaluate whether placing QR codes and hyperlinks on lecture slides could increase student participation in routine course evaluations. The intervention group, the Class of 2027 (enrolled in 2023), received this access enhancement during the latter half of the academic year, while the Class of 2026 (enrolled in 2022) served as a historical control. Weekly evaluation response rates and the number of written comments per lecture were compared between groups using independent t-tests, and effect sizes were calculated with Cohen’s d.
Results: Baseline comparison showed no significant differences between groups. During the intervention period, the QR code group demonstrated significantly higher response rates (12% vs. 7%) and more written comments per lecture (4.3 vs. 1.6) compared to the historical control, with p < 0.001 for both. Effect sizes were large (Cohen’s d = 1.78 and 1.90, respectively).
Discussion: This is the first published study to demonstrate that QR codes integrated into preclinical lectures can meaningfully increase both the response rate and the amount of written feedback. The intervention’s simplicity, scalability, and effectiveness suggest it is a promising strategy for improving feedback from students in medical education.
Key Words: evaluation, medical student, assessment, medical school, undergraduate medical education, feedback
Date submitted: 18-May-2025
Email: Allison Raymundo (araymu2@uic.edu)
This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.
Citation: Raymundo A, DiVittorio N, and Heiman H. Improving student engagement in curriculum evaluation: A QR code intervention to boost response rates and feedback quality. Educ Health 2025;38:431-434
Online access: www.educationforhealthjournal.org
DOI: 10.62694/efh.2025.345
Published by The Network: Towards Unity for Health
Student evaluations of the curriculum are essential in medical education, providing critical feedback for continuous improvement. However, obtaining sufficient feedback remains a challenge due to low response rates and limited constructive comments.1 While Quick Response (QR) codes have emerged as a convenient and low-cost technology to facilitate surveys and feedback, their application for ongoing, longitudinal student feedback within medical education remains understudied. Existing literature predominantly describes QR codes used by instructors to evaluate student performance, as interactive tools within educational sessions, or for immediate evaluation of those sessions, with an overall focus on the clinical curriculum.2–5 Thus, their potential as a continuous feedback mechanism embedded routinely into preclinical curricula warrants further exploration. This cohort comparison study investigates whether incorporating QR codes into regular lectures can enhance student engagement with curriculum evaluation at a U.S. medical school.
At the study institution, the preclinical curriculum includes seven systems-based courses, called Blocks. Students are encouraged to complete weekly online evaluations listing each session from that week, including asynchronous modules, in-person lectures, and remote synchronous lectures. For each session, students rate lecture content and delivery on a 1–5 Likert scale and may provide written comments. Students can open and revise the evaluation form multiple times during the week, but final submission is expected at the end of the week. While participation is optional and anonymous, lecturers and support staff encourage students to provide feedback through the evaluation form after each class. These evaluations are reviewed by course directors and faculty and are used to improve teaching and curricular content. Despite the importance of this feedback, engagement has been historically low, with response rates tending to decline as the academic year progresses.
To improve participation, a QR code linking directly to the school’s evaluation website was created and placed on the first and last PowerPoint slide of every lecture, along with a hyperlink for accessibility. This allowed students to scan the QR code during class or click the hyperlink after downloading the slides, serving as a visual reminder in every class to complete the evaluation and providing quick, convenient access to the evaluation website. This intervention was selected due to its low cost, ease of implementation, and ability to reduce barriers to feedback by prompting students at the point of instruction. No additional incentives were offered. The goal of the intervention was to increase the number of weekly evaluations submitted (response rate) and the number of written comments submitted for each lecture.
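As an illustration, the short Python sketch below shows one way such a slide QR code could be generated using the open-source qrcode package; the evaluation URL shown is a placeholder, not the institution’s actual link.

import qrcode

# Placeholder for the school's evaluation website; the real URL differs.
EVALUATION_URL = "https://example.edu/curriculum-evaluation"

# Medium error correction keeps the code scannable when projected on a slide.
qr = qrcode.QRCode(
    error_correction=qrcode.constants.ERROR_CORRECT_M,
    box_size=10,
    border=4,
)
qr.add_data(EVALUATION_URL)
qr.make(fit=True)

# Save the image for pasting onto the first and last slide of each lecture.
img = qr.make_image(fill_color="black", back_color="white")
img.save("evaluation_qr.png")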
The QR code intervention was implemented in Blocks 5 (Digestion & Homeostasis) and 6 (Brain & Behavior) for the Class of 2027, while the Class of 2026 served as a historical control. The Class of 2026 had 178 students, and the Class of 2027 had 174 students. A historical comparison was chosen over a within-cohort pre/post analysis because evaluation response rates tend to decrease over the course of the academic year. Comparing end-of-year data to early-year data within a single class would have introduced seasonal confounding, making trends difficult to interpret. In contrast, the historical control allowed us to evaluate the intervention’s effect on participation during a comparable curricular time period.
Two key metrics were used to assess the intervention: mean weekly evaluation response rate and mean number of written comments per lecture. The weekly response rate was calculated by dividing the number of evaluations completed each week by the total number of students in the class. These weekly percentages were then averaged to produce overall response rates for the pre-intervention and intervention periods. Specifically, the pre-intervention rate reflects the mean of 26 weeks across Blocks 1–4, and the intervention rate reflects the mean of 15 weeks across Blocks 5–6. Each curriculum block ranged from 5 to 9 weeks in length.
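To make the calculation concrete, the brief Python example below reproduces the arithmetic described above using hypothetical weekly submission counts; only the class size (174, for the Class of 2027) is taken from the study.

# Hypothetical weekly evaluation counts for one block; class size from the paper.
class_size = 174
weekly_submissions = [30, 24, 21, 26, 19]

# Weekly response rate = evaluations completed that week / total students.
weekly_rates = [n / class_size for n in weekly_submissions]

# Period response rate = mean of the weekly percentages.
mean_rate = sum(weekly_rates) / len(weekly_rates)
print(f"Mean weekly response rate: {mean_rate:.1%}")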
These metrics were compared between the Class of 2027 (intervention group) and the Class of 2026 (control group). Independent t-tests were performed on data from Blocks 1–4 (pre-intervention period) to assess baseline comparability between the two classes and repeated for Blocks 5–6 (intervention period) to evaluate the intervention’s impact. A p-value of < 0.01 was considered statistically significant. Cohen’s d was calculated to determine effect size for both response rates and comments per lecture.
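The sketch below illustrates this analysis under stated assumptions: an independent t-test via scipy.stats.ttest_ind and a pooled-standard-deviation Cohen’s d, applied to hypothetical weekly response rates rather than the study data.

import math
import statistics
from scipy import stats

# Hypothetical weekly response rates; the study data are not reproduced here.
control_rates = [0.08, 0.07, 0.06, 0.07, 0.08]       # e.g., Class of 2026
intervention_rates = [0.12, 0.13, 0.11, 0.12, 0.12]  # e.g., Class of 2027

# Independent two-sample t-test on the weekly rates.
t_stat, p_value = stats.ttest_ind(intervention_rates, control_rates)

def cohens_d(a, b):
    # Effect size: mean difference divided by the pooled standard deviation.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"d = {cohens_d(intervention_rates, control_rates):.2f}")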
This study was reviewed by the institutional review board and determined to be exempt as it involved analysis of de-identified data collected through routine educational program evaluation.
During the pre-intervention period (Blocks 1–4), response rates and comment frequency between the two student cohorts were statistically similar, indicating baseline comparability (p > 0.01). In contrast, significant improvements were observed during the intervention period (Blocks 5–6). Students in the Class of 2027 submitted significantly more evaluations per week and comments per lecture (p < 0.001).
Additionally, for the Class of 2026, the average weekly evaluation response rate declined by 8 percentage points from the pre-intervention period (Blocks 1–4) to the latter part of the academic year (Blocks 5–6). In contrast, the Class of 2027 experienced a smaller decline of only 5 percentage points over the same time frame.
Cohen’s d values for Blocks 5–6 were 1.78 for response rates and 1.90 for comments, both indicating large effect sizes. Table 1 summarizes these results.
Table 1. Student evaluation response rates and comment frequency before and during the QR code intervention
To our knowledge, this is the first published study to demonstrate that integrating QR codes into lecture materials can significantly enhance medical student engagement with course evaluations in a preclinical setting. Although QR codes are by now a familiar technology, their intentional use to facilitate longitudinal feedback collection represents a novel and practical application in medical education.
Consistent with previous research and institutional trends, we observed a natural decline in student participation over the course of the academic year.6 However, the QR code intervention appeared to mitigate this trend, sustaining higher evaluation response rates during the intervention period for the Class of 2027 compared to the control. These results suggest that the QR code intervention can help counteract the natural drop in engagement that often occurs. In addition, the intervention was associated with a significant rise in the number of written comments submitted per lecture. This increase in written feedback enriches the evaluation process by amplifying student voices and capturing a broader range of perspectives.
Overall, these findings indicate that the intervention was associated with higher levels of student engagement, both in frequency and in the likelihood of providing written feedback.
The simplicity, accessibility, and low cost of the intervention likely contributed to its effectiveness. By embedding the QR code directly into the learning environment, we reduced barriers to engagement and introduced a passive but consistent prompt for students to participate in the feedback process. The large effect sizes observed reinforce the intervention’s potential as a meaningful tool to enhance evaluation practices. The approach is easily adaptable and particularly well-suited to institutions seeking scalable, low-resource strategies to strengthen evaluation systems.
This study has several limitations. It was conducted at a single U.S. institution, which may limit generalizability due to factors such as faculty behavior, institutional culture, peer influence, and student motivation. Lectures varied in format (asynchronous, in-person, and remote synchronous sessions), and instructors typically reminded students to complete evaluations during each session. However, these reminders were a routine part of class delivery both before and after the intervention and were not modified as part of the study. Evaluation completion was voluntary, though the institution emphasizes a strong culture of feedback.
While written comments certainly enrich the feedback provided, the presence of comments is not a substitute for assessing their quality. We did not analyze the content of student feedback, so future studies should explore both the quantity and substance of comments. Long-term sustainability is also unknown; students may become desensitized to the QR code over time. Future research should assess whether effects persist across academic years and whether adapting the intervention, such as rotating QR code visuals or incorporating verbal prompts, can sustain engagement. Qualitative studies should also explore student motivations for providing feedback and their perceptions of this intervention. Additional studies across different educational settings are also needed to evaluate the generalizability of these findings.
The QR code intervention appears to have significantly improved student evaluation response rates and comment frequency, offering a simple, low-cost strategy to enhance medical education feedback. This approach can be easily replicated to increase student engagement in curriculum evaluation.
1. Javidan AP, Rai Y, Cheung J, Patel RV, Kulasegaram KM. Six ways to maximize survey response rates: lessons from a medical school accreditation survey in a Canadian setting. Canadian Medical Education Journal. 2023;14(3):107–110. https://doi.org/10.36834/cmej.75380
2. Mo R, Wright E, MacGarrow I, Pathak S. Quick response codes for virtual learner evaluation of teaching and attendance monitoring. Canadian Medical Education Journal. Published online March 22, 2021. https://doi.org/10.36834/cmej.71708
3. Snyder MJ, Womack JJ, Nguyen D, et al. Testing quick response (QR) codes as an innovation to improve feedback among geographically-separated clerkship sites. Family Medicine. 2018;50(3):188–194. https://doi.org/10.22454/FamMed.2018.936023
4. Kane SK, Wetzel EA, Niehaus JZ, et al. Development and implementation of a quick response (QR) code system to streamline the process for fellows’ evaluation in the Pediatric Intensive Care Unit (PICU) and the Neonatal Intensive Care Unit (NICU) at a large academic center. Cureus. Published online October 22, 2023. https://doi.org/10.7759/cureus.47462
5. Karia CT, Hughes A, Carr S. Uses of quick response codes in healthcare education: a scoping review. BMC Medical Education. 2019;19(1):456. https://doi.org/10.1186/s12909-019-1876-4
6. Kruidering-Hall M, O’Sullivan PS, Chou CL. Teaching feedback to first-year medical students: long-term skill retention and accuracy of student self-assessment. Journal of General Internal Medicine. 2009;24(6):721–726. https://doi.org/10.1007/s11606-009-0983-z