Prior to 1996, our 2nd-year course in Medical Pharmacology was taught between February and May, with 2-3 hours of lecture per day and laboratories typically held on Tuesday & Thursday afternoons. The course did not overlap with any other core courses and was taught as a traditional "silo" course. During the 1996-97 academic year, we began to "integrate" our 2nd-year medical curriculum into thematic, system-based blocks in which topics from several specialties (Medical Pharmacology, Pathology, Pathophysiology, Microbiology, Immunology, Clinical Diagnosis) were coordinated. As a result, in 1996-97 we moved ~25% of our Medical Pharmacology curriculum to the Fall block (Inflammation, Cancer Chemotherapy & Antimicrobials). The remainder of the course took place from March through May, after students had completed the Pathology course. The same design was used during the 1997-98 academic year.
During the 1998-1999 academic year the extent of curricular coordination & integration was increased further, and during the Spring of 1999 we conducted our first experiment in which the content covered in all courses (e.g. Pathophysiology, Pathology & Medical Pharmacology) was assessed in a single “mega” exam given on a Friday morning. Because of the amount of material covered, the exam was very long (~160 questions), requiring ~4 hours for many students to complete. Questions from the different sub-disciplines were randomly arranged throughout the exam.
Noticing the visible signs of fatigue on our students’ faces during the latter half of the exam, I became alarmed and conducted a statistical analysis of exam performance afterward. The analysis revealed several significant trends.
Following this analysis, as a partial solution to reduce fatigue, we began separating our exams by course, with 15-30 minute breaks between exams. However, the Mechanisms of Disease (Pathology & Pathophysiology) and Pharmacology exams were still given on the same final Friday morning of a given coordinated block.
Nevertheless, the use of large Friday block exams continued to result in decreased student performance (see attached figure), and a progressive increase in the number of students “failing” Medical Pharmacology with each block exam during the 1998-1999 academic year.
My conclusion from these results is that “fatigue” is a real (reproducible & statistically significant) phenomenon associated with the “block exam” model, even when mini-breaks are given between examinations of different subjects.
One question that must be addressed before we can legitimately use scores from the National Board of Medical Examiners (NBME) Shelf Exam as a valid final-exam assessment is whether there is a good correlation between a student’s grade at the end of the course (before the final exam) and their performance on the NBME final exam. If, for example, we were teaching a course on plant biology, I would not expect a good correlation between the course grade and performance on a standardized pharmacology exam, since the two topics are “apples vs. oranges”. To assess this, I periodically compare the correlation of average student scores with their scores on the NBME shelf exam. To date, there has always been a good correlation (R ≥ 0.70, P < 0.0001), as illustrated in the analyses conducted at the end of the 2004, 2005, 2006, 2008 and 2009 academic years. I therefore conclude that the NBME score can be used as a valid assessment of a student’s knowledge of pharmacology, and that a student’s knowledge of pharmacology at the end of our course correlates well, on average, with their performance on the NBME exam.
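For readers who wish to reproduce this kind of check, the correlation described above can be computed as a Pearson product-moment coefficient. The sketch below uses invented scores purely for illustration; the actual class data are not reproduced here.

```python
# Illustrative sketch: Pearson correlation between end-of-course averages
# and NBME shelf exam scores. All numbers below are INVENTED examples,
# not the actual class data.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: (end-of-course average %, NBME result)
course_avg = [78, 85, 92, 70, 88, 81, 95, 74]
nbme_score = [65, 72, 88, 58, 80, 70, 91, 62]

r = pearson_r(course_avg, nbme_score)
print(f"R = {r:.2f}")
# With these invented numbers, R falls well above the 0.70 threshold
# described in the text; the significance (P value) would be obtained
# separately, e.g. from a t-distribution with n - 2 degrees of freedom.
```

In practice one would run this on the full class roster and report both R and P, as in the analyses cited above.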
At the end of the first two years of curriculum integration (1998-99 & 1999-2000) a trend of decreased student performance in Medical Pharmacology was also observed when analyzing student scores obtained on the NBME shelf exam:
One might ask why there was a decrease in NBME test scores. It seems unlikely that longer, “fatiguing” exams (a series of acute events) could decrease student performance on a standardized exam given at the end of the year. While there is more than one possible explanation, my best guess is that the decrease in NBME scores reflects the fact that the pharmacology course became significantly more “spread out” over the entire academic year (from August to May), resulting in lower student retention of the subjects covered in the Fall semester. Prior to 1996-97, the pharmacology course began in February and ended in May (~3 hours per day), with no other significant competition in the curriculum. My hypothesis is that giving the entire course just prior to a standardized NBME exam most likely increases student performance, due to a more significant contribution of short-term retention. The fact that the end-of-course class average did not show a similar dip at the end of the 1999 and 2000 academic years suggests that integration of the curriculum per se did not decrease exam performance (as one separate indicator of learning).
After recognizing the “potential” negative impact of curricular change on exam performance in our Medical Pharmacology course, I began an ongoing dialog (over several years) with our Owl Club representatives to try to determine what resources we could provide that might increase student learning and help students review and master our material. As a result of these discussions I made a series of changes to the Medical Pharmacology curriculum that included:
After two years the downward trend in NBME scores reversed. As shown in the attached figure, our students’ performance on the shelf exam rebounded during the 2000-2001 academic year and remained between the 70th and 80th national percentile for the next 5 years, until Katrina. While I would like to believe that the changes we made were directly responsible for the rebound, I know that education is an inexact science. Part of the explanation could very well be that students (in response to feedback from classmates ahead of them) “adapted” their study habits, resulting in enhanced performance on the NBME exam. Since we did not have a separate, randomly selected “control group” that was unexposed to our newly provided resources, we will probably never know the real answer. (I considered such a design impractical.)
During the 2008-09 academic year I initiated the development & implementation of active learning strategies in classroom instruction for both our graduate & medical curricula. For the graduate curriculum I developed a one-credit pilot course on "Concepts in Pharmacology". In the medical curriculum I collaborated with two department colleagues to convert 7 traditional Medical Pharmacology lectures to "Just-in-Time-Teaching" sessions with "Peer Instruction". (These active learning strategies are described at the end of Section 5.) The use of Peer Instruction significantly increased class performance on questions from 63.3% (1st vote) to 89.4% (2nd vote) (n = 20, p < 0.0001). Further expansion of these techniques is planned for the 2009-10 academic year.
Tulane University, New Orleans, LA 70118 504-865-5000