Shut up and listen to your students

Are conventional lectures the best method of teaching scientific concepts?

After all, we’ve been using the same lecture model for hundreds of years — since before the printing press. We have professors concoct their own explanations of scientific concepts, deliver them by lecture, draw their own diagrams on the chalkboard, and answer student questions. Surely there must be some merit to this method, if we’ve used it for centuries.

Note that I say “scientific concepts”. I want to talk about lectures used to teach introductory physics or chemistry or mathematics, not lectures used to make people remember some facts.

The answer, of course, is “no”. Stop lecturing. Shut up and listen to your students.

[Note: This post is based on a presentation I gave in a seminar course today, with some expansion.]

“Traditional” lectures

We have a “traditional” model of teaching. The professor understands the subject, and devises an explanation of the crucial concepts and facts the students need to understand. The professor delivers this explanation as a lecture: stands in front of the class and tells the students. The students, if they are attentive, will hear the explanation, and if the explanation is good, they will now understand the concept. They may need some practice, provided through homework problems, but otherwise, improvement comes simply from crafting better explanations and keeping student attention.

We like professors who devise particularly clear or elegant explanations, and we dread professors who confuse different ideas and make a mess of the concepts.

Essentially, the professor is an explanation delivery device, and the students are explanation receptacles.

So, how well is this model doing?

Evaluating physics teaching

In physics, there’s a conceptual test of introductory Newtonian mechanics called the Force Concept Inventory. We can administer it before and after a course to determine how much benefit the students get out of the class. We use something called the normalized gain:

\[ \langle g \rangle = \frac{\% \langle \mbox{post} \rangle - \% \langle \mbox{pre} \rangle}{100 - \% \langle \mbox{pre} \rangle} \]

The normalized gain tells us how much scores improved, out of how much they could possibly have improved. That is, if a student gets eight out of ten questions right the first time, and nine out of ten the second time, he has improved half as much as he could have.
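In code, the normalized gain is a one-liner. Here’s a small sketch (in Python, purely for illustration) that reproduces the worked example above:

```python
def normalized_gain(pre_pct, post_pct):
    """Normalized gain <g>: the fraction of the possible
    improvement that was actually achieved.

    pre_pct and post_pct are scores in percent (0-100).
    """
    return (post_pct - pre_pct) / (100 - pre_pct)

# Eight of ten right before the course, nine of ten after:
# the student closed half of the remaining gap.
print(normalized_gain(80, 90))  # → 0.5
```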

There have been many surveys of normalized gain in regular old lecture-style introductory physics classes. The largest sample, including 2084 students, gives this result:[ref name=”hake”]Hake, R. (1998). “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses.” American Journal of Physics, 66(1), 64-74.[/ref]

\[ \langle g \rangle = 0.23 \pm 0.04 \]

That is: students started off with a gap in their knowledge. At the end of the semester, they had filled about a quarter of it.

It’s worth noting at this point that many instructors refuse to use FCI questions on exams, because they think they’re too easy. And before you object that perhaps the students already knew enough about physics: initial FCI scores were often lower than 50%.

It’s hard to draw conclusions about what causes these low gains, though — we just have data over the entire semester’s worth of teaching, and don’t know which techniques work and which don’t. Let’s focus on one specific technique common to lectures: lecture demos. Perhaps an example will illuminate the problem.

Lecture demo intuition

This all seems perfectly reasonable:

  • Lecture demonstrations turn abstract concepts into concrete ones
  • Demos show real-world applications and keep student interest
  • Demos spark questions and correct misconceptions
  • Demos are good for people who think visually

Hence, lecture demos should be a good thing. But we can test this.

Suppose you have two classes: one that sees the lecture demo and then hears the professor explain it, and one that never sees the demo. At the end of the semester, you ask both to (a) predict the results of an experiment just like the lecture demo, and (b) explain those results. You compare the students in each class against each other. You see something like this:[ref name=”demos”]Crouch, C., Fagen, A. P., Callan, J. P., & Mazur, E. (2004). “Classroom demonstrations: Learning tools or entertainment?” American Journal of Physics, 72(6), 835.[/ref]

Graph of lecture demo benefits -- rather small benefits
Improvement over the control group, which did not see the demo. Watching the demo provided only a small benefit.

Simply watching the demo provided no statistically significant benefit to student ability to explain the demonstration’s results (that small increase is too small to be significant), although students did become slightly better at predicting its outcome. Yay, our students can remember what they saw! But only 24% of the students who watched the demo could explain it.
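When I say an increase is “too small to be significant,” here’s what that means in practice. This is a rough sketch with hypothetical sample sizes, not the actual group sizes from the study: a two-proportion z-test shows that a four-percentage-point difference between two groups of 100 students each comes nowhere near significance.

```python
from math import erf, sqrt

def two_prop_z_test(p1, n1, p2, n2):
    """Two-sided two-proportion z-test on success rates p1, p2
    observed in samples of size n1, n2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 20% of a no-demo group vs. 24% of a
# demo group can explain the result, 100 students per group.
z, p = two_prop_z_test(0.20, 100, 0.24, 100)
print(round(z, 2), round(p, 2))  # p is far above 0.05
```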

This is worrisome. Surely students who watched a demo and heard an explanation will be able to regurgitate that explanation later; why isn’t it working? Perhaps we can try something.

The study actually used lecture demos three different ways:

  1. Just show the demo and let students observe
  2. Ask students to predict the outcome before the demo
  3. Have students predict the outcome, observe the demo, then discuss the results with their peers

Each group also heard an explanation of the demo from their professor afterward. The result:

Comparison between the three methods. Correct explanation rate in the "discuss" group is still just 32%.

Despite the obvious benefits of discussion over no demo at all, the explanations provided by the “discuss” group are only correct 32% of the time — they’re a big improvement over the “no demo” group, but they’re not exactly spectacular. And this is at the end of the semester, after the demo had been explained by the professor and the students had learned all the material.

How is it that students who heard good explanations couldn’t reproduce them? Why can students regurgitate facts, but fail to explain them?

Because misconceptions are resilient.


Misconceptions are like cockroaches.

You have no idea where they came from, but they’re everywhere — often where you don’t expect them — and they’re impervious to nuclear weapons.

Many scientific concepts contradict what we experience day-to-day, or don’t relate to our experience at all. To learn science, you need a strong foundation of basic concepts. And students come to your class with their own ideas about those basic concepts — and in science, their ideas are usually wrong. Very, very wrong.

There have been a number of studies which enumerate these misconceptions. These studies are physically painful to read, but they’re important. A teacher can’t communicate with students without a shared vocabulary; when students hold such deep misconceptions about fundamental concepts, your explanations cannot work! You’re not speaking a language the students understand.

Here’s a fun list of misconceptions students often have in introductory courses:[ref]McDermott, L. C. (1984). “Research on conceptual understanding in mechanics.” Physics Today, 37(7), 24.[/ref] [ref]Peters, P. C. (1982). “Even honors students have conceptual difficulties with physics.” American Journal of Physics, 50(6), 501-508.[/ref] [ref]Clement, J. (1982). “Students’ preconceptions in introductory mechanics.” American Journal of Physics, 50(1), 66-71.[/ref]

  • Students think force is always necessary to sustain motion
  • Students confuse velocity and acceleration (\(a = \frac{dv}{dt}\))
  • Students confuse force and momentum (\(F = \frac{dp}{dt}\))
  • Students don’t believe in “passive” forces, such as normal forces (and neither do future teachers)

And students who’ve studied physics for years still have problems.

It’s hard to predict these misconceptions. The first misconception on the list is directly refuted by Newton’s First Law, which is the first thing taught in the course, but students don’t connect the First Law to their misconception. Just lecturing at students does not restructure their thinking or eliminate their faulty mental models.

You can’t hope to teach the rest of the course when students hold such deep misconceptions. You can’t hope to teach circular motion when students can’t distinguish between velocity and acceleration.

We saw, however, that making predictions and discussing results improved the effectiveness of lecture demos. By making predictions — which turned out to be false — the students were forced to confront the fact that their views were wrong, and the professor could expose their misconceptions for what they were. Had the predictions never been made and the discussion never held, the professor would likely never know, and the students would continue trying to fit course content into a faulty model.

How can we apply the lessons from the lecture demo experiment to improve lectures in general?

Lecture inversion

Students should learn before class. Class is a time to diagnose problems and fix misunderstanding. Professors and teachers should force students to confront their misconceptions.

How to do this? One popular method is peer instruction, largely devised by Eric Mazur, a physics professor at Harvard University.

The basic method is simple:

  • Use pre-class readings/videos to cover the material
  • Use class to review basic concepts, then ask conceptual questions
  • Students choose answers and discuss before you explain the correct answer
  • Listen to students as they discuss, to spot problems

The miracle of textbooks — and iPads, and video lectures, and so on — is that students can learn from a carefully-crafted explanation that is developed once and widely distributed, rather than forcing professors to devise their own lectures. Professors can instead focus on drawing out the misconceptions and exterminating them, by forcing students to make predictions, discuss concepts with peers, and see when they’re wrong.

(For example, this is where the Khan Academy becomes extremely useful — it can’t replace the professor, but it can provide the pre-class explanations. Used by itself, however, it falls prey to the same problems that ordinary lectures do. You can’t figure out your misconceptions by watching a video and answering some questions; you need feedback from someone who knows the subject.)

Does it work?

Several universities have compared interactive teaching and peer instruction to their conventional lectures:

  • Harvard: improved from \(\langle g \rangle = 0.25\) to \(\langle g \rangle = 0.62\)[ref]Crouch, C. H., & Mazur, E. (2001). “Peer Instruction: Ten years of experience and results.” American Journal of Physics, 69(9), 970.[/ref]
  • John Abbott College: improved from \(\langle g \rangle = 0.33\) to \(\langle g \rangle = 0.50\)[ref]Lasry, N., Mazur, E., & Watkins, J. (2008). “Peer instruction: From Harvard to the two-year college.” American Journal of Physics, 76(11), 1066.[/ref]
  • University of British Columbia (quantum physics course): improved from \(\langle g \rangle \approx 0.53\) to \(\langle g \rangle \approx 0.79\) (estimated)[ref]Deslauriers, L., & Wieman, C. (2011). “Learning and retention of quantum concepts with different teaching methods.” Physical Review Special Topics – Physics Education Research, 7(1). The authors don’t compute a normalized gain for their results, so I estimated it from their data.[/ref]
  • A survey of 48 courses at various colleges and high schools: \(\langle g \rangle \approx 0.23 \pm 0.04\) for traditional courses, and \(\langle g \rangle \approx 0.48 \pm 0.14\) for interactive courses[backref name=”hake” /]

These results show a decided advantage for peer instruction and similar interactive courses over traditional courses, for both intro physics and quantum mechanics.

Now, if you’re still having a hard time believing this, I have one more study. In it, students taught in an inquiry-based course (where students discover concepts for themselves, guided by a professor or teaching assistants) were compared against honors physics students, engineering students, and non-science majors, all taught in traditional courses.

They were evaluated on four questions. Two were synthesis questions, requiring the students to predict — without calculations — what would happen in a simple electric circuit when various bits were added or removed. The other two questions were analysis questions, asking for computations of resistance and current in a simple circuit.

Here are the results:[ref]Thacker, B., Kim, E., & Trefz, K. (1994). “Comparing problem solving performance of physics students in inquiry-based and traditional introductory physics courses.” American Journal of Physics, 62(7), 627-633.[/ref]

Inquiry-based physics students outperform engineers.

The inquiry-based students creamed everyone else on the synthesis questions, and were bested only by the honors physics students on the analysis questions.

You might wonder what kind of students were in the inquiry-based physics class. Physics majors? No. Engineers? No. Science majors? No.

It was a physics course for elementary education majors.


By engaging your students, drawing out their misconceptions, and listening to them, you can make them perform better.

Shut up and listen to your students.

Further reading

Beyond the references (listed below) I used in this post, I can recommend quite a few other papers if you’re interested in digging deeper. If you can’t find a copy of a paper you’d like to read, let me know and I may be able to help.

  1. Deslauriers, L., Schelew, E., & Wieman, C. (2011). Improved Learning in a Large-Enrollment Physics Class. Science, 332(6031), 862–864. doi:10.1126/science.1201783. Interactive teaching techniques in a huge introductory course double the learning, despite the teachers being inexperienced TAs.
  2. Wittmann, M. C., Steinberg, R. N., & Redish, E. F. (2002). Investigating student understanding of quantum physics: Spontaneous models of conductivity. American Journal of Physics, 70(3), 218. doi:10.1119/1.1447542. Students build misconceptions from their confusion about the models they’re taught, which are sometimes contradictory.
  3. Roth, W., McRobbie, C., & Lucas, K. (1997). Why may students fail to learn from demonstrations? A social practice perspective on learning in physics. Journal of Research in Science Teaching, 34(5), 509–533. A long exploration of why students don’t learn from demonstrations, with many interesting examples.
  4. Lorenzo, M., Crouch, C. H., & Mazur, E. (2006). Reducing the gender gap in the physics classroom. American Journal of Physics, 74(2), 118. doi:10.1119/1.2162549. Interactive courses significantly reduce the gap between male and female students on tests like the Force Concept Inventory.
  5. Arons, A. (1981). Thinking, Reasoning and Understanding in Introductory Physics Courses. Physics Teacher, 19(3), 166–172. Useful examples of ways to teach difficult concepts interactively.
  6. Canpolat, N. (2006). Turkish Undergraduates’ Misconceptions of Evaporation, Evaporation Rate, and Vapour Pressure. International Journal of Science Education, 28(1), 1757–1770. Examples of fundamental misconceptions in chemistry.
  7. McDermott, L. C. (1999). Resource Letter: PER-1: Physics Education Research. American Journal of Physics, 67(9), 755. If you want to read even more, there are 224 citations waiting for you here.




