I’ve been reading *The Physics Teacher* journal for over a decade. Every issue contains at least one interesting idea that’s somewhat new to me. I encourage you not to be put off by the frequent buzzword-heavy piece by someone trying to show “scientifically” that his pet new teaching method works… mine each issue for the experiments, demonstrations, and new ways of thinking about old topics.

This month’s issue (May 2009) contains what, to me, is the most revolutionary article I’ve ever read in TPT. Mikhail Agrest, of the College of Charleston, writes about his approach to introductory physics labs, which he calls the “Recurrent” method. Agrest presents a fully developed method that includes pieces of things that I have done, but never completely in the way he suggests. I’m going to try a Recurrent lab tomorrow.

According to Agrest, a Recurrent lab consists of three separate stages. First, an essentially traditional lab is conducted in which a parameter (like the focal length of a lens) is measured. Next, students are asked to use that parameter to predict the results of a slightly different experiment – for example, use the measured focal length to predict the location of an image given an object distance. Finally, students must perform that very experiment in front of the teacher to verify their prediction. Students’ grades are based in part on the accuracy of the prediction.

I’ve done similar experiments in the past, in which students predict an unknown quantity for a grade. The major inspiration provided by Agrest is to let the students develop their experimental method first, before challenging them to make a high-stakes prediction. My own contribution is to make the final prediction into a sort of competitive game: the lower the uncertainty in the prediction, the more credit the lab group can earn.

**Stage I**: We conduct a standard laboratory exercise with a convex lens. Students project the image of a candle onto a screen, and measure image and object distances. I ask each partnership to set up a graph of 1/do vs. 1/di before they start collecting data – each data point is to be graphed immediately. This way the students better see the relationship between the physical measurements they make and the graph… if they just make a table and graph it later, the lab becomes an exercise in arithmetic manipulation. A substantial part of their grade will be earned for the quality of the graph’s presentation.

**Stage II**: Once a lab group and I agree that they have investigated a reasonable range of object and image distances, I give them a new object distance: 5 meters. They are asked to use their graph to predict an image distance, including an uncertainty. They will do some calculation, and discover that they’re really looking for the x-intercept of their graph. (Tomorrow, I’ll explain that they’ve found the focal length of their lens.)

I give guidance as to the format of the image distance prediction (e.g. “30 ± 2 cm”), but I let them estimate the uncertainty in any way they please. The rules for stage III will guide their determination of uncertainty.
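For the curious, here’s a quick sketch (mine, not Agrest’s) of the Stage II arithmetic in Python: fit a line to the (1/di, 1/do) points, read off the x-intercept — which the thin-lens equation 1/do + 1/di = 1/f says equals 1/f — and use the result to predict the image distance for the 5 m object distance. The measurements below are fabricated for a hypothetical 20 cm lens.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Fabricated data (cm): object and image distances for a 20 cm lens
d_o = [30.0, 40.0, 50.0, 60.0, 100.0]
d_i = [60.0, 40.0, 33.3, 30.0, 25.0]

xs = [1 / d for d in d_i]          # x-axis: 1/di
ys = [1 / d for d in d_o]          # y-axis: 1/do

slope, intercept = fit_line(xs, ys)  # slope should come out close to -1
x_intercept = -intercept / slope     # this is 1/f
f = 1 / x_intercept
print(f"focal length: {f:.1f} cm")

# Predict the image distance for the new 5 m (500 cm) object distance
d_i_pred = 1 / (1 / f - 1 / 500)
print(f"predicted image distance: {d_i_pred:.1f} cm")
```

With a 5 m object distance, 1/do is nearly zero, so the predicted image distance lands close to the focal length itself — which is why the x-intercept is the whole game.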

**Stage III**: I will compare their measured focal length to the value stated on the box. Eight of twenty points for the lab will come from the accuracy of their measurement. I set up a system of rewards for these eight points:

- 0 points are earned if the box’s focal length does not fall within the stated uncertainty.
- 4 points are earned if the measurement matches the box’s focal length, no matter how large or crazy the uncertainty.
- 7 points are earned if the measurement matches the box’s focal length, and the uncertainty is 10% or less of the measured value.

Then, for all groups whose measurement matches the box’s focal length, bonus points are awarded: everyone gets one point for each group with a larger uncertainty.
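The scoring rules above can be sketched in a few lines of Python. The group names and measurements are invented for illustration, and capping the total at eight points is my own assumption (the post allots eight points for accuracy but doesn’t say what happens if bonuses would exceed that).

```python
def base_points(measured, uncertainty, box_value):
    """Points before bonuses, per the three scoring rules."""
    matches = abs(measured - box_value) <= uncertainty
    if not matches:
        return 0
    if uncertainty <= 0.10 * measured:
        return 7
    return 4

def score_groups(groups, box_value):
    """groups: {name: (measured, uncertainty)} -> {name: points}."""
    scores = {}
    for name, (m, u) in groups.items():
        pts = base_points(m, u, box_value)
        if pts > 0:
            # One bonus point per matching group with a larger uncertainty
            bonus = sum(1 for other, (m2, u2) in groups.items()
                        if other != name
                        and base_points(m2, u2, box_value) > 0
                        and u2 > u)
            pts = min(pts + bonus, 8)  # assumed cap at the eight available points
        scores[name] = pts
    return scores

groups = {"A": (20.5, 1.0),   # matches a 20 cm box, uncertainty under 10%
          "B": (19.0, 3.0),   # matches, but with a sloppy uncertainty
          "C": (25.0, 2.0)}   # misses the box value entirely
print(score_groups(groups, box_value=20.0))  # → {'A': 8, 'B': 4, 'C': 0}
```

Note how group A’s tight uncertainty earns both the 7-point tier and a bonus point at B’s expense — exactly the competitive pressure the game is meant to create.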

I suggest the students imagine that I have hired them to predict the image distance… it is most important that they be RIGHT. After that, the more precise the prediction, the better. My own thought is that this kind of game teaches the deep meaning of experimental uncertainty better than any mathematical exercise. Much credit to Mr. Agrest for the inspiration to refine the experimental approach described here.

(And yes, folks, I'm aware that the picture at the top of the post is emphatically NOT a convex lens. Please feel free to explain how I know that in your comment.)

GCJ
