30 July 2016

Why four rather than five choices on AP Physics 1?

Blog- and AP-reader Barbara sends the question:

Any idea on the rationale for moving from five choices on the MC to four?

Barbara, mainly this was a reading density issue. 

Reading, writing, and understanding English* are inescapable and fundamental parts of learning physics.  Nevertheless, we want the language in questions to be straightforward and minimalist, so that the language doesn't become an obstacle to demonstrating physics knowledge.

* Or another language, of course... but the AP exam is in English. :-)

The College Board and ETS do psychometric** research investigating their exams and their examination techniques.  For example, they've shown that deducting 1/4 of a point for an incorrect multiple choice answer doesn't differentiate between students any more than just scoring the number of correct answers directly.  At the AP reading, investigations have shown that grading a physics problem holistically*** produces scores indistinguishable from traditional grading.

** I may have made that word up

*** Meaning something like "2 points for a complete answer, 1 for a partially complete answer, 0 for a lousy answer" as opposed to assigning each point to a specific element of the response
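A note on that 1/4-point result, for the curious: the standard textbook rationale for the penalty -- my gloss, not a summary of ETS's internal analysis -- is that on a five-choice item it makes a blind guess worth exactly zero on average:

$$E[\text{guess}] = \frac{1}{5}(+1) + \frac{4}{5}\left(-\frac{1}{4}\right) = 0$$

The penalty thus mainly affects whether students bother to guess, not how they rank against one another, which squares with the finding that rights-only scoring differentiates just as well.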

In terms of five versus four answer choices, the data show that either approach differentiates students of varying ability appropriately.  (I don't know, 'cause I never asked, whether five-choice questions differentiate better.  The statement I'm remembering is that four-choice items produce statistically significant and reliable differentiation.)

Once the case for the statistical validity of a four-choice exam was made, it was a shoo-in as the superior option.  Statements from test developers suggested that question authors too often seemed stretched to create four incorrect choices that each made sense -- they got too many questions where some choices could be ruled out on the grounds of "this sounds totally silly and made up."  With only four choices, it's easier to create three incorrect yet plausible responses that directly test student misconceptions.

The bigger issue, though, was the reading burden on the student.  Even for a very well constructed five-choice item, the student still must take the time and intellectual effort to read an extra choice.  The psychometric studies suggested that most students were not, in fact, reading and understanding all five choices, and that students who DID read all five often had to read them multiple times to make a reasonable decision as to the best answer.

It was clear from the beginning of AP Physics 1 that this new exam would require considerably more verbal expression than AP Physics B did.  So the College Board and ETS made several changes to the format of the multiple choice, with the goal of minimizing the reading comprehension burden:

* Item authors are now required to justify the incorrect choices, explaining how each choice helps differentiate students who understand the physics targeted by the item from those who don't

* The multiple choice section has been reduced from 70 questions to 50 questions, giving students more time to digest the more involved language used in the new exam

* The "roman numeral" question type has been replaced by "multiple correct" items.  (You know, those questions that gave I, II, and III, and THEN gave lettered choice such as "I only" or "II and III, only".  The studies showed that the reading comprehension burden was especially high on these.  However, simply choosing the two out of four correct choices does not require significant additional reading over a standard question.)

* And, as we're discussing... the number of choices was reduced from 5 to 4.

Now that I've taught extensively under both four- and five-choice regimes, I do prefer the four-choice format.  My observation is that on the occasional wordy conceptual problem, students can more often than before work by eliminating the three incorrect choices rather than by identifying the correct answer directly.  I think -- based on no evidence but my own decades-honed instinct -- that with fewer choices the test does zoom in more sharply on my students' physics skills than if those students had to wade through and weigh one more option in every item.  If nothing else, I don't perceive the same level of mental fatigue after a practice test.  And that was kinda the whole goal.

GCJ


25 July 2016

Justify the ones you missed for homework -- adapting to an every-other-day schedule

It's time for me to adapt to a new ecosystem.  

For the last nineteen years, my classes have met five days a week.  Thus, my assignments and course structure have been adapted to that schedule.  At boarding school, an assignment has been due every day, because students have structured study time each night; at day school, longer assignments were due twice a week, knowing that the students liked to gather about twice a week to do their problem sets together.  In class, I've saved the longer laboratory exercises for my single 90-minute period each week, using the other meetings for quantitative demonstrations and shorter experimental activities.

This year, though, my class meeting schedule has changed.  My classes will meet for 40 minutes on Mondays... but then two more times in the week for 90 minutes each.  That's less actual meeting time than previously, but I'm not losing much in terms of effective teaching time.  See, 90 straight minutes is much more effective than the two separate 40-minute periods being replaced, simply because we don't have to stop working, clean up, and rev up again the next day.

Thus, the way we spend in-class time will hardly change at all.  I already go to great lengths to keep students moving around, focused but relaxed, doing a variety of activities with clearly articulated goals.  Generally, my class already says "aww, crap, can I just finish this real quick?" when I tell them to clean up for departure.  So teaching for 90 minutes straight will be a godsend, not an obstacle.

How I assign homework will have to change, especially in conceptual physics.  The whole theory behind an every-other-day schedule is that, without the grind of having to prepare for every class every day, students can pay better attention to engaging intellectually with each night's work.  So, um, that means our faculty have been specifically instructed NOT to simply double the homework we used to assign each night.  I fully support this initiative, as problem solving is a creative process subject to a law of diminishing returns.  (If you can't lift weights every day in preparation for football season, you can't simply double the number of pounds you're lifting every other day.)

The way I'm thinking now is to divide a night's assignment into two parts.

* The first part is a standard nightly problem set, like I've been assigning for decades.  Remember, a "problem set" is far more similar to an English essay than to a night's worth of math problems.  Written explanations and justifications, not numerical answers, are the dominant feature.

* The second part begins with a set of multiple choice questions to be done individually.  (The requirement for individual work can be enforced by giving five minutes at the end of class to answer; or, you could use WebAssign or the equivalent to randomize the questions and the order of the answers, so collaboration would be ineffective -- see the sketch below.)  I'm going to use Socrative to collect student responses electronically.

Each student will see immediately whether his answer to each question is right or wrong.  The actual assignment, due the next class day, is simply to justify the ones you missed.
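For those who want a homebrew version of that randomization, here's a minimal Python sketch.  The question bank and the seed-by-student-ID scheme are my own illustrative inventions, not a feature of WebAssign or Socrative:

```python
import random

# Hypothetical question bank: each entry lists the prompt and its choices,
# with the correct choice written first (before shuffling).
QUESTIONS = [
    {"prompt": "A cart moves at constant speed. The net force on it is...",
     "choices": ["zero", "forward", "backward", "upward"]},
    {"prompt": "Doubling the voltage across an ohmic resistor makes the current...",
     "choices": ["double", "halve", "quadruple", "stay the same"]},
]

def build_quiz(student_id):
    """Return a per-student ordering of questions and answer choices.
    Seeding by student ID makes each student's arrangement stable but
    different, so copying a neighbor's letter answers is useless."""
    rng = random.Random(student_id)
    quiz = []
    for i in rng.sample(range(len(QUESTIONS)), len(QUESTIONS)):
        q = QUESTIONS[i]
        choices = q["choices"][:]
        rng.shuffle(choices)
        quiz.append({"prompt": q["prompt"],
                     "choices": choices,
                     # index of the correct choice after shuffling
                     "answer": choices.index(q["choices"][0])})
    return quiz

for item in build_quiz("student42"):
    print(item["prompt"])
    for letter, choice in zip("ABCD", item["choices"]):
        print(f"  {letter}. {choice}")
```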

Think of the incentive for the students to take these multiple choice questions seriously.  No matter what kind of work you assign, or how much, in class or out of class, it is beyond useless unless the students are thoroughly engaged in discovering and understanding the correct response.  Practice doesn't make perfect -- only perfect practice makes perfect.

In this case, the opportunity to avoid doing more homework is what motivates everyone to engage carefully with each multiple choice question.  

Get it right, and it's done and dusted.  

Get it wrong, that's okay.  There's no grade penalty, no disappointed sigh from the teacher, no whipping with a wet noodle.  Every question that's wrong does require some major work to discover, understand, and then write up the correct solution, but that's work that the student knows needs to be done.  After all, he just got the answer wrong, so it's obviously important to figure out how to do it right, right?


08 July 2016

So what does an ohmmeter read when it's directly connected to a non-ohmic bulb?


The previous post describes my students' results showing that a flashlight bulb's resistance varies.  Over the available voltage range of 2 V to 8 V, the resistance (determined by the slope of a voltage vs. current graph) varied from about 50 ohms to 80 ohms.


The question was, what does an ohmmeter read when placed directly on this bulb?

Consider how an ohmmeter generally works.  It puts an awfully wee voltage across the bulb, and measures the resulting wee current through the bulb.  Then the meter essentially uses Ohm's law to calculate resistance.  (That's why you have to disconnect the bulb from the battery in order to use the ohmmeter.)
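As a worked illustration with made-up numbers (real meters' test voltages and currents vary by model): a meter that applies 0.2 V and reads 2 mA would report

$$R = \frac{V_{\text{test}}}{I_{\text{measured}}} = \frac{0.2\ \text{V}}{0.002\ \text{A}} = 100\ \Omega$$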

In the context of our experimental voltage-vs.-current graph above, the ohmmeter is measuring an out-of-range data point, way off down and to the left of the portion shown.  By extrapolating the curve shown, we could guess that we should get a shallower slope and thus a smaller measured resistance.

Sure enough, the meter measured about 8 ohms, a full order of magnitude less than the resistance in the bulb's operable range.  

Again I caution teachers: this is a cool and somewhat unexpected result.  Nevertheless, it's rather irrelevant to the typical practical analysis of a bulb.  The bulb only glows at all with a volt or two across it; the bulb is only rated to about 6 V, meaning it is likely to burn out above that voltage.  In the operable range, the resistance is reasonably steady.  The resistance only drops by an order of magnitude when the voltage is dinky.

The next question: How can we experimentally extend this graph?

My variable DC supply only goes down to 2 V.  I could get a 1.5 V battery to get one more data point, but that's all I can think of.  Does anyone have a suggestion of a way to explore the parameter space below 1.5 V?

GCJ




05 July 2016

More on the light bulb that doesn't obey Ohm's law

[Figure: data collected by my students showing a non-ohmic bulb]
Before I get into a discovery about the non-ohmic nature of a flashlight bulb, an important caveat:

Until the very end of your circuit unit, treat bulbs as regular old resistors.

Like everything in introductory physics*, it's important to start simple and build complexities in gradually.  Teach your students to deal with ohmic bulbs.  The only difference between a bulb and a resistor should be that a bulb produces light; the brightness of the light depends on the power dissipated by the bulb.


* And in high-level physics research, as well
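For reference, the brightness ranking rests on the standard power relations, which are exact for ohmic elements and a fine first approximation for bulbs at this stage:

$$P = IV = I^2R = \frac{V^2}{R}$$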

Then, ask them in the laboratory for experimental evidence that the bulbs actually do or do not obey Ohm's law.  My students' evidence is shown above -- click to enlarge.  Over the available range of voltages of about 2 V to 8 V, the bulb's resistance (determined by the slope of the V-I graph) varies from about 50 ohms to 80 ohms.
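If you'd like students to automate the slope-finding, here's a minimal Python sketch.  The data points are made-up numbers shaped to resemble the results described above, not my students' actual measurements:

```python
# Hypothetical (V, I) data resembling the class results above.
voltages = [2.0, 4.0, 6.0, 8.0]          # volts
currents = [0.036, 0.076, 0.104, 0.129]  # amps

# Local resistance = slope dV/dI between adjacent points on the V-I graph.
for (v1, i1), (v2, i2) in zip(zip(voltages, currents),
                              zip(voltages[1:], currents[1:])):
    print(f"{v1:.0f} V to {v2:.0f} V: R = {(v2 - v1) / (i2 - i1):.0f} ohms")
# Prints local resistances rising from about 50 ohms to 80 ohms.
```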

Importantly, that doesn't mean that the first approximation of a constant-resistance light bulb is a bad one, any more than the first approximation of no air resistance invalidates the study of kinematics.  In most laboratory situations in introductory physics, the 50-to-80-ohm variation in resistance -- less variation if the voltage range being used is narrow -- will still produce quantitative and qualitative predictions that can be verified experimentally.  For example, the typical "rank these bulbs by their brightness" task will give correct results pretty much irrespective of the non-ohmic nature of the bulbs.

Asking a new question -- what will a resistance meter measure?

In my AP Summer Institute in Georgia last week, a couple of participants set up this experiment (it's based on the 2015 AP Physics 1 exam problem 2), getting results pretty much exactly as reported above.  Then the question came up, what would a resistance meter measure?

Here's where, in class, I'd give everyone a minute or two to write their thoughts down on a piece of paper.  You can do that too.  I'll wait.

In fact, I'm not giving the answer yet.  I've posted a twitter poll here where you can give your thoughts.  Answer coming in a few days.

(Yes, Jordan and Hannah who did this experiment... you may vote.  Just wait to comment here until the votes are tallied.  :-)  )

GCJ

01 July 2016

Cure, don't inoculate

Public health initiatives are perhaps the greatest ever victory for the marriage between civic policy and science.  We don't cure polio -- we get vaccinated against polio.  So, so many diseases have been wiped out.  Many chronic conditions have been mitigated not just by vaccinations, but also by initiatives we take for granted, such as employee hand washing and "no shirt, no shoes, no service."

Into this atmosphere dives the physics teacher, someone who stands directly on the boundary between civic policy (in the form of the education establishment) and science.  It's not a surprise that we instinctively take our philosophy from that of public health: an ounce of prevention is worth a pound of cure.  We forewarn our students about common mistakes.  We take pains in our presentations and instructions to minimize incorrect answers on the problems we assign.  We'd rather students listen to us and avoid mistakes than submit silly wrong answers on homework or tests.

Problem is, when it comes to understanding physics, that philosophy is dead wrong.  

Look, I know you don't want your students to mess up.  So you give them hints and warnings ahead of time.  "Be sure not to use kinematics when the acceleration isn't constant," you say.

How effective have those warnings been?  Evaluate objectively.  On one hand, I expect that you've thrown up your hands and screamed at the students* who used kinematics to solve for the maximum speed of an object on a spring, despite your advice.  "They didn't listen," you'd say.  Possibly, possibly... it's equally likely that they did listen but didn't make the connection between your advice and the actual problem solving process when the moment was right. 

* Or at least at their homework papers, which can no more hear your wails than can the Cincinnati Bengals coaching staff when I wail at the television.

Either way, the class time you took attempting to prevent these canonical mistakes has been wasted.  So has the political capital you used in insisting that your students sit and pay attention to your warnings.  (Don't underestimate the concept of "political capital."  You can only demand so much attention from your students; use it wisely.)  

What if, instead of trying to prevent the mistake, you allow your students to make a mistake?  What if you practically set them up to make a canonical mistake?  Then, when they screw up, they have the context for preventing future occurrences of the same mistake.  They used kinematics for non-constant acceleration; they got a wrong answer and lost points.  NOW, you can explain why kinematics doesn't work, that the work-energy theorem is the way to go.  NOW your students will listen, because they have a personal and immediate interest in figuring out how to rectify the mistake they just made.  Next time they're likely to remember both the incorrect and correct approach.  That's a natural learning process.
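To make the canonical example concrete: for a mass m released from rest at amplitude A on a spring of constant k, the acceleration kx/m changes with position, so constant-acceleration kinematics gives a wrong maximum speed.  Energy conservation gives the right one:

$$\frac{1}{2}kA^2 = \frac{1}{2}mv_{max}^2 \quad\Rightarrow\quad v_{max} = A\sqrt{\frac{k}{m}}$$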

"Oh, that's cruel, Greg," say some readers.  "We shouldn't punish our students by setting them up to lose points.  Possibly a couple of students would have avoided the mistake if you had gone over this sort of question before assigning it.  

Huh?  I'll leave the emotionally loaded and incorrect language of "punish" for another rant.

My approach makes perfect sense if you're taking a long term view of physics class.  Saving a student a couple of points on this problem set is insignificant compared to building a lasting understanding of physics concepts, such that he can perform well on the AP exam, on the course final, on his college physics tests, and in his job.  Setting a student up to make mistakes, which in turn create contextual learning opportunities, will save the class numerous lost points in far higher-stakes situations.

And finally, consider those couple of students who got the answer right initially due to your warning.  Ask them, "how did you know that you should use energy methods rather than kinematics?"  The answer is very likely to be, "because you warned us about this issue in class yesterday."  How does that build understanding?  You want them to build good problem solving habits and skills.  In introductory mechanics, those habits include, "check whether acceleration is constant when deciding on an approach."  Those habits do NOT include, "get my teacher to tell me how to solve this problem."

In physics teaching, an ounce of cure is worth a pound of prevention.