24 December 2012

An example of a question that *must* be written as multiple choice

The best teacher I've ever met, Matt Boesen, teaches US history and constitutional law.  Debate tournaments have turned because his students can instantly remember and use facts from his classes.  I have every intention of sitting in on the constitutional law class at some point in the future.

Importantly, Matt and I are very DISsimilar in some aspects of our teaching.  The most glaring example is what a typical class looks like:  Matt facilitates discussion while sitting with his students around an elliptical table, while (on the occasions when the students aren't themselves experimenting) I perform live demonstrations on a table elevated above rows of students.  Partly that's a function of our personalities; primarily the difference is one of history vs. physics.  Good physics teaching looks different from good history teaching.  Not even all good physics teaching looks alike...

One point on which Matt and I actively disagree is the utility of multiple choice items on tests.  His own take is that multiple choice is the creation of the devil, the lazy teacher's way of finding out which students are good test takers regardless of which students understand the course content.  I certainly see his point within the teaching of history, where a better alternative to multiple choice is always available.  Recall of facts can be tested more authentically with identification questions that don't give hints to the answer ("What were the provisions of the Fugitive Slave Act?").  Higher-order thinking about history requires writing; not necessarily five-page essays, but paragraphs making connections between concepts ("Explain the political circumstances that led representatives from free states to support the Fugitive Slave Act").  

But physics is different.  Well-designed multiple choice questions *can* authentically test understanding. The advantage then is that the short response time*  allows a test to cover a broad swath of topics in a variety of contexts.  Moreover, some physics questions are best phrased as a multiple choice item.  For example, here's a question from my recent 9th grade conceptual test:

* since there's no writing necessary, you can reasonably give one question per two minutes or less

Ball A is dropped from rest and falls for 2 s.  Ball B is also dropped from rest, but falls for 4 s.  How far does ball B travel?

(A) one-fourth as far as ball A
(B) four times as far as ball A
(C) twice as far as ball A
(D) half as far as ball A
(E) the same distance as ball A

The incorrect answers are not chosen arbitrarily -- I could go through each incorrect choice to explain the misconception or mistake that would lead the student to choose that answer.  In that sense, the item is an authentic test of reasoning with the equation d = (1/2)at².

But this question doesn't work well at all without the choices!  In response to just the prompt "How far does ball B travel?", a student might try to plug in numbers to answer "78.4 m," in which case the question is evaluating a very different skill, the skill of plugging and chugging.  Okay, so suppose I rewrite the stem to read "How much farther does ball B travel than ball A?"  A reasonable answer:  "58.8 m," which is the difference between ball B's 78.4 m and ball A's 19.6 m.  Still, this is a different response than the one I wanted.

Rewriting again, maybe I say "How many times farther than ball A does ball B travel?"  Or "What is the ratio of the distance ball B travels to the distance ball A travels?"  Now the student's answer might be "4 meters."  Aarrgh... you meant "ball B travels 4 times farther than ball A," but that's not what you said.  Or, perhaps the student rounded weirdly to get "4.0816:1." That's not a physically useful answer.  Grrr.
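Incidentally, the arithmetic behind the choices is quick to verify.  Here's a minimal sketch, assuming g = 9.8 m/s² and using d = (1/2)at² with the problem's 2 s and 4 s fall times:

```python
# Distance fallen from rest: d = (1/2) * a * t^2, with a = g = 9.8 m/s^2
g = 9.8
d_A = 0.5 * g * 2**2   # ball A falls for 2 s
d_B = 0.5 * g * 4**2   # ball B falls for 4 s

print(d_A)        # 19.6
print(d_B)        # 78.4
print(d_B / d_A)  # 4.0 -- ball B falls four times as far: choice (B)
```

Doubling the fall time quadruples the distance because the time is squared -- exactly the reasoning choice (B) rewards and the distractors subvert.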

So on one hand I write this question as multiple choice because the choices frame the style of answer required.  But it's more than that.  The phrasing of the choices firmly defines the physical meaning of the answer.  Any other phrasing I can think of can encourage a student to perform a mere mathematical manipulation.  When doing test corrections, I don't want someone who missed this problem just to figure out how to do the math right to get "4"; the choices emphasize that the question asks about a performable experiment.  I'm not likely to get an argument that "Look, if you play with the equations THIS way you get 2, like I did, so I should get credit."  In the rare case someone tries to argue points with me, I don't engage; I just say, "okay, let's do the experiment."  

Of course, like any teaching tool, multiple choice questions have to be used correctly in order to be useful.  The questions must be well-written, so that it's highly likely that a correct answer comes not from gaming the test, but from good physics knowledge.  Other forms of test questions must be used in concert with the multiple choice in order to get a complete picture of a student's ability.  (An easy way to add variety and higher-order thinking to a test is to ask a multiple choice question, then say "Justify your answer.")  And if you're going to test with multiple choice questions, students have to be used to seeing multiple choice questions on homework and quizzes; I'd say about half my general physics problem set questions are multiple choice with "justify your answer."  

Next time colleagues or administrators challenge your use of multiple choice items, enthusiastically take them aside and show them a few well-designed physics questions which cry out for the multiple choice format.  Show them this post.  Go into a detailed discussion of the pedagogical philosophy of articulating physics concepts, not just solving math problems.  Generally, you'll get one of two responses... virtually every non-physicist you attempt to engage in this discussion will (figuratively) run screaming rather than try to understand how physics teaching actually works.  

Sometimes, though, you'll get the Boesens who enthusiastically listen, recognizing the differences among individuals and disciplines.  These are the colleagues to treasure, because chances are you'll learn as much by listening to them as they learn by listening to you.

GCJ

21 December 2012

Teaching acceleration in conceptual physics

In conceptual physics, I define acceleration with one sentence that we repeat ad nauseam:

Acceleration tells how much an object’s speed changes in one second.

Then we talk separately about the direction of acceleration:

When an object speeds up, its acceleration is in the direction of motion.

When an object slows down, its acceleration is opposite the direction of motion.

In Regents-level and Honors physics, I used to define acceleration via the slope of a velocity-time graph and via the equation a = Δv/Δt.  The conceptual class used neither of these, yet seems to understand acceleration better than my Regents-level folks ever did.  (As of about 2014, I've used this verbal approach at all levels.)

In terms of the magnitude* of acceleration, since all of our problems involve constant acceleration, I ask them to use their mathematical instincts:

* Though I never use the term "magnitude"... I say the "amount of acceleration." 

An eastward-moving roller coaster slows from 25 m/s to 15 m/s in 5 s.  What is the amount of the roller coaster's acceleration?

Using the fundamental definition, students can reason:  "Acceleration tells how much an object's speed changes in one second.  The roller coaster changed speed by 10 m/s in 5 s.  So every second, the coaster lost 2 m/s of speed.  The acceleration is 2 m/s per second."
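For reference, that verbal chain maps directly onto arithmetic; here's a minimal sketch of the same reasoning:

```python
# Acceleration tells how much an object's speed changes in one second.
initial_speed = 25.0   # m/s, eastward
final_speed = 15.0     # m/s
time_elapsed = 5.0     # s

speed_change = abs(final_speed - initial_speed)  # coaster loses 10 m/s total
acceleration = speed_change / time_elapsed       # speed lost each second

print(acceleration)   # 2.0 -- i.e., 2 m/s per second
```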

It helps that I have insisted on everyone writing the full relevant fact of physics in answer to every problem.  When someone struggles, I ask him to repeat the definition of acceleration, and I can guide him to the correct answer.  After doing this a few times, he gets the idea.  And the concept is sticking... since we're not plugging blindly into an equation, I'm seeing fewer mistakes of the form "the acceleration is 10 m/s because that's how fast the roller coaster moves."  

Note the unusual statement of units.  Rather than use the mathematical notation of meters per second squared, I'm exclusively writing acceleration units as m/s per second.  When students have to write that out on every problem set, they continue the process of internalizing the meaning of acceleration.

As for the direction of acceleration:  that's pretty easy for the class, given that we've practiced all year justifying from facts of physics.  "When an object slows down, its acceleration is opposite the direction of motion.  This coaster is moving east and slowing down, so its acceleration is west."  

The only tricky part here is that I've had to stamp out the phrase "acceleration is moving west."  Acceleration doesn't "move."  Acceleration simply "is."  My students initially complain when they lose points for saying the acceleration moves west.  Then I show them a classmate's reply in which he says "This coaster is moving east and slowing down, so the roller coaster must be moving in the opposite direction of motion, so is moving west."*  I explain that the language in the explanation is as important as the answer.  (And they know from experience that those points ain't comin' back, and that no one in the class has any sympathy for their loss of points, so they might as well just do things my way and get the physics right.)

* Not making this up.

The last fact I teach regarding acceleration is that objects in free fall gain or lose 10 m/s of speed every second.  The next post will discuss quantitative and qualitative demonstrations relating to acceleration and free fall.

14 December 2012

Do you want a set of handwritten solutions to Tipler volume 4?

I'm cleaning out my office in preparation for the move to the new Manning Science Building.  I'm excited -- the science department is being released from the dungeon and paroled to the palace.  

I'm throwing away a large collection of Physics Today and The Physics Teacher.  I'm throwing away an enormous collection of American Journal of Physics.  Thing is, I can get all of these online, now, and I don't have the shelf space in my new office to waste on hard copy.

One of the few things I can't bring myself to toss is a thick binder with my solutions to many Tipler volume 4 problems.  (This is a widely-used edition of a terrific calculus-based physics text.)  Anyone want this binder? Send me an email with your address, and it's yours.

GCJ

Edit:  Claimed by Staci Babykin.  

10 December 2012

Making students write facts of physics in every answer

I've discussed how "justify your answer" means to use either a fact, equation, or a calculation to support an answer.  All year in 9th grade conceptual physics, I've handed out printed sheets listing the appropriate facts and equations that can be used for justifications.  Still, a significant subset of students have disappointed me with their justifications: they make up facts, misstate facts, or sometimes just write any old fact, whether or not it has any relevance to the problem at hand.  

I got sick of nagging my students to use facts of physics correctly.  Taking a page out of Jen Deschoff's bag of tricks*, I decided to have the STUDENTS grade assignments for completeness.  Then, the class could have an additional day to check their answers with friends before turning in the final version of the assignment.

* Mixed metaphors are legal in physics blogs

For the past two weeks, I've started class by collecting a problem set and redistributing it to the class.  Each student gets a red pen and is asked to check boxes in a rubric that says:

* Is a fact written nearly word-for-word from our sheet?
* Is the fact relevant to the problem?
* Is there a sentence showing how the fact applies to the problem?

We apply the rubric above to each problem on an assignment.  If even one item is missing on one problem, I take off substantial credit; but I award some credit on each set to those who do everything right.

In the first day or two, lots of students lost credit.  But, it wasn't big bad Mr. Jacobs taking off the points -- their own classmates were the ones checking the facts.  I hold everyone accountable for their grading by asking them to initial the page they grade.  (That means I know who graded what, and I can have a word with someone being too strict or too lenient.)  I'm finding, though, that I don't really have to follow up at all.  The students are more careful about grading than I am.  One even asked, "Do I need to take off for spelling?"  Miracle of miracles, after two days I found everyone writing out facts clearly; also miraculously, everyone was doing better on the problems, because by simply being forced to pay attention to writing a proper fact, the logical connection to the correct answer became more apparent.

Perfect for studying position-time graphs

We spent a week working on just position-time graphs.  The most common issue at the beginning of this unit is a failure to separate the "picture" of the graph and the motion represented by the graph.  For example, students commonly say that a graph with negative slope must represent a cart going down a hill, because the graph looks like a hill.

I only had one student make that mistake this year.  Why?  Because everyone had to start every problem with something like "A position-time graph sloped like a back slash \ means the object is moving away from the detector."  It REALLY takes some serious cognitive dissonance to tell me that the cart is thus rolling down a hill, especially since the problem starts with something like "...the detector is pointing north."  

And velocity-time graphs

After a week, we move to velocity-time graphs.  Usually the abstraction that the graph represents motion on a line is clear by the time we move on; the biggest issue I've had teaching velocity-time graphs is students not frickin' paying attention to whether a graph is position-time or velocity-time.  In past years, I've had people telling me for v-t graphs "the cart moves south because the slope is like this \" even after they've made the same mistake countless times already.

No problems yet this year... because of the student grading.  The second bullet point on the rubric says "Is the fact relevant to the problem?"  I talked to the class ahead of time about how facts for the wrong type of graph are by definition not relevant to the problem.  Meaning, if you use position-time graph facts for a velocity-time graph, I'm not marking you wrong, your friends are; and you're losing not just one point for a "good try, honey," but substantial points for laziness.  

Would student grading for written-out facts work for juniors and seniors?

Not sure.  I've never tried this approach with upperclassmen, because some might see it as busy work, and might rebel.*  But the positive correlation to comprehension is so incredibly strong that I'd suggest giving this sort of approach a try.  Let me know if you do.

* Freshmen, I've discovered, are poor rebels.    

03 December 2012

Physics Fights at the Global Physics Department: Wednesday Dec. 5 2012

If you're free this Wednesday night Dec. 5, stop by online at the Global Physics Department.  I presented to the "department"'s weekly meeting back in June.  About 50 people from around the world joined in to see quantitative equilibrium demonstrations.

On Wednesday night at 9:30 PM Eastern time, my students Vinh Hoang and Michael Bauer will be holding a physics fight.  Mr. Hoang will present his 10-minute PowerPoint solution to the question "Why do all candles have about the same brightness?" -- this was one of the problems at the 2012 US Invitational Young Physicists Tournament.   Then, Mr. Bauer will lead a discussion of Mr. Hoang's solution, challenging him to explain each aspect of his research, searching for the truth in the same way that scientists from competing research groups might grill each other at a conference.

You are invited to watch the festivities on Wednesday night.  Just enter the GPD chat room and enjoy.  I think you will love the pedagogy and collaborative yet competitive spirit of the physics fight.

GCJ

30 November 2012

Article link: Keith Williams on the use -- and abuse -- of technology for technology's sake

Keith Williams is an engineering professor at the University of Virginia.  He visited me this week to see our new science building, talk physics and physics teaching, and to see what we do in our research physics course.  I was surprised and amused that, despite the fact that he grew up in and around Botswana, he ended up attending a Kentucky public high school, just like I did.

As we toured the new Manning Family Science Building he noted the enormous, beautiful lounge designed to facilitate collaboration.  Conversation turned to how I had told the architect in no uncertain terms that I didn't WANT network jacks, outlets, and cord guides on the tables to make it easy to plug in laptops.  Not only do I think such built-in technology will be outdated well within the building's lifetime, I object to the mere principle of making laptop use easy in this collaborative space.

Keith won the hearts and minds of the Woodberry science department when he complained about how every time he sees his students studying, they have their heads buried in a computer.  "They don't know how to talk to one another, to explain physics," he said.  

Keith was impressed that we had this section of the building devoted to human contact in the context of physics.  He liked even more our nearby open lounge with two large screens that can instantly connect to any laptop or tablet: this lounge says "Okay, if you're going to use a laptop to do physics, put the screen up where everyone can see it and talk about the physics."  We've found this arrangement to be wonderfully effective as our research team prepares their presentations.  

I'd encourage you to read his article in the Chronicle of Higher Education entitled "A Technological Cloud Hangs Over Higher Education."  He makes points about education technology much more eloquently than I could.  

29 November 2012

I'd do *anything* for a C... except put forth effort on homework, apparently.

Mr. Jacobs, I got a high D on the exam.  I know it's the next trimester, but I'm wondering, can I do anything to make the exam a C?  Can I do corrections, or extra credit?  I was really close to a C.

I actually got an email similar to this after our first trimester exam.  The student in question had done half-arsed homework since the start of school, and thus hadn't gotten the problem solving practice he needed in order to perform well on the exam.  I was rather surprised.  My juniors and seniors don't make such requests.  They know the score: be ready to perform on the exam, 'cause that's what goes on the report card.  I assumed that freshmen would understand that principle as well; my first reaction was to wonder to myself what kind of fluffy middle- and elementary-school teacher had given this student the idea that begging for a grade might even possibly be successful.  

But how to respond?  I'd like to reply simply "NO."  But the last thing I need is for him to complain to his mom, "I asked Mr. Jacobs what I should do about my exam, and he was mean to me!  He doesn't like me because I did badly."

So I put the response in terms of athletics, something like 

"Nope.  Did Episcopal High School ask the referee if there was anything they could do to change the score of the football game they lost to us?  I mean, maybe they could have another chance to catch that touchdown pass they dropped, or maybe they could get a second opportunity to tackle our running back.  Right?*

Learn from the experience.  Prepare better not just right before the next exam, but all trimester.  See if you can improve next time."

* If the student were, say, a Patriots fan, you could say something like "Can the Patriots ask the commissioner if they can do anything to let Wes Welker catch the wide open pass that would have won the Super Bowl?  I mean, they were so close..."  Yes, twist the knife with a very personal sports reference if you can.



25 November 2012

Review packet for conceptual physics trimester exam

My review assignment for the 9th grade first trimester exam is available here.  It's a fool's errand to expect high school students of any age to simply "study" for an exam without giving them some sort of clear guidance.  And just saying "go over your old tests and problem sets" doesn't cut it.

I've always been in favor of creating assignments -- whether as part of the course right before the exam, or as an extra credit opportunity outside of class -- that themselves serve as effective exam preparation.  Perhaps the most diligent students will do more than just exam review assignments, but you can ensure that everyone has at least done a measure of preparation.

My 9th grade cumulative trimester exam included "justify your answer" questions to the tune of 60% of the exam.  We've certainly practiced justifying answers all year on homework, but since these questions take time to answer, I've only been able to ask a few of these on each test.  I needed to prepare the students for increased scrutiny of their justifications on this exam.  

Yet, I didn't want to make a review sheet entirely of "justify your answer" questions.  A review sheet doesn't help unless it's taken seriously, and done RIGHT.  Without me standing there grading their responses, the students wouldn't have gotten appropriate feedback on a full set of "justify your answer" items.

So instead I mixed in some multiple choice with some more complex items.  Many of the review sheet questions were "multiple correct," in which multiple choices were listed but any number of them could have been a correct answer.  Some review questions were ranking tasks.  Others required a numerical answer, with units.  Here's a copy of the sheet.

Point is, each question came with a huge box for the answer.  At our nacho party before the exam a teacher could quickly mark each response right or wrong; sure, it's not a scantron, but grading this was simple because we could just look at what was written in the big box.

THEN, the students had a few days before the exam to correct their wrong answers with a clear justification.  No credit was awarded unless all the corrections were done; credit was awarded based on how many of these corrections were actually, well, correct.

Interestingly, I discovered that my freshmen were just as diligent (or sometimes not-diligent) as my seniors at doing the assignment and the corrections -- about 90% of the 9th grade class turned in the corrections on time and with appropriately attempted responses, in line with what I used to get from seniors.  But, my freshmen were way LESS diligent about getting the correction right.  A whole bunch of them wrote utter BS as a justification; they failed to collaborate with their classmates or with me.  And, most of these students made mistakes on the exam similar to their mistakes on the review packet.  The moral here:  practice doesn't make perfect, perfect practice makes perfect.  Thank you, Bosco Brown of my old marching band, for etching that saying into my brain's permanent storage.  

So for NEXT trimester's exam, I'm going to have to think of a way to make the students get the corrections right.  I'll let you know what I come up with; the suggestion box is in the comment section.

21 November 2012

First months of 9th grade conceptual physics: non-cumulative material

The diagram to the right shows a mirror.  On the diagram, draw a dotted line representing the normal to the mirror's surface.  Justify your answer.

Conceptual physics covered ray optics as the first topic of the year, back in September.  We then moved on to waves, and to circuits.  Part of the reason for this sequence was because these are easier topics than the typical kinematics and forces opening gambits.  I want freshmen to adjust to boarding school life a good bit before I hit them with the hard stuff.  But the more important reason for this sequence is that it's not sequential at all.

Kinematics and forces are mutually dependent topics.  It's important to internalize a definition of acceleration, which is used in every context imaginable.  Many force problems require kinematics to solve fully, and vice versa.  Then in whatever topic is next -- usually either energy or momentum -- it's assumed that students are comfortable with forces and motion.

This approach works fine with my seniors, because they usually are in fact reasonably comfortable with forces and motion by the time I move on; and because even the slower students have enough background that they can become comfortable.  Seeing forces and motion in new contexts provides extra practice and encouragement to review previously-discussed physics.

Freshmen, though, can be absent mentally for much of our first trimester.  It's not that they don't want to do well -- just the sheer overwhelming nature of life without mom and suddenly with 400 siblings, coupled with the rate at which they're growing physically and mentally, can mean they don't remember information day to day, or even minute to minute.  At the senior level, I'm assuming a level of personal organization, daily focus, and self-driven practice that freshmen can simply not fathom.  

We just gave our first cumulative trimester exam.  Some did great; some did terrible.  My point here is, how they did this trimester doesn't matter that much to the students' overall success in the course.  When we move on to kinematics after Thanksgiving, it won't make any difference at all whether they remember whether light bends toward or away from normal.  I don't HAVE to go over the exam, I don't have to review anything; we can move on to new and different stuff, knowing that everyone can understand it in isolation from the first trimester.  Then I can sprinkle some review in over the course of the next six months in preparation for the final exam in June.

The question at the top of today's post shows one of the "justify your answer" questions on the trimester exam.  We've learned that the "normal" is "an imaginary line perpendicular to a mirror's surface," and we've extended that definition in the context of refraction across a boundary between materials.  This question requires the student to recognize that the normal is not perpendicular to the bottom of the page, but rather to the optical instrument in question; the justification just requires some statement of the definition of normal.   

So why do I ask this question on the exam?  Because it is the ONLY question I can think of that is truly cumulative with other topics we will be teaching this year... when we get to "normal forces," we'll have seen the word before; and we'll even have seen an explicit situation when the normal is at an angle to the vertical.  

16 November 2012

Ray diagram practice sheet

Our conceptual classes need to be able to handle ray diagrams for converging and diverging mirrors and lenses.  That's really only six different diagrams:

  • Converging lens with an object inside the focal point
  • Converging lens with an object outside the focal point

  • Diverging lens*

  • Converging mirror with an object inside the focal point
  • Converging mirror with an object outside the focal point

  • Diverging mirror*

*The ray diagrams for diverging instruments are essentially the same no matter where the object is located.

Since this is conceptual physics, I don't ask them ever to use the thin lens or magnification equation to predict the location and size of an image.  We use ray diagrams, and then estimate distances based on the scale of the diagram.

In preparation for our exam, I handed out this practice sheet.  It presents the six different situations above, with an appropriately-sized mirror or lens, with focal and center points already labeled, and an object already drawn.  If a student can fill out this sheet correctly, he's ready for any question I can throw at him on the exam.

You can use these diagrams as a basis for your own questions or your own review sheet:  one idea is to change the focal length in the text of each problem, so that each student has a different focal length.  They will see, then, if they check their answers with each other, how the diagram can look the same but the different scale leads to different values for image and object distance.  

You can also copy the diagrams into "paint" or some graphic design and manipulation program.  That will allow you to change the diagram itself, perhaps by moving the object closer or farther from the lens, or changing the focal lengths.  Every time I need to construct a question about lenses or mirrors, I use these diagrams as a template to adjust whatever parameters I need to adjust.

GCJ

07 November 2012

Circuit building challenge -- a last-minute class idea that worked

On Sunday night when I went to bed, I had no clue what to do on Monday in conceptual physics.

The problems were that about a quarter of my students were going to be on a field trip, and that everyone had half-expected the headmaster to declare a free day every day since a week ago Wednesday.  I know enough to plan for the spontaneous one-day vacation by padding my lesson plans.  This year, though, the free day never came.  So I was out of material, and I couldn't just push on to a new topic with so many folks gone.

A day like this calls out for "enrichment."  Show a Julius Sumner Miller video.  Read a chapter of Surely You're Joking, Mr. Feynman.  Play "Crayon Physics Deluxe."  In the shower Monday morning, though, I had an even better brainstorm for freshmen in a circuits unit.

We have taught the freshmen how to deal with resistors in series and in parallel.  However, rather than make explicit calculations, we have taught the art of ESTIMATING the voltage across each series resistor, and the equivalent resistance of parallel resistors.  Why not, I thought, use my extra day of class to refine my students' estimation instincts?

On the board, I listed each of the resistor values I have available in my lab, including 130 ohms, 68 ohms, 57 ohms, 47 ohms, 41 ohms, 20 ohms, and 15 ohms.*

*Okay, really these are all KILOohm values.  I can't use 41 ohms with a 14 V battery, because that would dissipate almost 5 watts with quarter-watt resistors, and then Bob help us all.  But the 9th graders don't have to know that, since we're not measuring current!  I called a 41 kiloohm resistor a "41 ohm" resistor; all is peaceful, and all measurements are correct.

I handed each group of two a sheet that said:

You have a 13.8 V battery.  Build a circuit in which one resistor takes _____ V (+/- 0.3 V) across it.  When you have it correct, draw a diagram of the circuit in the space below.

In the blank I wrote a random number between 1.0 and 12.0.  They were allowed to use any combination of resistors.  At first they tried to make calculational guesses; finally they figured out that the best strategy was to just choose some resistors, try it, and then choose some new resistors.

What a wonderful exercise!  It was modeling at its purest... eventually each group got the intuitive idea of a proportional distribution of voltage across series resistors.
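That intuition is just a proportion; here's a minimal sketch of it (the 41-ohm and 20-ohm pairing is a hypothetical pull from the bin, not a prescribed exercise):

```python
# Series resistors split the battery voltage in proportion to their resistance.
def divide_voltage(v_battery, resistors):
    r_total = sum(resistors)
    return [v_battery * r / r_total for r in resistors]

# Hypothetical pull from the bin: 41 ohms and 20 ohms across the 13.8 V battery.
volts = divide_voltage(13.8, [41.0, 20.0])
print([round(v, 1) for v in volts])   # [9.3, 4.5]
```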

Next, they got a different sheet:

Build a circuit which has an equivalent resistance of _____ ohms (+/- 1 ohm).  When you have it correct, draw a diagram of the circuit in the space below.

This time, the number was between 3 and 100.  I made sure that the answers were never truly trivial, i.e. equal to one of the resistors in the box.  Some groups even figured out -- with minimal if any prompting! -- that they could get a 10 ohm resistor in series with a 20 ohm resistor by using two 20-ohmers together in parallel to get the 10 ohms.  This even though we never once discussed combinations of resistors in both series and parallel.  
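The two-20s trick that group discovered checks out against the standard combination rules; a quick sketch:

```python
# Series resistances add; parallel resistances combine via reciprocals.
def series(resistors):
    return sum(resistors)

def parallel(resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

print(parallel([20.0, 20.0]))                  # 10.0 -- two 20s act like a 10
print(series([parallel([20.0, 20.0]), 20.0]))  # 30.0 -- that 10 in series with a 20
```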

This day turned out even nicer when I found out that too many students hadn't completed their quiz corrections from last week.  The ones who were done with corrections got to play with circuits, earning candy and extra credit for each successfully built circuit; the others got to sit in the meeting room finishing corrections before attacking the circuits.



31 October 2012

Zen and the art of predicting voltage across series resistors

I have a circuit in which a 14 V battery is connected to a 15 ohm and a 25 ohm resistor in series.  What's the current through and the voltage across each resistor?

In my honors-level classes, I teach a mathematical solution using the VIR chart.  They calculate the equivalent resistance of 40 ohms; use ohm's law on the total circuit to get a current of 0.35 A; recognize that series resistors each take that same 0.35 A current; then multiply across the rows of the chart with ohm's law to get 5.25 V and 8.75 V across the resistors.
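The honors-level VIR-chart arithmetic, compressed into a few lines (battery voltage and resistor values are from the example above):

```python
# 14 V battery with a 15-ohm and a 25-ohm resistor in series.
V_total = 14.0
resistors = [15.0, 25.0]

R_eq = sum(resistors)                  # series: resistances add -> 40 ohms
I = V_total / R_eq                     # ohm's law on the whole circuit
voltages = [I * r for r in resistors]  # same current through each resistor

print(f"R_eq = {R_eq} ohms, I = {I} A")
print(f"V across 15 ohms = {voltages[0]:.2f} V, "
      f"V across 25 ohms = {voltages[1]:.2f} V")
```

Note that the two voltages add back to 14 V -- the consistency check the conceptual students will later exploit without any calculation at all.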

In conceptual, though, we don't use a calculator, and I want to minimize (not eliminate) calculation, anyway.  So we approach this problem slightly differently.  

Take a look at the worksheet we've used in laboratory to learn how to deal with series resistors.  Students must justify their answers to each question thoroughly.

1. Which resistor carries a larger current through it?

We start with a not-so-subtle reminder of the rule that each resistor carries the same current. 

2. Which resistor takes a larger voltage across it?

Now, I insist on an equation justification with ohm's law.  V = IR; I is the same for each because series resistors carry the same current.  By the equation, then, the larger 25 ohm resistor takes the larger voltage.

3. What is the equivalent resistance of the circuit?

Fact: the equivalent resistance of series resistors is the sum of the individual resistances.  Just add 'em up to get 40 ohms.

4. Calculate the current through the circuit.

Only now do we do a calculation.  We've learned that ohm's law applies for the total voltage and resistance in a circuit.  So, we simply say that I = V/R, with V = 14 V and R = 40 ohms.  This makes the current 14/40 amps. Yes, I allow that as the answer -- we aren't using calculators, and I don't want to mess with issues of significant figures and decimals.  Look, go ahead and haul me before the Klingon Death Tribunal for my sins.  I'm looking ahead several years.  When they see circuits in a senior honors class, they'll be able to call this 350 milliamps.  For now, I'm happy that they know which values to plug in to ohm's law.

5. Estimate the voltage across and current through each resistor in the chart below.  You may use fractions for current, but not for voltage.  Answers without units earn no credit.  

Here's the Zen.  Not calculating current through each resistor -- that's the same 14/40 amps through each.  The Zen is the estimate of voltage.  I'm NOT teaching them to multiply the resistance of each resistor by the current.  Nor am I teaching them to proportionalize the voltage according to the resistance.  Nope.  I'm just saying "estimate."  

All I am looking for is an answer that fits the facts they've already stated:  the voltage across each must add to 14 V, and the 25 ohm resistor must take greater voltage.  Some students will guess 2 V and 12 V; some will guess 6 V and 8 V.  I don't care.

6.  Now, set up the circuit, and MEASURE the voltage across each resistor, and across the battery.  Record your results here.

And now the Zen must be reconciled with reality.  They make the voltage measurements, and see how the voltage is distributed:  in this case, about 5 V and 9 V.  Some students got the estimate right -- they get candy.  I praise everyone else for a "good guess;" we look to see whether their guess was high or low for the 25 ohm resistor.  

I don't teach anything, still; instead, I hand out a new sheet, with different resistors.  They fill it in again, all the way from the beginning, steps 1-6.

As you may or may not suspect, by the second or third time most students are getting pretty dang close to the right voltages.  Some folks discover the proportionality rule for themselves.  Others just recognize that "close" resistors demand "close" voltages, and "far apart" resistors demand disparate voltages.  

To me, this process is teaching good physics.  I've taught the calculations for series resistors for ages, and I've been repeatedly frustrated by students who can make calculations well but can't answer simple conceptual questions like "which takes the bigger voltage" -- and by students who frustrate themselves because they predicted 8.25 V but measured only 8.22 V.

By the second day of filling out these sheets and measuring voltages, these freshmen are getting almost bored with the process.  That's the sign I'm looking for.  When I start to see faces saying "Gawd, not again with the voltage question..." that's when I know it's time to move on.  I give the faster guys a sheet with three, not two, resistors; then, after a multiple choice quiz, we move on to resistors in parallel.  

I'll teach parallel resistors the exact same way.

GCJ

26 October 2012

Circuits: first day introduction for 9th grade

In my 9th grade class, I'm expecting students to deal with simple questions and calculations with series and parallel resistors, though not with a single circuit combining both series and parallel resistors.  We are teaching series resistors first, for a week; we'll deal with parallel resistors later.

My approach to circuits is not pure modeling, but somewhat close.  I don't discuss the subatomic nature of current, or any sort of analogy for current and voltage in a circuit.  Rather, I define terms based on what we can see and measure:  "Voltage is provided by a battery," for example.  My goal is to get, as quickly as possible, to experimental work in which students build circuits and measure voltages.

My colleague Alex Tisch and I made this helpful handout for the first day.  You might want to print it out, or load it in an adjacent window, as you read this post.

I write the basic definitions and units on the board, and students copy these in preparation for a quiz the next day.  The definitions I use are in this file.  We quickly get to two demonstrations.  

First, I help the class make a prediction for what a graph of current vs. voltage at constant resistance should look like.  Since resistance is constant, V=IR suggests we should get a straight line graph.  

Next, we actually do the experiment.  I set my "decade box" variable resistor to 50,000 ohms, and vary the voltage from 2-20 V.  My ammeter measures the current, usually getting in the hundreds of microamps.  Notice that I've already set up reasonable axes in a blank graph on the handout.  I have each student in turn come to the front of the room, adjust the voltage, read the voltmeter, and read the ammeter.  I write a table of values on the board, and everyone graphs at his seat.  Sure enough, 7-8 data points form a pretty clear line.  

Okay, then... what if we hold the voltage constant and change the resistance?  We predict... V=IR says that as resistance increases, current decreases.  We decide as a class that a 1/x graph is more likely than a linear upward-sloping graph*.  The experiment and the students' graphs verify the prediction, this time setting the voltage at 10 V and changing the resistance between 1 thousand ohms and 100 thousand ohms.

* Though I don't use those terms... I sketch a 1/x graph and a y=x graph, and without naming them just ask:  which sketch is more likely?
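The predicted data for both demonstrations can be tabulated ahead of time. The 50,000-ohm setting, the 2-20 V sweep, the 10 V fixed voltage, and the 1k-100k resistance range are from the setup above; the particular step sizes in the sweeps are my own choices.

```python
R_FIXED = 50_000   # ohms, the decade box setting
V_FIXED = 10.0     # volts, for the second demonstration

# Demo 1: current vs. voltage at fixed resistance -- a straight line.
print("V (V)   I (uA)")
for v in range(2, 21, 3):                 # assumed 3 V steps across 2-20 V
    print(f"{v:5d}  {v / R_FIXED * 1e6:6.0f}")

# Demo 2: current vs. resistance at fixed voltage -- a 1/x shape.
print("R (ohms)   I (uA)")
for r in [1_000, 5_000, 10_000, 50_000, 100_000]:   # assumed sample values
    print(f"{r:8d}  {V_FIXED / r * 1e6:8.0f}")
```

The first table climbs linearly to 400 microamps at 20 V -- "hundreds of microamps," as promised -- while the second drops off steeply and then flattens, the signature of a 1/x graph.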

On day two, I actually hand out my definitions for series resistors.  We read them and practice using them with an actual circuit with series resistors.  By day three, I'm ready to give them each a breadboard and have them predict and measure voltages across series resistors.  

Using this approach, my students will be slower than most in figuring out how circuits work.  A more traditionally taught class* will build some competence more quickly.  But I'm convinced that after a few days of predicting, connecting, and measuring their own circuits, my class will catch up with and perhaps pass by traditionally-taught classes.  And they'll pick up parallel resistors more easily.  Why?  Because they're not calculating dispassionately.  They're making predictions for their personal circuit, predictions that they personally will test!  It's amazing how much more they care when mother nature is sitting there, ready to prove them right or wrong.

* i.e. one in which you do practice calculations repeatedly in class and for homework

GCJ

[P.S. If I get to it, I'll make a future post explaining how I have everyone calculate the voltage across each resistor without calculating.  Yes, I know, it's Zen.]


17 October 2012

Doppler demo -- with an onion bag

photo credit Graham McBride
Doppler effect problems in textbooks often involve trains and trumpeters.  That's all great, but I want to demonstrate the Doppler effect in my classroom, in which I have neither train nor trumpet.  Furthermore, when I've actually used a trumpeter playing a note out the window of my car, it's been hard to hear the effect -- I can't go much faster on the road here than 20 mph, and I'm not confident that my student trumpeter can maintain a constant pitch while turned sideways in the front seat.*  I need something better.

* Band veterans probably are ready to interject here the ol' wisecrack about the definition of the "minor second" being two high school trumpet players attempting to play the same note.  

My favorite Doppler effect demonstration involves twirling a speaker in a circle.  You get the speaker to play a constant frequency, attach it to a string... then when the speaker is briefly traveling toward the listeners, the pitch increases noticeably, especially compared with the decreased pitch that happens less than a second later.  The "wahh-wahh-wahh"   two-toned pitch is easy for everyone to hear.

The problem I've always had with that demo is the physical setup.  I attach a rather bulky frequency generator to a cannibalized speaker using alligator clips.  If I'm not extra careful, the alligator clips fail, and the speaker goes flying.

Nowadays, I use my iphone or ipad as a frequency generator with either the "freqgen" app or the "tone generator" app.  My question this morning was, how do I whirl my phone in a reasonably high-speed circle without the risk of breaking the phone?  Tying string to the iphone wasn't getting me anywhere.

My Chinese-teaching colleague Scott Navitsky gave me the key suggestion:  use an onion bag instead of string!  I asked our dining services for an onion bag, and I thank Jim Robertson and Aimee Carver for providing me with one.  I told the app to play a 600 Hz note, stuck the phone in the bag, twirled... and the warbling frequency was apparent to everyone nearby.
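For those curious how big the warble is: the standard Doppler formula gives the swing. The 600 Hz tone is from the demo; the one-meter bag radius and two-revolutions-per-second twirl rate are my assumed numbers, and 343 m/s is the speed of sound in room-temperature air.

```python
import math

f_source = 600.0    # Hz, the tone played by the app
v_sound = 343.0     # m/s, speed of sound in room-temperature air
radius = 1.0        # m, assumed length of the onion-bag tether
rev_per_s = 2.0     # assumed twirl rate

v_speaker = 2 * math.pi * radius * rev_per_s   # tangential speed, ~12.6 m/s

# Moving-source Doppler: f' = f * v_sound / (v_sound -/+ v_source)
f_toward = f_source * v_sound / (v_sound - v_speaker)  # approaching listener
f_away = f_source * v_sound / (v_sound + v_speaker)    # receding

print(f"heard: {f_away:.0f} Hz to {f_toward:.0f} Hz")
```

A swing of some tens of hertz either side of 600 Hz -- easily enough for the whole class to hear the "wahh-wahh" without any instruments at all.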

GCJ

08 October 2012

Questions about AP Physics 1 and 2


Georgia teacher Mark DiBois sends in the questions that virtually everyone in the country who teaches AP Physics B is dying to have answered:

Just read the e-mail about the sweeping changes coming in AP Physics B.  Now it's going to be Physics 1 and Physics 2.

Yup, this has been in the works for a while, now.  The College Board has set a date:  May 2014 will be the last AP Physics B exam, and the new exams will be released in May 2015.

Can you fill me in on what the premise behind this is?

Take a look at this post from 2010 and this post from 2011.  Then look at the official College Board home page for AP Physics B and click on the redesign link.

Then take a look at the "curriculum framework", which has now been released publicly.

I read the curriculum and it didn't make a lot of sense to me.

I know.  That's the glaring weakness in the College Board's well-intentioned, and generally quite successful, effort to take Physics B ever farther away from the show-me-your-algebra-skills content of the 1980s and into the era of explaining physics with words.  Once the education PhDs got themselves involved, all hope of a one-page topic listing vanished.  In the effort to be transparent and specific about the exam, the College Board instead has written an impenetrable document, one that must be parsed as carefully as the infield fly rule.*

*Which, as TBS announcers showed during the Braves-Cardinals game, is incomprehensible even to purported baseball experts.  Don't get me started on that one.  Suffice it to say that Sam Holbrook got the call exactly right, and the TBS booth should be disbanded on grounds of incompetence.

The best bet is to learn over the next few years how to use a low-pass filter to eliminate the buzzwords, the eduspeak, and the myriad "the student can engage in...." fluff.

Take a look at the example questions on page 131 of the curriculum framework.  Read them.  Be sure you can solve them.  Don't bother with the "targeted learning objectives" -- just read the questions, and look for patterns indicating how this test will be different from Physics B.  For example:


  • See the ranking task: not "what is the power dissipated by the 110-ohm resistor," but "rank the energy dissipated by each resistor in a fixed time."
  • It doesn't ask, "calculate the initial speed of the car," but rather "can the speed of the car be determined, and why or why not?"
  • Not "at what time on the v-t graph is the cart at rest," but "describe in words the motion of the marble represented in the graphs above."
  • Note the request in the free response question to "justify your answer qualitatively, with no equations or calculations."  [my emphasis.]


What's going to come out of all the blabber is a course which demands that students be able to express a clear understanding of physics topics using WORDS.  

Now, that doesn't mean you should stop teaching calculational physics!  My own perspective is that for AP-level students, calculation is a step toward serious conceptual understanding expressed verbally.  If they can explain correctly, they can calculate, too; but if they can't calculate, they can't explain, either.  You're going to have to spend the time to go beyond just complicated problem solving, and into making the students explain why they solved problems the way they did.  Good physics B teachers are already doing this, but are pressed for time.  How nice that the new exams aren't as broad as Physics B.

What will be the basic topics for each?

Look on page 152 for Physics 1, and on page 160 for Physics 2.  This gives the "concepts at a glance."  It's still way too much information for a quick overview; but it's a start.  Perhaps eventually I will try to digest the topics down to a one-page cheat sheet.  You need that cheat sheet -- as long as you understand the level of deep verbal reasoning required in each topic.

If each course is a year long... do you have to take both courses to get 1 college credit?

Ach, the old credit question.  It's a reasonable question, but it's as answerable as "what is the sound of one hand clapping?"  My answer: who knows.  Don't ever believe anything you hear about AP credit or placement policies unless it is in a personal communication with the registrar of the college you are considering.  

The best advice for our students: take AP Physics 1 as a first-time physics course.  That's how it's intended.  Do well on the exam.  Then, if you have a year of high school left, take AP Physics 2 or AP Physics C.  Don't worry about college credit until you're at college.    

The best advice for teachers is to place your top first-year physics students into AP Physics 1.  Then offer either AP Physics 2 or AP Physics C as a second-year course, depending on your interest, and on whether your students are sophisticated mathematically.  (Physics 2 does not require calculus or any math higher than Algebra I / geometry; Physics C requires fluency at college-level calculus.)

These are good courses, courses that I encourage everyone to try before dishing out the boilerplate "Aarrgh, change!" complaints.  Every day, we ask our students to adjust to new ways of thinking in order to tackle new and scary physics problems.  I think it only fair that we make the effort to jump into a new course that requires us to change the emphasis of our teaching a bit.  Right?

GCJ

03 October 2012

Using a lookup table for a conceptual physics lab

In conceptual physics, I want to do an experiment with a 60 Hz frequency generator and waves on a string.  The setup is shown in the picture to the right: the hanging mass is varied, varying the tension in the string and thus the wave speed and the wavelength.  We move the generator left and right until the standing waves are clear; then we measure the wavelength with a ruler.

I want to plot wave speed vs. wavelength, so that the slope of the straight-line graph will be the 60 Hz frequency.  

Problem is, I don't have an instrument to measure wave speed on the string.  In AP physics, I'd just show the students the equation v = √(T/μ), relating the wave speed to the string's tension and linear mass density, and let them figure out the wave speed for themselves.

Well, this is 9th grade conceptual physics. Most of my students have not completed algebra 1; most wouldn't know a square root if it bit them on the arse.*  I cannot expect my class to be able to plug into this formula.  But I still need them to be able to graph a wave speed, knowing only the value of the hanging mass.

*That happened to me once.

One thought I had was to create a quick app to make the calculation:  On an iphone or ipad, it could ask "What's the hanging mass?"  Then, using the linear mass density value I measured for the string before class, I could program the app* to spit out "the wave speed is 3000 cm/s."  Yes, I know I could do something like this in excel or on wolfram alpha, perhaps, but anything beyond a mass input in grams followed by a speed output in cm/s is too complicated for me.

*That is, if I knew how to program ios apps.  Hey, now, if I had access to a 1985 version of applesoft basic, I'd pwn all of ya in a programming contest.  And I'd have that "app" ready in five minutes.

Without the ability to make the program I want, I realized that I could go all 1940s and just create a lookup table.  Excel will do the calculation... in fact, I learned how to get excel to round the speeds to two significant figures.  So I put mass values from 5 g to 300 g in one column.  I made excel use the equation above to calculate the wave speed in units of cm/s.  
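The same lookup table can be built in a few lines of Python instead of Excel. The hanging-mass range and the two-significant-figure rounding are from the post; the linear mass density mu is an assumed value standing in for the one I measured before class.

```python
import math

g = 980.0        # cm/s^2, so speeds come out in cm/s
mu = 0.005       # g/cm -- ASSUMED linear mass density; measure your own string

def wave_speed(mass_g):
    """v = sqrt(T/mu), with tension T = m*g from the hanging mass."""
    v = math.sqrt(mass_g * g / mu)
    return float(f"{v:.2g}")   # round to two significant figures

# A few rows of the handout's lookup table, mass in grams -> speed in cm/s:
for m in [5, 50, 100, 200, 300]:
    print(f"{m:4d} g  ->  {wave_speed(m):8.0f} cm/s")
```

The `:.2g` format string does the two-significant-figure rounding that took some digging to coax out of Excel.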

Then I just printed the two columns.  I'll hand this out to each lab group.  I think it's totally reasonable to expect freshmen to use this table to relate the hanging mass to the wave speed... then to graph wave speed on the vertical, and the measured wavelength on the horizontal.

GCJ

27 September 2012

Just take the data! It works!

Left to their own devices, novice physics students get easily intimidated by laboratory.  They concern themselves so much with the precision of each measurement that they lose sight of the overall purpose of the lab.  How do we get students to just freakin' measure, for goodness' sake?

Sports analogy time:  I'm a baseball umpire.  When I work a game with players and coaches who have never seen me, they're sizing me up, figuring out what's a ball and what's a strike.  

Now, I generally work high school JV and 8th grade games.*  I call a wide strike zone -- a tight zone at a level where the pitchers can only aim the ball within a steradian or thereabouts leads to walk after walk, which is no fun for anyone.  So how do I establish this strike zone?

I don't talk about it.  I just call it and smile confidently.

Sure, I get a lot of stares, some bad body language, even a few verbal complaints in the first few innings.  But it's amazing how quickly everyone sees what I'm doing.  They adapt.  By the fifth inning, the batters are swinging more often; they aren't turning around when I call a strike on the outside corner.  And if they see me in a game a week later, they know what to expect right away.  I get a fun, action-packed, fast-paced game with few if any complaints -- all because I weathered the initial storm to establish my strike zone. 

* By choice, much of the time -- I've done varsity games, but at varsity they expect you to start perfect and get better.  The participants and fans at a lower-level game are so happy to see someone who seems relaxed but serious and competent that they don't complain or argue with me.  I've had even losing fans repeatedly thank me for my work as I'm leaving... that never happens at the varsity level.  

I start establishing the physics laboratory equivalent of the strike zone on the first day of class in conceptual physics.  We measure angles of reflection and refraction with a ray box, a protractor, and a mirror.  I call students to the front of the room to make the measurements and record data on the board; everyone else is in a seat making a graph as the measurements show up on the board.  Easy stuff, but we're learning:  we're learning that angles are measured from the normal, we're learning how to use a protractor, we're learning to graph as we go in lab... we're learning not to think too hard, but just to make the measurements and move on.

Then in the first student-run lab exercise, they measure angles of refraction in a plastic block with a ray box and protractor.  I go from group to group, cracking the figurative whip.  "Why are you arguing?  Just record the data point and move on."  "Why is there no graph?  No, you're not allowed to 'just graph it later,' graph it now."  "If he's so slow at making the measurement, change roles; you do the measuring, you make the graph."  "You've done fifteen measurements for angles less than 20 degrees.  How 'bout some large angles?" "You've just spent three minutes deciding between 29 degrees and 30 degrees?  Just pick one and be done with it."  

Okay, I freely admit -- *I'm* intimidating the students a bit.  A bit of fear won't hurt.  They're more afraid of me telling them they're doing something silly than they are of the scary lab equipment.  I'm so loud, too, that telling one group something that they can improve means the whole class hears.  Of course, I'm not a jerk here... groups find abundant and loud praise when their data starts looking good.  "Hey, what a great-looking graph!  See what happens when you just take the data quickly?"  I always maintain a smiling face, but I move things along with no tolerance for baloney.

Most of the class figures out by the end of the 90 minute lab period what makes me bark, and what makes me wag my tail.  I hand out candy to the group with the best graph; we deconstruct as a class what we're looking for during a data collection session.  Everyone leaves with a smile, and a bit of relief that loud guy is done shouting.

But then the second week... I don't have to shout at all.

We do the lens experiment shown in the picture above, in which students graph image distance vs. object distance for real images in a converging lens.  For the most part, they don't make the same mistakes they made in the first week; those who do start doing something silly often hear from their classmates before they hear from me.  Data collection is so fast that they're all working on the homework well before the end of the lab period.  

It all starts with weathering the storm in the stressful first lab session.  The same thing worked well in AP physics; so much so, that by mid-year I could just describe the experiment and then sit at my desk while the class got on with their data collection.  

21 September 2012

Justifying answers in 9th grade physics

The phrase "justify your answer" appears on AP physics exams all the time... and I would contend that this phrase should be a staple of everyone's physics classes.  Students usually struggle to understand just what depth of justification is necessary; often they even struggle with the idea that a justification isn't simply a restatement of the question.

I've written before about crystallizing the elements of an appropriate justification:  it should be written to be understood by an intelligent student at the same level of physics, and should include either an equation, a calculation, or a fact of physics.  

In my 9th grade conceptual class, we're avoiding calculations wherever possible.  And ninth graders are much less savvy than seniors about what might be considered a "fact of physics."  So I've had to adjust my approach a bit.

We use the Phillips Style of teaching ninth grade physics, in which we spend time in class highlighting relevant facts of physics in the text, then we quiz on those facts.  Students are allowed to use notes that they hand-wrote outside of class for some of the quizzes.  This style does two things for me.  For one, my students can tell you pretty quickly that "When light speeds up into a new material, the light bends away from normal."  Their recall of facts is solid.

For another, and more importantly, the statements we highlight in the text define the starting point for justifications.  We never have to go deeper than the facts we've learned; we must always start with one or more of these facts.  

The requirement:  Each justification must include at least two sentences.  The first sentence or two must be facts of physics, stated pretty much word-for-word from our class notes. Then, the facts must be related to the problem at hand with a separate sentence.

Consider a seemingly simple question:

A beam of light travels from air into a liquid.  The index of refraction of the liquid is 1.4.  Will the light bend toward the normal, or away from the normal? 
 
The justification must include two sentences, for example, like:

1. Light travels as fast as it can possibly go in air.  When light enters a material in which its speed decreases, the light bends toward the normal.
2. In this problem, the speed of light in the liquid must be less than in air, so the light must slow down, bending toward the normal.

Or, perhaps:

1. The higher the index of refraction, the slower the speed of light in a material.  When light enters a material in which its speed decreases, the light bends toward the normal.
2. In this problem, the speed of light in the liquid must be less than in air because the liquid has a higher n.  So the light must slow down, bending toward the normal.

Or the equation n=c/v could be used to show that the light slows down -- we put that equation in our notes.
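The n = c/v route amounts to one line of arithmetic. The n = 1.4 value is from the question; 3.0 x 10^8 m/s is the familiar speed of light.

```python
c = 3.0e8       # m/s, speed of light in vacuum (and very nearly in air)
n_liquid = 1.4  # index of refraction from the question

# n = c/v, so v = c/n: a higher index means a slower speed.
v_liquid = c / n_liquid
print(f"{v_liquid:.2g} m/s")   # slower than in air -> bends toward the normal
```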

Now, at first I'm being somewhat generous about credit.  I'm giving plenty of credit for reasonable attempts that use facts of physics from the notes; after all, these are freshmen, and I'm pleased at this point if they are not leaving problems blank.  And I'm fine for now if the answer is in the style required, but the logic is incomplete.  All I ask is that everyone write facts from the book, not facts they made up.

And that's fine progress for the first few weeks of general ninth grade physics.  One step at a time.

14 September 2012

A clever way to insist on a good initial effort on problems

The two extremes we try to avoid in teaching creative problem solving:

(a) The student who holes up in a quiet place for hours by himself hammering his head on the desk trying to solve a problem that should take all of 20-25 minutes

(b) The student who looks at the problem for 30 seconds, throws up his hands, and turns in a blank page saying "I have no idea, this is too hard."

Somehow we have to convince students to make a serious individual effort, but to stop and seek help when they get truly stuck.  How?  I've got my own techniques, which usually involve rules about how much time students must spend writing down their own ideas before collaborating.  Occasionally I've assigned work due on one day, then on that day granted a reprieve to allow further collaboration.  That works great; except, you can only do it once or twice before students stop doing the individual work, hoping for and expecting a reprieve.

Jen Deschoff, originally a Michiganer but now a North Carolinininian,  created a kick-arse approach to holding students accountable for their individual effort on problem sets.  In my Summer Institute that Jen attended, I pointed out the four essential elements of a well-presented physics problem:

* words
* diagrams
* equations
* numbers

There's hardly a well-solved AP-level problem anywhere which doesn't include at least three of these four elements.  I remember making a throwaway comment that, if I were pressed for time during the school year*, instead of grading a problem set carefully I might just look quickly for these elements in order to assign a grade.  

* Ed. Note: Why use the subjunctive?  You're a teacher.  When school is in session, you are pressed for time by definition.  Might as well say "If Ray Lewis could beat you up, then he wouldn't steal your lunch money, 'cause he's reformed now."

Well, Jen took that comment and ran with it.  She now grades many AP-level problems in two stages:

Stage 1: On the day the problem is due, students give the problem to another student, who looks for each of the four elements.  The students are NOT grading the answer at all!  They're just verifying that words, diagrams, equations, and numbers show up somewhere, and giving a grade for that.  Everyone keeps their original work.  

Stage 2: The NEXT day, everyone just turns in the problem, and Jen grades it for correctness as well as for the four problem solving elements.  

This approach fosters discussion among students -- they grade each other's initial work, and so I'm sure they comment on the correctness of the solution.  Someone who was previously stuck will likely see the hint he needs.  And now the guy who writes nothing because "it's just too hard" stands naked* before the class, seeing that he could have, should have, earned credit just by going through the problem solving motions.  (Jen says she has thrown** blank papers back to students.)  Next time, when he does go through those motions, he'll be surprised to find that physics isn't as hard as he thought.  

* figuratively
** literally