31 August 2015

Electric fields and potentials demo in corn oil... and why the voltmeter didn't work.

Several years ago I shared Wayne Mullins' demonstration of electric fields and potentials.  He used two metal PASCO masses placed parallel to one another in water to produce a uniform electric field in the water.  The electrodes were connected to ~25 VAC.  The linear variation of potential with position between the plates can be demonstrated with a voltmeter; a couple of fingers spread in the water (done carefully -- read the post!) can show viscerally what a potential difference really means.

Today in my visit to TASIS American School in London, blog reader Scott Dudley showed me and his classes a similar demonstration.  He connected 2000 VDC to two small wires placed in a pool of corn oil.  A sprinkling of some grass seed between the wires showed these long particles lining up with the electric field lines, as you can see in the picture.  This demonstration provoked three thoughts from me.

(1) Why would the particles align with the electric field rather than along the equipotential lines?  Teacher Dallas Turner once suggested using goldfish in water between the electrodes to show the equipotentials.  The goldfish align perpendicular to the electric field so that no current runs through their bodies due to a potential difference.  So what makes grass seeds different?  I expect that the seeds become slightly polarized... then they experience a torque because they're dipoles in a uniform electric field.  That torque aligns them with the field: the positive end is forced as close as possible to the negative plate, and vice versa.  (Right?)
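For the curious: the torque on a dipole of moment p in a uniform field E has magnitude pE sin(theta), where theta is the angle between the dipole and the field.  A quick sketch -- the numbers are invented, since I certainly haven't measured a grass seed's dipole moment:

```python
import math

def dipole_torque(p, E, theta):
    """Torque magnitude on a dipole of moment p (C*m) in a uniform
    field E (V/m), with theta the angle (radians) between p and E."""
    return p * E * math.sin(theta)

# Made-up numbers: the torque is biggest when the seed lies
# perpendicular to the field, and vanishes once the seed aligns.
p, E = 1e-12, 1e4
print(dipole_torque(p, E, math.pi / 2))  # maximum torque
print(dipole_torque(p, E, 0.0))          # aligned: zero torque
```

That zero-torque aligned state is why the seeds settle along the field lines.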

(2) I suggested that Scott use a voltmeter to map the equipotential lines, as I do in Wayne's demo.  So Scott gamely stuck the probe in the oil... and nothing.  No reading.  Why not?  Because, as Scott immediately pointed out to me, the meter must draw a small test current in order to measure a voltage.  The oil is a strong insulator, thus not allowing the meter to make the measurement.  The demonstration works fine when I do it in tap water, because tap water is quite conductive.  Of course, Greg... that's why I need water in the first place rather than just the air in between the two electrodes.  And that's why the "field mapping" lab exercise is generally done with conducting paper.

(3) The AP Physics 2 exam does not deal with traditional field lines.  Instead, field mapping is done using "vector fields" in which a multitude of arrows indicate the magnitude and direction of the electric (or magnetic or gravitational) field at various positions.  The grass seed can help develop an understanding of the vector field representation.  Each individual grass seed is pointing in the correct direction; now, draw each seed, but draw it bigger or smaller depending on the strength of the field at that position.  Nice.

Thank you to Scott for hosting me at his school.  I met a number of clearly excellent teachers; I wish I could have spent more time with everyone there.  Perhaps I can convince my school to send me to London a second time... :-)

GCJ

22 August 2015

What the science teaching community can learn from NBC's soccer coverage

The best sporting events need no over-the-top, carnival barker-style salesmanship in order to draw a large audience; physics, or science in general, similarly needs no hype to make it interesting.  Bear with me as I give a brief tutorial of American sports coverage.  I'll get to the physics teaching connection at the end.

For decades, baseball was the only American sport that mattered.  Coverage included the dulcet voices of Vin Scully and Al Michaels, who took the game seriously, even though they didn't take themselves too seriously.  They knew that baseball, interwoven with a century of history, would sell itself -- their job was to tell the story of that day's game.

Baseball lost its title of "America's Pastime" to football not because of underpromotion, but because football is far better suited to television and 21st-century lifestyles.  When FOX took over national telecasts in the late 1990s, they tried to change baseball's downward trend in popularity with wrestling-style promotion: "NOW!!!  PUJOLS VS LESTER!!!!  LIVE!!!"  If anything, FOX has turned people off by misrepresenting their product.  Baseball is not suited to such treatment.

On the other hand, the championships at Wimbledon and the Masters golf tournament explicitly reject the typical "loud men screaming and laughing at each other" coverage that is typical for an American sporting event.  The tournament hosts insist upon a serious, nay reverent broadcast; yet they draw extraordinary television ratings, and tickets are next to impossible to come by.  Funny, that.

Then there's soccer.  For most of my life, what little soccer coverage I could see tried too hard to sell sizzle.  "Americans don't know about this game, and it's a boring game, to boot," said the producers (who also knew nothing about soccer).  So the announcers talked down to us: "Now, when I was little, my coach called this big box here the 'mixer.'  You're supposed to put the ball in the mixer to score goals."*  The pregame shows tried to explain the rules of the game again and again in excited voices, rather than to tell the story of the game's history.  The broadcast ignored everything but items deemed of direct relevance to Americans, who had no soccer history anyway.  It was all so, so condescending to even the mildly knowledgeable fan.  No wonder no one watched: those who were serious soccer fans felt talked down to, and those who weren't certainly didn't fall for the artificial sales job.

* Not kidding -- approximate quote from 1994 World Cup coverage.

Let's examine that paragraph in a science teaching context.  Rewrite, substituting science for sport.

Then there's science.  Too many science education programs try too hard to sell sizzle.  "Kids don't know about science, and science is boring, to boot," say the people providing education grants, who too often know little about science or science teaching.  So the teachers, program directors, and presenters talk down to students.  "And without science, we couldn't have iPhones, and you couldn't twitter to your friends!  Isn't science great?"  Classes are taught facts and equations, without connecting those facts and equations to experiments that students can themselves perform.  Topics are ignored unless they can be made immediately "relevant to everyday life," even if said relevance is so forced as to be a camel through the eye of a needle.  It is all so, so condescending to even the moderately intelligent student.  No wonder people get turned off: smart, otherwise interested students feel talked down to, and those who aren't already interested don't fall for the artificial sales job.

Soccer coverage has changed.  In 2008, ESPN tried something different.  They put on Europe's premier soccer tournament, one that did not involve a single American.  They named Bob Ley, perhaps the only prominent American broadcaster with a bona fide soccer background, as the studio host.  They gave up trying to force the use of American-accented commentators, and instead hired the best, most experienced soccer commentators in the world -- even if that meant hiring foreigners.  They told the story of the tournament on its own terms, not attempting to adapt to an American audience or an ignorant audience.  The point was: if soccer was so great, this major tournament, which drew hundreds of millions of viewers in Europe, would sell itself.

And it did.  People watched, and talked about the games and the stories.  The drama was authentic, the audience was captivated.  

Now, NBC broadcasts the English Premier League in the US using the same principles.  They tell the story of the league from a true fan's perspective, trusting the audience to keep up.  Just like Apple doesn't have to oversell the iPhone, just like Google doesn't need to hype its search service, NBC recognizes that the Premier League is a product that needs no enhancement, as long as the commentary is smart and authentic.  NBC's ratings are through the roof, despite the lack of on-air shouty salesmanship.

Science sells itself, as long as the teacher is good.  There's a reason that so many of you reading this are interested in science -- and it's not because someone screamed at you that science is FUN!  While many of us do some crazy-arse things in our classrooms, it's not the craziness that wins our students' hearts and minds.  It's the subject we teach, it's the way we communicate our deep knowledge of the subject, and it's the way we relate to our students about our subject.  Problems come when teachers *don't* know their subject or can't build relationships with the class.  Feigned enthusiastic salesmanship doesn't make those problems go away.

So please, folks... let's encourage science teaching in which the teacher takes science seriously.  Let's encourage expert teachers, both experts in subject and experts in relating to students, to do their thing the way they see fit.  Let's encourage more folks who are experts in one of these skills to become expert in the other.  

But let's not oversell science as a discipline.  There's no need.  We have an amazing product that a lot of people want.  We just have to manage the queue and provide outstanding customer service.

02 August 2015

A lesson in percentages

I'm hardly the first writer to kvetch about how the dang kids these days -- or any day, really -- don't have any sort of number sense.  My kid is working on his summer math assignment, which includes a page of percentage problems.  The questions themselves are not just reasonable, but important.  "What is 31% of 75?" and "28 is 25% of what number?" are to mathematical literacy what the offside rule is to soccer -- not everyone understands, but you'd dang well better understand if you want to be considered fluent.

My complaint, therefore, is not that Milo's class is studying the wrong thing.  It's how they approach the problems.  He is required to do the problems the same way I was taught 30-odd years ago: set up a proportion, translating English to mathematics.  In this parlance, "of" means to multiply, "is" is an equals sign, "percent" means to make a fraction over 100.  No calculator is allowed.  And thus, Milo and his classmates usually get the right answer.  They often don't notice when they do a routine backwards and say that 31% of 75 is 220, but they usually get the right answer.

I've no doubt that there is some sort of validity to this pedagogy, especially if some sort of national exam is going to require precise answers to such questions with no calculator.  But consider: beyond the test, what do we really want functional high school students and adults to be able to do with percentages?  I personally would prefer my class to be skilled estimators.  What's 31% of 75?  It's about 25, or maybe 24, because 31% is just about a third.  And I would prefer that no one in my class or family* rejoin "well, actually, one-third is 33.3333 repeating percent, so you're wrong."

* For their own sake, so they don't get thrown in the scorpion pit

Me, I'd teach this topic like a video game.  

Start with obvious reference percentages: 50% is a half, 25% is a fourth, 33% is a third.  And use them intuitively to solve problems quickly.  For example, I'd set up a competition: everyone gets 30 seconds to do, say, five no-calculator problems with just these obvious percentages.  Score something like one point for getting "close" in a way defined by the teacher, and an additional point for being right-on.  Guessing is encouraged, and essentially required by the time limit.  Students are practicing making intelligent guesses, and refining their guesses.

Once the class is getting bored with the obviousness, do tricksier problems.  Now the additional point would be awarded to the student closest to the right answer.  Don't demand any formal work or method, but discuss and share methods.  After doing, say, "What is 66% of 210," one student might suggest they knew that the answer had to be more than 105, because 66% is more than half.  But perhaps someone else noticed that 66% is twice 33%, and so is two-thirds -- and perhaps someone else explains how they estimated 2/3 of 210 without painstakingly dividing by three and multiplying by two.  
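If you like, the scoring scheme is simple enough to automate.  Here's a minimal sketch of the point system above -- the close_frac tolerance is my own invented stand-in for however the teacher chooses to define "close":

```python
def score_guess(guess, answer, close_frac=0.15):
    """Score one estimation-game guess: 1 point for landing within
    close_frac of the right answer, plus a bonus point for being
    right on.  (Hypothetical scoring; the teacher defines 'close'.)"""
    points = 0
    if abs(guess - answer) <= close_frac * answer:
        points += 1
    if guess == answer:
        points += 1
    return points

# "What is 31% of 75?" -- exact answer 23.25
print(score_guess(25, 23.25))     # close enough: 1 point
print(score_guess(23.25, 23.25))  # right on: 2 points
print(score_guess(220, 23.25))    # routine done backwards: 0 points
```

Note how the backwards-routine answer of 220 scores nothing -- the game rewards number sense, not procedure.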

What does this have to do with physics?  I use essentially this same method when teaching circuits to freshmen in conceptual physics.  They learn to estimate, not calculate, voltages across series resistors and currents through parallel resistors.  And, by unit's end, they have a better sense for the answers than do seniors who have been taught to calculate.

I understand math teachers' obsession with routine and algorithm.  When weak students -- students without any innate number sense, and without any serious interest in the subject -- simply need to get exact answers, well, algorithm can be a friend.  I'm telling you, though, an estimating approach can work wonders.  Even weak students can make progress by guessing and checking.  I've seen it happen.  If that culminating test is multiple choice, even the weak students will be able to pick out correct answers from a lineup.  

And, perhaps if a page of problems didn't represent a multi-hour sentence to proportions, cross-multiplication, and hand arithmetic, such students might develop an interest in the subject.  Or at least a competence with it.  

29 July 2015

Why do I teach: a rather prickly response

My school posed the entirely reasonable, in context, question: Why do you teach?  Each faculty member was asked to respond in a narrative and post to our faculty development site.

The folks who asked the question are friends; they're not just colleagues, they're the best teachers at the school.  As you see below, I take offense at the question, but not offense toward the people asking it.  They didn't know, were unlikely to know, why they pushed my buttons.  

Me, I still don't understand why teachers are expected to have kumbaya moments around the fireplace in regard to their employment, yet e.g. bankers, videogame programmers, and professional athletes are not.  Nevertheless... and with advance recognition that I very much enjoy my job and my school...

Why do I teach?

This answer is going to be quite a bit prickly.  I know this exercise was intended in good faith and without ulterior motives.  Yet, the question itself hits a major nerve.  


Short answer: None of your business. The question is offensive to me, though I know you intended no offense.

Most folks in academia are aware that seemingly every woman physicist has a story about someone in her life – family, professors, colleagues – who made an extraordinarily rude statement suggesting that she is in the wrong profession for a girl.  “You’ll never get a husband as a physics major,” or “Why don’t you take this lower-level class, the girls usually need a bit of catch-up,” or, famously and recently, “Three things happen when [girls] are in the lab: you fall in love with them, they fall in love with you, and when you criticize them they cry.”

What folks don’t often recognize is a different social problem faced by physics teachers, especially male physics teachers.  Many of us have stories of family, administrators, and colleagues who don’t quite understand what a person with a degree in a “real” or “useful” field – and a man, to boot – is doing as a teacher. 

In interviews I was asked the question, “They say those who can, do, but those who can’t, teach.  So what made you decide to teach?”  The automatic assumption was that a person with physics and engineering degrees must be a crappy engineer indeed if he must resort to teaching for a living.  I’ve been asked repeatedly over the years, “why don’t you use your physics degree?” as if I’m wasting said degree by teaching.  And, of course, “Gregory Charles, we paid all that money to send you away to an elite college, and you’ve decided to teach?  What are you thinking?!?”  Somehow, my sister with a theater degree from Dartmouth never got those sorts of questions when she began her teaching career.

And these questions are from well-meaning people.  I’m not even including the outright condescending discrimination from female colleagues and administrators who have no use for men in education.  I mean, obviously, a man who teaches is either gay, perverted, or on a power trip to become an administrator, right?

Then there’s the Soviet undertones of the question.  “Let’s all share why we’re so happy living in a Workers’ Paradise.”  What, this person isn’t happy?  He wants more of a say in how paradise is run?  He mentioned that perhaps the education establishment subjects us to vacuous dogma rather than encouraging engagement in substantive, intellectual discussion with creative, professional craftspeople?  He’s obviously a troublemaker sowing dissent and discord.  Take him to Lefortovo. 

I guess I’d like to rephrase the question… of course smart, interesting people choose to teach.  That’s obvious.  The question should instead be directed to the less intelligent or less dedicated folks, and should say “How in the hell are you a teacher?”

Why do I teach?  Because it pays the bills, I’m good at it, and I usually enjoy my students and my colleagues.  That’s all you need to know.  That’s all I’m willing to state publicly.  Actions speak louder than words: I encourage you to judge my commitment to my profession not by this sort of essay, but rather by the feedback from two decades of students, from colleagues at our school, and from fellow physics teachers around the country. 


17 July 2015

Rule 3 of teaching: Your students don't listen to you. (And a non-ohmic light bulb.)

Rule 3 of teaching, as described in the 5 Steps to a 5: AP Physics 1 teacher's manual:  Your students don't listen to you.  Don't worry, they don't listen to me, either.  

I hear regularly from physics and non-physics teachers fretting over the material they "cover" in class, over the precise content and activities they do.  I suggest taking a holistic view of a course as a whole, recognizing that students will rarely remember a specific classroom event more than a week or so later.  The College Board has gone over-the-top with this philosophy, prioritizing "science practices" and "big ideas" over content.  Their heart is in the right place.  An understanding of experimental physics isn't about "spit back the procedure, analysis, and results from this experiment you did six months ago."  It's more about, "here's a new situation that you've never seen; how would you answer a well-formed question with an experiment?"

More on-point, did you do an experiment measuring the resistance of a light bulb this year in AP Physics 1?  Did you show that the bulb's resistance changes depending on the voltage across it?  Did you have students design and carry out an experiment to determine whether, and to what extent, the bulb obeys Ohm's law?

Some of you are hanging your heads in shame, because you didn't -- and this very experiment showed up as free response problem 2 on the 2015 AP Physics 1 exam.  My big, friendly point is, don't worry about trying to match experiments with what might show up on the exam.  Not only is it an impossible fool's errand, but it doesn't even matter.

My class did this exact experiment in January.  I even made it what education professors would call an "open inquiry" exercise.  Toward the end of the circuits unit, during which we had always treated electronic devices as having constant resistance, I pointed out that some books suggest that a light bulb under some conditions might not obey Ohm's law.  It was each lab group's job to test the validity of those books' contention.

Oh what wonderful results we got!  Most groups figured out quickly and independently to graph voltage as a function of current.  You can see one of the graphs in the picture at the top of the post.  The curve is apparent as soon as you smack a ruler down on the page.  The slope varied from 51 ohms at about 2 V, to 77 ohms at about 8 V.  The bulb is non-ohmic, with a 30%-plus difference in resistance across a useful range of voltages.
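In case you'd like to check the arithmetic, here's a sketch using hypothetical data points consistent with our graph -- the slope of V versus I between nearby readings gives the local resistance:

```python
def local_resistance(v1, i1, v2, i2):
    """Estimate resistance (ohms) from the slope of a V-vs-I graph
    between two nearby data points (volts, amps)."""
    return (v2 - v1) / (i2 - i1)

# Hypothetical readings consistent with the numbers in the post:
r_low = local_resistance(1.5, 0.0300, 2.5, 0.0496)   # slope near 2 V, about 51 ohms
r_high = local_resistance(7.5, 0.0800, 8.5, 0.0930)  # slope near 8 V, about 77 ohms
print(r_low, r_high)
print((r_high - r_low) / r_high)  # about a third -- the 30%-plus difference
```

A straight-line (ohmic) bulb would give the same slope everywhere; the changing slope is the whole point.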

Since we did such a good job with this experiment, one might expect that my students kicked arse on 2015 free response problem 2.  Um, nope.  My students performed far worse on that problem than on the others.  The College Board just released some class statistics, showing which quartile our students fell into on each of the free response problems.  On problems 1, 3, 4, and 5, the vast majority of my class performed in the top 25% nationally.  On problem 2, more than a third of my students were in the bottom half.  

Back in 2003, the same sort of thing happened in reverse.  I remember kicking myself because that was the first year ever when I didn't do an optics-bench-style experiment with my class; sure enough, that was the year when problem 4 was a laboratory-based optics bench question.  Turned out, though, my students did fine, indistinguishably from other years when I had sometimes done the very experiments that showed up on the exam.

The precise lab exercises you do don't matter.  And that's because of Rule 3. Don't take this rule as a complaint, or as the "get off my lawn" ramblings of an old man carrying on about the danged kids these days.  It's just a well-verified observation.  I see it as my job to be sure that my students succeed despite Rule 3.


12 July 2015

Do NOT allow questions during tests... repost

Never even allow a student to ask a question during a test or quiz.  This is perhaps the most important piece of teaching advice I can give.  

I am utterly convinced that your school could raise its SAT scores by 20 points across the board, and its AP math/science scores by a third of a grade, simply by having the math and science departments never allow questions during tests.

It is a dirty little secret that no one ever discusses... so many teachers talk their students through difficult problems.  No wonder those students struggle when they're faced with standardized tests, when their friendly lifeline is taken away.

I hear people argue with me, saying that they answer questions on tests because they want to help the students succeed.  Well, so do I -- and I take offense to the ridiculous connection that refusing to answer test questions equates to not caring about students.  I want my students to succeed over the time frame of their physics course.  That doesn't mean they must ace every individual test or quiz.  It is crucial that we allow our students to make mistakes, and then to learn from those mistakes.  I judge my success by how well students perform at year's end, not by whether one student got one question right on one test.

Here is the critical post explaining my approach, including some help with the issue that I know many of you already brought up, that "I could never get away with this at my school."  :-)


11 July 2015

AP Physics 1 scores 2015 -- more people passed Physics 1 than Physics B, and other commentary.

By now those of you who taught the inaugural year of AP Physics 1 have seen how your students did.  Not like the old Physics B, eh?  Let's talk about the reasons for the ostensible precipitous decline in the scores.

Firstly, the raw score necessary to earn each AP grade has increased, by about 5-6% across the board.  Trinna Johnson and Trevor Packer sent a letter to the "AP Teacher Community" discussion group describing the score-setting process in tremendous detail.  In that letter, they revealed the grade cutoffs, which I've converted to percentage of available points necessary for each grade:

AP PHYSICS 1 GRADE     Percentage of available points on the test
        5                             71%
        4                             55%
        3                             41%
        2                             26%

The old physics B exam, typically, had cutoff scores of 65%-50%-35%-25%.  It takes more correct answers to pass now.
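The cutoff table converts a raw percentage to a grade with a simple lookup.  A sketch, using the 2015 Physics 1 cutoffs above -- note that a 65%, which typically earned a 5 on Physics B, earns only a 4 here:

```python
# The 2015 AP Physics 1 cutoffs from the post: (minimum percentage, grade).
CUTOFFS_2015 = [(71, 5), (55, 4), (41, 3), (26, 2)]

def ap_grade(pct, cutoffs=CUTOFFS_2015):
    """Convert a raw-score percentage into an AP grade using the
    published cutoffs; anything below the lowest cutoff is a 1."""
    for minimum, grade in cutoffs:
        if pct >= minimum:
            return grade
    return 1

print(ap_grade(65))  # 4 -- a 5 under the old Physics B cutoffs
print(ap_grade(40))  # 1 under old Physics B cutoffs? No: here, a 1 was below 26%
```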

The AP Physics 1 exam, though, is considerably more difficult than Physics B.  There are no pity points available for simple calculations.  Synthesis is prized over recall.  There's no room to hide -- the questions probe for explanations rather than answers.  Due to the higher raw score cutoffs, we would expect fewer of our students to pass even on an exam of difficulty equivalent to AP Physics B.  Now we have two effects that combine to reduce overall exam grades: a harder exam AND higher cutoff scores.

And finally, consider the population of students who took AP Physics 1 this year.

In 2014, 90,000 students worldwide took the AP Physics B exam; of these, 60% passed, and 14% earned 5s.  (That itself is a bit down from previous years, because the number of students taking AP B doubled over the previous decade.)  That works out to about 13,000 students earning 5s on Physics B, and 55,000 passing.

In 2015, 170,000 students took the AP Physics 1 exam -- just about double the population who previously took AP B.  Part of the intent of the redesign was to increase the pool of students who could handle an AP physics course.  Physics B was intended as a second-year course, and was so broad that it did not encourage serious, deep understanding.  Physics 1 is in fact for first-time advanced physics students.  Many schools appropriately replaced their "honors physics" courses with AP Physics 1.  Good.

But this twofold expansion in the student -- and teacher -- pool means a much broader range of student -- and teacher -- ability.  Many of the 80,000 additional students taking the exam were intrinsically weaker students.  And a bunch of teachers who were not experienced with college-level physics, or who were simply not yet capable of teaching college-level physics, were nevertheless thrown into an AP 1 course.  No wonder only 4% of the country earned 5s; no wonder only 37% passed.

Let's look at raw numbers now, not percentages.  On AP Physics 1, about 7,000 students earned 5s.  This is about half as many as earned 5s on Physics B.

But 63,000 students passed the AP Physics 1 exam -- that's considerably more than the 55,000 who passed AP Physics B  the previous year.  Even on a more difficult exam, even with higher standards for passing, more students passed this year than last.  Of course... because Physics 1 is intended as a first year course.  Sure, a bunch of folks tried this exam who weren't ready (or whose teachers weren't ready).  So what.  Thousands of folks who were ready just fine tried the new exam, and found out that they could do it.
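A quick sanity check of the headcount arithmetic above, with the figures rounded as in the post:

```python
# Total exam takers: Physics B in 2014, Physics 1 in 2015.
b_total, p1_total = 90_000, 170_000

# Physics B 2014: 14% earned 5s, 60% passed.
print(round(0.14 * b_total), round(0.60 * b_total))    # about 13,000 fives, 55,000 passing

# Physics 1 2015: 4% earned 5s, 37% passed.
print(round(0.04 * p1_total), round(0.37 * p1_total))  # about 7,000 fives, 63,000 passing
```

Half as many 5s, but roughly 8,000 more students passing overall.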

As teachers forcibly learn that physics is about more than plugging numbers into equations, as students figure out that they can't write a bunch of baloney and expect to earn credit, AP Physics 1 scores should eventually improve.  It's on us to adjust our teaching to help these scores improve.

09 July 2015

Open Lab 2015 -- Last Call

The Manning Family Science Building, site of the Open Lab
This year's Open Lab will run Sunday to Tuesday, July 19-21, at Woodberry Forest School. Participants have requested activities such as:

* discuss materials and laboratory ideas for conceptual physics; 

* do quantitative demonstrations and experiments with waves;

* show some options for computer simulations beyond the typical Phet; 

* set up each of the five AP Physics 1 free response problems experimentally while brainstorming short- and long-form lab exercises -- for all levels of physics class -- with these problems.

My goal is to do all these things and more, with the opportunity to improvise further with whatever equipment we have in the storeroom or the hardware store.  

The best part of any gathering of physics teachers is the shop talk and sharing.  To that end, on the first night the Woodberry Forest science department is providing dinner at my house.  Everyone's welcome; I just need to know soon if you'd like to come.  Send me an email, and I'll get you on the list.

Festivities start around 4:00 (pm, of course) on Sunday afternoon in my classroom, followed by dinner around 6:30.  Monday we'll work 8:30-4:00 or so; Tuesday we'll be done at noon.  Feel free to attend for all or part of this time.  No fees, no hassle; an email telling me your plans is the complete registration process.  You'll need to find a place to stay... I recommend the Holiday Inn Express in Orange.  Several folks will also be staying at the Doubletree in Charlottesville and carpooling up.

GCJ

06 July 2015

Mail Time: Is it "fair" to evaluate students on the quality of their homework?

As I was going through emails in preparation for the 2015 open lab (please let me know if you'd like to attend!), I found this:

I saw in your "Less is More" article that homework [was at that time] 25% of the total grade for your classes.  I was considering making homework a much lower percentage but mostly a "good faith effort completion" grade, since I've found it difficult to justify to myself grading students on their knowledge of the material while they're still in the process of learning it, rather than an exam where they are reviewing the material.  What are your thoughts on this?

That's an important question for any physics teacher to be able to answer.  Remember that physics teaching is art, not science -- there are few hard truths of physics teaching, only ideas that work or do not work for each of us.  Would pointillism have worked for Picasso?  Could Rodgers and Hammerstein have written about singing cats?  Maybe, maybe not; yet pointillism and Cats are indisputably successful things that other artists should at least be aware of.  I have my answer to the homework question, one that has worked wonders for me and many others.  Some good teachers may disagree with me on principle, or may choose not to use my approach in their teaching environment.  Yet everyone should acknowledge, whether they use it or not, that my approach does in fact produce considerable success for me and my students.  

To the question, then:  When you grade homework, you're not grading students on their knowledge of the material; you're grading the skill of problem solving with new concepts, along with students' diligence in seeking the correct answer.  Fact is, homework (or any work) is worthless if it's not done carefully with a full effort toward getting the correct answer and approach.  "Good faith effort completion" sounds great, but ask yourself -- if you graded students' homework carefully, would they do a better job?  Would they perform better on tests?  

My answer is, I grade homework carefully and thoroughly on a regular basis, especially early in the year.  I grade such that the students expect that their work will be judged, such that the students do their work to the highest standard they can.  And therefore, my students don't have to study for tests.  And, they perform well on those tests, because they've practiced carefully.  The one year when I didn't carefully grade homework, many students did a half-arsed job on the homework, then were upset when their test performance was poor, then complained to all who would listen that physics was too hard and that I was mean and unreasonable in my expectations, that I didn't understand my students.

As for the "fairness" issue, is it fair for the football coach to choose a starting quarterback based on his performance in practice?  I mean, practice is when players are supposed to develop their skills, right, and only the game really matters?  Yes, but everything's a test; everyone is evaluated all the time.  If you grade homework regularly -- even every other night, even only part of one problem, even on a 0-1-2 scale -- then you'll have enough data that one bad performance on something a student couldn't grasp quickly will be a mere blip.  

I now count homework and daily quizzes as half of each student's term grade, with the other half coming from monthly tests.  Not surprisingly, there is a very high -- nearly 1.0 -- correlation between homework and test grades.  I am virtually certain that this correlation exists independent of how much you grade, or how much you count the grade.  The goal, therefore, is to create an incentive mechanism so that students do everything they can to get the homework right.  Then test and exam performance will take care of itself.  

05 July 2015

AP Physics 1 Lab Ideas: ticker-tape machine to determine acceleration of a cart (and preparing students for open-ended labs)

Tape timer from Sargent-Welch
The College Board has released an official lab manual for AP Physics 1 and 2.  It's important to understand that, though they call it a teacher's manual for laboratory investigations, the experiments listed are not "required" for the AP exam.  Your choice of experiments should be based on your interests, available equipment, etc.  

The manual might be 348 pages long, but no worries.  Just read the 30 or so pages that describe the actual suggested experiments.  These are as gold to the AP physics teacher, except more practically useful than gold.  I don't suggest you use these 30 pages exactly as described; rather, use the activities they describe as the basis for a couple of ideas in your course.

One of the activities in the book asks students to determine whether a wind-up toy car moves with constant acceleration.  What a great question!  Acceleration by itself is a difficult enough concept, but then understanding what is meant by "constant" acceleration is tougher still.

However.  Were I to ask that open-ended question early in the year, right after finishing the kinematics unit, I'd get such poor lab performance as to make the activity worthless.  "Open inquiry," as the College Board calls it, is a waste of time if your students aren't ready for it.  An open-ended problem followed by incessant questions about what to do, followed by frustration on your part and the students', until you finally just give them step-by-step directions -- that isn't really what's intended by "open inquiry."  

Students must be carefully prepared throughout the year for open-ended laboratory exercises.  Early on, you need to teach some laboratory skills that they can eventually fall back on when it's time to answer a truly free-form question in lab.  For example, using a motion detector to measure distance, instantaneous speed, and acceleration is a skill that students must be taught.  Similarly, it's important to give your class practice with photogates, spring scales, video analysis, ammeters and voltmeters, and other basic equipment.  I'm not suggesting one of those beginning-of-the-year "let's measure a bunch of random stuff and talk about error" exercises; I'm suggesting that you teach such skills in context.  Do demonstrations with this equipment.  Have students use the equipment to verify the answers to homework problems.  Do a long-form lab with graph linearization where they must use equipment for multiple-data-point collection.  

Want a practical example of the difference between an early-year experiment and a late-year experiment?  

Here's the early-year version:  I have students release a PASCO cart from rest on an inclined track.  They use a tape-timer* to get the position of the cart 60 times per second.  Graphing the cart's position every 6th dot** makes a position-time graph with 0.1 s precision.

* You can buy such a timer from PASCO for $180, or you can get the cheap version from Sargent-Welch for $17.  The cheap version works fine.

** Why only every 6th dot?  Because 6/60 of a second is exactly 0.1 s, so the dots you graph fall at the tidy decimal times 0.1 s, 0.2 s, 0.3 s, and so on.  Trying to graph every 1/60 of a second leads to numerical confusion, a graph that takes ten times as long to make, and incorrect accelerations.  Thanks to Curtis Phillips for pointing this easy trick out to me after I had struggled with the graphical analysis of this experiment for nigh on two decades.

Next, I have students take the slope of two tangent lines to find two instantaneous speeds.  The change in speed divided by the time it took for the speed to change is the cart's acceleration.  You can see here the homework assignment that students fill out.  I determine the "theoretical acceleration" by measuring the angle of each group's track with an angle indicator, and using g sin θ.
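For the curious, the whole analysis boils down to a few lines of arithmetic.  Here's a minimal sketch in Python -- the position data and the 5° track angle are invented for illustration, not taken from any real run -- of finding two instantaneous speeds from slopes and comparing the resulting acceleration to g sin θ:

```python
import math

g = 9.8  # m/s^2

# Hypothetical tape-timer data: position (m) every 0.1 s (every 6th dot at 60 Hz),
# invented so the cart's true acceleration is about g*sin(5 degrees)
times = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
positions = [0.000, 0.004, 0.017, 0.038, 0.068, 0.107]

# Approximate two instantaneous speeds with slopes of chords
# centered on t = 0.1 s and t = 0.4 s (stand-ins for tangent lines)
v1 = (positions[2] - positions[0]) / (times[2] - times[0])
v2 = (positions[5] - positions[3]) / (times[5] - times[3])

# Acceleration = change in speed / time for the speed to change
a_measured = (v2 - v1) / (times[4] - times[1])

# "Theoretical" acceleration from the measured track angle
angle_deg = 5.0
a_theory = g * math.sin(math.radians(angle_deg))

print(f"measured a = {a_measured:.2f} m/s^2, predicted a = {a_theory:.2f} m/s^2")
```

Students do this by hand on graph paper, of course -- the point of the sketch is just that the "theoretical" comparison is a one-line calculation once the two tangent slopes are in hand.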

This experiment takes a full 90-minute lab period plus a night's homework assignment to complete.

Then the late-year version:  In the last month of the course, I assign the homework problem with a direct measurement video that you can read here.  Everyone can view the video, then determine for himself how he's going to check for constant acceleration.  No one really, truly remembers the tape timer experiment from October.  However, they now have a reasonable understanding of what acceleration is, and they have used multiple methods of finding instantaneous speeds all year.

This assignment provoked such a wonderful in-class discussion.  Some folks compared the change in speed over two time intervals.  Others used four time intervals.  Some made a velocity-time graph for four or eight data points and looked for a straight line.  An argument ensued as to what the distance scale on the video was; a student pointed out that it didn't matter, as arbitrary distance units work just fine to answer the question.  The one student who confused speed and acceleration discovered his mistake quickly and authentically, without my having to say a word.
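The interval-comparison method those students used can be sketched in a few lines.  This Python snippet uses invented position readings (not data from the actual video) in arbitrary distance units, and checks for constant acceleration by seeing whether successive interval velocities change by equal amounts:

```python
# Position readings at equal time steps, in arbitrary distance units --
# invented data for illustration; arbitrary units are fine, as the student noted
positions = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
dt = 1.0  # one time step, in arbitrary time units

# Average velocity over each successive interval
velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]

# If acceleration is constant, successive velocity differences are all equal
accelerations = [(v2 - v1) / dt for v1, v2 in zip(velocities, velocities[1:])]

print("velocities:", velocities)        # [1.0, 3.0, 5.0, 7.0, 9.0]
print("accelerations:", accelerations)  # all equal -> constant acceleration
```

With real video data the velocity differences won't come out exactly equal, which is precisely what sparks the in-class argument about how close is close enough.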

In other words, at the end of the course, my class not only could perform a complicated, creative experimental task... they had the skills to discuss the merits of different methods.  That's the holy grail of introductory physics laboratory work.  But, searching for the literal Holy Grail requires a long, difficult journey filled with peril.  Don't expect to hold the grail immediately -- guide your class through the journey.  Can't I have just a bit more peril?