Reactions to the Advanced Placement / AP Audit and How Teachers Fail
I use Google Alerts to monitor certain topics. For the last few weeks, I’ve had every news article and blog post containing the terms “AP” or “Advanced Placement” sent my way. The results show that the AP audit is a hotly debated topic with a vast number of concerns - and very few proposed solutions.
Reader comments to my previous posts about the AP curriculum [located here and here] bring up some points worth addressing regarding the state of Advanced Placement. The College Board’s audit has renewed discussion about the AP’s purpose, delivery and effects; we shouldn’t waste the opportunity. I’d like to lay out two reader comments separately and then address them together.
JMoon points out the common-sense (but underpublicized) reason for the audit: The College Board wants to make sure classes, teachers and schools that use the AP moniker fall in line with the AP mission, its execution, and the use of its funds. This is a necessary requirement of all franchises. Your local Burger King faces periodic inspections to determine whether it’s worthy of using Burger King’s corporate resources. If the College Board can do this and evaluate the quality and effectiveness of instruction in the process, even better. JMoon says:
“… [A]lthough I agree with you in terms of increasing accountability, what do you think of adding a component that compares a student’s test score with the class grade? The problem is when a student manages an A in the class and can’t earn a 5, or even a 4.”
The Science Goddess tells us that not every student takes AP courses to get college credit. She accurately sums up the motivations of a significant bloc of students and subtly exposes an important underlying point: not every college accepts AP credit, and since students don’t know exactly where they’re headed until their last April of high school, they can’t count on college credit as a guaranteed benefit of an AP course. She also shares her experience as an AP Reader:
“I don’t think that requiring all kids to take the test will say anything about the rigor of the class. As a Reader, I saw hundreds of test booklets each year that kids either left blank or wrote a lot of junk in—complaining how they were forced to take the test. I’m sure that the average score for their teachers suffered, but that doesn’t really give an accurate picture of how they ran their class or their qualifications to teach it.”
The theme here is defining and measuring the AP curriculum.
The College Board, as I wrote in an earlier post, is taking the right first step. Auditing syllabi will pluck a few rotten apples out of the barrel and delineate the AP curriculum so an AP course can more closely resemble its college-level counterparts. It also holds teachers with dubious syllabi [and presumably those with dubious skills] accountable and shows them that they need to seek help from peers or outside bodies to refine their course content and methods. Districts all over the country should encourage teachers to address their weaknesses and provide appropriate support for their staff. Will they? We’ll see over the next few months.
Objections to using AP exam performance to measure a teacher’s quality stem from a difference in philosophy about what testing means. Exams are designed to be a certification of a student’s knowledge. The ultimate measure of a teacher is their students’ performance, much like a company is only as good as the quality of the products it makes and the satisfaction of its customers. This doesn’t mean that a class needs to pull a collective 5 on an AP exam to show they were taught well, and it doesn’t mean that a class average of 2 means they were taught poorly. [n.b.: Just think of Joe Girardi, who was awarded National League Manager of the Year in 2006 for finishing with a 78-84 record managing a team with baseball's lowest payroll and questionable talent. Girardi's sub-.500 record looks lackluster, but given the circumstances he engineered an incredible success.]
There’s no hope of creating a truly comprehensive measure of a student’s achievements in an AP course. There are simply too many hands in the pot: the district, the state, the federal government, the College Board, etc. all have a say [directly or indirectly] in how a student is measured. Without a broad standardized grading process - a logistical impossibility at this point - we can’t combine subjective elements like course grades with the AP exam score in a meaningful [and fair or accurate] way. If we tried, the grade-inflation problems we already face would only be exacerbated.
That means we necessarily have to rely on the exam as a measure of achievement. This includes the students who blow off their AP exam for any of a host of reasons, as The Science Goddess has pointed out. If a student doesn’t take the exam seriously, the teacher has failed.
Again: If a student doesn’t take the exam seriously, the teacher has failed.
A teacher can’t guarantee that every student will try their best on the exam, but almost all of them should. There will be a few token failures born of apathy, and that can’t be avoided. But if a significant percentage of a teacher’s students don’t approach the exam with the seriousness of purpose it warrants - with a commitment to demonstrating their achievement and being accountable for that demonstration - that teacher has failed to impress upon his students the meaning of the exam.
Think it sounds harsh? Consider a football or basketball game - or any high school sport, team or individual. If a significant portion of a team doesn’t try at all and shows a selfish disdain for the event, how long does the coach keep his job? He will be dismissed, rightly, with all deliberate speed, because he has failed to instill a sense of purpose in his athletes. From chemistry to tennis, the issues of relevance and import are the same. Requiring every student to take the test will tell you nearly everything about the classroom environment in which they learned. How they perform on that test will tell you the rest about the degree to which they have mastered the skills set out in the curriculum.
Teachers and districts as a whole won’t have the will to mandate that AP students take their exams, whether through policy or by forging a financial solution. We’re nearing the need for a classification of AP students that separates those who choose to integrate accountability and evaluation into their purpose for taking the class from those who essentially audit the course but receive a grade at the local level. There’s a difference between those two types of students - and it’s a big one.
I’ll address funding the AP exam [and several other Advanced Placement issues] over the coming weeks. If you’d like to stay on top of this debate, you can subscribe to the e-mail newsletter in the left column [or at the bottom of this post] or subscribe to the RSS feeds through the links, also in the left column.
UPDATE at 04/09/07, 8.35pm:
Jay Mathews of the Washington Post offers some pertinent insight in Finding the Best Schools, Part III: High-Income Blahs. He nails the analysis of how too many AP teachers generate student indifference toward the AP curriculum when it ought to be - and easily could be - an exciting and challenging component of education. Mathews’ piece is a must-read. Here’s an excerpt:
Most high schools still treat AP as if it were nothing more than an impressive credential to add to the resumes and college applications of top students. This produces a lethargy, a case of the academic blahs, in the rest of the student body. Without knowing it, high school officials in these cases are saying to average students: “We know you are going to college. We can’t stop you from going to college. But we can and will keep you from taking courses and tests that will prepare you for college.”
“Part IV: Rationing AP” echoes, with plenty of data, many of the points I’ve written about over the last few weeks - take a look.
UPDATE at 04/10/07, 1.17pm:
Make sure you read Mathews’ new piece, Part V: Grade Grubbing.