Higher standards don’t necessarily mean higher test scores in K-12
Dan Hamlin, a postdoctoral fellow in the Kennedy School of Government at Harvard University, and Paul E. Petersen have examined data to see what impact states lowering the bar on academic proficiency have had on student achievement.
When they were created, the Common Core standards were intended as consistent benchmarks for student learning across the country. But public opinion turned against them, and many states either revised the standards or opted out entirely.
This withdrawal led many to fear a “race to the bottom,” Hamlin says. Together with Paul E. Petersen, senior editor of Education Next, he examined the data to see what impact states lowering the bar on academic proficiency had on student achievement.
“The political advantages of a lower hurdle are obvious,” Hamlin says. “When it is easier for students to meet a state’s performance standards, a higher percentage of them will be deemed ‘proficient’ in math and reading. Schools will appear to be succeeding, and state and local school administrators may experience less pressure to improve outcomes.”
Your study compares state proficiency levels to those of the National Assessment of Educational Progress?
Yes. The first part of the report looks at the bar for proficiency across states. It’s not examining how well students are performing in a given state, but how high states set their bar for proficiency. Why? Because when No Child Left Behind was initiated and states were required to start testing students, states could set their own proficiency bars.
For example, a student in Connecticut might be labeled proficient because Connecticut set the bar low. The student didn’t have to perform especially well to earn that label.
Then the same student moves to Massachusetts, where the bar is set a little higher. That student would no longer be proficient in math and reading, even though he or she is performing at the same level. That creates a problem, as you might imagine, when you’re trying to compare students across states.
So we wanted to see whether a state has a low or a high bar. We ended up using the NAEP as a reference because it is widely considered to set a high bar for proficiency and it draws a nationally representative sample of students; the sample is also representative at the state level.
For example, again looking at Connecticut, NAEP will have a math and reading proficiency rating for grades 4 and 8 that is representative of Connecticut students. You can compare that proficiency rating to what the state of Connecticut has set for its proficiency rating in state-administered exams.
Now, if Connecticut says 75 percent of its students are proficient in math and reading in state exams, but then you cross over and look at the NAEP and see that it’s maybe only 50 percent, that’s a pretty big discrepancy. It would suggest to you that the state is setting a lower bar for proficiency. That was really the primary impetus behind the analysis.
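The comparison Hamlin describes can be sketched as a simple calculation. The figures below are made-up illustrations in the spirit of his Connecticut example, not data from the study:

```python
# Hypothetical percent-proficient figures (NOT data from the study):
# each state's own exam result vs. the NAEP result for that state.
state_exam = {"Connecticut": 75, "Massachusetts": 55}
naep = {"Connecticut": 50, "Massachusetts": 52}

def proficiency_gap(state):
    """Percentage-point gap between the state's own exam and NAEP.

    A large positive gap suggests the state sets a lower proficiency
    bar than NAEP does.
    """
    return state_exam[state] - naep[state]

for name in state_exam:
    print(f"{name}: {proficiency_gap(name):+d} points")
```

On these invented numbers, Connecticut’s 25-point gap would flag a low state bar, while Massachusetts’ 3-point gap would suggest its bar sits close to NAEP’s.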
The report discusses mastering content standards and proficiency standards. Can you explain the difference?
To a large extent the two things are interrelated. By virtue of setting the content standards you’re saying students should be proficient in these things. The difference essentially is that content standards are saying, “Here’s what students should know at fourth grade in math.” Then the proficiency rating says, “OK, are the students proficient in that content or not?”
There’s a distinction between what is a content standard and what is a proficiency standard; that’s just a simple way of describing how the proficiency bar can differ from the content standard. I agree with you that it’s confusing, because the two are inextricably linked in a lot of ways.
Part of your report is about how states turned against Common Core, yet in reality many states kept it intact but gave it a different name.
There’s some evidence to suggest that when you ask parents or the public at large what they think about Common Core, you get a much more negative response than when you ask a question like, “What do you think about having uniform national standards?”
In other words, it seems to be the case that the public isn’t necessarily against uniform national standards as much as they seem to be against the Common Core brand. It seems that politics to some extent is playing a role in how the public feels about Common Core.
It’s not entirely clear to me that everyone understands what Common Core is. A lot of people have soured on the brand given all the politics surrounding it. As a result, we saw many states pull back from the Common Core brand to create their own standards. But some follow-up analyses of these states raise the question of what they have actually changed.
The preliminary results from some of these analyses suggest that many of the changes have been merely cosmetic; states don’t appear to have made sweeping revisions. I would underscore the word “preliminary” there, but at least in the short term, that’s how it looks.
It sounds similar to how some people were against “Obamacare,” but if you called it the “Affordable Care Act” they had a more positive impression.
Yes. It’s funny how that works. I do think it’s pretty clear that because Common Core was so frequently in the public sphere over the past couple of years, and often presented in a negative way on both the left and the right, attitudes have really soured on it.
They are against the brand rather than the content?
Yes. The actual organized constituencies that are against Common Core understand what uniform national standards are and have their own rationales for why they’re against them.
The public generally has a less clear idea of what a uniform national standard is. When they hear it explained, they seem to be much more positive on it than when they just hear the Common Core name, which has all these negative connotations attached to it now.
Although states have raised their standards, you say that hasn’t translated into higher levels of student test performance.
The first part of our analysis finds that states have dramatically increased their proficiency bars. That’s a really interesting thing. They have raised the bar to very close to NAEP’s, and the two had been quite far apart as recently as 2009.
But if you look at test score growth on the NAEP, there’s no relationship between that growth and how much a state has strengthened its proficiency bar, that is, how close it has come to NAEP.
Now, we didn’t control for a host of other factors that might also be affecting that relationship, but at least, based on this simple relationship, there’s zero evidence for it.
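The “simple relationship” Hamlin refers to can be sketched as a correlation between two state-level quantities. Everything below is hypothetical, including the numbers, chosen only to show the shape of the check, not to reproduce the study:

```python
import math

# Hypothetical state-level data (NOT from the study): how much each state
# raised its proficiency bar, and that state's NAEP score growth.
bar_increase = [10.0, 25.0, 5.0, 18.0, 12.0]
naep_growth = [0.5, 0.4, 0.6, 0.3, 0.7]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(bar_increase, naep_growth)
print(f"correlation: {r:.2f}")
```

A coefficient near zero across all states would mirror the study’s finding of no relationship; the study’s actual analysis, as Hamlin notes, is this kind of simple association without controls for other factors.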
Were you expecting a correlation?
Many of the folks who wanted to see states raise their proficiency bars thought that it would translate into improved student achievement. If that’s not coupled with improvements in teaching and learning, I don’t know how simply raising the bar will translate into better outcomes.
Your research shows we haven’t begun the feared “race to the bottom.” But rather than being satisfied, shouldn’t educators see it as a reproach of sorts?
I think that’s right. On the one hand, states have raised their bar for proficiency. That’s a good thing. But on the other hand, states haven’t figured out how to translate a higher bar for proficiency into better student achievement. It’s not even clear that a higher bar for proficiency by itself can be translated into higher student achievement.
We just don’t know yet.
Maybe when we do this analysis again in two years we’ll have a better answer. The point is that now that these high proficiency bars are in place, which is good, how do we ensure that students do a better job of reaching them?
If you look at the latest NAEP results, the test growth is flat. That’s rather disappointing.
Tim Goral is senior editor.