K-12 curriculum evaluation under review

How to interpret curricular effectiveness data to guide decisions and purchases

You’ve all read the effectiveness research that vendors include with their curricular products and programs to demonstrate their value, but not all curriculum evaluation studies are created equal.

Administrators can and should look at effectiveness research with a more analytical eye to gauge a program’s potential to improve learning. Here are five tips for evaluating the quality of the evidence that accompanies curricular products.

1. Look for well-designed curriculum evaluation studies

A “strong” study uses a randomized design in which participants are assigned at random, rather than hand-picked, to either an experimental group that receives the intervention or a control group that does not.

Another key indicator is the effect size: the quantifiable difference in outcomes between the two groups. In this case, it’s the learning gain one group attains as a result of the intervention. A larger effect usually indicates greater success, but not always.
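To make that concept concrete, here is a minimal sketch of one common effect-size calculation, Cohen’s d, which expresses the difference between group means in standard-deviation units. The scores below are invented for illustration and are not drawn from any real study; published evaluations may report other effect-size metrics.

```python
# A minimal sketch of computing an effect size (Cohen's d) from
# hypothetical test scores. The numbers are invented for illustration,
# not drawn from any real study.
import statistics

# Hypothetical post-test scores for students who used the program
# (treatment) and students who did not (control).
treatment = [78, 82, 85, 74, 90, 88, 81, 79]
control = [72, 75, 80, 70, 83, 77, 74, 76]

mean_t = statistics.mean(treatment)
mean_c = statistics.mean(control)

# Pooled standard deviation across both groups.
n_t, n_c = len(treatment), len(control)
var_t = statistics.variance(treatment)  # sample variance (n - 1)
var_c = statistics.variance(control)
pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5

# Cohen's d: the difference in group means, in standard-deviation units.
d = (mean_t - mean_c) / pooled_sd
print(f"Effect size (Cohen's d): {d:.2f}")
```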

Assistant Professor Betsy Wolf and her colleagues at Johns Hopkins University’s Center for Research and Reform in Education in Maryland analyzed studies carried out or commissioned by product developers. They found that developer-commissioned studies produced larger effects than independent evaluations. They called this the “developer effect.”

“Be wary of who’s conducting the study and who funds it,” Wolf says.

2. Spot some common flaws in effectiveness research

Wolf also urges educators to be wary of studies with the following characteristics:

  • Smaller sample sizes: Researchers theorize that studies with smaller sample sizes show larger effects (i.e., more favorable results) because small-scale studies are easier to control. “Any study with 50 kids, even 100 kids, is a very small sample size,” Wolf says. An adequate sample size helps researchers make better inferences about how a product will perform in the real world (see the power-analysis sketch after this list).
  • Developer-made measures: Researchers or developers may create a measure or assessment specifically for the study. Look for studies that use independent measures, such as the SAT or a standardized exam routinely administered by states and districts, Wolf says.
  • Nonexperimental design: A nonexperimental study does not manipulate variables or randomly assign interventions. Wolf says selection bias can threaten the validity of such a design. For instance, participants in a nonexperimental study may be more passionate about the program, and therefore more likely to implement the intervention faithfully, than participants in an experimental study, in which treatments are randomly assigned and results are compared against a control group.
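To see why a study of 50 or 100 students counts as small, consider a rough power calculation. The sketch below uses the statsmodels Python library; the effect size, significance level and power target are conventional illustrative values, not thresholds drawn from Wolf’s research.

```python
# A rough power-analysis sketch (using the statsmodels library) showing
# why a study with only ~50 students can miss, or exaggerate, a modest
# effect. The effect size, alpha and power values are conventional
# defaults chosen for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Students per group needed to detect an effect of d = 0.3 with 80% power
# at the usual 5% significance level (two-sided t-test).
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Students needed per group: {n_per_group:.0f}")  # roughly 175

# Conversely, the power of a 25-students-per-group study (about 50 kids
# total) to detect that same effect:
power = analysis.solve_power(effect_size=0.3, alpha=0.05, nobs1=25)
print(f"Power with 25 per group: {power:.2f}")  # well under 0.8
```

In other words, a 50-student study has little chance of reliably detecting a modest effect, so when such a study reports a large one, extra caution is warranted.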

3. Make sure the product fits

Sarojani S. Mohammed, partner at The Learning Accelerator, a national nonprofit, says administrators must decide if favorable results shown in a vendor’s study align with the district’s desired outcomes. “If a study measures engagement, but you need a solution to increase graduation rates, the research is not targeting the problem you’re trying to solve,” Mohammed says.


Sidebar: Do’s and don’ts of curriculum evaluation


Also, look for similarities in student demographics, characteristics of the school and district, teacher needs, available technology, and other factors. “Make sure the evidence being shown to you is relevant to your own population,” Mohammed says.

She suggests using Digital Promise’s Evaluating Studies of Ed-Tech Products.

Finally, review how the product was implemented. “Highly effective results” from a study may not carry over into an actual classroom if teachers struggle with implementation or dislike the program, however effective it proved in the study, she says.

On a related note, make sure that the study was conducted in an environment similar to the school or classroom setting for which it is being considered.

District leaders must revisit the district’s mission or vision statement to determine what they want a curricular program to achieve, says Angela Di Michele Lalor, senior consultant for Learning-Centered Initiatives and author of Ensuring High-Quality Curriculum: How to Design, Revise, or Adopt Curriculum Aligned to Student Success (ASCD, 2016).

“Without keeping that in mind, you could very well end up with a product that doesn’t get you where you want to be,” Lalor says.

4. Seek outside guidance

The Jefferson Education Exchange, launched in 2015 and overseen by the University of Virginia, has been working to identify the conditions that make edtech products effective. The nonprofit organization created several yearlong working groups of entrepreneurs, researchers, investors and district leaders to examine the role that efficacy research and evidence play in the procurement process.

In the next stage of this work, the Jefferson Education Exchange will crowdsource data from teachers about the performance of technology in their schools.


Sidebar: Understanding evidence types in curriculum evaluation


“We can then run an analysis that shows us that products A, B and C thrive in environments with factors 1, 2 and 3, and products D, E and F are completely failing when implemented in the same environments,” CEO Bart Epstein says. “That information can then be used to make much better decisions about what to buy, while not having to rely on salespeople and well-intentioned word-of-mouth.”

5. Have faith in your colleagues

Other education research platforms have emerged in recent years to assist educators with procurement. The U.S. Department of Education’s What Works Clearinghouse has served as a central source of scientific evidence about products and educational practices since 2002.

And at EdReports.org, each evaluation represents hundreds of hours of work by teams of four to five educator reviewers, says Executive Director Eric Hirsch.

The reviews evaluate products on usability and design, as well as on alignment with college- and career-ready standards. Educator-recommended materials must meet or partially meet expectations in three areas:

  • alignment with appropriate ELA, math and science standards
  • depth and quality sufficient to support student learning
  • ease of use for students and educators, including support for a teacher’s ability to differentiate instruction

“We like to see not only if the standard is there, but also if it is there in the right dosage and at the right time to help kids learn,” Hirsch says.

Emily Ann Brown is associate editor.

