Evidence about a product’s effectiveness can help district leaders decide which educational technologies or instructional resources to purchase and use.
Mathematica Policy Research created a guide to help educators determine which claims can be trusted and which are less reliable. The following is a summary of the types of evidence educators are most likely to encounter, from weakest to strongest:
Weakest: Anecdotal
Consists of personal descriptions or claims based on one person's experience or subjective impressions.
Example: My students love using product X. They use it for about 20 minutes every day. On average, my first-grade class is working at a middle-of-second-grade level.
Weak: Descriptive
Summarizes characteristics of participants and outcomes over a period of time without a comparison group. It is commonly found in marketing materials and news articles.
Example: An infographic displays positive results but includes no comparison group.
Moderately strong: Correlational
Identifies a relationship between an educational condition or initiative (such as using an educational technology) and a specific outcome (such as math test scores). It is a good starting point, but it cannot rule out other possible explanations for the differences in outcomes between users and nonusers.
Example: Middle school students participating in a blended-learning reading program showed gains in math skills up to nearly 50 percent higher than the national average in some cases.
Strongest: Causal
Usually conducted by independent evaluators and compares apples to apples by ensuring that the only difference between the group that participated in the program and a comparison group is the program itself.
Example: Any independent evaluation that uses a randomized controlled trial design.
Source: “Understanding Types of Evidence: A Guide for Educators,” Mathematica Policy Research, 2016.