What Is a “Minimal Detectable Effect” and Why Does It Matter?

You have seen the impact with your own eyes. Teachers are using the product. Students seem more engaged. District partners are telling you the program is making a difference. Your team has stories, usage data, testimonials, and real confidence in the work.

Then the evaluation comes back and says there was “no statistically significant impact.”

That can be frustrating, especially for EdTech leaders and nonprofit executive directors who need evidence to support renewals, fundraising, grant applications, board conversations, or district partnerships.

But here is the part that often gets missed: “no statistically significant impact” does not always mean “no impact.” Sometimes, it means the study was not sensitive enough to detect the impact that was actually there.

That is where Minimal Detectable Effect, often called MDE, matters. For education organizations, understanding MDE can be the difference between useful evidence and an expensive report that was never built to answer the real question.

The Jeweler’s Scale vs. the Truck Scale

Imagine trying to weigh a gold coin on a scale built for semi-trucks. The coin has weight. It is real. But the scale is too blunt to register it. Now imagine using a jeweler’s scale. Suddenly, that same coin can be measured with precision.

That is the simplest way to think about Minimal Detectable Effect.

MDE is the smallest amount of impact a study is designed to reliably detect. In general, if a program produces a smaller effect, the evaluation needs to be more sensitive. That usually means a stronger design, better comparison groups, more students, better baseline data, or more targeted outcomes.

If the program’s effect is large, a smaller study may be able to detect it. If the effect is modest, which is common in real-world K–12 settings, the study needs to be built carefully enough to see it.
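To make the scale analogy concrete, here is a rough back-of-envelope sketch in Python of how the detectable effect shrinks as a study adds students. It assumes a simple two-group comparison with equal group sizes, a 5 percent significance level, and 80 percent power; the group sizes are illustrative, not a recommendation for any particular design.

```python
import math

def approximate_mde(n_per_group, z_alpha=1.96, z_power=0.84):
    """Rough minimal detectable effect, in standard deviation units,
    for a simple two-group comparison with equal group sizes, a 5%
    significance level (z = 1.96), and 80% power (z = 0.84).
    Illustrative only: clustering, attrition, and covariates all
    change this number in real K-12 designs."""
    return (z_alpha + z_power) * math.sqrt(2.0 / n_per_group)

# With 100 students per group, only effects of roughly 0.40 standard
# deviations are reliably detectable; with 400 per group, roughly 0.20.
print(round(approximate_mde(100), 2))  # ~0.40
print(round(approximate_mde(400), 2))  # ~0.20
```

The pattern to notice is that halving the detectable effect requires roughly four times as many students, which is why careful design choices matter so much when real-world effects are modest.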

This is where many organizations fall into what we call the Academic Trap.

The Academic Trap is the belief that rigorous evidence always requires a long, expensive, multi-year study. Because of that assumption, leaders either delay evaluation until “someday” or run a smaller study without fully understanding what it can and cannot detect.

Both options create risk.

A rapid-cycle evaluation can still be rigorous, but it has to be designed with the right level of sensitivity from the beginning.

Why MDE Matters for EdTech Companies

For EdTech companies, MDE is not just a technical detail. It is a renewal and growth issue.

If your product produces meaningful student gains, but the study is too small or too noisy to detect them, the evaluation may come back inconclusive. That creates a frustrating situation for your sales, marketing, and customer success teams.

You may have a product that is helping students, but not the right evidence to show it.

This matters because districts are increasingly expected to make evidence-based purchasing decisions. The U.S. Department of Education describes ESSA evidence tiers as a way to classify interventions based on the type and quality of research behind them.

That means your claims need more than enthusiasm. They need credible evidence.

A well-designed evaluation helps you avoid gambling your renewal strategy on a study that is too blunt to detect your product’s actual effect. It gives your team a clearer answer to the question district leaders are already asking: “Does this work for students like ours?”

Why MDE Matters for Nonprofits

For nonprofits serving youth, MDE can be just as important.

Many nonprofit leaders know their programs are making a difference. They hear it from students, families, teachers, and district partners. But funders and boards often need more than stories. They want evidence.

The challenge is that nonprofits often operate with limited evaluation budgets. That makes it even more important not to waste resources on a study that was unlikely to produce clear findings from the start.

If the MDE is too high, the study may only be able to detect very large effects. But many valuable education programs produce smaller, meaningful gains over time. Those gains may matter deeply for students, but they can disappear statistically if the evaluation is not designed with enough sensitivity.

MDE helps nonprofit leaders ask better questions before the study begins:

  • What size impact are we expecting?
  • How many students are included?
  • Are we measuring the right outcomes?
  • Are we using baseline data to account for where students started?
  • Are we comparing students in a fair and credible way?

Those questions can protect your budget, your board report, and your fundraising story.

The Connection Between MDE and Statistical Power

MDE is closely connected to statistical power.

Statistical power is the ability of a study to detect an effect when one truly exists. A commonly used benchmark in impact studies is 80 percent power, meaning the study is designed to detect a specified effect size 80 percent of the time under certain assumptions. Materials from the Institute of Education Sciences (IES) describe this type of standard in the context of education impact studies.

Put simply, power is about whether your study has enough strength to find the signal. MDE is about how small that signal can be before your study misses it.

This is why “no significant impact” needs to be interpreted carefully. A null finding from a strong, sensitive study means something very different from a null finding from a small, noisy, underpowered study.

One is evidence. The other may be a measurement problem.
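As one illustration of how power and MDE fit together, the widely used Python library statsmodels can solve for the smallest standardized effect a simple two-group study could detect once sample size, significance level, and power are fixed. The group size below is an illustrative assumption, and a real district evaluation with students clustered in classrooms or schools would need a fuller calculation.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the smallest standardized effect (Cohen's d) detectable by a
# simple two-group comparison at a 5% significance level and 80% power.
# 150 students per group is an illustrative assumption.
power_analysis = TTestIndPower()
mde = power_analysis.solve_power(
    effect_size=None,  # the unknown we are solving for
    nobs1=150,         # students in the treatment group
    ratio=1.0,         # comparison group of equal size
    alpha=0.05,        # significance level
    power=0.80,        # conventional power benchmark
)
print(f"Minimal detectable effect: about {mde:.2f} standard deviations")
```

If the effect you realistically expect is smaller than the number a calculation like this returns, a null finding tells you more about the study than about the program.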

How to Lower Your MDE Without Waiting Years

The good news is that organizations do not always need a massive, multi-year randomized controlled trial to get useful evidence.

There are practical ways to improve the sensitivity of an evaluation while still working on a rapid-cycle timeline.

One way is to reduce noise. That may mean focusing on students, schools, or implementation sites where the product or program was actually used with fidelity. If a study includes too many low-use or inconsistent-use cases, the true effect can get diluted.

Another way is to use baseline controls. In K–12 settings, students rarely begin in the same place. A strong evaluation should account for where students started before the program or product was introduced. Pre-test or prior achievement data can make the comparison more precise.
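To see why baseline data helps, a rough rule of thumb is that when a pre-test or prior achievement measure explains a share of the variation in the outcome, the detectable effect shrinks roughly with the square root of the unexplained share. The sketch below is a simplified student-level illustration; the R-squared value is a hypothetical assumption, not an estimate from any particular dataset.

```python
import math

def mde_with_baseline(unadjusted_mde, r_squared):
    """Approximate how a baseline covariate (such as a pre-test) shrinks
    the minimal detectable effect: the MDE scales with the square root
    of the outcome variance left unexplained. Student-level sketch only;
    clustered designs need a fuller calculation."""
    return unadjusted_mde * math.sqrt(1.0 - r_squared)

# Hypothetical example: a prior-achievement score that explains half the
# variation in the outcome turns a 0.30 SD detectable effect into ~0.21 SD,
# without adding a single student to the study.
print(round(mde_with_baseline(0.30, r_squared=0.50), 2))  # ~0.21
```

In other words, good baseline data can buy sensitivity that would otherwise require recruiting many more students.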

A third way is to choose targeted outcomes. If your intervention is designed to improve phonics skills, measuring broad ELA performance may be too blunt. If your program is designed to reduce absenteeism, attendance outcomes may be more relevant than a general academic measure.

The broader lesson is that evidence is most useful when the study is designed around the decision being made. That is the heart of rapid-cycle evaluation.

Clarity Is the Shortcut

Minimal Detectable Effect may sound technical, but the core idea is simple.

Before you trust an evaluation to tell you whether your product or program worked, you need to know whether the study was capable of detecting the kind of impact you expected to see.

For EdTech companies, that can protect renewals, strengthen sales conversations, and turn product claims into credible evidence.

For nonprofits serving youth, it can support grant applications, donor reporting, board confidence, and stronger district partnerships.

And for school districts, it can bring more clarity to the decisions that affect students, budgets, and long-term strategy.

You do not have to wait years to learn whether your work is making a difference. You need a study designed to see the difference clearly.

Want to see what this kind of evidence can look like in practice? Request a sample MomentMN Snapshot Report and experience how Parsimony turns existing district data into clear, decision-ready impact evidence.


Experience an Easier Way To Get Rigorous Evidence of Your Impact

Have questions? Want a demo?

Book a call with Dr. Amanuel Medhanie, who will answer your questions and show you around the Snapshot Report service.