Snapshot Report Interpretation Guide

Page 1: The Executive Summary

The first page provides a high-level view of your performance metrics and overall impact.

Identification & Support

  • Product/Service Name: Located in the top right corner, confirming the specific service evaluated.
  • Order Number: Located in the bottom left corner of page 1 and the bottom right corner of page 2. Please reference this number whenever contacting us with questions.
 

The Overall Score

The Overall Score (0% to 100%) represents the percentage of desired student outcomes that were positively and significantly impacted by your product/service.

Outcome Charts

These charts group your results into two categories:

  • Targeted Outcomes: Outcomes your product/service is specifically designed to improve.
  • Non-Targeted Outcomes: Outcomes your product/service may influence, but is not specifically designed to improve.
 

 

Page 2: The Deep Dive

The second page breaks down the specific data, student samples, and statistical comparisons.

Defining the Student Groups

To determine value-added impact, we compare two distinct groups:

  • Treatment Students: The students from your roster who utilized your product/service during the specified time period.
  • Similar Non-Treatment Students: A comparison group of students in the same district who did not use your product/service. We use multivariate matching (a quasi-experimental technique) to ensure these students have demographics and baseline performance similar to your Treatment group's.
 

 

Understanding the Impact Scales

Because different assessments use different scoring systems, we translate all results into two standardized scales:

  1. Cohen’s d (For Assessments)

Used for continuous data like test scores. It measures the “distance” between the two groups.

  • 0.0: No difference between groups.
  • 0.2: Small impact.
  • 0.5: Moderate impact.
  • 0.8+: Large impact.
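As an illustration only, Cohen's d is the standardized difference between the two group means, scaled by a pooled standard deviation. This minimal sketch (not the report's exact estimator, which may include adjustments) shows the idea:

```python
from statistics import mean, stdev

def cohens_d(treatment, comparison):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    v1, v2 = stdev(treatment) ** 2, stdev(comparison) ** 2
    # Pool the two groups' variances, weighted by their degrees of freedom
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Hypothetical test scores: treatment averages 5 points higher
treatment = [75, 80, 85, 90, 95]
comparison = [70, 75, 80, 85, 90]
print(round(cohens_d(treatment, comparison), 2))  # 0.63 -> moderate impact
```

A d of 0.63 here means the treatment group's average sits about two-thirds of a standard deviation above the comparison group's, regardless of the test's original point scale.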


  2. Odds Ratio (For Binary Outcomes)

Used for “Yes/No” outcomes, such as achieving at least 95% attendance or receiving a behavioral referral.

  • 1.0: Both groups were equally likely to achieve the outcome.
  • Above 1.0: Treatment students were more likely to achieve the outcome.
  • Below 1.0: Treatment students were less likely to achieve the outcome.
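For intuition, an odds ratio comes from a simple 2x2 table of counts. This sketch uses made-up attendance numbers (the report's actual models adjust for covariates, so this is the unadjusted version):

```python
def odds_ratio(treat_yes, treat_no, comp_yes, comp_no):
    """Unadjusted odds ratio from a 2x2 table of outcome counts."""
    treat_odds = treat_yes / treat_no  # odds of the outcome for treatment
    comp_odds = comp_yes / comp_no     # odds for the comparison group
    return treat_odds / comp_odds

# Hypothetical: 80 of 100 treatment students reached >= 95% attendance,
# versus 60 of 100 comparison students
print(round(odds_ratio(80, 20, 60, 40), 2))  # 2.67 -> treatment more likely
```

A value of 2.67 means the odds of hitting the attendance goal were about 2.7 times higher for treatment students; a value of exactly 1.0 would mean identical odds in both groups.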

 

 

Statistical Significance

We distinguish real impact from "random noise." Differences that are statistically significant (those we are confident are not due to chance) are visually highlighted:

  • Charts: Indicated by a dark teal or dark yellow dot.
  • Tables: Indicated by a dark teal-filled cell.
 

 

Methodology & Data

  • Total Students (N): Not all students participate in every assessment (e.g., grade-specific tests). The Total Students column on page 2 shows the exact sample size used for each specific outcome.
  • Data Source: Data is pulled directly from your selected school district, combining your roster with district demographic and baseline records.
 

 

 

Frequently Asked Questions

 

The Basics

What is the “Overall Score” actually measuring? The score represents the “hit rate” of your product/service. If you have 10 desired outcomes and 8 of them show a statistically significant positive impact, your score would be 80%. It is a quick way to gauge how consistently your product/service delivers on its promises.
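The hit-rate arithmetic described above can be sketched in a few lines (a simplified illustration; the report determines significance per outcome before counting):

```python
def overall_score(outcome_hits):
    """Percentage of desired outcomes showing a significant positive impact.

    outcome_hits: one True/False flag per desired outcome.
    """
    return 100 * sum(outcome_hits) / len(outcome_hits)

# 8 of 10 desired outcomes showed a significant positive impact
hits = [True] * 8 + [False] * 2
print(overall_score(hits))  # 80.0
```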

Why is my “Total Students” count different for each outcome? Data availability varies by student and grade. For example, a 3rd-grade reading assessment won’t have data for 5th graders, even if they used your product/service. We only include students in the analysis if they have both “Treatment” data and “Baseline” data available for that specific metric.

 

Understanding the Results

What does it mean if a result is not “Significantly Different”? If a cell isn’t highlighted in dark teal, it means the difference between your students and the comparison group was small enough that it could have happened by chance. In research terms, we cannot confidently say the product/service caused that specific change.

Is a Cohen’s d of 0.2 actually “good”? In education research, a 0.2 is often more meaningful than it sounds! While 0.8 is “large,” even small gains (0.2) in standardized testing can represent months of additional learning growth compared to peers.

How do I read an Odds Ratio for negative behaviors (like suspensions)? For outcomes you want to decrease, look for an Odds Ratio below 1.0. For example, an Odds Ratio of 0.7 for suspensions means the odds of your Treatment students being suspended were 30% lower (1 – 0.7 = 0.3) than the comparison group’s odds of being suspended.
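The conversion from an odds ratio to a percent change in odds is just (OR − 1) × 100, as a quick helper shows:

```python
def odds_change_pct(odds_ratio):
    """Percent change in the treatment group's odds vs. the comparison group."""
    return round((odds_ratio - 1) * 100, 1)

print(odds_change_pct(0.7))   # -30.0 -> odds of suspension 30% lower
print(odds_change_pct(1.25))  # 25.0  -> odds 25% higher
```

Note this describes a change in odds, not in raw probability; the two are close only when the outcome is rare.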

 

The Methodology

How do you find “Similar Non-Treatment Students”? We don’t just pick students at random. We use multivariate matching, which looks at a student’s prior test scores, English Learner status, Special Education status, and other demographics. We then find a “data twin” in the district who looks just like your student but didn’t use your service.
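Conceptually, the matching step pairs each treated student with the comparison-pool student who is closest across all covariates at once. This toy nearest-neighbor sketch uses hypothetical field names and plain Euclidean distance; the actual multivariate matching uses more covariates and bias-reduction techniques:

```python
def nearest_match(treated_student, candidate_pool, covariates):
    """Return the candidate closest to the treated student across
    all covariates (Euclidean distance on standardized values)."""
    def distance(candidate):
        return sum(
            (treated_student[c] - candidate[c]) ** 2 for c in covariates
        ) ** 0.5
    return min(candidate_pool, key=distance)

# Hypothetical standardized covariates (names are illustrative only)
treated = {"baseline_score": 0.5, "is_el": 1, "is_sped": 0}
pool = [
    {"baseline_score": 0.4, "is_el": 1, "is_sped": 0},  # a close "data twin"
    {"baseline_score": 1.8, "is_el": 0, "is_sped": 1},  # a poor match
]
match = nearest_match(treated, pool, ["baseline_score", "is_el", "is_sped"])
print(match["baseline_score"])  # 0.4
```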

Why do we use two different scales (Cohen’s d and Odds Ratio)? Think of it like measuring weather: you use degrees for temperature (Cohen’s d) but “percent chance” for rain (Odds Ratio). We use Cohen’s d for scores that can move up and down on a continuous scale, and Odds Ratios for outcomes that either happen or don’t.

Can I get more technical details on the statistical models used? Yes. Our reports are grounded in a quasi-experimental design that uses advanced statistical modeling to ensure accuracy. This includes bias-reduction techniques and specialized regression models (such as Firth’s penalized logistic regression) to provide the most reliable estimates possible. Detailed documentation on our analysis framework is available in our Study Methodology document. Because we regularly update our methods to reflect the latest academic standards, please contact us at [email protected] for the most recent version.

 

Next Steps

What should I do if my Targeted Outcomes aren’t significant? This is a great time to look at usage/engagement. Check your records to see if the students used/engaged with the product/service as intended. Often, “low impact” is actually a “low usage” or “low engagement” issue. We can help you dive deeper into these patterns.

Who should I share this report with? This report is perfect for District Partners, School Boards, or your internal Product/Service Development team to show exactly where your product/service is providing the most value.