The first page provides a high-level view of your performance metrics and overall impact.
Identification & Support
The Overall Score
The Overall Score (0% to 100%) represents the percentage of desired student outcomes that were positively and significantly impacted by your product/service.
Outcome Charts
These charts group your results into two categories:
The second page breaks down the specific data, student samples, and statistical comparisons.
Defining the Student Groups
To determine value-added impact, we compare two distinct groups: your Treatment students (those who used your product/service) and Similar Non-Treatment Students (matched peers in the district who did not).
Understanding the Impact Scales
Because different assessments use different scoring systems, we translate all results into two standardized scales:
Cohen's d (Effect Size): Used for continuous data like test scores. It measures the "distance" between the two groups' averages.
Odds Ratio: Used for "Yes/No" outcomes, such as achieving at least 95% attendance or receiving a behavioral referral.
Statistical Significance
We distinguish between "random noise" and real impact. Significant differences (those we are confident are not due to chance) related to the product/service are visually highlighted in dark teal.
Methodology & Data
What is the “Overall Score” actually measuring? The score represents the “hit rate” of your product/service. If you have 10 desired outcomes and 8 of them show a statistically significant positive impact, your score would be 80%. It is a quick way to gauge how consistently your product/service delivers on its promises.
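The arithmetic behind the score can be sketched in a few lines. This is only an illustration; the outcome names below are hypothetical, not taken from any actual report.

```python
# Hypothetical sketch of the "Overall Score" as a hit rate.
# Each desired outcome maps to whether it showed a statistically
# significant positive impact (outcome names are illustrative).
outcomes = {
    "reading_score": True,
    "math_score": True,
    "attendance_95pct": False,
    "behavior_referrals": True,
}

significant_hits = sum(outcomes.values())
overall_score = 100 * significant_hits / len(outcomes)
print(f"Overall Score: {overall_score:.0f}%")  # 3 of 4 outcomes -> 75%
```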
Why is my “Total Students” count different for each outcome? Data availability varies by student and grade. For example, a 3rd-grade reading assessment won’t have data for 5th graders, even if they used your product/service. We only include students in the analysis if they have both “Treatment” data and “Baseline” data available for that specific metric.
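The inclusion rule above amounts to a simple filter. The records below are invented for illustration; the actual pipeline works on real district data.

```python
# Illustrative filter: a student enters an outcome's analysis only if
# both Baseline and Treatment-period data exist for that metric.
students = [
    {"id": 1, "baseline": 210, "treatment": 225},
    {"id": 2, "baseline": None, "treatment": 230},  # no baseline -> excluded
    {"id": 3, "baseline": 198, "treatment": None},  # no treatment data -> excluded
    {"id": 4, "baseline": 215, "treatment": 219},
]

analyzable = [s for s in students
              if s["baseline"] is not None and s["treatment"] is not None]
print(len(analyzable))  # only 2 students count toward this outcome's total
```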
What does it mean if a result is not “Significantly Different”? If a cell isn’t highlighted in dark teal, it means the difference between your students and the comparison group was small enough that it could have happened by chance. In research terms, we cannot confidently say the product/service caused that specific change.
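To make "could have happened by chance" concrete, here is a deliberately crude normal-approximation check; the actual reports use more rigorous quasi-experimental models (see the Methodology answer below), and all numbers here are invented.

```python
from statistics import mean, stdev

def significantly_different(treatment, comparison, z_crit=1.96):
    """Crude two-sample z-check (normal approximation): does the
    absolute mean difference exceed 1.96 standard errors?"""
    diff = mean(treatment) - mean(comparison)
    se = (stdev(treatment) ** 2 / len(treatment)
          + stdev(comparison) ** 2 / len(comparison)) ** 0.5
    return abs(diff) > z_crit * se

comparison = [97, 99, 101, 103, 100]
small_gain = [98, 100, 102, 104, 99]    # +0.6 points on average
large_gain = [103, 105, 107, 109, 104]  # +5.6 points on average

print(significantly_different(small_gain, comparison))  # False: could be chance
print(significantly_different(large_gain, comparison))  # True: unlikely by chance
```

A small gain in a small sample fails the check, which is exactly the "not highlighted" case: the effect may be real, but the data cannot rule out chance.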
Is a Cohen’s d of 0.2 actually “good”? In education research, a 0.2 is often more meaningful than it sounds! While 0.8 is “large,” even small gains (0.2) in standardized testing can represent months of additional learning growth compared to peers.
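For readers who want to see the formula, here is the standard pooled-standard-deviation version of Cohen's d with made-up scores; it is a textbook sketch, not the report's exact computation.

```python
from statistics import mean, stdev

def cohens_d(treatment, comparison):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    var1, var2 = stdev(treatment) ** 2, stdev(comparison) ** 2
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Illustrative scores only: a modest average gain over similar peers.
treatment = [98, 100, 102, 104, 99]
comparison = [97, 99, 101, 103, 100]
d = cohens_d(treatment, comparison)
print(round(d, 2))  # 0.26 -- a "small" effect that can still matter in education
```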
How do I read an Odds Ratio for negative behaviors (like suspensions)? For outcomes you want to decrease, look for an Odds Ratio below 1.0. For example, an Odds Ratio of 0.7 for suspensions means the odds of your Treatment students being suspended were 30% lower (1 – 0.7 = 0.3) than the comparison group’s odds of being suspended.
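The suspension example works out like this in code; the counts are hypothetical, chosen only to land near an Odds Ratio of 0.7.

```python
def odds_ratio(treat_events, treat_n, comp_events, comp_n):
    """Odds of the event among Treatment students divided by the
    odds among comparison students."""
    treat_odds = treat_events / (treat_n - treat_events)
    comp_odds = comp_events / (comp_n - comp_events)
    return treat_odds / comp_odds

# Hypothetical counts: 18 of 200 Treatment students suspended,
# versus 25 of 200 matched comparison students.
or_suspension = odds_ratio(18, 200, 25, 200)
print(round(or_suspension, 2))  # 0.69 -- suspension odds roughly 31% lower
```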
How do you find “Similar Non-Treatment Students”? We don’t just pick students at random. We use multivariate matching, which looks at a student’s prior test scores, English Learner status, Special Education status, and other demographics. We then find a “data twin” in the district who looks just like your student but didn’t use your service.
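As a rough intuition for the "data twin" idea, here is a toy weighted nearest-neighbor match. The real matching uses richer multivariate models; the field names, weights, and records below are all illustrative assumptions.

```python
# Toy sketch of matching: find the comparison-pool student whose
# covariates sit closest to the treated student's (weighted distance).
def distance(a, b, weights):
    return sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items())

treated = {"prior_score": 205, "el": 1, "sped": 0}
pool = [
    {"id": "A", "prior_score": 230, "el": 0, "sped": 0},
    {"id": "B", "prior_score": 207, "el": 1, "sped": 0},  # closest profile
    {"id": "C", "prior_score": 204, "el": 0, "sped": 1},
]
# Crude hand-picked scaling so score gaps and status flags are comparable.
weights = {"prior_score": 0.01, "el": 1.0, "sped": 1.0}

twin = min(pool, key=lambda s: distance(treated, s, weights))
print(twin["id"])  # B: similar prior score, same EL and SPED status
```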
Why do we use two different scales (Cohen’s d and Odds Ratio)? Think of it like measuring weather: you use degrees for temperature (Cohen’s d) but “percent chance” for rain (Odds Ratio). We use Cohen’s d for scores that can go up and down on a scale, and Odds Ratios for things that either happen or they don’t.
Can I get more technical details on the statistical models used? Yes. Our reports are grounded in a quasi-experimental design that uses advanced statistical modeling to ensure accuracy. This includes bias-reduction techniques and specialized regression models (such as Firth’s penalized logistic regression) to provide the most reliable estimates possible. Detailed documentation on our analysis framework is available in our Study Methodology document. Because we regularly update our methods to reflect the latest academic standards, please contact us at [email protected] for the most recent version.
What should I do if my Targeted Outcomes aren’t significant? This is a great time to look at usage/engagement. Check your records to see if the students used/engaged with the product/service as intended. Often, “low impact” is actually a “low usage” or “low engagement” issue. We can help you dive deeper into these patterns.
Who should I share this report with? This report is perfect for District Partners, School Boards, or your internal Product/Service Development team to show exactly where your product/service is providing the most value.