

Understanding MIND's Research Paradigm and Ways to Evaluate Effective Education Research

Academic conversations around education research can be daunting and difficult to follow, and they often raise more questions than they answer when it comes to evaluating education programs.

Coming away with a thorough understanding of a study's results and outcomes, and why they matter, is a challenge in itself. Making research more relevant means unpacking and translating formal research findings so that they're genuinely helpful to decision-makers, namely educators and school administrators.

When discussing research that evaluates edtech program effectiveness, we need to talk less about access to the research and more about the nature of the information itself. Traditionally, edtech evaluation has relied on scarce data and too little credible information about program efficacy. Yet every year, districts are still required to select programs to purchase and adopt to support their schools.

Our goal at MIND Education is to equip educators and administrators with easy-to-digest language around edtech research, including the way we conduct our own studies. Hopefully, you'll come away with a better understanding of the critical components of research, as well as a quick list of better questions to ask when searching for the right edtech program for your students.

As a nonprofit social impact organization focused on neuroscience and education research, we do many things differently. In addition to demonstrating why ST Math is so effective at improving student math achievement, we dive into edtech evaluation and numerous studies on how the brain learns; research is a huge part of what we do at MIND. We also aim to help educators and administrators better navigate edtech efficacy research and resources and make informed decisions for their schools.

Is "Evidence-Based" Enough?

Many edtech companies will ask schools and districts to spend a great deal of money on their programs. To prove their program's effectiveness, they'll provide supporting evidence, typically a single study covering half a school year. Often, this is how an edtech program is judged. Many may not be aware that this "evidence" is scarce and that the research approach is insufficient to indicate how effective the program truly is.

Problems with the "One Good Study" Paradigm

Evaluating edtech programs should be more than simply "checking the right boxes." We understand that engaging with edtech studies on a deeper level can be difficult and time-consuming. Still, there are significant limitations to depending on a "one good study" paradigm.

If we want to change how decision-makers approach the evaluation of edtech studies, the research itself must be useful and usable. We also need to empower decision-makers to expect more from edtech research by closely examining what a study is actually indicating.

Long-Term Impact 

One challenge education leaders face is the lack of long-term accountability across the abundance of educational programs available. Without the ability to track long-term performance data, there are no opportunities for the further innovation that could improve a program.

This leads to achievement levels far below what a program could deliver if it were fully implemented and used with fidelity. Without proper widespread use of an instructional program, results remain scarce and findings cannot be generalized.

When seeking and evaluating potential programs to adopt for a school or district, it's crucial to ask for impact studies demonstrating long-term efficacy. Do the results indicate sustained growth across other schools and districts? The longer a program has been implemented with consistency and fidelity (i.e., in multi-year, longitudinal studies), the more reliable the findings and outcomes.

Understanding methods for evaluating the impact of a program can also go a long way. For example, how were the results measured consistently? What was the frequency of the program's evaluation?

At MIND, we've continuously evaluated ST Math's impact in many schools and districts over the years. You can see our most recent results here.

Repeatable Results at Scale 

Education research needs a renaissance. It's time to move beyond the "gold standard" study by looking at many studies, using the most updated version of the program, and acquiring repeatable results across many varied schools and districts.

Rather than relying solely on randomized controlled trials, quasi-experimental studies (in essence, studies conducted in a natural setting) can examine the adoption of a program as is, avoiding the time-consuming planning a formal experiment requires. By assessing the efficacy of a program with quasi-experimental research, we can evaluate a much higher number of studies, which is ideal when assessing repeatability.

Put simply, multiple quasi-experimental studies can be much more powerful than a single randomized controlled trial when examining the efficacy of an education program.
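To make that concrete, here's a minimal sketch of the basic shape of a quasi-experimental comparison. Everything below, including the school names, scores, and column names, is hypothetical and not drawn from any actual ST Math study; real designs also match schools on demographics and prior achievement.

```python
# A minimal sketch of a quasi-experimental comparison using hypothetical
# school-level data. All names, scores, and columns are illustrative only.
import pandas as pd

# One row per school: prior-year and current-year mean scale scores, plus a
# flag for whether the school adopted the program in its natural setting.
schools = pd.DataFrame({
    "school": ["A", "B", "C", "D", "E", "F"],
    "adopted": [True, True, True, False, False, False],
    "score_prior": [310, 298, 305, 309, 300, 304],
    "score_current": [322, 307, 315, 312, 302, 306],
})

# Year-over-year growth for each school.
schools["growth"] = schools["score_current"] - schools["score_prior"]

# The core of a simple quasi-experimental estimate: mean growth in adopting
# schools minus mean growth in comparison schools that did not adopt.
treated = schools.loc[schools["adopted"], "growth"].mean()
comparison = schools.loc[~schools["adopted"], "growth"].mean()
print(f"Adopting schools' mean growth:   {treated:.1f}")
print(f"Comparison schools' mean growth: {comparison:.1f}")
print(f"Estimated program effect: {treated - comparison:.1f} scale-score points")
```

Repeating a comparison like this across many districts, grade levels, and years is what gives the overall pattern of results its weight.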

Why is repeatability important? If a research study cannot be replicated, its credibility is questionable. The scientific merit of a study's results lies in the ability to replicate them, which ensures their validity. It's vital that an educational program can reproduce its impact in other settings.

The rigor of repeatability vastly improves validity with respect to:

  • A recent version of the program, training, and support
  • A real-world variety of types of use, districts, grade levels, teachers, and student subgroups
  • Patterns of results across many different assessments

For more on what questions to ask when evaluating edtech research, visit the link here: Does That Program Really Work? 6 Questions to Ask About Research. 

MIND's Unique Approach to Research

At MIND, we’ve made efficacy a core advantage of ST Math. Our model is built on providing educators with results that are authentic, transparent, and clear. We strive to generate credible, rigorous research studies that demonstrate how ST Math can impact all teachers and students. To learn more about our methodology and impact, visit the link here.

One of the many things that make MIND unique is how we conduct studies. In addition to offering full transparency with our research, we produce 10 to 15 data reports yearly, with specific parameters around how we run them. Each of our studies requires at least 1,000 students, which is not something you often find in education research.

Moreover, we report results at the grade-band level, never on individual students alone. The goal has always been to get all students to grow. When evaluating student math performance, looking only at the average math score is not enough. The average is one of the most commonly misunderstood statistics: a single extreme score at one end of the distribution can pull the average away from where most of the scores actually sit, as the quick example below shows.
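Here's a small, hypothetical illustration of that pitfall; the scores are made up purely for demonstration:

```python
# Made-up class scores: most students cluster in the 50s, one outlier at 100.
scores = [48, 50, 52, 53, 54, 55, 56, 57, 58, 60, 100]

mean = sum(scores) / len(scores)           # pulled upward by the outlier
median = sorted(scores)[len(scores) // 2]  # the middle of the distribution

print(f"mean   = {mean:.1f}")  # 58.5 -- higher than all but two students' scores
print(f"median = {median}")    # 55  -- closer to where most students actually sit
```

The single high score pulls the average up even though nothing changed for most of the class, which is why we look at the whole distribution rather than one summary number.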

In essence, we want to ask, “How do we move the spread?” We measure this by effect size.
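For readers who want to see the math, here is one common way effect size is computed: the standardized mean difference known as Cohen's d. This is a general statistical formula with hypothetical data, not necessarily the exact computation behind any particular ST Math report.

```python
# A minimal sketch of Cohen's d, a common effect-size measure. The data are
# hypothetical; this is not necessarily the exact formula MIND reports.
import statistics

def cohens_d(group_a, group_b):
    """Standardized difference between two group means."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
    v1, v2 = statistics.variance(group_a), statistics.variance(group_b)
    # Pool the two sample variances, then standardize the mean difference.
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical scale scores for a treatment group and a comparison group.
treatment = [315, 322, 318, 330, 325, 319]
comparison = [310, 312, 308, 315, 311, 309]
print(f"Cohen's d = {cohens_d(treatment, comparison):.2f}")
```

Unlike a raw difference in averages, effect size expresses growth in units of the spread, which is exactly what "moving the spread" asks about.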

To learn more about what effect size is and ST Math’s effect size, visit the link or watch the video below: What is Effect Size?

 

Third-Party Validation 

At MIND, we also believe in accountability and transparency. WestEd, a research and development agency that works closely with education communities to promote equity and learning, published a first-of-its-kind nationwide study of ST Math, the most extensive evaluation of the program to date. The study included over 150,000 students between 2013 and 2016.

The study looked at grades 3, 4, and 5 in 474 schools that started using ST Math between 2013 and 2015, and included 16 states where complete state standardized test and demographic data was publicly available to the researchers. 

The 2019 nationwide WestEd study was a breakthrough in effectiveness evidence for ST Math, as the sample size was, at last, sufficient for the results to reach statistical significance across our analyses. What's novel is that we're reporting results on the highest-stakes metric in the education market: school-wide performance improvement from year to year.

The third-party evaluation of the WestEd study provides an extra layer of scrutiny and accountability. Though many edtech programs are not required to be third-party validated, MIND has placed that requirement on itself because we believe full transparency is vital to improving the health of the education market.

Visit the link to learn more about our Third Party Validation of the ST Math WestEd study. 

Equity

Experiencing math growth and developing a deep conceptual understanding of math shouldn’t be limited to one or two student subgroups. An educational program must support ALL dimensions of diversity among students from all backgrounds.

Equitable impact means that the same program works for every single student. Accessibility is critical when creating and developing a program, as is providing entry points for all proficiency levels. This is especially crucial when considering how to accelerate student learning amid the low math and reading scores resulting from the pandemic’s impact.

Take a look at ST Math’s recent impact in Texas in the graph below. There’s no denying the outcome—the unprecedented results set ST Math apart as a visual-instructional tool that serves and benefits students from all subgroups. 

An educational program should always aim to serve ALL students—and ensure they’re all equipped to solve the world’s most challenging problems. And due to the current inequities in education, we need to provide innovative resources to eliminate these disparities. We need to empower all students of all backgrounds and continually discover ways to elevate their education.

Key Takeaways

When making well-informed decisions about edtech programs for your school or district, it's crucial to look beyond the initial appeal and dig deeper into the efficacy research and data that justify the program's usage. How effective will the program be? Will it reliably get most, if not all, of your students to an elevated level of academic achievement?

To recap, here are some key questions to keep in mind when evaluating an educational program’s efficacy:

  • What are the study results?
  • Does the study cover the most recent version of the program?
  • Does the study involve a reasonable amount of usage (time, dose, etc.) in its treatment group?
  • Does the study break down results by low, medium, and high use?
  • Does the study require repeatability of results? Can it be replicated?
  • Is the research methodology consistent?
  • Is there a third party involved, thus ensuring the study’s validity?
  • Is the study conducted at least once a year?
  • Does the study take into account student equity?

You don't need to be an expert in research and evaluation, but knowing what to look for is essential. And it never hurts to ask. That's why we're here.  

 


Victor Nguyen

About the Author

Victor Nguyen is MIND’s Content and Community Specialist. Victor is a passionate storyteller with a penchant for creative writing. In his free time, you can find him engrossed in books, going on long hikes, or trying to meditate.
