Utah STEM Action Center Releases Exemplar Statewide Program Evaluation

A new bar has been set for the evaluation of state-grant-funded digital math programs. The Utah STEM Action Center has released its 2020 annual report, which includes the results of its comprehensive statewide program evaluation. The report represents a new high-water mark in a seven-year progression toward the vital practice of holding education programs and school districts that use state funds accountable for achieving student learning impacts.

Founded in 2013, the STEM Action Center is tasked with regular and rigorous reporting to the Utah state legislature on the use and impact of annual state funding for digital supplemental math programs, and with capturing and disseminating knowledge to advance STEM digital education best practices in the state. The Center offers a variety of grants to Utah schools, including the K-12 Math Personalized Learning Software Grant, which provides access to a selection of math personalized learning software programs to improve student outcomes in mathematics literacy. ST Math, created by MIND Research Institute, is one of the math programs available to schools and districts as part of the grant.

Why is the report significant?

Clocking in at over 450 pages, the report tracks and unpacks digital program usage at a statewide level, and reports usage fidelity and impact efficacy back to the state legislature.

By conducting such a thorough and comprehensive analysis, the Utah STEM Action Center is a national pioneer and model in its transparent and highly visible commitment to accountability and continuous improvement. 

Further, the STEM Action Center engaged university data researchers from the Utah Education Policy Center at the University of Utah to conduct the studies in this report – a strategic collaboration that further underscores the Center’s focus on sound research methodology and on independent, rigorous analysis.

Utah’s commitment to rigor

Preparing a statewide analysis requires a commitment to rigor by all parties – the state, the researchers, the edtech program vendors, and the school and district users. Key challenges include:

  • Getting digital usage data in a normalized form across a myriad of product platforms
  • Defining the measures of impact – for the internal program inputs and outputs, for fidelity of implementation, and within the state standardized math test metrics
  • Being willing to test and adapt different methodologies on the road to success – it won’t always work the first time
  • Gathering feedback from, and partnering directly with, the edtech vendors, who have insight into their own data

Under the auspices of the STEM Action Center, I – along with representatives from all other program vendors – was delighted to have the regular opportunity to work closely with the researchers from the Utah Education Policy Center to share more about how we analyze the efficacy of ST Math using a model of repeatable results at scale.  

Ultimately, digging so deeply into program efficacy, at scale and normalized across multiple programs, benefits every stakeholder. Edtech providers share and learn from apples-to-apples usage data that enables cross-program analysis. Schools and districts gain a statewide lens on digital program implementation findings, along with benchmarks on usage and impact for themselves and their neighbors, leading to more valid, data-informed decisions on product fit and product implementation planning for their teachers and students. And state stakeholders gain insight into the effectiveness of their initiative, with evidence for the return on investment of taxpayer dollars.

What does the report say about K-12 math software? 

The “Impact of K-12 Math Personalized Learning Software on Student Achievement” portion of the report begins on page 311, and compares software users to non-users on three outcomes of interest: proficiency, percentile rank, and student growth percentile (SGP).

Overall, the researchers found “students who use digital math software more and students who use digital math software with greater consistency outperform students who use the software less and use software with less consistency.” (see page 351)

Proficiency

A measure of student performance relative to a predefined benchmark

Proficiency chart from Utah STEM Action Center report, figure 13

Percentile Rank

An indicator of how students performed relative to other students who took the same test

Percentile rank chart from Utah STEM Action Center report, figure 14

Student Growth Percentile (SGP)

A statistical estimate of student growth relative to students who had similar performance in the past 

Student growth percentile chart from Utah STEM Action Center report, figure 15
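
To make the distinction between these three outcome measures concrete, here is a minimal, illustrative sketch in Python on toy data. The scores, proficiency cut score, and "similar prior performance" band are all hypothetical, and the simplified growth calculation is not the model the researchers used (official SGPs are typically estimated with more sophisticated methods, such as quantile regression); it is only meant to show how each measure frames a student's result.

```python
# Illustrative sketch only: toy data and simplified calculations, not the
# methodology used by the Utah Education Policy Center in the report.
import numpy as np

# Hypothetical scale scores for a small cohort on this year's state math test
scores_this_year = np.array([310, 325, 340, 355, 360, 372, 380, 391, 402, 415])
scores_last_year = np.array([300, 318, 333, 350, 352, 368, 377, 385, 399, 410])

# Proficiency: performance relative to a predefined benchmark (cut score)
PROFICIENCY_CUT = 365  # assumed cut score, for illustration only
proficiency_rate = np.mean(scores_this_year >= PROFICIENCY_CUT)

# Percentile rank: how a student performed relative to others on the same test
def percentile_rank(score, all_scores):
    return 100.0 * np.mean(all_scores <= score)

# Simplified growth percentile: a student's rank this year among students
# who had similar scores last year (a stand-in for "similar past performance")
def simple_sgp(i, last, this, band=15):
    peers = np.abs(last - last[i]) <= band
    return percentile_rank(this[i], this[peers])

print(f"Proficiency rate: {proficiency_rate:.0%}")
print(f"Percentile rank of a 372: {percentile_rank(372, scores_this_year):.0f}")
print(f"Simplified growth percentile for student 5: "
      f"{simple_sgp(5, scores_last_year, scores_this_year):.0f}")
```

In this toy example, a student can sit above the proficiency cut yet show modest growth relative to similar peers, which is one reason it is useful to look at all three measures side by side, as the report does.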

A word about repeatable results at scale

The researchers take care to note (see page 350) that their analysis is observational and correlational in nature. The study was not a fully experimental, randomized controlled trial (RCT) and is not evidence of causal impact.

While RCTs are the ultimate causal test for assessing the efficacy of a program, they are also lengthy, expensive, and rare. As a result, an RCT’s findings can quickly lose external validity beyond the original experiment: for example, after updates to the version of the program currently on the market, or to its implementation or support model. In my view, the Utah study is a sterling example of where the market should be moving: toward large data sets, using recent program versions, over many varied districts. At MIND, we believe a high volume of effectiveness studies is the future of a healthy market for product information in education.

What can other states take from this analysis?

While assembling a report of this scope is a significant undertaking, it provides an unparalleled look at the results of Utah’s major investment in STEM education. 

There are many strong benefits that come from conducting a statewide analysis. The state grantor will see greater district accountability and buy-in for the fidelity of their implementation and quality of their usage. The state grantor will also have the ability to evaluate program usage and efficacy across districts, helping to uncover the best practices that lead to the strongest gains for students. 

Ohio is another state that has invested in statewide efficacy analysis. For example, in 2015 the PAST Foundation conducted a formative evaluation of the Math Matters Projects, which studied ST Math implementation across nine districts in Fairfield and Franklin Counties. The work was funded by the Ohio Department of Education Straight A Fund, and one of the most significant findings of that analysis was that teachers need more time and more information in order to plan a successful roll-out at scale of a new blended learning program.

It’s more important than ever for states to understand how effective their edtech spend has been – particularly in light of the need for CARES Act and CRRSA Act funding to go toward edtech programs that address unfinished learning and are effective in a distance learning environment. We believe the Utah STEM Action Center report is a strong exemplar for other states to review and follow for their own analyses. 


About the Author

Andrew R. Coulson is Chief Data Science Officer at MIND Research Institute. His team of data analysts evaluates program usage and measures student learning outcomes. Follow Andrew on Twitter at @AndrewRCoulson.
