Every edtech product has a study that says it’s effective. But what do that study’s impact results really tell us? How meaningful is the difference in outcomes? And how do we know that difference isn’t due to chance?
For quite some time now, the traditional approach to edtech evaluation—the goals, the conventional process, the vetting, the description and use of results—has been broken. As a result, credible information about program effectiveness is scarce, and administrators are easily confused or misled by what little exists. Yet every school year, they still need to decide which tools and programs to source, select, purchase, adopt and support in their schools and districts.
At MIND, we believe there needs to be more conversation about research outside academic circles, and that formal research findings can and should be unpacked and translated to become useful to decision makers.
Demanding More from Edtech Evaluations gathers content from MIND’s podcasts, blogs, webinars, videos and more into a single, easy-to-digest resource aimed at equipping administrators to expect and get more from current and future edtech research.
You can learn more about our methodology and the impact of our visual instructional program, ST Math, at stmath.com/impact.
Andrew R. Coulson is Chief Data Science Officer at MIND Research Institute. His team of data analysts evaluates program usage and measures student learning outcomes. Follow Andrew on Twitter at @AndrewRCoulson.