Chicago Program for Testing Ed-Tech Finds Need for Data, Smart Practice


A Chicago project that pairs ed-tech companies with schools is offering insights on the importance of educators implementing products with care, and of businesses producing data that helps teachers do their jobs.

The marriage of schools with digital providers is being arranged by LEAP Innovations, a Chicago-based nonprofit that attempts to create opportunities for breakthroughs in education by working with businesses and schools.

LEAP released a report this week on the initial cohort of companies and schools taking part in the program.

Schools were selected based on having shown a commitment to trying new digital approaches, the soundness of their IT infrastructure, and other factors. Companies also went through a detailed screening process.

LEAP (which stands for Learning Exponentially, Advancing Potential) released findings from its first group of participants during the 2014-15 school year. Another round of testing in Chicago schools is underway this academic year.

While the sample sizes of students using the ed-tech products were relatively small, the schools that used the digital literacy tools with care saw positive results. LEAP officials believe the preliminary data offer tips to companies and K-12 officials on the importance of sticking to sound practices, and focusing on careful implementation, when using new digital tools.

“It really is about the practice, and not the tool,” said Phyllis Lockett, the CEO of LEAP Innovations.

“This needs to be about helping school teams re-design their classrooms for personalized learning,” and that means professional development and planning “needs to happen first, before personalized learning is implemented.”

Fifty-four schools applied to participate in the pilot program; 15 ultimately took part, a cohort that included five traditional Chicago public schools, eight charter schools, and two Archdiocese schools.

LEAP received 29 applications from companies seeking to have their products tested. A panel of learning scientists, educators, and others then evaluated the companies on several factors, including their potential to improve student learning; their stability; and their alignment with learning science and common-core standards.

At the end of that process, six companies, all of whose products were focused on literacy, were chosen.

Test Score Gains

LEAP officials set out to measure the effectiveness of those products by looking at data on two measures: the Northwest Evaluation Association’s Measures of Academic Progress, and DIBELS assessments.

Results from a total of 1,613 students in the pilot schools were then compared against a larger control group from the Chicago public schools, which uses the NWEA. Researchers used a method known as propensity score matching to create a control group of Chicago students that mirrored the pilot students as closely as possible.
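To make the idea concrete: the matching step of propensity score matching can be sketched as below. This is an illustrative, greedy nearest-neighbor version with invented data, not the study's actual procedure, and it assumes each student's propensity score (the estimated probability of being in the pilot, given covariates such as prior achievement and demographics) has already been fitted, for example with a logistic regression.

```python
# Illustrative sketch of 1-to-1 propensity score matching (not LEAP's
# actual method). Scores and student IDs below are invented.

def match_controls(pilot_scores, control_scores):
    """Greedy nearest-neighbor matching without replacement.

    For each pilot student, pick the unmatched control student whose
    propensity score is closest, so treated and control groups end up
    balanced on the covariates behind the scores.
    """
    available = dict(control_scores)  # control id -> propensity score
    matches = {}
    for pilot_id, score in pilot_scores.items():
        best = min(available, key=lambda cid: abs(available[cid] - score))
        matches[pilot_id] = best
        del available[best]  # each control is used at most once
    return matches

pilot = {"p1": 0.62, "p2": 0.35}
control = {"c1": 0.60, "c2": 0.30, "c3": 0.80}
print(match_controls(pilot, control))  # {'p1': 'c1', 'p2': 'c2'}
```

In practice researchers also check that the matched groups are balanced on the underlying covariates, and may discard pilot students with no sufficiently close control match.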

Only four companies’ overall results were included in the LEAP report; the other two were excluded because of mismatches between the grades or populations their products served and the measures researchers used to evaluate them. But some of the results were encouraging:

  • Overall, students in the pilot in grades 3-8 saw a gain of 1.07 test-score points above the control group on the NWEA, a gain equivalent to closing the achievement gap by about 45 percent for students from low-income backgrounds.
  • Students using 1-to-1 instruction and supplementary 1-to-1 instruction (with built-in additional instructional time) had greater academic gains than students using “station-rotation” models.
  • Two products tested showed significantly positive results for student learning. Lexia Reading Core5, an adaptive literacy tool, was piloted by 1,038 students and closed the achievement gap by more than half for students from low-income backgrounds. ThinkCERCA, a critical-thinking and learning framework, was found to have an extremely large impact, producing gains equivalent to roughly an extra year’s worth of academic growth.

For the two other products with results included in the study, one product’s sample size was too small to produce statistically meaningful results, and the other was undermined by implementation challenges, LEAP officials said. (They did not name those products.)

Despite the positive results, LEAP officials found during the initial pilot that the majority of companies did not have “reliable, research-based” recommendations on how students should use their products. Many of the products were grounded in learning science, but they lacked credible information on implementation and outcomes.

Some of the companies also weren’t able to produce data to help teachers improve instruction, LEAP found. Many of the newer products, in particular, lacked the capacity to capture data or export it regularly.

For ed-tech providers taking part in the program in the future, LEAP officials explained in the report, “establishing standards around what data teachers need to most effectively utilize products and what data is needed to evaluate them will be key.”


Follow @EdWeekSCavanagh and @EdMarketBrief for the latest news on industry and innovation in education.