
How Are Educators Broadening Their Definitions of Evidence?

By Rachel Burstein     May 28, 2019


This story is part of an EdSurge Research series about how educators are changing their practices to reach all learners.

Joe Romano’s architecture and design students at the Annie Wright Schools in Tacoma, Wash., were engaged in a project to design and build tiny houses for Seattle’s homeless population. He hoped the project would create a shared culture of learning among students at the recently opened all-boys high school. Romano was able to use reflection and feedback mechanisms to develop and evaluate students’ collaboration skills. But other goals for the project, such as community engagement and empathy, were more difficult to measure.

Romano faced a problem familiar to many educators—he wasn’t sure how to collect evidence that would measure student progress for some of his goals. “If the success criteria was to offer students an experience that they’ll remember 30 years from now, I think we nailed it,” Romano reflected in an interview conducted by the EdSurge Research team earlier this year. “But if the success criteria was [whether] I would be able to assess social-emotional learning [such as greater social awareness], I’m not sure because I don’t have evidence.”

Over the past year, the EdSurge Research team has been working on a project to understand how educators are shifting practice to reach all learners. We convened and facilitated local educator gatherings called Teaching and Learning Circles in 22 cities around the country; published 60 stories of changing practice by both practitioners and reporters; and surveyed and interviewed hundreds of educators about their experiences. (Learn more about this EdSurge Research project.)

One of the things we learned is that Romano isn’t alone.

From fall 2018 through spring 2019, the EdSurge Research team conducted a survey of 115 educators who registered for Teaching and Learning Circles. Respondents viewed an inability to evaluate student progress toward meeting certain types of goals as a significant problem. In fact, 32 out of 88 educators identified an “inability to effectively measure student progress in academic and non-academic skills” as the first or second most acute challenge they faced from a list of five options. Like Romano, these educators were often unsure of how to measure success and daunted by the possibility of collecting and analyzing data to assess progress.

Educators Don’t Think They’re Measuring Success

In our survey, we asked educators how they measured the success of a particular strategy or initiative they had implemented in their learning community. Of the 69 educators who responded to the question, nearly half reported that they had not tried to measure success. This included 21 survey respondents who judged their strategy or initiative to be “successful” or “very successful” (Figure 1).

[Figure 1. Source: EdSurge Research]

What do these numbers tell us? In some instances, educators’ assessments may be based on a gut feeling rather than on evidence. But in other cases, we found that educators have a clear—albeit unproven—idea of whether an initiative is successful even if they haven’t developed a formal plan for evaluating its effectiveness.

Some educators are already gathering evidence, even if they don’t know it. In another survey question, we asked educators how they measured the success of their project, program or approach to capture growth for the whole child in their learning community. Among references to specific evaluation rubrics, descriptions of surveys and observational data points, we found comments indicating that some educators aren’t counting anecdotal evidence, and perhaps other forms of qualitative data, as evidence of the effectiveness of new approaches. For example, one survey respondent wrote, “I didn’t have quantitative measures, but projects had final products.” Another reflected, “[We] only [had] anecdotal [evidence]. We have not been applying rubrics or assessments.”

This finding aligns with what our editors observed among a significant number of contributing writers for this project. When asked about evidence, some writers were hesitant, explaining that they didn’t have any quantitative metrics. Others asked questions like, “What do you consider evidence?” After digging deeper with our editors, most contributors were able to identify evidence of success for non-academic skills, such as recognizing an increase in student-led conversations with strong scientific arguments after incorporating a new feedback practice, or watching the board become filled with student work after trying out a collaborative, whole-class challenge instead of a traditional math assessment.

So, what changed for these educators? According to our editors, it was broadening their definitions of evidence, paired with a dose of confidence and validation that what they’re doing matters—even if it’s not traditional, quantitative assessment.

Identifying Measurements That Align With a Project’s Goals and Scope

For educators who are used to evaluating student progress with traditional grades and test scores, it can be hard to think about qualitative data as a legitimate form of evidence. And finding the time to implement a formal evaluation approach can prove challenging. Still, gauging effectiveness is key. The impetus for making a change to pedagogical practice is, of course, that something isn’t working. Ensuring that a plan is in place for figuring out whether the new or modified approach is working is critical.

But measuring progress needn’t be a laborious process. Comprehensive rubrics, lengthy feedback forms and time-consuming observational techniques all have their place. However, more lightweight approaches to evaluation may be more effective for projects and initiatives of a smaller scope.

Many of the educators we surveyed and interviewed shared that measuring effectiveness can be daunting, but that aligning the evaluation approach with the goal of the project can make it more manageable.

We asked survey respondents to identify the goal of the initiative or approach they instituted. Figure 2 shows a few examples from respondents who identified the flipped classroom as the initiative they were trying.

[Figure 2. Source: EdSurge Research]

These examples show how the goal—not the initiative itself—can inform the measurement strategy. The evaluation method may be quick, or it might involve multiple measurements, depending on the goal. And the evidence needn’t be new. It can come from data that an educator is already collecting or might be a new data point that is easily collected.

As Romano found, it’s not always easy to figure out how to measure success, especially for initiatives with multiple goals. The process can seem overwhelming, leading many educators not to try to measure progress at all, even though they recognize it as important. But by broadening definitions of the term “evidence,” gaining confidence in using and sharing qualitative data, defining specific goals early and looking for simple ways of measuring progress toward those goals, educators can make measuring success more manageable.


Want to learn more about this research project? Visit our project page and download our report.

