Analyzing a meta-analysis of flipped learning


Here at the blog on Fridays, I like to repost either a "throwback" article from this website's past or something I wrote that was published outside this blog. This week, I'm reposting this article from EdSurge, written by Jeff Young, a significant chunk of which consists of an interview Jeff did with me. The article is about a recent paper on flipped learning that is simultaneously quite vocal in its criticism of flipped learning and equally vocal about the virtues of a particular flipped learning model that the paper proposes after making those criticisms.

Read the whole thing (it's short) but here are some pertinent excerpts:

The study considered 173 studies of flipped learning, as well as 46 previous meta-analyses of the approach. And while many of the studies showed gains for learners in some cases, the researchers concluded that flipped learning isn’t living up to its promise.
“The current levels of enthusiasm for flipped learning are not commensurate with and far exceed the vast variability of scientific evidence in its favor,” the paper argues.
In fact, the authors made the surprising conclusion that many instances of flipped learning involve more time spent on passive learning than the traditional lecture model, because some professors both assign short video lectures and spend some time in class lecturing to prepare for class activities. As the authors put it: “Indeed, it seems that implementations of flipped learning perpetuate the things they claim to reduce, that is, passive learning.”
The far-reaching meta-analysis considered flipped learning experiments done in elementary schools, high schools and colleges, with the bulk of the studies in the higher ed setting.
The biggest surprise to the researchers as they coded each research project was realizing how many different versions of flipped learning exist, said John Hattie, an emeritus professor at the University of Melbourne who co-authored the study. “The hype is convincing — it’s seductive — but the implementation of the hype is not,” he said. “It has been implemented so variably.”
The researchers [...] end their paper by presenting a model of flipped learning they refer to as “fail, flip, fix and feed,” which they say applies the most effective aspects they learned from their analysis. Basically they argue that students should be challenged with a problem even if they can’t properly solve it because they haven’t learned the material yet, and then the failure to solve it will motivate them to watch the lecture looking for the necessary information. Then classroom time can be used to fix student misconceptions, with a mix of a short lecture and student activities. Finally, instructors assess the student work and give feedback. [...]
Fans of flipped learning had some questions about the new study’s conclusions. Among them is Robert Talbert, a professor in the mathematics department at Grand Valley State University and author of the book “Flipped Learning: A Guide for Higher Education Faculty.”
“It kind of takes flipped learning educators to task, and I thought that was super unnecessary,” Talbert said. “I wanted to reach out to the authors and say, ‘Hey we’re all on the same team here.’ They’re part of the group doing flipped learning.”
“It’s a great discussion starter, and I’m never going to say we can’t publish things that are critical to flipped learning,” Talbert said. “But the paper’s overall message was, ‘All of y’all are doing flipped learning wrong, and we’re doing it right.’ I didn’t think that was fair to people practicing flipped learning.”

Again, I am leaving out a lot. Click here to read the entire piece.

Thoughts

First, I stand by my use of "all of y'all". That's the correct plural form of "y'all".

Second, let me be positive before I get critical.

  • Any indication that flipped instruction is not using active learning to its fullest extent needs to be taken seriously. The entire purpose of flipped learning is to maximize the time and depth of active learning in the group space. We should listen carefully to any research that says this isn't happening.
  • I like the authors' model in the paper.

Now, here are some more detailed critiques of the paper itself, some of which are hinted at in the EdSurge piece.

  • I know I am supposed to be objective when reading research, but it's hard when the very first sentence of the paper says "The current levels of enthusiasm for flipped learning are not commensurate with and far exceed the vast variability of scientific evidence in its favor." Was there not a better way to put this? Did that sentence even need to be included? As I mentioned in the EdSurge piece, there is a strong flavor of gatekeeping throughout the paper which made it hard to listen to what the researchers had to say.
  • The paper's main issue with flipped learning is the wide variability in how it's implemented. But the authors of the study have brought some of that variability upon themselves by including K12 along with higher ed institutions in the study. The paper states that the studies analyzed come from "3 [studies] in Elementary [schools], 2 in High, 2 in K-12, 19 in College, and 23 across all levels of schooling". In my view, K12 and higher ed instruction – particularly flipped instruction – are too different to compare: completely different sets of assumptions, goals, and institutional situations. So it's no wonder there was a lot of variability! Indeed, later in the paper, the authors separated the K12 and higher ed audiences and found significant (not in the statistical sense) differences in effect sizes. Would we see the same level of variability within institutional settings? The authors say "yes," but this is so far speculative. (A toy numerical sketch of this subgroup point appears after this list.)
  • If this paper were just a meta-analysis, the criticism would be much easier to take. But there are two parts to this paper: the meta-analysis, which finds a great deal of variability in the way flipped learning is implemented (not all of it good), and then a section where the authors propose a model... of flipped learning. So again, gatekeeping: you people are doing this all wrong, and here's our way of doing it right. This is not a real criticism of the research, of course. But if you're going to publish a paper intended to persuade (and I think persuasion is the intent of this paper), then maybe a different sort of touch is required.
  • The main research question appears to be something of a straw man. The paper says (my emphasis):
It is precisely because flipped learning allows for more active learning during the in-class time that proponents of flipped learning claim that it leads to better academic outcomes, such as higher grades and better test and examination scores, than the traditional method (Yarbro et al., 2014; Låg and Sæle, 2019; Orhan, 2019). Examining this claim is precisely the aim of our paper.

But I don't think "higher grades and better test scores" is the main "academic outcome" that most flipped learning practitioners really claim will be improved. I have certainly never focused on that sort of thing; self-regulation and intrinsic motivation are what I'm interested in. And at any rate, who cares about test scores and numerical grades? The papers referenced here point to more meta-analyses focusing on such quantitative data, and this is the metric the authors use for determining effect sizes. But I think this introduces a confound for "variability": the metric itself may not have much construct validity.

  • Back to that model that's proposed, pithily named "Fail, Flip, Fix, Feed": As I said, I like it. In fact I liked it the first time, when essentially the same thing was published by Schneider and Blikstein back in 2016 in their study of "tangible interfaces", which I've mentioned here a couple of times (here, here). No, I'm not suggesting plagiarism. But I am saying (and said so in the EdSurge piece) that this sort of model has been done before, and an analysis of Schneider and Blikstein's work could and should have been a part of the theoretical framework behind it, not just (literally) a footnote.
  • Speaking of past research, I was a little baffled by the absence of previous meta-analyses on active learning, most especially the 2014 Freeman PNAS paper, which is deeper, more focused, and more extensive (225 studies of active learning in higher ed STEM classes) than this one, or nearly any other. There is likely far more "variability" among the studies in the Freeman paper than among the flipped learning studies here, and yet Freeman et al. found strongly positive effect sizes. Surely the Freeman 2014 study would have been worth examining? And surely it's worth trying to resolve the differences in conclusions between Freeman and this study?
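To make the K12-versus-higher-ed point above concrete, here is a minimal sketch of a standard fixed-effect subgroup analysis, the kind of decomposition meta-analysts use to ask where heterogeneity lives. Every effect size and variance below is invented for illustration; nothing comes from the paper's data. The idea: if the two settings cluster around different true effects, most of the heterogeneity in a pooled analysis shows up between the subgroups rather than within them.

```python
# Toy subgroup meta-analysis. All effect sizes (Cohen's d) and sampling
# variances are made up for illustration; none come from the paper.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

def cochrans_q(effects, variances):
    """Cochran's Q: weighted squared deviations from the pooled mean."""
    mean = pooled_effect(effects, variances)
    return sum((d - mean) ** 2 / v for d, v in zip(effects, variances))

# Hypothetical subgroups: K12 studies cluster near d = 0.1,
# higher ed studies cluster near d = 0.5.
k12_d,    k12_var    = [0.10, 0.15, 0.05], [0.02, 0.03, 0.02]
higher_d, higher_var = [0.45, 0.55, 0.50], [0.02, 0.02, 0.03]

q_within  = cochrans_q(k12_d, k12_var) + cochrans_q(higher_d, higher_var)
q_total   = cochrans_q(k12_d + higher_d, k12_var + higher_var)
q_between = q_total - q_within  # heterogeneity explained by the subgroup split

print(f"Q within subgroups : {q_within:.2f}")
print(f"Q between subgroups: {q_between:.2f}")
print(f"Q total (pooled)   : {q_total:.2f}")
```

With these made-up numbers, Q between the subgroups dwarfs Q within them, which is exactly the pattern you would expect if pooling K12 and higher ed manufactures much of the "variability" the paper complains about.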

In the end, I just don't know how I feel about this paper. I have no problems with criticism of any instructional technique, even the ones I've written books and papers about. They are instructional paradigms, not religions. And it's definitely the case that these authors did a much better job of performing a meta-analysis than I am capable of doing.

However: I'm somehow just not convinced. I think I'd be more open to what the paper has to say if I could look into the classrooms of the teachers whose work is being analyzed and meta-analyzed, see what they are doing, see what their students are doing, and then talk to them about what's being done. There's a place here for "team science," but if you're not also on "team empathy," the result can be hard to believe.
