This post was co-authored with Autumm Caines (primary author). Autumm Caines is a liminal space. Part technologist, part artist, part manager, part synthesizer, she is passionate about the use of technology in higher education and the many different facets of how technology impacts society and culture. She likes spending time at the place where different disciplines intersect. Autumm is an Instructional Designer at St. Norbert College and a Co-Director of the Virtually Connecting project.
Inside Higher Ed (IHE) recently released its 2017 Survey of Faculty Attitudes on Technology, and we were excited about the big initial finding: more faculty are teaching online, and more are finding that online learning can meet the same outcomes as face-to-face teaching. This is great, if unsurprising, news.
Research that confirms the basics can be very valuable - it is not sensational, but it is practical and can assure us that we are not way off track. However, some other things bothered us about the report right from the start. For instance, it is filled with advertisements, and there is no statement about how the advertisers impacted or influenced the study - but we know that they sponsored it, that they are edtech companies and commercial publishers, and that they are therefore not disinterested parties. We have taken issue with other reports for similar reasons.
Additionally, they surveyed "Digital Learning Leaders" to see how they felt about some (but not all) of the items in the survey. Why would you survey non-faculty in a survey about faculty attitudes? We also question the false, oversimplified division between faculty and "digital learning leaders": some faculty are digital learning leaders, and some digital learning leaders teach.
Near the end of the report, we became more skeptical, wondering whether the methodology of this study warranted deeper scrutiny. The last part of the report deals with plagiarism-detection software, and it specifically targets a peer-reviewed article published in Hybrid Pedagogy a few months ago. Neither the survey nor the official report links to this article; however, IHE's own coverage of the report states that this line of questioning is directly related to the Hybrid Pedagogy article, and it links to another IHE piece that is critical of it (linking to another IHE article about the Hybrid Pedagogy article, rather than to the original article itself, is in itself biased reporting).
The survey states that “In 2017, some scholars circulated a manifesto that called on instructors to stop using plagiarism detection services, saying that their use had ceded too much control to companies, and removed incentives for professors to teach students about academic integrity.”
Imagine a respondent of the IHE survey who had not read the Hybrid Pedagogy article: would they realize what IHE was referring to when it said "some scholars circulated a manifesto"? The Hybrid Pedagogy article never calls itself a manifesto, and without some context, who knows what criticisms the respondents could have been exposed to (in fact, "manifesto" is the term IHE used when it reported on the article earlier). Calling it a manifesto and ignoring its peer-reviewed publication status is problematic. This line of questioning and analysis is biased in ways that make us wonder about the survey as a whole.
Besides the fact that Sean Michael Morris and Jesse Stommel do not call their article a manifesto, it is important to point out that the original article is far more nuanced than the single sentence IHE wrote to represent it. Survey designers cannot know which of their respondents have actually read the article and which have not. There is an assumption built into this question that respondents have read the article; if they have not, is that one sentence supposed to suffice? Shouldn't there be an option to say "I haven't read the article" or "I am unfamiliar with these criticisms"? The survey goes on to ask respondents to gauge, on a Likert scale, their agreement with the article (of which the survey designers actually mention two distinct points, which should have been separated into two questions at the very least) and then whether they had changed classroom policy because of it. The question was only posed to faculty and not to digital learning leaders.
We have many issues with all of this. Why would IHE go after one specific article and not cite it or link to it in the actual survey? Why would they not ask if respondents had been exposed to the article, and give them an opportunity to say "I had not read this article or heard of these critiques before"? Why would they instead try to sum up such a nuanced article in a sentence and ask survey respondents to make an (un)informed decision about something they may not have read? How do you feel about this thing that you've never read? It is just so cryptic, and it offers no room for reflection (nor is a survey a space to make up our minds about important issues if it is our first exposure to them). If they wanted to know whether faculty felt that plagiarism-detection software removed incentives to teach about plagiarism, they could have asked that question directly without referencing the article. If they wanted to know whether faculty cared about handing student data to tech companies, they should have asked that question directly, and separately. Instead, the IHE survey seems to have concerned itself with attacking an article rather than asking respondents about particular ideas and attitudes (i.e., they asked people if they agreed with the article, rather than directly asking them about their attitudes towards tools that can potentially cause harm).
We have both, in the past, supported plagiarism-detection software at our institutions. We eventually became critics of these systems, but we also understand that the viewpoints expressed in the Hybrid Pedagogy article take time to develop. Someone answering a survey who is being exposed to them for the first time is not likely to suddenly understand what the problem is or grasp the nuances of the critiques. We've spent months, even years, trying to explain these ideas to others, and it takes time for them to sink in.
Lastly, it is the interpretation of the final part of this question that has us really confounded. The final question of the plagiarism section asks, "Have these criticisms of plagiarism-detection software altered your policies on the use of such tools?" and it is a simple yes-or-no response. You cannot opt out if you had not read the article; there is no N/A option in the results. Keep in mind that only 48% of participants used plagiarism-detection software to begin with; we do not know why the others did not use it, and one of the reasons might be that they already agreed with some points in the article. The study report says, "Relatively few faculty members, 6 percent, say these criticisms of plagiarism-detection software have changed the way they use such tools."
Changing the way someone thinks about a technology is a real innovation, and it takes time. Anyone can buy a solution, train people on how to use it, and offer certain rewards to see returns. There is nothing special about that. Purchasing power is not innovation. Changing minds is innovation.
The methodology section of the IHE report states that, after some weighting and normalization for a variety of factors, "The weighted sample results can be considered representative of the views of faculty and digital learning leaders at colleges nationwide". If this one little article, published a mere four months ago, has actually changed the minds of 6% of the nation's faculty members so much that they would change their use of plagiarism-detection software, that is a huge leap. This is particularly important given that some institutions and departments require their students and faculty to use these tools, and that not all faculty have the agency to modify their practice in the short term, even if they agreed with the article (which about 27% of respondents did - not a trivial number, given institutional directions that view these tools in a positive light). It is also possible that some respondents did not modify their practice because they were already not using plagiarism-detection software. We are unsure how that 6% can be interpreted as "relatively few" in any context but the narrow one that compares it to the larger bucket, completely ignoring the actual human realities that we face in the educational technology field every day, and how difficult it is for an individual faculty member to resist institutional edtech policies.
We are not the only ones critical of this report - see, for example, this Twitter thread.
What do you think about the IHE study? Tell us in the comments.
DISCLAIMER:
We would like to state that Maha is International Director of Digital Pedagogy Lab and an editor at Hybrid Pedagogy, and has participated in organizing and teaching at Digital Pedagogy Lab institutes. Autumm was a fellow at a Digital Pedagogy Lab Institute in August of 2016. We both have good relationships with, and closely follow the work of, Sean Michael Morris and Jesse Stommel.
“The Queen Attacks” flickr photo by malias https://flickr.com/photos/malias/478543755 shared under a Creative Commons (BY) license