
This is part nine of my annual look at the year’s “top ed-tech stories.”

“We Need to Rethink How We Educate Kids to Tackle the Jobs of the Future.” It’s a core refrain in “the innovation gospel,” one of those arguments that certain pundits and politicians really lean into. You hear it all the time, accompanied by a standard set of justifications about the pressing need to reform education: something about the “factory model of education”; something about radical shifts in the job market in recent decades; something about technology changing faster than it’s ever changed before. And almost inevitably, at some point this statistic will get invoked: 65% of children entering primary school today will end up working in jobs that don’t exist yet.

All of these claims play pretty fast and loose with the facts – with the history of education, with the history of technology, and with the history of work. All of them. But the point of these sorts of stories is never historical accuracy (although certainly citing a number – “65%” – gives them all the air of science and truth).

“65% of children entering primary school today will end up working in jobs that don’t exist yet.” It’s a prediction – and a statistic – that Benjamin Doxtdator brilliantly dissects in an article he published this year called “A Field Guide to ‘Jobs that Don’t Exist Yet’.” And while on one hand, I’d like to see everyone stop citing that made-up figure, on the other, it’s always useful to be able to identify who the bullshit artists are.

What makes this narrative about the future resonate, I think, is that it taps into the fears many feel about the future, about their children’s future. This isn’t simply a matter of “robots are coming for your jobs.” They’re coming for your kids’ jobs.

The future for younger generations does seem particularly grim: “millennials” carry more student loan debt than their parents; they’re less likely to own a home; their employment rates have been slower to recover after the recession; they earn less money. Oh, and then there’s global climate change.

This narrative – robots are coming for your jobs (and your kids’ jobs) – involves tasking schools with retooling so they can better train students for “the jobs of the future,” although to a certain extent, workforce preparation has always been what (part of) the education system has been expected to do. A sense of urgency about financial precarity – now and in the future – might raise the stakes these days. Really, it’s no surprise that fears about an unsettled, uncertain economy are used to shift and control education’s mission, and it’s no surprise that parents go along with that.

According to the Bureau of Labor Statistics, the occupations that will add the most new jobs in the next decade are personal care aides (754,000 new jobs), food service workers (579,900 new jobs), registered nurses (437,000 new jobs), home health aides (425,600 new jobs), software developers (253,400 new jobs), and janitors and cleaners (233,000 new jobs). The fastest growing occupations are solar photovoltaic installers (growing by 105%), wind turbine service technicians (growing by 96%), home health aides (growing by 47%), personal care aides (growing by 37%), and physician assistants (growing by 37%). But just one of these occupations seems to dominate the storyline of how schools should prepare students for the “jobs of the future.” And it sure isn’t “everyone should learn nursing.”

Everyone Should Learn to Code


As I noted in the previous article in this series, the technology industry has continued this year to advocate for changes to both policy and curriculum to expand computer science education.

In an attempt to counter the industry’s ongoing problems with diversity, many of the organizations pushing for more CS education – the College Board and Code.org, most notably – have touted the progress that has been made in getting more kids of color and more girls to participate in coding classes (and, in the case of the College Board, more students paying for AP assessments, including a second CS-related exam, first administered this year). “Is the College Board’s Newest AP Computer Science Course Closing the Gap?” Edsurge asked in February. Well, probably not as long as the culture of Silicon Valley remains profoundly racist and sexist. And it’s a nice thought too that an AP exam would be a lever for equality, I guess.

The tech industry made a number of high profile donations and pledges this year, coinciding with the Trump administration’s announcement this fall that it would earmark $200 million in grants for computer science and STEM education. (Arguably this focus on CS was the one education policy pursued by Obama that Trump did not try to loudly reverse.) The administration’s announcement did not make it clear where that $200 million would come from (it had, after all, submitted a budget that would slash the Department of Education’s funding by $9.2 billion) – but that didn’t stop some industry folks from crooning about what they saw as a windfall. The day after the administration’s announcement, several tech giants – Microsoft, Amazon, Facebook, Salesforce, and others – said they would help fund the White House’s commitment to CS, to the tune of $300 million more. Again, this all made for a flurry of headlines, even though there were few specifics about where that money would go or how schools or students would benefit – “Scant Details, Fuzzy Math,” as EdTech Strategies’ Doug Levin put it.

Computer science education need not be about job training and need not be in the service of industry. But with the millions of dollars being funneled by industry into this effort, that is very much the shape this is all taking.

Saturday’s Child


One of the notable elements this year of both the “everyone should learn to code” narrative and the “we need to be training students for the jobs of the future” story was how young this is all supposed to start. “How to Prepare Preschoolers for an Automated Economy” read a New York Times headline in July. “A Toy for Toddlers Doubles as Code Bootcamp,” said another NYT piece, this one profiling a $225 programmable wooden block toy. “PBS Show Will Teach Preschoolers How To Think Like Computers,” Edsurge declared this summer. And then there was the President’s daughter, Ivanka Trump, who penned an op-ed in The New York Post in October explaining “Why we need to start teaching tech in Kindergarten.”

No time for play, kids. Get to work.

In November, Bloomberg reported that WeWork, a co-working space startup that acquired both the coding bootcamp Flatiron School and the social events company Meetup this year, planned to open a private K–12 school in one of its facilities. The school would be focused on “conscious entrepreneurship.” “Children are ready to start creating their life’s work when they’re 5,” WeWork co-founder Rebekah Neumann told Fast Company. “In my book,” she told Bloomberg, “there’s no reason why children in elementary schools can’t be launching their own businesses.”

There are plenty of reasons, to be quite frank. There are legal issues regarding children’s intellectual property, and there are issues surrounding child labor laws. But good grief, there are moral and ethical and pedagogical reasons too. Is their “life’s work” what we want five-year-olds to be thinking about? Are monetization strategies what we want elementary school students to be concerned with? (And, of course, let’s consider which students would be weighing monetization at the private WeWork school, and which students in the US are worried about their next meal.)

What should we do with kids over the summer months? Senator Ben Sasse offered an answer in an op-ed in The New York Times in July. Again: put ’em to work.

“The logic of human capital is now the basis for the American education system, which means it’s the code that governs the day-to-day lives of America’s children,” Malcolm Harris argues in his 2017 book Kids These Days: Human Capital and the Making of Millennials. Childhood, education – it’s all become “work,” Harris contends. Learning how to “work.” Learning as “work.” Work as learning. It’s part of what he describes as the “pedagogical mask” that obscures the labor children do in school.

Kids These Days opens with an analysis of one of my very favorite books on education technology, Danny Dunn and the Homework Machine. In the 1958 children’s story, the title character has built a homemade labor-saving device that allows him to handwrite two copies of his and his best friend’s math homework at once. “If only we could save even more time,” Danny says to his pal. “You’d think six hours of school would be enough for them, without making us take school home. If only I could build some kind of robot to do all our homework for us….” Danny’s mother, as it turns out, is a housekeeper for Professor Bullfinch, who happens to have a computer in a laboratory in his house. (A “miniac” – a miniature version of Harvard’s Mark I.) The resourceful young Danny programs it to do his homework for him – leaving him more time for playing baseball and for exploring his own curiosities. “Inquiry-based learning,” I think we’d call it. But a classmate snitches, and rather than admit to his teacher that he’s cheating, Danny tries to make the case – the very logical and accurate case – that he’s simply using a new technology to work more efficiently, to be more productive, like any good scientist would do. Rather than reward his ingenuity, his teacher gives him more homework.

“Between 1981 and 1997,” Harris writes, “elementary schoolers between the ages of six and eight recorded a whopping 146 percent gain in time spent studying, and another 32 percent between 1997 and 2003…. Kids age nine to twelve, like Danny, have sustained near 30 percent growth in homework, while their class time has increased by 14 percent.”

Harris argues that, despite all the stories told about lazy millennials – that op-ed from Senator Sasse, for example – today’s youth are already working incredibly hard. And that’s because, as in Danny Dunn’s story, new technologies are being used to ramp up the pace and the expectations of their productivity – the amount of schoolwork and homework they are supposed to do.

Helicopter Robots


In Kids These Days, Harris argues that modern parenting has become all about risk management and even risk elimination as parents try to ensure the best possible future for their children. The obsession with hand sanitizer. The elimination of wooden playgrounds. We all know the story.

“Helicopter parenting” – or at least parental anxiety – might not be a new phenomenon, but it is now increasingly able to enlist new technologies to monitor children’s activities. A story this summer in New York Magazine is, no doubt, an extreme example of this: “Armed with Nest Cams and 24/7 surveillance, one company promises to fix even the most dysfunctional child – for a price.” But many, many technology products boast features that allow parents to check up on what their kids are doing – what they’re reading, what they’re watching, what they’re clicking on, what they’re saying, who they’re talking to, how many steps they’re taking, where they are at any given moment, and so on. It’s all under the auspices, of course, of keeping kids safe.

This all dovetails quite neatly, as I noted in the article on education data, with the ways in which schools too are quite willing to surveil students. The New York Times family section cautioned in August about “The Downside of Checking Kids’ Grades Constantly.” But the expectation of many ed-tech products (and increasingly school policy) is that parents will do just this – participate in the constant monitoring of student data.

When schools and parents surveil children, they aren’t the only ones collecting the data – companies are (obviously) as well. And as I noted in that article I wrote on data, this isn’t simply about a loss of privacy – although yes, there’s sure a lot of that (and there were a handful of FTC settlements and lawsuits this year regarding the unauthorized collection of data by products and apps aimed at kids). It’s also about the vulnerability of data to various cyberthreats.

One of the major targets for cyberthreats – that is, a growing source of data vulnerability – is “the Internet of Things” or IoT. IoT now includes all sorts of everyday objects that, for some reason, folks think it’s a good idea to add Internet connectivity to. In February, Motherboard reported that “Internet of Things Teddy Bear Leaked 2 Million Parent and Kids Message Recordings.” Also in February, Germany warned parents that Cayla, a talking doll, could easily be hacked and criminals could use the toy “to steal personal data by recording private conversations over an insecure Bluetooth connection.” In July, the FBI issued a warning about the security of Internet-connected toys: “toys with microphones could record and collect conversations within earshot of the device. Information such as the child’s name, school, likes and dislikes, and activities may be disclosed through normal conversation with the toy or in the surrounding environment.” In November, Germany banned smartwatches for kids due to concerns about surveillance and safety, and the same month, a UK consumer rights group called for the banning of Internet-connected toys, citing the privacy and security risks.

I guess we’ll see how many parents ignore this advice and buy their kids holiday gifts that spy on them. For her part, the head of Mattel believes that “Internet-connected toys are the future.”

Robots Raising Children


In January of this year, at the annual Consumer Electronics Show in Las Vegas, Mattel (or rather, its subsidiary Nabi) unveiled Aristotle, a “smart baby monitor” – what it claimed was the world’s first. Companies always hope they’ll be able to make headlines at CES, and Aristotle received a fair amount of attention this year. There were stories in the usual tech publications – Engadget, PC World, CNET – as well as in the mainstream and tabloid press – USA Today, ABC News, Fox News, The Daily Mail. Bloomberg heralded the device as “Baby’s First Virtual Assistant.” And here’s how Fast Company described the voice-activated speaker/monitor, which was slated for release later in the year (the launch date kept getting postponed):

Aristotle is built to live in a child’s room – and answer a child’s questions. In this most intimate of spaces, Aristotle is designed to be far more specific than the generic voice assistants of today: a nanny, friend, and tutor, equally able to soothe a newborn and aid a tween with foreign-language homework. It’s an AI to help raise your child.

I gave a talk this summer at NMC on the Aristotle and the history of baby monitors – “‘The Brave Little Surveillance Bear’ and Other Stories We Tell About Robots Raising Children” – and I won’t rehash the arguments here. (It’s one of my favorite talks I gave in 2017, I will say, so you should read it.)

I was skeptical at the time about Mattel’s ability to pull off the promises it made about the functionality of the virtual assistant – it had already canceled the Amazon Alexa integration heralded at CES. Hell, I was skeptical it would ever be released. And indeed, Mattel announced in October that it was canceling its plans to bring the device to market. Mattel didn’t cite privacy concerns or the petition drive organized by the Campaign for a Commercial-Free Childhood when it did so. It simply said “leadership changes” at the company prompted the decision. The New York Times reported that “Sven Gerjets, the company’s new chief technology officer, ‘conducted an extensive review of the Aristotle product and decided that it did not fully align with Mattel’s new technology strategy.’”

But where Mattel stumbled, Amazon seems to be thriving, convincing millions of people to buy an Alexa and place the company’s Internet-connected “voice assistant” in their homes. The security and privacy flaws are still there, no doubt, as these devices “listen” to all conversations, gathering data so as to build a consumer profile on a household.

As I argued in my article on “education platforms,” keep an eye on Amazon and how it tries to promote Alexa as a teaching and learning machine. Because Amazon’s power in the platform economy deeply implicates education in the practices of surveillance and in a pervasive culture of commercialism. It deeply implicates the home – family life, childhood – in the same.

Some parents have expressed concern about the relationships – and yes, that’s the word that people use – that children are developing with devices like Alexa. “What will it do to kids to have digital butlers they can boss around?” the MIT Technology Review asked. “Alexa, Are You Safe For My Kids?” asked NPR. “Should Children Form Emotional Bonds With Robots?” asked The Atlantic. What happens, The Washington Post wondered, “When your kid tries to say ‘Alexa’ before ‘Mama’.”

MIT professor Sherry Turkle recently argued that robots shouldn’t be given or promoted to children as “friends.” They offer “the illusion of companionship without the demands of friendship,” she says, “the illusion of connection without the reciprocity of a mutual relationship. And interacting with these empathy machines may get in the way of children’s ability to develop a capacity for empathy themselves.” Of course, since so many folks in ed-tech like to lambast everything Turkle has written lately, her arguments will probably be used to justify more robots, not fewer. And certainly Amazon (and the Bezos Foundation) are ready with all sorts of (corporate-sponsored) research about “connected families” to convince parents that these devices are really quite wonderful for childrearing.

As Stirling University’s Ben Williamson put it,

it’s clear that parenting and child-rearing has now become the focus for ambitious technical imaginings and visions. Supported by the massive technical might of Microsoft and Amazon, companies like Mattel want to extend from supplying Barbie dolls and Hot Wheels tracks to become interactive participants in semi-automated luxury family life. To do this, they’ll need data about families and about children. The surveillance and datafication of the family has begun.

We should ask, of course, whose family life will be one of semi-automated luxury and whose will be one of surveillance and control. (And we should ask too what the hell is going on with the algorithms that are raising children – like the ones feeding kids content on YouTube.)

Robot Teachers


Which families will have a “robot butler” or “robot nanny”? Which students will have a “robot teacher”? The questions surrounding equity and algorithms should be paramount. But too often, there’s simply an assumption that more technology means “progress.” (And “technological progress” is so readily confused with “politically progressive.”) “‘Eton for all’,” the New Statesman wrote about robot teachers in October, for example, suggesting that they would mean “everyone gets an elite education.”

Spoiler alert: they wouldn’t.

Robot teachers. I’m not sure there was any ed-tech fantasy repeated more often this year than this one: AI will be the “next big thing” in the classroom. “Amazon’s Alexa: Your Next Teacher.” Robots will revolutionize how people teach and learn. Machine learning will revolutionize how people teach. Artificial intelligence will transform how people learn. AI will choose better lessons. AI will improve graduation rates. AI will improve guidance counseling. Robots will improve “student engagement.” Robots will keep students “on task.” AI will replace testing. AI will facilitate testing. AI will be the key to “personalized learning.” AI will teach students how to be better writers. Robots will then grade those papers. AI will transform universities. AI will “optimize” K–12 education. And then, there were all sorts of claims and predictions about the future of educational chatbots – chatbots as teaching assistants, chatbots for sex and drug education.

Again and again this year, as I noted in the previous article in this series, we were warned that “robots are coming for our jobs.” We’re supposed to believe, from the repetition of all these robot tales, that AI has made – is making – incredible breakthroughs. Sure, some say education will be particularly challenging to automate. But there’s clearly a strong political desire in certain circles to do just that.

“Robots will replace teachers by 2027.” “Machines will replace teachers in ten years.” “Within ten years, human teachers will be phased out, replaced by machines” – these are all the predictions of one guy, Sir Anthony Seldon, vice chancellor of the University of Buckingham. But boy, was this prediction repeated over and over in the media this year.

Talk of robots is always talk about labor. And you can sense the disdain for teachers as labor in some of the arguments for teaching machines this year. “Let Robots Teach American Schoolkids,” George Mason’s Tyler Cowen wrote in July. “Imagine how great universities could be without all those human teachers,” Quartz wrote in September. Imagine.

(Perhaps it’s worth noting here another story from the year: many teachers cannot afford to live in the school districts where they teach. Teachers cannot afford to live in parts of Colorado, for example. Teachers certainly cannot afford to live in the Bay Area. Good ol’ Mark Zuckerberg. His Chan Zuckerberg Initiative contributed $5 million to a fund run by a startup called Landed that helps teachers make down payments on homes. That $5 million will help about 50 teachers in the Bay Area, Edsurge estimates. Why address structural inequality when you can sell a couple of folks on a loan service?)

As I have written about quite frequently, there is a long history to the push for teaching machines – it’s been the project of education technology since at least the 1920s. There has been renewed storytelling in the last year or so about “intelligent tutoring systems,” a phrase first coined in the early 1980s (and one that’s been updated with new, more Facebook-era-friendly language: “personalized learning”). Praise for “intelligent tutoring systems” is often accompanied by invocation of the work – also from the 1980s – of Benjamin Bloom and his research on the effectiveness of tutoring itself. Can computers replicate that two standard deviation bump that Bloom found human tutors provided? And does it even matter if they can or can’t – as Politico put it in an article on tutoring low-income children in Chicago, “learning from a computer is going to be far cheaper than hiring a human being.”
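(A quick aside on the arithmetic behind that “two standard deviation” shorthand, under the simplifying assumption – the one Bloom’s “2 sigma problem” framing itself relies on – that test scores are roughly normally distributed:

\[
P(Z < 2) = \Phi(2) \approx 0.977
\]

That is, the average tutored student in Bloom’s studies scored better than roughly 98% of students taught conventionally – which is the bar the “robot tutor” pitch is implicitly claiming software can clear.)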

Who gets the robot tutor and who gets the human one? Who gets a tutor at all? Investors seem to believe that there are lots of parents who are willing to pay. Tutoring businesses were, by far, the most well-funded type of education company this year. Many of these companies come with their own set of labor issues – they’re part of the growing “gig economy” that positions teaching and tutoring as part-time, freelance work rather than as a profession.

Arguably, tutoring exacerbates educational inequalities, as it shifts the burden of enrichment activities onto individual families rather than onto a public institution like school. But it’s a price many parents are willing to pay so that their children can get ahead or stay ahead. Perhaps it’s not so much that “robots are coming for the children.” It’s that global inequality already has.

Financial data on the major corporations and investors involved in this and all the trends I cover in this series can be found on funding.hackeducation.com. Even at 5000+ words an article, there are stories I have left out. You can read more at 2017trends.hackeducation.com.

Audrey Watters

