
When the Animated Bunny in the TV Show Listens for Kids’ Answers — and Answers Back

By Rebecca Koenig     Feb 9, 2021

Still frames from “Elinor Wonders Why,” in which Elinor talks to kids.

This story is part of an EdSurge Research series about early childhood education.

During tricky situations in the new PBS KIDS show “Elinor Wonders Why,” a curious rabbit directs a question to viewers, pausing to give them a chance to answer.

This invitation to participate in the story is a hallmark of educational programs for young children, a moment designed to check their comprehension and engage them in learning. It usually has limits, though, since no response a kid offers can influence what happens next.

Yet when this rabbit asks the audience, say, how to make a substance in a bottle less goopy, she’s actually listening for their answers. Or rather, an artificially intelligent tool is listening. And based on what it hears from a viewer, it tailors how the rabbit replies.

“Elinor can understand the child’s response and then make a contingent response to that,” says Mark Warschauer, professor of education at the University of California at Irvine and director of its Digital Learning Lab.

AI is coming to early childhood education. Researchers like Warschauer are studying whether and how conversational agent technology—the kind that powers voice assistants such as Alexa and Siri—can enhance the learning benefits young kids receive from hearing stories read aloud and from watching videos.

Before learning to read, many young children spend ample time absorbing this kind of media, often without much guidance from their caregivers.

“Coviewing—parents sitting with kids and watching and asking questions while they watch—can have a positive impact on skill development. We have known that research for a long time, but also know parents are really busy,” says Sara DeWitt, vice president of PBS KIDS Digital. “It’s hard for them to sit down and watch TV with kids.”

Can artificial intelligence help children participate with, and not just consume, media? And can AI conversations make shows more educational, especially for those kids least likely to have a grown-up watching with them? That’s what experts at the Digital Learning Lab and PBS hope to find out—with help from an animated bunny.

“The lack of interactivity really limits what students could learn from this media,” says Ying Xu, a postdoctoral researcher in the lab. “Language is an important vehicle to help children understand and learn. We want to encourage students to verbalize what they think and know.”

Creating Conversation

Xu has a young collaborator to thank for sending her down the talking-rabbit hole. Very young.

“I have a five-year-old. I saw him talking to the smart speaker a lot in my home,” Xu says. “From there, I got my first idea: This is something we could turn into educational purposes.”

So she and her grown-up colleagues set up an experiment. In one scenario, trained human adults read storybooks to kids ages three to six, pausing occasionally to ask questions and seek feedback from the children. In a second, AI-powered smart speakers did the same thing. In a third, adults read the books without pausing to ask guided questions.

A child reading with the agent partner. Courtesy of the Digital Learning Lab.

The study, published in the journal Computers & Education in February, found that guided questions improved learning, and that having smart speakers do the asking was just as helpful for kids’ story comprehension.

“Not only did children learn a lot better with conversational agents, but the biggest gains were to English language learners,” Warschauer says. “We saw huge benefits.”

Xu and Warschauer decided to apply their research to television. Or, more accurately, to videos, which many kids today watch on smartphones, tablets or computers instead of TV sets. So in 2019, they pitched a collaboration with PBS KIDS, which averages 13.6 million monthly digital users and 359 million monthly streams across digital platforms.

“The idea of building this in while a kid was watching a show was something we were immediately interested in,” DeWitt says. “We are coming from a really similar place: How can we improve a kid’s ability to learn from media?”

The two teams decided to test the conversational agent in a few episodes of “Elinor Wonders Why,” a show created by a fellow University of California at Irvine professor—Daniel Whiteson, who studies physics—and Jorge Cham, the cartoonist behind popular PHD Comics. Doing so required writing additional scripts for characters that anticipate how a child might answer a given question, then creating more animation to match that dialogue.

“Conversation needs to be really carefully crafted so it’s not only understandable by a preschooler, but that Elinor is understanding the ways a preschooler might respond,” DeWitt says.
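As a rough sketch of the kind of branching script this describes (every category name, dialogue line, and clip filename below is invented for illustration, not taken from the show), each interactive moment could pair Elinor's question with replies written and animated in advance:

```python
from dataclasses import dataclass

@dataclass
class ReplyBranch:
    """One pre-scripted response path: a written line plus its animation."""
    elinor_line: str     # dialogue written for this anticipated answer
    animation_clip: str  # extra animation produced to match that dialogue

# A single interactive moment (hypothetical): Elinor's question, with a
# branch scripted ahead of time for each kind of answer a preschooler
# might give, plus a fallback for answers the writers didn't anticipate.
goop_question = {
    "prompt": "How can we make the goop in the bottle less goopy?",
    "branches": {
        "add_water": ReplyBranch(
            "Great idea! Adding water makes it thinner.", "elinor_nods.mp4"),
        "shake_it": ReplyBranch(
            "Hmm, shaking mixes it up, but it stays goopy.", "elinor_thinks.mp4"),
        "fallback": ReplyBranch(
            "Interesting thought! Let's try adding water.", "elinor_smiles.mp4"),
    },
}
```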

Kids in the study watched episodes via computers with built-in microphones. When a young viewer responds to a question from Elinor, Google technology turns their speech into text, then analyzes it for semantic meaning, categorizes it, and prompts Elinor to reply with the most relevant of her scripted answers.

A child watching an “Elinor Wonders Why” interactive video. Courtesy of the Digital Learning Lab.

“Google Assistant is smart enough that kids don’t have to use the exact words” to trigger the best reply, Warschauer says.
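Continuing the hypothetical sketch above, the reply-selection step might look roughly like the toy function below. The production system relies on Google's speech recognition and semantic analysis; a crude keyword-overlap score stands in for that here, which at least illustrates why exact wording isn't required:

```python
def categorize(transcript: str) -> str:
    """Map a child's transcribed answer to the closest anticipated category.

    A toy stand-in for the semantic matching described above; the keyword
    sets are invented, and real semantic analysis is far more flexible
    than counting overlapping words.
    """
    keywords = {
        "add_water": {"water", "add", "pour"},
        "shake_it": {"shake", "shaking", "mix", "stir"},
    }
    words = set(transcript.lower().split())
    best, best_hits = "fallback", 0
    for category, vocab in keywords.items():
        hits = len(words & vocab)
        if hits > best_hits:
            best, best_hits = category, hits
    return best

# "put some water in it" never says "add water" verbatim, yet it still
# lands on the add_water branch, so Elinor plays that scripted reply.
branch = goop_question["branches"][categorize("maybe put some water in it")]
print(branch.elinor_line)  # Great idea! Adding water makes it thinner.
```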

So far, embedding AI in Elinor does seem to offer educational benefits, according to the researchers.

“In general, we found that kids learned more, are more engaged and had more positive perceptions,” Xu says.

And the researchers are especially excited about what their findings might mean for alleviating disparities in early learning. They tested the show with families in two regions of California—one affluent, the other low-income and with a high proportion of English-language learners—and assessed how much science children learned after watching. Those from the low-income community generally scored lower than their peers from the wealthier area. Yet children from the poorer region who watched the AI version of “Elinor” had science scores comparable to children from the richer region who watched the ordinary broadcast version.

“The conversational agent completely wiped out the differences in performance between these two groups,” Warschauer says.

Identifying Artificial Intelligence

Exposing young children to artificial intelligence that listens and responds raises all kinds of questions about privacy, security and psychology. Among them: What exactly do children understand about an AI conversation partner?

“At five or six years old, I think they are going to be very willing to believe,” says Georgene Troseth, an associate professor of psychology at Vanderbilt University who studies early childhood development. “If they don’t have a lot of experience, they don’t know how things work—if something gives the illusion of being a person, they may very well believe that. It puts a huge amount of ethical questions into who can decide what that agent can do.”

Children’s beliefs about AI may affect not only their educational attainment, but also their social and emotional development, according to research by Sherry Turkle, a professor of the social studies of science and technology at MIT. As she wrote in The Washington Post, “interacting with these empathy machines may get in the way of children's ability to develop a capacity for empathy themselves.”

The results of the Computers & Education study suggest that kids can distinguish between humans and AI. Although the children responded to guided questions just as accurately whether listening to humans or to smart speakers, their replies varied in other ways. Kids who interacted with humans offered more relevant answers, and their answers were longer and more lexically diverse.

Those who interacted with speakers responded with greater intelligibility—that is, Xu explains, “they spoke slower and clearer so they could be understood.” (Many adults do that, too.)

“Whatever children believe about the conversational agent, they’re smart enough to recognize over time they may need to modulate the clarity of their expressions,” Warschauer adds.

Another study from Xu and Warschauer, about how children ages three to six perceive the conversational agent in a Google Home speaker, revealed a variety of perspectives. Through speech and drawings, some kids referenced its human-like qualities, while others portrayed it as more of an inanimate object.

“Some of the kids’ drawings of what they thought was inside the Google assistant was a hybrid,” Warschauer says.

Children’s explanations of what they think lives inside a smart speaker. Left: A drawing indicates there is a person living inside the speaker. Right: A depiction of the smart speaker as a girl. Courtesy of the Digital Learning Lab.

Embedding conversational agents into animated video characters may alter how kids perceive the AI. To begin with, children just out of babyhood don’t learn very well from screens, Troseth says.

“Kids are trying to figure out true and false and real and pretend,” she explains. “They seem to have this default assumption that things on a screen aren’t real.”

But by the time they are older toddlers—three, four or five—“children become representational thinkers. They become symbolizers,” Troseth says. “They understand that a picture—a motion picture or still picture—stands for something else. It’s so opaque to them earlier.”

Until that clicks, she says, “parents sitting there and scaffolding the idea that what is on a screen can be real and teach you—that helps children learn from what’s on a screen and take it seriously.”

Even without the addition of artificial intelligence, reality and fantasy already seem blurred when it comes to how children interpret media. PBS KIDS research shows that “kids want to be able to talk to and play with the characters” from their favorite shows, and already do so in their own make-believe games.

When the studio enables Elinor to listen and talk back, DeWitt says, “we think this is going to be a magical, joyous moment for kids.”

To help avoid confusion, PBS KIDS plans to signal which episodes have conversational agents built into them once they’re available to the public. But its team is not too worried about little viewers mistaking Elinor for an autonomous, sentient being.

“She’s an animated bunny, so she’s definitely not human to begin with,” says Silvia Lovato, senior director of research and evaluation for PBS KIDS Learning Technologies. “Kids are smarter than we give them credit for.”

Adult Supervision (Still) Required

Parents tired of “Daniel Tiger” and “Dora the Explorer” may be disappointed to learn that AI doesn’t appear to replace the benefits kids get from watching such shows alongside human caregivers. Instead, conversational agents seem to offer children a different kind of learning opportunity.

One of the limits of AI compared to parents, according to Troseth, is that young children differ widely in what they know and how they interact. An agent can be pre-programmed to respond to some incorrect answers from a child, but not all. In contrast, a caregiver faced with unexpected behavior can improvise by offering a kid an example from real life or a metaphor relevant to the child’s experience.

“Parents know that child and know what that child knows,” Troseth says. “How could you program an artificial agent to know what that child knows and what is the appropriate next question? In Mark’s research, an agent can propose a logical next step but doesn’t really know that child.”

The humans behind “Elinor Wonders Why” say they don’t aim to replace parents—but they do hope to teach them.

“This additive piece, we believe, is important not only for the kids, but also with the parent,” DeWitt says. “It’s meant to model the kinds of questions a parent can ask.”

And for kids who don’t have guardians around to discuss the finer points of what’s happening on screen, Troseth thinks conversational AI is a promising education tool.

“There’s a lot of children where nobody is doing that. Going directly to the child with an agent, it’s not crazy,” Troseth says. “I certainly hope it ends up working.”

Members of the Digital Learning Lab are optimistic. AI tools, they say, are more successful than other interventions they’ve tried, like eBook hotspots that kids press to hear sounds or see visual cues.

“It’s hard to move the needle on kids’ learning,” Warschauer says. “Mostly you’re tinkering around the edges. I’m ready to give up those other things. Putting in interaction through conversation has a much bigger effect.”

