
Hi, robot: Why robotics and language need each other

By Matthew Hutson

When Stefanie Tellex was 10 or 12, around 1990, she learned to program. Her great-aunt had given her instructional books, and she would type code into her father’s desktop computer. One program she typed in was the famous ELIZA, an early artificial intelligence system that aped a psychotherapist. Tellex would tap out questions, and ELIZA would respond with formulaic text answers. “I was just fascinated with the idea that a computer could talk to you,” Tellex says, “that a computer could be alive, like a person is alive.” Even ELIZA’s rote answers gave her a glimmer of what might be possible.
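ELIZA’s formula was simple pattern matching: spot a keyword in the user’s input and slot their words into a canned response template. A minimal sketch of the idea, illustrative only and not Joseph Weizenbaum’s original 1966 script, might look like this:

```python
import re

# A few ELIZA-style rules: a keyword pattern plus a canned response
# template. Illustrative only; not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\byes\b", re.IGNORECASE), "You seem quite sure."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching template, filled with the captured words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel lost"))           # Why do you feel lost?
print(respond("It's about my mother"))  # Tell me more about your mother.
print(respond("What do you mean?"))     # Please go on.
```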

In college, Tellex worked on computational linguistics. For one project, she wrote an algorithm that answered questions about a block of text, replying to a question such as “Who shot Lincoln?” with “John Wilkes Booth.” But “I got really disillusioned with it,” Tellex says. “It basically boiled down to counting up how many words appeared with other words, and then trying to produce an answer based on statistics. It just felt like something was fundamentally missing in terms of what language is.”
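The approach she describes, roughly scoring each candidate sentence by how many words it shares with the question, can be sketched in a few lines. This toy version simply returns the best-matching sentence from a passage; it illustrates the statistical idea, not her actual algorithm:

```python
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    """Lowercase word tokens, with punctuation stripped."""
    return re.findall(r"[a-z0-9']+", text.lower())

def overlap(question: str, sentence: str) -> int:
    """Score a candidate sentence by how many question words it contains."""
    counts = Counter(tokens(sentence))
    return sum(counts[w] for w in set(tokens(question)))

def answer(question: str, passage: str) -> str:
    """Return the passage sentence with the highest word overlap."""
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: overlap(question, s))

passage = ("Abraham Lincoln was the 16th president of the United States. "
           "John Wilkes Booth shot Lincoln at Ford's Theatre in 1865.")
print(answer("Who shot Lincoln?", passage))
# -> John Wilkes Booth shot Lincoln at Ford's Theatre in 1865
```

Counting shared words is enough to pick the right sentence here, which is exactly the hollowness Tellex objected to: nothing in the program knows who Booth or Lincoln were, or what shooting is.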

For her PhD, Tellex worked at the MIT Media Lab with Deb Roy, who shared her disillusionment. He told her: “Yeah, what’s missing is perception and action. You have to be connected to the world,” Tellex says. Language is about something: an object, an event, an intent, all of which we learn through experience and interaction. So to master language, Roy was saying, computers might need such experience, too. “And that felt really right to me,” she says. “Plus, robots are awesome. So I basically became a roboticist.”
