By Reece Rogers
via WIRED
Jan. 15, 2025
---
“One problem I see is that people will question who is responsible for the actions of an agent,” reads a WIRED interview with MIT Media Lab professor Pattie Maes, originally published in 1995. “Especially things like agents taking up too much time on a machine or purchasing something you don’t want on your behalf. Agents will raise a lot of interesting issues, but I’m convinced we won’t be able to live without them.”
I called Maes early in January to hear how her perspective on AI agents has changed over the years. She’s as optimistic as ever about the potential for personal automation, but she’s convinced that “extremely naive” engineers are not spending enough time addressing the complexities of human-computer interactions. In fact, she says, their recklessness could induce another AI winter.
“The way these systems are built, right now, they’re optimized from a technical point of view, an engineering point of view,” she says. “But, they’re not at all optimized for human-design issues.” She points to how AI agents are still easily tricked or fall back on biased assumptions, despite improvements to the underlying models. Meanwhile, misplaced confidence leads users to trust answers generated by AI tools when they shouldn’t.