Family-Friendly Data Privacy + AI Activities: Interactive lessons to help kids learn and design with data privacy in mind

By Stephanie Nguyen, Daniella DiPaola, and Cynthia Breazeal 

Thousands of schools have closed across the United States due to the COVID-19 pandemic, forcing families to become increasingly reliant on digital platforms and resources to educate their kids and keep them occupied. During this time of “distance learning,” advocacy groups like Common Sense Media and the Campaign for a Commercial-Free Childhood are engaging with parents to provide homework help resources and to help them plan activities during this transition.

Policymakers are seeking to strengthen privacy safeguards from the Federal Trade Commission (FTC) and the Department of Education (ED) for online education technology platforms. In late March 2020, Senators Ed Markey (D-MA), Dick Durbin (D-IL), and Richard Blumenthal (D-CT) wrote a joint letter to Secretary of Education Betsy DeVos and the FTC. The letter encourages the FTC and ED to “jointly issue guidance to ed tech companies in order to protect student privacy” by creating clearer privacy policies and issuing guidance for parents about data collection and sharing, security risks, and tracking.

“The chief barrier to understanding is the power differential between companies and consumers—even if we spend the time to read the privacy policy, it can feel like we don’t have a choice but to use the service no matter what it says,” Emily Peterson-Cassin, digital rights advocate at Public Citizen, shared with us. “This is particularly true when a school district or activity is using a certain platform with perhaps less awareness of the potential risks than would be ideal.”

Beyond policy and legal initiatives, another way to reduce privacy risks to children is to cultivate new education opportunities. How might we integrate complex topics like AI, privacy, and security in relatable ways that empower children to be conscientious consumers of these platforms and technologies? How might we do this in a way that allows children to see both the pros and cons of any technology, and increases their intuition and awareness around data protection and collection?

This is a unique time to engage and teach children about the benefits and limitations of technology. “Let’s ask our kids—in developmentally appropriate ways—what they like about connecting through tech platforms and how those platforms fall short of providing the rich experience afforded by face-to-face and hands-on learning,” David Monahan, Campaign Manager at the Campaign for a Commercial-Free Childhood (CCFC), told us. “For younger kids, that might just mean asking concrete questions about what’s different between online and in real life. For teens, it’s a great way to open up discussions about persuasive design, privacy, and advertising.”

Many organizations, institutions, and practitioners have researched and created online workshops and curricula related to privacy, data collection, and data use. Common Sense Education, the NSF-backed privacy curriculum at UC Berkeley, the Digital Citizenship resources from Harvard’s Berkman Klein Center, Cyber Civics, and Intel & Discovery Education’s Digital Safety Program are just a few examples of ways that different teams have pulled together resources to help teachers and parents spread this knowledge in creative ways.

Our approach was to avoid solely listing the dangers and the “what not to do’s” of using connected technology. Instead, we reviewed existing materials on data privacy and AI-related topics and talked directly with parents and kids to learn which privacy topics are relevant and important to their digital lives, such as YouTube videos with targeted ads and Pokémon Go with location data collection. We then created these workshops based on an assessment of the needs and gaps in available age-appropriate content on these topics. Our goal was, and continues to be, to encourage students to form their own opinions and think more critically about the platforms they use every day. When piloting these activities, it was apparent that our students could form their own opinions and use those insights to drive new technical designs and ideas.

In this post, we share new data privacy and design activities, along with at-home tech and AI debate guides for families, that our group has developed and piloted in the Boston area with the Girl Scouts of Eastern Massachusetts. There are four resources in this release:

1) We introduce data privacy with an activity to build intuition about the opportunities and limitations of YouTube recommendations; a toy code sketch of the underlying idea follows this list. (Ages 7-9). Resources here.

2) We offer a creative workshop to teach students about the challenges of designing advertisements with transparency. (Ages 9-11). Resources here.

3) We explore designing online consent in social media platforms with a focus on the Children’s Online Privacy Protection Rule (COPPA). (Ages 12-14). Resources here.

4) To make AI and technology-related topics a part of everyday practice, we created a guide for parents and/or guardians to have at-home debates and conversations with their children. Resources here.
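To make the intuition behind the first activity concrete, here is a minimal, hypothetical sketch in Python of how a recommender might rank videos by topic overlap with a viewer’s watch history. This is not YouTube’s actual algorithm, and every title and topic below is invented for illustration.

```python
# A toy topic-overlap recommender (NOT YouTube's real system).
# All titles and topics are hypothetical, for illustration only.

watch_history = ["slime tutorial", "slime ASMR", "unboxing toys"]

catalog = {
    "giant slime experiment": {"slime", "experiment"},
    "math homework help": {"math", "school"},
    "toy unboxing marathon": {"unboxing", "toys"},
}

# Infer the viewer's interests from the words in past video titles.
interests = {word.lower() for title in watch_history for word in title.split()}

# Rank candidate videos by how many of their topics match those interests.
ranked = sorted(catalog, key=lambda t: len(catalog[t] & interests), reverse=True)

print(ranked)
# ['toy unboxing marathon', 'giant slime experiment', 'math homework help']
```

Even this toy version surfaces both sides the activity explores: the ranking is usefully personalized, but it depends entirely on collected watch data and never suggests anything outside a viewer’s past interests.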

We include an overview video of the activities along with accompanying materials (worksheets, syllabus, slide presentations, and teaching notes). We worked with multiple stakeholders to develop and pilot these activities: students, teachers, leaders from the Girl Scouts, and the Edward M. Kennedy Institute for the United States Senate. You can read more about the lessons in action in collaboration with the Cybersecurity Badge Day workshop here.

This AI + data privacy curriculum is one of a growing number of K-12 educational resources developed by the MIT Media Lab’s Personal Robots group, headed by Professor Cynthia Breazeal. She is leading a larger MIT-wide effort to democratize AI education through developing a range of project-based learning activities for primary, middle, and high school students. According to Breazeal:

“Children today are not just digital natives, they are AI natives. They use AI-enabled devices and applications such as YouTube, search engines, social media, and smartphones equipped with AI assistants. I’m really excited about our new project-based Data Privacy and AI learning unit. Through these design activities, children and their families can build awareness around data collection, AI and privacy considerations that will help them be conscious users of AI-enabled applications that they interact with on a daily basis.”

You can explore these learning tools and a variety of other family-friendly resources at ai.educational.mit.edu. This website is a resource that MIT will continue to expand as our faculty, staff, and students develop new methods, materials, tools, and project-based learning activities. The goal is to help K-12 students and their families understand how AI-enabled technologies impact our society, to help children be conscientious users of AI-enabled technologies, and ultimately to empower children to become future ethical designers of AI solutions.

While many parents are still navigating different ways to homeschool their kids during this pandemic, and teachers are exploring effective approaches to distance learning, we hope these activities give students and families some helpful ideas. Please reach out to dipaola@mit.edu if you have any feedback, comments, or questions about these workshop materials.

--------

Stephanie Nguyen is a research scientist at the MIT Media Lab focusing on data privacy, design, and tech policies that impact marginalized populations. She previously led privacy and user experience design projects with the National Institutes of Health and Johns Hopkins’ Precision Medicine team.

Daniella DiPaola is a first-year graduate student in the Personal Robots Group at the MIT Media Lab, currently working on an AI + Ethics curriculum for middle school students. She has worked as a researcher in the consumer robotics industry, focusing on human-robot interaction paradigms in the home, long-term livability, and robots in the lives of both children and older adults.

Cynthia Breazeal is faculty at the MIT Media Lab, where she directs the Personal Robots Group. Her research group investigates and innovates at the intersection of AI, learning, and education, whether that is advancing responsible AI to help people learn or helping students of all ages learn about, use, and design with AI in conscientious ways. She is leading the larger MIT effort to democratize AI education for K-12 students and lifelong learners through a variety of new programs.
