They’re having fun… but are they learning?

By Anneli Hershman (and cross-posted on Medium)

How can you measure the learning outcomes of play? This question has frustrated professionals in the field of early childhood education for years.

It has been well documented that play is an essential part of children’s learning, especially in creative learning and motivation. But without the means to directly measure how play facilitates academic achievement, it tends to be the first thing cut from early educational environments. When we as researchers rely heavily on academic metrics to assess learning, we blind ourselves to the bigger picture. If, instead, we take a holistic view, it’s clear that play has a major role in a child’s educational development through motivation, construction, exploration, experimentation, social-emotional development, language development, and cognitive development.

However, when we look at how we can measure learning opportunities through play, it’s important to understand how complex play truly is. Too much educational research relies on simplified metrics to assess complicated phenomena, resulting in measurements that are incomplete and shallow.

It’s like assessing a person’s ability to cook by testing if they can make a peanut butter and jelly sandwich.

Yes, both cooking and making a PB&J involve combining certain ingredients in certain ways, but the art of cooking is much more complex than memorizing and following one simple recipe. Plus, there are a multitude of ways to make a PB&J sandwich! Some people may put peanut butter on both slices of bread; some may use more jelly; some may toast the bread. If you count only one of these methods as correct, then all the other ways are misread as incorrect. Suddenly, this measurement not only fails to gauge whether someone is a good cook; it cannot even determine whether someone can make a PB&J sandwich at all. It tests only whether someone knows how to make a PB&J sandwich in the one specific way defined by the researcher.

This problem is seen often in educational research, and, I believe, it largely stems from focusing too heavily on one particular ingredient. In a research lab setting, it is too easy to concentrate on that one specific measure and then control for everything else to produce a researcher’s desired outcomes. But when research leaves the lab and enters the chaotic environment of real-life educational settings, that measure gets lost in a very messy recipe — one in which we can only ever see half the recipe because the other half has faded or is covered by years of sticky fingerprints.

Okay, so maybe cooking and PB&J sandwiches aren’t the best analogy, but the main point is this: when we look at how to measure learning opportunities through play, it is important to understand that there are no simple ways to assess learning. Therefore, to see whether an educational tool, such as an app, can actually facilitate learning for young children, we need play-testing that is conducted outside the lab environment. Furthermore, one measure is not enough — we cannot simply take frequency statistics or use questionnaires to find out how engaging the app is. Instead, we must find ways to collect both quantitative and qualitative data that go beyond results-based learning analytics or qualitative observations of what children say about the app. We must collect rich sets of data that attempt to capture the context, interactions, frequency, process, and intentionality of the child.
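
To make that concrete, here is a minimal sketch in Python (my own illustration with hypothetical field names, not an actual research instrument) of what such a rich play-data record might look like, pairing quantitative signals with qualitative context:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PlayEvent:
    """One observed action during a play session (hypothetical schema)."""
    timestamp: float   # seconds from session start
    action: str        # e.g., "typed_word", "recorded_audio", "tapped_image"
    detail: str        # e.g., the word typed or the picture chosen

@dataclass
class PlaySession:
    """A rich record: context and process, not just outcomes."""
    child_id: str
    setting: str                 # context: "classroom", "home", ...
    partners: List[str]          # interactions: peers, parents, teachers
    events: List[PlayEvent] = field(default_factory=list)  # full process trace
    observer_notes: str = ""     # qualitative: what the child said and seemed to intend

    def frequency(self, action: str) -> int:
        """Frequency statistics are still available, but as one signal among many."""
        return sum(1 for e in self.events if e.action == action)
```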

So how are educational technology companies or online learning providers measuring these factors?

The main answer is “learning analytics”: the collection, analysis, and reporting of big data about learners and their contexts in order to optimize learning. There are both pros and cons to learning analytics. On the one hand, this framework makes individualized instruction possible; on the other hand, the frequency- and results-based data collected by these tools are often incomplete and shallow. Using data analytics to create personalized learning paths for individual users has the potential to be a wonderful tool, but without context or a view into the learner’s process, the lessons and supports offered to a student can only be based on the what — whether the student has mastered the material by getting the answer right or wrong — rather than the how — whether the student actually understands the process and material. It is important to know why a learner does not understand the material in order to offer effective support.
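
As a toy illustration of that gap (my own hypothetical example, not any particular product’s data model), compare what a results-based log can tell a tutor with what a process-aware log can:

```python
# A toy contrast: results-based vs. process-aware analytics (hypothetical logs).

# Results-based: all we know is the "what".
results_log = {"child_42": {"spell_cat": "wrong"}}
# The only support this enables is "repeat the lesson": no insight into why it failed.

# Process-aware: the "how" is preserved.
process_log = {
    "child_42": {
        "spell_cat": {
            "attempts": ["kat", "cet", "cat"],    # the learning path
            "hints_used": 1,
            "seconds_per_attempt": [4.1, 6.3, 2.0],
        }
    }
}

# A tutor (human or machine) can now see that the child knows the sounds
# but is still mapping the /k/ sound onto the letter "c", so a targeted
# support exists instead of a blanket "try again".
```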

This becomes even more chaotic when analyzing learning through play — where the “lesson” is open-ended and has multiple pathways to achievement or mastery. Learning analytics and personalized learning tools are excellent scaffolds to help students solve problems that have only one correct answer. But how do you use results-based or frequency data to help students with open-ended work that has no right or wrong answer? Furthermore, for play that is child-interest driven, the first task is to understand what the child intends to do before figuring out what type of support or scaffolds are needed to help the child reach his/her individual goal. This is the main area where I believe learning analytics fall short, and also where I think there is the biggest potential for research to have a positive impact.

Play Analytics

In the Playful Words project at the MIT Media Lab’s Social Machines group, we follow a design approach that is child-driven and machine-guided, in order to develop technologies that empower children by providing self-expressive, socially collaborative, and playful literacy learning opportunities. Our technologies serve as an unstructured playground to collect and combine contextual and behavioral data, which we use to go beyond learning analytics in early education by introducing what we’re calling play analytics: data-driven analysis of free play.

Through the analysis of process-based data (play analytics), we can start to detect patterns to interpret what the child wants to achieve, and produce personalized supports to help them get there. This also has implications for parents and families because it provides a more descriptive view of children’s learning processes and literacy skills. A new part of our work is to translate the information from the play analytics for parents in a digestible way that highlights children’s progress, offers related resources, and shares pertinent activities based on the child’s interest.
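
To give a flavor of the idea, here is a deliberately naive, rule-based sketch of my own (with hypothetical action names, not the actual Playful Words pipeline) of how pattern detection over a free-play trace could feed an inferred intent into a suggested support:

```python
from collections import Counter
from typing import List

def infer_intent(actions: List[str]) -> str:
    """Guess what the child is trying to do from their free-play trace.
    A naive, rule-based stand-in for real pattern detection."""
    counts = Counter(actions)
    if counts["record_audio"] > counts["type_word"]:
        return "oral_storytelling"
    if counts["type_word"] > 0:
        return "written_storytelling"
    return "exploring"

def suggest_support(intent: str) -> str:
    """Map an inferred intent to a playful, non-intrusive support."""
    supports = {
        "oral_storytelling": "offer a record-and-replay prompt",
        "written_storytelling": "surface word tiles related to the story so far",
        "exploring": "stay out of the way; keep observing",
    }
    return supports[intent]

trace = ["tap_image", "record_audio", "record_audio", "type_word"]
print(suggest_support(infer_intent(trace)))  # -> offer a record-and-replay prompt
```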

My particular interest is in storytelling, and how it can motivate young students (4 to 8 years old) to communicate their own stories while learning important literacy skills and fostering a love of reading and writing. I hope to build on our work by creating tools and interactions that use play analytics to facilitate collaborative storytelling and self-expressive play.

Going Forward

While we’re still working on answering the major question of how to measure the learning outcomes of play, I do believe we are on the right path. Going forward, it is important for us to: (1) use a critical lens when designing studies, and make sure that, in the field of educational research, we always use both quantitative and qualitative data; (2) build solutions and test them in naturalistic environments, not just in the lab; and (3) iterate, test, and, most importantly, continuously gather feedback from our target users throughout the entire process. Only then, I believe, will we be able to start designing truly impactful tools that have the potential to measure some of the many learning outcomes of play.


Anneli Hershman is a Learning Innovation Fellow in the ML Learning Initiative and a research assistant in the Lab’s Social Machines research group.