Many community organizations want to engage in conversations with their constituents but lack the support they need to analyze the feedback they collect. Most of the time, these organizations outsource the analysis, which is expensive and distances decision-making from community members; settle for a superficial reading of the data; or let the data collect dust, never uplifting the stories that people shared.

We first became aware of this problem while collaborating with Charlotte-Mecklenburg Schools (CMS), a school district in North Carolina, to gather people’s stories and feedback around two new magnet schools. One of the biggest challenges we faced came when we tried to analyze all of the data. Our partners in CMS were passionate about creating magnet programs grounded in community voices but had no experience in qualitative data analysis (QDA), or sensemaking. With help from expert sensemakers, we were able to analyze the data: we created a codebook, or list of themes, from a subset of the responses and then applied that codebook to the rest of the data through a process called qualitative coding.
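
To make those two steps concrete, here is a minimal sketch in Python. The themes, descriptions, and responses are hypothetical stand-ins invented for illustration; a real codebook emerges from a careful read of the data, ideally with expert guidance.

```python
# Step 1: a codebook -- a list of themes distilled from an initial read of
# a subset of the data. (These themes are hypothetical examples.)
codebook = {
    "safety": "Concerns about students' physical and emotional safety",
    "transportation": "Bus routes, commute length, and access to the school",
    "curriculum": "Course offerings, enrichment, and teaching approach",
}

# Step 2: qualitative coding -- tagging each remaining response with the
# theme(s) it expresses. A response can carry more than one code.
coded_responses = [
    ("The bus ride is too long for elementary students.",
     ["transportation"]),
    ("A safe campus with strong arts programs matters most to our family.",
     ["safety", "curriculum"]),
]

for text, codes in coded_responses:
    # Every applied code should come from the shared codebook.
    assert all(code in codebook for code in codes)
    print(f"{codes}: {text}")
```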

Not all organizations can work closely with expert sensemakers for every community engagement initiative they launch. As a result, we aim to create accessible entry points into sensemaking for people with no prior experience. We plan to apply machine learning and human-centered design methods to create, and then evaluate, a platform that helps non-expert sensemakers conduct qualitative coding, or apply a codebook to their entire dataset. We focus on this phase of sensemaking because it is one of the most time-consuming and tedious parts of the analysis process. Our goal is to reduce the amount of time it takes to code a sample of community conversations while improving the overall coding reliability.
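
As one illustration of how machine learning could fit into this phase, the sketch below trains a simple text classifier on a hand-coded subset of responses and then checks its agreement with the human coder using Cohen's kappa, a standard chance-corrected reliability measure. The data, codes, and model choice here are hypothetical, not a description of our platform.

```python
# A minimal sketch, assuming scikit-learn: fit a classifier on a
# hand-coded subset, then estimate human-model agreement on held-out
# responses with Cohen's kappa.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical responses and the single code a human assigned to each.
responses = [
    "I want a school where my kid feels safe walking in the door.",
    "Crossing that busy intersection worries me every morning.",
    "The bus ride is too long for elementary students.",
    "There is no direct bus route from our neighborhood.",
    "Arts programs kept my daughter engaged all year.",
    "We would love more hands-on science classes.",
]
codes = ["safety", "safety", "transportation", "transportation",
         "curriculum", "curriculum"]

# Hold out half of the coded subset to estimate agreement.
X_train, X_test, y_train, y_test = train_test_split(
    responses, codes, test_size=0.5, stratify=codes, random_state=0)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# Kappa near 1.0 means the model codes like the human coder;
# near 0 means agreement no better than chance.
print("Cohen's kappa:", cohen_kappa_score(y_test, model.predict(X_test)))
```

In practice, the same kind of reliability check is also run between human coders (inter-rater reliability) before any model-assisted codes are trusted.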