There are currently UROP openings for this project.
Humans navigate an overwhelming flow of information daily, particularly via text media: reading books, browsing social media, or engaging with chatbots. As we interact with these information sources, we frequently experience confusion when information challenges or exceeds our existing knowledge.
Our study provides the first exploration of detecting confusion while reading text excerpts using Electroencephalography (EEG) and eye-tracking. We collected EEG and eye-tracking data from 16 participants as they read short paragraphs drawn from real-world sources. By isolating the N400 Event-Related Potential (ERP) and integrating behavioral markers derived from eye-tracking, we provide a comprehensive analysis of the neural and behavioral signatures of reading confusion. We also focus on better understanding two types of confusion: (1) Factual Confusion, arising from contradictions to previously acquired knowledge, and (2) Contextual Confusion, stemming from insufficient background knowledge.
Furthermore, we demonstrate the feasibility of classifying reading confusion using deep learning, achieving an average accuracy of 77.29% and a best single-subject accuracy of 89.55% through multimodal integration. With these results, this study lays the groundwork for developing adaptive systems capable of responding dynamically to user confusion in real time, with applications ranging from personalized learning tools to accessibility use cases.
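To make the multimodal-integration idea concrete, here is a minimal, purely illustrative sketch of late fusion: per-trial EEG features (e.g., amplitudes in an N400 time window) and eye-tracking features (e.g., fixation counts, regressions) are concatenated and fed to a single classifier. All feature names, dimensions, and the random data are hypothetical, and a simple logistic-regression step stands in for the deep model described above; this is not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 32))   # hypothetical: 32 EEG-channel features
gaze_feats = rng.normal(size=(n_trials, 4))   # hypothetical: 4 gaze-derived features
labels = rng.integers(0, 2, size=n_trials)    # 0 = not confused, 1 = confused

# Multimodal fusion: concatenate the two feature streams per trial.
X = np.concatenate([eeg_feats, gaze_feats], axis=1)

# Tiny logistic-regression classifier trained by gradient descent,
# standing in for the deep network used in the study.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted confusion probability
    w -= 0.1 * (X.T @ (p - labels)) / n_trials
    b -= 0.1 * np.mean(p - labels)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = float(np.mean(preds == labels))
```

The fused feature vector lets the classifier weigh neural and behavioral evidence jointly, which is the intuition behind the multimodal gains reported above.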