
Publication

Exploring listeners' perceptions of AI-generated and human-composed music for functional emotional applications

Lecamwasam, K. and Chaudhuri, T.R. (2025). Exploring listeners' perceptions of AI-generated and human-composed music for functional emotional applications. arXiv preprint arXiv:2506.02856

Abstract

This work investigates how listeners perceive and evaluate AI-generated as compared to human-composed music in the context of emotional resonance and regulation. In a mixed-methods design, participants were exposed to both AI and human music under various labeling conditions (music correctly labeled as AI- or human-origin, music incorrectly labeled as AI- or human-origin, and unlabeled music) and emotion cases (Calm and Upbeat), and were asked to rate preference, efficacy of target emotion elicitation, and emotional impact. Participants were significantly more likely to rate human-composed music, regardless of labeling, as more effective at eliciting target emotional states, though quantitative analyses revealed no significant differences in emotional response. However, participants were significantly more likely to indicate preference for AI-generated music, raising further questions about the impact of emotional authenticity and perceived authorship on musical appraisal. Qualitative data underscored this, with participants associating humanness with qualities such as imperfection, flow, and 'soul.' These findings challenge the assumption that preference alone signals success in generative music systems. Rather than positioning AI tools as replacements for human creativity or emotional expression, the findings point toward a more careful design ethos that acknowledges the limits of replication and prioritizes human values such as authenticity, individuality, and emotion regulation in wellness and affective technologies.
