Ghandeharioun, A., Eoff, B., Jou, B., & Picard, R. W. (2019). Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias. arXiv preprint arXiv:1909.09285.
Supporting model interpretability for complex phenomena where annotators can legitimately disagree, such as emotion recognition, is a challenging machine learning task. In this work, we show that explicitly quantifying the uncertainty in such settings has interpretability benefits. We use a simple modification of classical network inference, Monte Carlo dropout, to obtain measures of epistemic and aleatoric uncertainty. We identify a significant correlation between aleatoric uncertainty and human annotator disagreement (r ≈ .3). Additionally, we demonstrate how difficult and subjective training samples can be identified using aleatoric uncertainty, and how epistemic uncertainty can reveal data bias that could result in unfair predictions. We identify the total uncertainty as a suitable surrogate for model calibration, i.e. the degree to which we can trust the model’s predicted confidence. In addition to explainability benefits, we observe modest performance boosts from incorporating model uncertainty.
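The decomposition described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (in the style of Kendall & Gal's MC dropout decomposition), not the authors' implementation: it assumes a network head that predicts both a mean and a variance, simulates T dropout-randomized forward passes with a toy stand-in function, and splits the predictive uncertainty into an epistemic part (variance of the predicted means across passes) and an aleatoric part (average of the predicted variances).

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_uncertainty(predict_fn, x, T=50):
    """Run T stochastic forward passes and decompose uncertainty.

    predict_fn(x) -> (mean, variance): one dropout-randomized pass.
    Returns (predictive_mean, epistemic, aleatoric, total).
    """
    means, variances = zip(*(predict_fn(x) for _ in range(T)))
    means = np.stack(means)          # shape (T, ...)
    variances = np.stack(variances)  # shape (T, ...)
    predictive_mean = means.mean(axis=0)
    epistemic = means.var(axis=0)       # spread across dropout masks
    aleatoric = variances.mean(axis=0)  # average predicted data noise
    return predictive_mean, epistemic, aleatoric, epistemic + aleatoric

# Toy stand-in for a dropout-randomized network: random jitter on the
# mean plays the role of dropout (epistemic); the predicted variance
# of 0.04 plays the role of inherent data noise (aleatoric).
def toy_predict(x):
    dropout_noise = rng.normal(0.0, 0.1)
    return x + dropout_noise, np.array(0.04)

mean, epi, ale, total = mc_dropout_uncertainty(toy_predict, np.array(1.0), T=200)
```

Under this sketch, the paper's "total uncertainty" proxy for calibration is simply the sum of the two components, and high-aleatoric samples would be candidates for the difficult or subjective examples the abstract mentions.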