Auxane Boch

Decoding the Digital Mirage: Ethics of Computer Vision for Emotion Recognition

Welcome, AI Ethics Enthusiast!


In this blog post, we will discuss the ethics of computer vision for emotion recognition.


What is it, and what do we know about it?


Computer vision, which uses artificial intelligence techniques to analyse images and videos, has many applications. It encompasses tasks such as tracking, identification, detection, classification, localisation, segmentation, facial recognition, emotion recognition, and behaviour recognition.


Regarding facial emotion recognition, computer vision techniques are often built on Dr Paul Ekman's theory of facial expressions of emotions. Ekman proposed that certain emotions, such as joy, sadness, and fear, are universally expressed through similar facial expressions. However, it's important to note that many aspects of Ekman's theory have been challenged or discredited over time.


Alternative theories, such as the Dimensional Theory and the Theory of Constructed Emotions, have gained recognition. The Dimensional Theory suggests that emotions can be described along three fundamental dimensions: valence (how positive or negative), arousal (level of activation), and dominance (level of control). The Theory of Constructed Emotions, proposed by Dr Lisa Feldman Barrett, emphasises how our experiences and bodily signals contribute to constructing emotions.
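
To make the Dimensional Theory concrete, here is a minimal sketch of emotions as points in valence-arousal-dominance (VAD) space, written in Python. The coordinates assigned to each emotion are rough, illustrative placements chosen for this post, not empirically measured norms, and the helper function is a hypothetical illustration.

```python
from dataclasses import dataclass
import math

@dataclass
class EmotionPoint:
    """An emotion as a point in valence-arousal-dominance (VAD) space.

    Each axis is scaled to [-1, 1]; the coordinates used below are
    rough, illustrative placements, not empirically measured norms.
    """
    label: str
    valence: float    # negative (-1) to positive (+1)
    arousal: float    # calm (-1) to activated (+1)
    dominance: float  # controlled (-1) to in-control (+1)

# Illustrative reference points (assumed values for this sketch).
REFERENCE = [
    EmotionPoint("joy",     valence=0.8,  arousal=0.5,  dominance=0.4),
    EmotionPoint("sadness", valence=-0.7, arousal=-0.4, dominance=-0.3),
    EmotionPoint("fear",    valence=-0.6, arousal=0.7,  dominance=-0.6),
]

def nearest_label(valence: float, arousal: float, dominance: float) -> str:
    """Map a predicted VAD coordinate to the closest reference emotion."""
    return min(
        REFERENCE,
        key=lambda e: math.dist(
            (valence, arousal, dominance),
            (e.valence, e.arousal, e.dominance),
        ),
    ).label

print(nearest_label(0.6, 0.4, 0.2))  # -> "joy"
```

In this framing, a recognition system would predict the three coordinates from sensor data and only then map them to a label, which makes its assumptions easier to inspect than a direct categorical classifier.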


Considering these theories, it becomes clear that current emotion recognition primarily relies on Ekman's work, which has faced criticism for its presumed universality and potential biases. The Theory of Constructed Emotions reminds us of the importance of individual context, culture, and experiences in understanding and expressing emotions. Moreover, facial analysis is just one aspect of emotion recognition; other signals, such as heart rate or skin conductance, need to be considered to evaluate emotions accurately.


Despite these challenges, the Dimensional Theory allows us to infer the general positive or negative valence of an emotion, provided the AI is trained on a diverse dataset that represents emotions within a particular culture.
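
As a thought experiment, the sketch below shows what a minimal positive-versus-negative valence classifier could look like. Everything in it is an assumption for illustration: the synthetic features stand in for facial-expression embeddings, the labels are generated rather than collected, and logistic regression is just one plausible model choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "embeddings": 500 samples, 16 features each, standing in
# for features extracted from facial images by a vision model.
X = rng.normal(size=(500, 16))

# Synthetic labels: 1 = positive valence, 0 = negative valence,
# loosely tied to the first feature so the model has signal to learn.
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice, the quality of such a classifier depends entirely on how representative its training data is, which brings us to the challenges and opportunities of these technologies.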


Let's Discuss Ethics


The biggest challenge lies in achieving accuracy. Training data must adequately represent the diverse emotions found across different cultures and settings to avoid inaccuracies in AI outputs. This is especially crucial for neurodivergent populations, who may express feelings differently or have unique emotional experiences. Collecting representative training data for these populations can be challenging, and we must be cautious not to stigmatise or isolate them based on their emotional expressions. Ensuring diversity in data and involving target populations in the design process are vital to minimise bias and ensure adequate representation.


Beyond the technological aspects, we must also consider the broader impact and ethical implications of emotion recognition technology. People may lose their ability to communicate their emotions firsthand when computer vision systems infer their emotional states. Some applications automatically infer personal information like identity, demographics, or mood. Consequently, individuals are deprived of their agency to communicate this information themselves.


As with any technology, however, there are positive use cases as well. Emotion recognition can enhance user experiences by enabling personalised interactions with technology. For example, computer vision can detect a user's emotional state while they watch a film or use a virtual reality application and adjust the content accordingly, creating a more engaging and tailored experience. This technology can also benefit education, for instance by helping AI-powered educational robots support children in learning specific tasks such as maths or languages.


We hope this topic has sparked your interest, and we encourage you to reach out if you have any questions or thoughts. Remember, staying informed and using AI responsibly is a collective responsibility.


Until next time,


- Auxane Boch


References and Readings


Barrett, Lisa Feldman. 2017. The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1):1–23. https://doi.org/10.1093/scan/nsx060


Barrett, Lisa Feldman, Ralph Adolphs, Stacy Marsella, Aleix M. Martinez, and Seth D. Pollak. 2019. Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1):1–68. https://doi.org/10.1177/1529100619832930


Ekman, Paul. 1992. Are there basic emotions? Psychological Review, 99(3):550–553. https://doi.org/10.1037/0033-295X.99.3.550


Russell, James A. 2003. Core affect and the psychological construction of emotion. Psychological Review, 110(1):145–172. https://doi.org/10.1037/0033-295X.110.1.145


Mohammad, Saif M. 2022. Ethics sheet for automatic emotion recognition and sentiment analysis. Computational Linguistics, 48(2):239–278. https://doi.org/10.1162/coli_a_00433


Waelen, Rosalie A. 2023. The ethics of computer vision: An overview in terms of power. AI and Ethics, 1–10.




