Lucy Suchman and Feminist Technoscience
Lucy Suchman's Human-Machine Reconfigurations: Plans and Situated Actions examines a wide range of relations between humans and machines. Suchman suggests three key elements as necessary for AI projects to possess 'humanness':
She elaborates on each characteristic in its own chapter, with real-world examples of historical and contemporary technologies. The latter two chapters deal directly with facial expression recognition (FER).
AI being created after the image of its 'specifically located' makers: '...The prevailing figuration in Euro-American imaginaries is one of autonomous, rational agency, and projects of artificial intelligence reiterate that culturally specific imaginary.' What does it mean to classify human facial expressions into a handful of categories? The categories seem somewhat arbitrary, and it is worth asking why one would do such a thing at all.
Here are some examples of the expression category sets chosen for different algorithms:
Angry, Fearful, Disgusted, Sad, Happy, Surprised, Neutral
Neutral, anger, contempt, disgust, fear, happiness, sadness, surprise
Happy, normal, sad, sleepy, surprised, wink
Neutral, smile, anger, scream
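To make concrete how these fixed category sets operate in practice, here is a minimal sketch of the final classification step such algorithms share: a vector of per-category confidence scores is collapsed by argmax into exactly one label. The label set and scores below are illustrative assumptions, not taken from any specific FER system.

```python
# Minimal sketch of the last step of a typical FER pipeline:
# a hand-chosen, fixed label set and an argmax over model scores.
# LABELS and the example scores are hypothetical, for illustration only.

LABELS = ["angry", "fearful", "disgusted", "sad", "happy", "surprised", "neutral"]

def classify(scores):
    """Collapse a vector of per-category scores into one discrete label.

    Whatever nuance or ambiguity the face held, the output is forced
    into one of the predefined categories above.
    """
    if len(scores) != len(LABELS):
        raise ValueError("score vector must match the label set")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best]

# An ambiguous face: near-equal 'sad' and 'neutral' scores still
# yield a single, confident-looking label.
print(classify([0.05, 0.02, 0.01, 0.44, 0.03, 0.02, 0.43]))  # → sad
```

The point the sketch makes is structural: whichever of the four category sets above a given algorithm adopts, the reduction step is the same, and expressions that fall between categories are silently assigned to one of them.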
The above table from Ion Marquez's 2010 research paper Face Recognition Algorithms, whose stated purpose is to 'produce a review of the face detection and face recognition literature as comprehensive as possible', lists the key applications of FER algorithms. All but the last of these categories relate to surveillance and security, which follows from Marquez's introductory statement that this type of technology was first developed with the US Department of Defense and an 'unnamed intelligence agency' in the 1960s.
Rosalind Picard, a noted proponent and director of the Affective Computing Laboratory at the Massachusetts Institute of Technology, writes: 'because emotional computing tends to connote computers with an undesirable reduction in rationality, we prefer the term affective computing to denote computing that relates to, arises from, or deliberately influences emotions.' This is exactly what FER does. The intelligence cannot itself be said to be 'emotional'; to call it so is a misstep in reasoning. It is 'affective', perhaps, only in the sense that the information it processes is affective to humans.
Technologies, Haraway argues, are forms of materialized figuration; that is, they bring together assemblages of stuff and meaning into more and less stable arrangements.
“Good and reliable subjects” were chosen for their ability to display clearly recognizable emotions on demand, whereas those that failed to produce unambiguous and therefore easily classifiable behaviors were left out of the protocol (Dror 1999: 383).
emotional types, normalized across the circumstances of their occurrence (e.g., as anger, fear, excitement), and treated as internally homogeneous, if variable in their quantity or intensity.
“Emotions were understood as processes in the general scheme of the body-as-machine . . . Thus, emotion was a pattern written in the language of the biological elements that one monitored in, or sampled from, the organism” (2001: 362).
Amanda Knox and Lindy Chamberlain are prominent examples of emotional responses being judged insincere or insufficient, and taken as evidence of guilt. Neurodivergent and autistic people are certainly excluded from 'normative' classifications of humanness.
I don't think FER algorithms do 'figure the human': most algorithms are not made with that intention. FER has no embodiment; it is disembodied software. It is affective in one sense of the word, but not emotional. Whether it has sociality, I am not sure.