Emotion AI technology is based on a psychological theory known as “basic emotions.” The theory divides the emotional states expressed in human communication into six types: happiness, surprise, fear, disgust, anger, and sadness.
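Systems built on this theory generally reduce emotion recognition to a six-way classification problem: a model scores a face against the six candidate labels and reports the highest-scoring one. The sketch below is a hypothetical illustration of that framing only; the label set comes from the theory, but the function name and the example scores are invented here, and real systems would derive such scores from a trained vision model.

```python
# Toy illustration of the "basic emotions" framing: pick the label with the
# highest score. The scores are made up for this example; real emotion AI
# systems compute them from facial-image features using a trained model.

BASIC_EMOTIONS = ["happiness", "surprise", "fear", "disgust", "anger", "sadness"]

def classify_emotion(scores):
    """Return the basic-emotion label with the highest score.

    `scores` is a list of six numbers, one per label in BASIC_EMOTIONS.
    """
    if len(scores) != len(BASIC_EMOTIONS):
        raise ValueError("expected one score per basic emotion")
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return BASIC_EMOTIONS[best_index]

# Example: a face whose (hypothetical) scores peak on the first label.
print(classify_emotion([0.71, 0.10, 0.05, 0.02, 0.04, 0.08]))  # happiness
```

The criticisms discussed later in this article target exactly this framing: forcing every expression into one of six fixed labels assumes a mapping from face to emotion that researchers argue has not been established.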
However, the scientific basis for emotion AI is not yet well established, and its practical application raises a series of privacy and ethics issues, making the technology controversial with the public.
1. Video interviews are scored automatically: emotion AI helps companies screen candidates
At present, emotion AI is applied across a variety of fields, where it can play a positive role.
In hiring, interviewers use emotion AI to score job candidates’ video interviews against criteria such as “enthusiasm,” “willingness to learn,” “responsibility,” and “personal stability.”
In South Korea, emotion AI has become widespread in job interviews, so job-search consultants there often have clients practice AI interviews to improve their success rate. In the security field, border guards use emotion AI at checkpoints to screen unknown individuals and head off potential threats. In the medical field, doctors use emotion AI as an aid in detecting and diagnosing patients with mood disorders.
▲Video interview
2. Human expressions are complex and hard to judge: the scientific basis for emotion AI is insufficient
However, there is considerable controversy at present over the scientific basis and feasibility of emotion AI.
Researchers have found that people’s facial expressions vary widely across contexts and cultures, and there is currently insufficient evidence that facial movements accurately, reliably, and specifically reflect a person’s emotional state.
There is also substantial evidence that facial expressions vary too much to support a uniform standard for judging emotion. Human “happiness,” for instance, can be conveyed through thousands of different expressions, and even the same expression can mean different things on different people’s faces.
Futurist Tracey Follows tweeted her doubts about emotion AI, saying: “At worst, this technology may be a pseudoscience.” AI ethicist Kate Crawford said: “The association between facial expressions and emotions has not been proven. Therefore, decisions based on emotion AI technology are full of uncertainty.”
Because of these concerns, some companies have abandoned emotion AI development. On June 22, Microsoft announced that it would stop selling facial emotion analysis tools and further restrict access to its facial recognition tools, removing the AI emotion recognition feature from Azure Face, its facial recognition service.
3. Privacy and ethics concerns make emotion AI controversial
The technology is reportedly used not only in areas such as video interviews, but also to “monitor” people’s minds. According to VentureBeat, some companies use emotion AI software to judge whether an employee is sufficiently loyal to the company and thus whether to hold a private conversation with that employee. Others use it to judge whether employees are in their best working condition, scheduling rest breaks accordingly to improve productivity and profits.
Neuroscience News posed a key question: even if emotion AI could read exactly how everyone is feeling, would we want such technology in our lives? Are we willing to expose our thoughts and emotions, unreservedly, to the eyes of others? This is a core privacy issue.
Conclusion: Surveillance or progress? Emotion AI applications need clearer standards and further development
Every technology faces controversy when it is new, and emotion AI is no exception. What makes it different is that it requires clear ethical boundaries; without them, the technology will be accused of enabling surveillance.
Imagine this technology deployed deeply across more fields: people might start altering their facial expressions to sway the machine’s judgment, catering to the algorithm and, in effect, letting machines dictate their behavior. After all, we created machines to serve humans, not to put ourselves at their mercy.
Moreover, the link between facial expressions and human emotions has not been scientifically established, so the foundation of this technology remains shaky. Human emotions are complex and changeable, extending far beyond six basic types, and even when faced with the same event, each person expresses it differently.
In general, this technology is still in its infancy, and there is still a long way to go in terms of scientific basis, ethics, and practical applications.