The Jerusalem Post

Report: Google developed artificial intelligence that "detects emotions"

 
 Google Headquarters. Artificial Intelligence Detects Emotions (photo credit: SHUTTERSTOCK)

The new vision-language model, named PaliGemma 2, is designed to identify emotions from images. The development is already raising ethical concerns.

Artificial intelligence is advancing at a rapid pace, and Google's newest innovation is now raising moral questions as well: an AI model called PaliGemma 2 that can allegedly analyze emotions based on images. The company announced the model yesterday (Thursday), touting capabilities that include accurate descriptions of actions, emotions, and the background stories of photographed scenes.

PaliGemma 2 analyzes images to generate captions and answer visual questions. According to Google, the system is capable of "detecting" emotions in images, but this function is not enabled by default and requires customization.

The company explained that the new technology goes beyond object recognition and enables a full contextual understanding of the photographed scene. However, this development is raising concerns among experts in the field of AI ethics.

The basic theory of emotion recognition relies on the work of Paul Ekman, who claimed that humans express six basic emotions: anger, surprise, disgust, joy, fear, and sadness. However, later research has shown significant cultural differences in how people express emotions, casting doubt on the theory's accuracy.


Emotion recognition systems tend to be biased due to the assumptions made by their developers. A 2020 MIT study in the U.S. found that face-analysis models tended to favor certain expressions, such as smiling, and assigned more negative emotions to the faces of Black people than to those of White people.

PaliGemma 2. A development raising concerns (credit: GOOGLE)

Google notes that the model underwent extensive testing for demographic bias, but the company has provided limited information about the types of tests and metrics used. The model was evaluated against FairFace, a benchmark dataset of face images spanning several racial groups, but that dataset too has been criticized as limited and not representative of a broad enough range of populations.

The European Union already classifies emotion recognition as a high-risk technology. Its new AI law, the AI Act, bans the use of the technology in schools and workplaces but permits its use by law-enforcement authorities. Experts fear that broad access to Google's new model could lead to harmful uses, including discrimination against marginalized groups.

A Google spokesperson emphasized that the company has conducted comprehensive ethical testing, including examining the potential impacts on different groups, and said the model has also been tested for child safety and sensitive content. For now, it is unclear when Google's new AI model will be made available to the public.
