Researchers have developed a new computer system that can automatically recognize the emotional state of a person who is speaking with it.
The system, created by scientists at the Universidad Carlos III de Madrid (UC3M) and the Universidad de Granada (UGR), can automatically adapt the dialogue to the user’s situation, so that the machine’s response suits the person’s emotional state.
“Thanks to this new development, the machine will be able to determine how the user feels (emotions) and how s/he intends to continue the dialogue (intentions),” said David Grill, a professor in UC3M’s Computer Science Department and one of the system’s creators.
To detect the user’s emotional state, the scientists focused on negative emotions that can make talking with an automatic system frustrating. Specifically, their work considered anger, boredom and doubt.
To detect these feelings automatically, the system draws on a total of sixty different acoustic parameters, including the tone of voice, the speed of speech, the duration of pauses and the energy of the voice signal.
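As an illustration of the kind of acoustic cues the article mentions, the sketch below computes three simple ones from a raw waveform: frame-level signal energy, zero-crossing rate (a crude proxy for tonal content) and total pause duration. This is a minimal numpy-only sketch under assumed frame sizes and a hypothetical silence threshold; it is not the sixty-parameter feature set used in the published system.

```python
import numpy as np

def acoustic_features(signal, sr, frame_len=400, hop=160, pause_thresh=0.01):
    """Compute a few illustrative acoustic cues: per-frame RMS energy,
    zero-crossing rate, and total pause time (low-energy frames).
    The threshold and frame sizes are illustrative assumptions."""
    frames = np.array([signal[i:i + frame_len]
                       for i in range(0, len(signal) - frame_len + 1, hop)])
    energy = np.sqrt(np.mean(frames ** 2, axis=1))   # RMS energy per frame
    # Fraction of adjacent samples whose sign changes, per frame
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    pause_seconds = (energy < pause_thresh).sum() * hop / sr
    return {
        "mean_energy": float(energy.mean()),
        "mean_zcr": float(zcr.mean()),
        "pause_seconds": float(pause_seconds),
    }

# Synthetic example: 1 s of a 200 Hz tone followed by 0.5 s of silence
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.concatenate([0.5 * np.sin(2 * np.pi * 200 * t), np.zeros(sr // 2)])
feats = acoustic_features(signal, sr)
```

In a real system, features like these would be computed over each user utterance and fed to a classifier trained to distinguish emotional states such as anger, boredom and doubt.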
The research has been published in the Journal on Advances in Signal Processing.