Special Session at Interspeech 2019

Thanks to the great work of Anna Esposito, I have the pleasure of joining the organising committee of the “Special Session on Dynamics of Emotional Speech Exchanges in Multimodal Communication”, to be held at Interspeech 2019:

https://www.interspeech2019.org/program/special_sessions_and_challenges/

The topics covered in the special session can be described as follows: “Research devoted to understanding the relationship between verbal and nonverbal communication modes, and investigating the perceptual and cognitive processes involved in the coding/decoding of emotional states is particularly relevant in the fields of Human-Human and Human-Computer Interaction.”

The special session has been made possible by the H2020-funded project “Empathic” (http://www.empathic-project.eu/).

Appearance in “Forbes”

The business magazine Forbes features an article about the 16 Centres for Doctoral Training announced by UKRI on February 21st:

https://www.forbes.com/sites/samshead/2019/02/20/uk-government-to-fund-ai-university-courses-with-115m/#4fdc239c430d

The article explains that the UK government aims to keep pace with the USA and China in the AI race: “AI is poised to become the most significant technology for a generation but there are only so many people that know how to develop the technology, which could have a huge impact on industries such as healthcare, energy, and automotive.”


New Centre for Doctoral Training

I have been awarded one of the 16 UKRI Centres for Doctoral Training in Artificial Intelligence:

https://www.ukri.org/news/200m-to-create-a-new-generation-of-artificial-intelligence-leaders/

It will be a major opportunity for me to collaborate with 30 world-leading colleagues and 15 major industrial partners on the training of 50 PhD students. Together, we will investigate the nature of social intelligence in humans and machines. The project takes place at the University of Glasgow and involves the School of Computing Science, the School of Psychology and the Institute of Neuroscience and Psychology.


Interview for Voices in AI

I have been interviewed for Voices in AI, a series of conversations between Byron Reese and experts in Artificial Intelligence:

https://voicesinai.com/episode/episode-78-a-conversation-with-alessandro-vinciarelli/

The interview focused on the interplay between human psychology and machine intelligence and, in particular, on how machines can learn to “read the mind” of their users. After outlining the main applications (and the many emerging companies active in the area), the attention shifted to the significant ethical issues underlying the development of these technologies. The main point we made is that the danger does not come from technologies, but from people. Therefore, it is through societal choices and political regulation that socially intelligent Artificial Intelligence will be of benefit to people. Many thanks to Neurodata Lab for creating the opportunity for this interview.


New Article on Speech Perception

My article “Machine-Based Decoding of Human Voices and Speech” has been published in “The Oxford Handbook of Voice Perception”, edited by S. Frühholz and P. Belin. The chapter provides a general introduction to the main approaches aimed at speech recognition and at the inference of speech-based social perceptions. After showing that our very physiology is shaped around the perception of human voices, the chapter argues that speech is probably the most commonly studied and analysed signal in the technological literature. Furthermore, the chapter introduces the main approaches adopted to automatically transcribe speech signals (a task called speech recognition) and to infer from them different types of traits and psychological phenomena (personality, emotions, etc.).

Article Accepted at CHI 2019

The article “Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children” (G. Roffo, D. B. Vo, A. Sorrentino, M. Rooksby, M. Tayarani, H. Minnis, S. Brewster and A. Vinciarelli) has been accepted for presentation at the next ACM CHI Conference on Human Factors in Computing Systems (CHI 2019). The abstract of the article is as follows:

This article presents an interactive system aimed at administering, without the supervision of professional personnel, the Manchester Child Attachment Story Task (a psychiatric test for the assessment of attachment in children). The main goal of the system is to collect, through an interaction process, enough information to allow a human assessor to manually identify the attachment of children. The experiments show that the system successfully performs such a task in 87.5% of the cases (105 of the 120 children involved in the study). In addition, the experiments show that an automatic approach based on deep neural networks can map the information that the system collects, the same that is provided to the human assessors, into the attachment condition of the children. The outcome of the system matches the judgment of the human assessors in 82.8% of the cases (87 of the 105 children for which the system has successfully administered the test). To the best of our knowledge, this is the first time an automated tool has been successful in measuring attachment. This work has significant implications for psychiatry as it allows professionals to assess many more children and direct healthcare resources more accurately and efficiently to improve mental health.
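As a purely hypothetical illustration of the second result, the mapping can be pictured as a small neural classifier that takes features derived from the recorded interaction and outputs an attachment label. The sketch below uses scikit-learn and synthetic data; all dimensions, features and the architecture are my own illustrative assumptions, not those of the article.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical setup: 120 children, 20 interaction-derived features each,
# a binary attachment label (e.g., secure vs. insecure) per child.
rng = np.random.default_rng(0)
n, d = 120, 20
y = rng.integers(0, 2, n)                      # synthetic attachment labels
X = y[:, None] + rng.normal(0, 2.0, (n, d))    # features weakly tied to label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("agreement with held-out labels:", clf.score(X_te, y_te))

In the actual study, the score of interest is the agreement between the network's output and the judgment of human assessors rather than accuracy on synthetic labels.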

Multimodality Course at University of Fribourg

I had the chance to give a course on multimodality at the University of Fribourg (Switzerland) in the framework of the Certificate of Advanced Studies in Interaction Science and Technology:

http://human-ist.unifr.ch/cas/courses/social-signal-and-multimodal-processing

It was an intensive day during which I taught for six hours to a highly interactive class that posed a large number of interesting questions. After introducing the concept of multimodality in disciplines like psychology and biology, I showed how Artificial Intelligence deals with the phenomenon. In particular, I showed how early and late fusion (the two basic methodologies for the development of multimodal approaches) can be thought of as modifications of Bayes Decision Theory, as illustrated in the sketch below. To complete the day, I showed how such methodologies have been applied to two interesting problems, namely the analysis of attachment in children and the inference of personality traits from simple vocalisations like “ehm” or “uhm”.
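To make the distinction concrete, here is a minimal Python sketch using synthetic two-modality data and scikit-learn (both my own assumptions, not the course material): early fusion concatenates the feature vectors of the modalities before a single classifier estimates the class posterior, while late fusion trains one classifier per modality and combines the resulting posteriors, e.g., with a product rule that follows from Bayes Decision Theory under an assumption of conditional independence between modalities.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                            # binary class labels
X_audio = y[:, None] + rng.normal(0, 1.0, (n, 5))    # audio-like features
X_video = y[:, None] + rng.normal(0, 1.5, (n, 3))    # video-like features

# Early fusion: concatenate the modalities and model
# p(c | x_audio, x_video) with a single classifier.
early = LogisticRegression().fit(np.hstack([X_audio, X_video]), y)
early_pred = early.predict(np.hstack([X_audio, X_video]))

# Late fusion: one classifier per modality, posteriors combined afterwards.
# Assuming conditional independence, the Bayes decision reduces to a
# product of per-modality posteriors (a weighted sum is a common variant).
clf_a = LogisticRegression().fit(X_audio, y)
clf_v = LogisticRegression().fit(X_video, y)
posterior = clf_a.predict_proba(X_audio) * clf_v.predict_proba(X_video)
late_pred = posterior.argmax(axis=1)

The practical trade-off is that early fusion can exploit correlations between modalities but needs jointly observed data, whereas late fusion keeps the modalities separate and so copes more easily with a missing or unreliable modality.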