Abstract

AI technologies for affective computing are emerging for a wide range of digital health applications, most notably in diagnosis, monitoring, and behavioural skills training. However, current AI algorithms cannot accurately recognize emotions in these applications because expressions are subtle and vary across individuals and capture conditions. This talk describes cost-effective deep learning (DL) models for expression recognition (ER) based on facial, vocal, textual, and physiological modalities. Using data captured from Q&A videos, these models accurately recognize subtle and subject-specific expressions linked to an individual's affective state, such as ambivalence, pain, depression, stress, and fatigue. They are developed for multimodal and spatiotemporal fusion, multimodal learning using privileged information that is available during training but not at test time, and weakly supervised learning from data with limited or ambiguous annotations. This talk also describes methods for domain adaptation from unlabeled videos captured at test time, allowing DL models to be rapidly personalized to new individuals and capture conditions. A rough illustration of the multimodal fusion idea follows below.
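To give a concrete flavour of feature-level multimodal fusion, the sketch below shows attention-weighted fusion of per-modality embeddings in PyTorch. This is not the speaker's actual architecture; the class name, embedding dimensions, and the choice of a single attention head are all hypothetical, and the per-modality backbones (face, voice, and text encoders) are abstracted away as precomputed embeddings.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Hypothetical attention-weighted fusion of face, voice, and text embeddings."""

    def __init__(self, dims=(512, 256, 768), hidden=256, num_classes=5):
        super().__init__()
        # One projection head per modality maps embeddings to a shared space.
        self.heads = nn.ModuleList(nn.Linear(d, hidden) for d in dims)
        # A learned scalar score per modality controls its contribution.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, feats):
        # feats: list of (batch, dim) tensors, one per modality.
        z = torch.stack([torch.tanh(h(f)) for h, f in zip(self.heads, feats)], dim=1)
        w = torch.softmax(self.attn(z), dim=1)  # (batch, n_modalities, 1)
        fused = (w * z).sum(dim=1)              # attention-weighted sum over modalities
        return self.classifier(fused)

# Toy usage with random embeddings standing in for encoder outputs.
model = MultimodalFusion()
face, voice, text = torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 768)
logits = model([face, voice, text])
print(logits.shape)  # torch.Size([4, 5])
```

The attention weights make the fusion robust when one modality is uninformative for a given sample (e.g., an occluded face), which is one motivation for fusing modalities at the feature level rather than simply concatenating them.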

Biography

Eric Granger received a Ph.D. degree in electrical engineering from École Polytechnique de Montréal in 2001. He was a Defence Scientist with DRDC Ottawa from 1999 to 2001 and worked in research and development at Mitel Networks from 2001 to 2004. He joined the Department of Systems Engineering at École de technologie supérieure (ETS) Montréal, Canada, in 2004, where he is currently a Full Professor and the Director of LIVIA, a research laboratory focused on computer vision and artificial intelligence. He is the FRQS Co-Chair in AI and Health, and the ETS Industrial Research Co-Chair on embedded neural networks for intelligent connected buildings (Distech Controls Inc.). His research interests include pattern recognition, machine learning, information fusion, and computer vision, with applications in biometrics, face recognition, medical image analysis, and video surveillance.