Emily Mower Provost
Professor, Computer Science and Engineering
(734) 647-1802
3629 Beyster Bldg., 2260 Hayward St., Ann Arbor, MI 48109-2122


Emily Mower Provost is a Professor and Senior Associate Chair in Computer Science and Engineering and Professor of Psychiatry (by courtesy) at the University of Michigan. She received her Ph.D. in Electrical Engineering from the University of Southern California (USC), Los Angeles, CA in 2010. She has been awarded a Toyota Faculty Scholar Award (2020), a National Science Foundation CAREER Award (2017), the Oscar Stern Award for Depression Research (2015), and a National Science Foundation Graduate Research Fellowship (2004-2007). She is a co-author of multiple award-winning papers in the field of automatic emotion recognition. Her research interests are in human-centered speech and video processing, multimodal interface design, and speech-based assistive technology. The goals of her research are motivated by the complexities of the perception and expression of human behavior.

Research Overview

[Logo: Computational Human Artificial Intelligence (CHAI) Lab]

Our goal is to advance speech-centered machine learning for human behavior modeling. We focus on three main areas: 1) emotion recognition, 2) mental health modeling, and 3) assistive technology.

Emotion Recognition: Emotion has intrigued researchers for generations. This fascination has permeated the engineering community, motivating the development of affective computational models for classification. However, human emotion remains notoriously difficult to interpret, both because of the mismatch between the emotional cue generation (the speaker) and cue perception (the observer) processes and because of the presence of complex emotions: emotions that contain shades of multiple affective classes. Proper representations of emotion would ameliorate this problem by introducing multidimensional characterizations of the data that permit the quantification and description of the varied affective components of each utterance. The mathematical representation of emotion, however, remains underexplored. Research in emotion expression and perception provides a complex, human-centered platform for integrating machine learning techniques and multimodal signal processing toward the design of interpretable data representations.
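One simple way to make such a multidimensional characterization concrete is a soft-label distribution over emotion classes, computed from the ratings of several annotators for a single utterance. The sketch below is purely illustrative (the emotion set, function name, and labels are hypothetical, not the lab's actual method); an utterance with shades of multiple affective classes yields a non-degenerate distribution rather than a single hard label:

```python
from collections import Counter

# Hypothetical categorical emotion classes (illustrative only).
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def soft_label(annotations):
    """Map a list of categorical annotations for one utterance to a
    probability distribution over EMOTIONS — a multidimensional
    representation that quantifies each affective component."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return [counts[e] / total for e in EMOTIONS]

# Three of five annotators heard anger, two heard sadness:
print(soft_label(["angry", "angry", "angry", "sad", "sad"]))
# [0.6, 0.0, 0.0, 0.4]
```

A classifier trained against such distributions (e.g., with a cross-entropy loss over the soft targets) can express ambiguity that a single hard label discards.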

Mental Health Modeling: Our speech, both its language and its acoustics, provides critical insight into our well-being. In this line of work, we ask how we can design speech-centered approaches to determine the level of symptom severity for individuals with bipolar disorder and risk factors for individuals at risk of suicide. Our work has focused on investigating whether, how, and when passive speech-based technologies can be used to measure changes in mental health symptom severity from natural, real-world, unconstrained speech data.
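Acoustic analyses of this kind typically start from frame-level descriptors of the waveform. The sketch below shows two classic, simple descriptors (log frame energy and zero-crossing rate) as a minimal illustration of what "acoustics" means here; the function, frame sizes, and feature choice are assumptions for illustration, not the lab's pipeline, and real systems use far richer feature sets:

```python
import math

def frame_features(samples, frame_len=400, hop=160):
    """Split a waveform (list of samples) into overlapping frames and
    compute two simple per-frame acoustic descriptors:
    log energy and zero-crossing rate (ZCR)."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        # Log energy: sum of squared samples (small offset avoids log(0)).
        energy = sum(x * x for x in frame)
        # ZCR: fraction of adjacent sample pairs with a sign change.
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
        ) / (frame_len - 1)
        feats.append((math.log(energy + 1e-10), zcr))
    return feats

# Example: 1 second of a 200 Hz sine at a 16 kHz sampling rate.
sig = [math.sin(2 * math.pi * 200 * t / 16000) for t in range(16000)]
feats = frame_features(sig)
print(len(feats))  # number of 25 ms frames at a 10 ms hop
```

Tracking how such per-frame statistics drift across recordings over days or weeks is one way passive, longitudinal speech monitoring can be framed.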

Assistive Technology: An individual’s speech patterns provide insight into their physical health. Changes in speech reflect language impairments, muscular changes, and cognitive impairment. In this line of work, we ask how new speech-centered algorithms can be designed to detect changes in health.

Recorded Talks

Michigan AI Symposium, Jan. 2019 — A quick talk on our speech-based mission

MIND Summer School, Aug. 2018 — Talk on speech-based assistive technology

Frederick Jelinek Memorial Summer Workshop Plenary Talk, Aug. 2017 — Talk on speech-based assistive technology

Women in Data Science Conference hosted by MIDAS, Feb. 2017 — Talk on speech-based affective computing

MIDAS (Michigan Institute for Data Science) Seminar, Jan. 2017 — Talk on speech-based assistive technology for individuals with bipolar disorder

Data Mining Workshop, 2013 — Talk on engineering approaches to understanding emotion perception

Keynote at the 7th Annual Prechter Lecture — Talk on human behavior understanding