Emily Mower Provost
Professor, Computer Science and Engineering
(734) 647-1802 · 3629 Beyster Bldg. · 2260 Hayward St., Ann Arbor, MI 48109-2122

Publications

Journal Publications

  1. Amrit Romana, Minxue Niu, Matthew Perez, Emily Mower Provost. “FluencyBank Timestamped: An Updated Dataset for Disfluency Detection and Automatic Intended Speech Recognition.” Journal of Speech, Language, and Hearing Research. To appear, 2024. [coming soon]
  2. Emily Mower Provost, Sarah H Sperry, James Tavernor, Steve Anderau, Anastasia Yocum, Melvin G McInnis. “Emotion Recognition in the Real World: Passively Collecting and Estimating Emotions from Natural Speech Data of Individuals with Bipolar Disorder.” IEEE Transactions on Affective Computing. To appear, 2024. [link]
  3. Gemma T. Wallace, Leslie A. Brick, Emily Mower Provost, Jessica R. Peters, Ivan W. Miller, Heather T. Schatten. “Daily Levels and Dynamic Metrics of Affective–Cognitive Constructs Associate With Suicidal Thoughts and Behaviours in Adults After Psychiatric Hospitalization,” Clinical Psychology and Psychotherapy. April 25, 2024. [link]
  4. Anastasia K Yocum, Steve Anderau, Holli Bertram, Helen J Burgess, Amy L Cochran, Patricia J Deldin, Simon J Evans, Peisong Han, Paul M Jenkins, Ravleen Kaur, Scott A Langenecker, David F Marshall, Emily Mower Provost, K Sue O’Shea, Kelly A Ryan, Sarah H Sperry, Shawna N Smith, Ivy F Tso, Kritika M Versha, Brittany M Wright, Sebastian Zöllner, Melvin G McInnis. “Cohort Profile Update: The Heinz C. Prechter Longitudinal Study of Bipolar Disorder,” International Journal of Epidemiology. August 4, 2023. [pdf]
  5. Chi-Chun Lee, Theodora Chaspari, Emily Mower Provost, Shrikanth S. Narayanan, “An engineering view on emotions and speech: From analysis and predictive models to responsible human-centered applications,” IEEE Transactions on Affective Computing, vol: To appear, 2023. [Early Access pdf]
  6. Zakaria Aldeneh and Emily Mower Provost. “You’re Not You When You’re Angry: Robust Emotion Features Emerge by Recognizing Speakers,” IEEE Transactions on Affective Computing, vol: To appear, 2021. [Early Access pdf]
  7. Brian Stasak, Julien Epps, Heather T. Schatten, Ivan W. Miller, Emily Mower Provost, and Michael F. Armey. “Read Speech Voice Quality and Disfluency in Individuals with Recent Suicidal Ideation or Suicide Attempt,” Speech Communication, vol: To appear, 2021. [pdf]
  8. John Gideon, Melvin McInnis, Emily Mower Provost. “Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG),” IEEE Transactions on Affective Computing, vol:12, issue:4, Oct.-Dec., 2019. [Note: selected as one of five papers for the Best of IEEE Transactions on Affective Computing 2021 Paper Collection] [pdf]
  9. Soheil Khorram, Melvin McInnis, Emily Mower Provost. “Jointly Aligning and Predicting Continuous Emotion Annotations,” IEEE Transactions on Affective Computing, Vol: To appear, 2019. [pdf]
  10. Duc Le, Keli Licata, and Emily Mower Provost. “Automatic Quantitative Analysis of Spontaneous Aphasic Speech,” Speech Communication, vol: To appear, 2018. [pdf]
  11. Yelin Kim and Emily Mower Provost. “ISLA: Temporal Segmentation and Labeling for Audio-Visual Emotion Recognition,” IEEE Transactions on Affective Computing, vol: To appear, 2017. [Early Access pdf]
  12. Biqiao (Didi) Zhang, Emily Mower Provost, and Georg Essl. “Cross-corpus Acoustic Emotion Recognition with Multi-task Learning: Seeking Common Ground while Preserving Differences,” IEEE Transactions on Affective Computing, vol: To appear, 2017. [pdf]
  13. Duc Le, Keli Licata, Carol Persad, and Emily Mower Provost. “Automatic Assessment of Speech Intelligibility for Individuals with Aphasia,” IEEE Transactions on Audio, Speech, and Language Processing, vol: 24, no: 11, Nov. 2016. [pdf]
  14. Carlos Busso, Srinivas Parthasarathy, Alec Burmania, Mohammed AbdelWahab, Najmeh Sadoughi, and Emily Mower Provost, “MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception,” IEEE Transactions on Affective Computing, 8:1(67-80), 2016. [pdf]
  15. Yelin Kim and Emily Mower Provost. “Emotion Recognition During Speech Using Dynamics of Multiple Regions of the Face.” ACM Transactions on Multimedia Computing, Communications and Applications (ACM TOMM), Special Issue on ACM Multimedia Best Papers, 12:1(article 25), 2015. [pdf]
  16. Emily Mower Provost, Yuan (June) Shangguan, Carlos Busso, “UMEME: University of Michigan Emotional McGurk Effect Dataset,” IEEE Transactions on Affective Computing, 10:1(395-409), 2015. [pdf]
  17. Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan, “Emotion Recognition Using a Hierarchical Binary Decision Tree Approach,” Speech Communication, 53:9-10(1162-1171), 2011. [pdf]
  18. Emily Mower, Maja J. Mataric and Shrikanth S. Narayanan, “A Framework for Automatic Human Emotion Classification Using Emotional Profiles,” IEEE Transactions on Audio, Speech and Language Processing, 19:5(1057-1070). May 2011. [pdf]
  19. Emily Mower, Maja Matarić, Shrikanth Narayanan. “Human Perception of Audio-Visual Synthetic Character Emotion Expression in the Presence of Ambiguous and Conflicting Information.” IEEE Transactions on Multimedia. 11:5(843-855). August 2009. [pdf]
  20. Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel Kim, Jeannette Chang, Sungbok Lee, and Shrikanth Narayanan. “IEMOCAP: Interactive emotional dyadic motion capture database.” Journal of Language Resources and Evaluation, 42:4(335-359). November 2008. [SpringerLink]
  21. Michael Grimm, Kristian Kroschel, Emily Mower, and Shrikanth Narayanan. “Primitives based estimation and evaluation of emotions in speech.” Speech Communication, 49:10-11(787-800). November 2007. [pdf]

Book Chapters

  1. Zhang, Biqiao, and Emily Mower Provost. “Automatic recognition of self-reported and perceived emotions.” Multimodal Behavior Analysis in the Wild. Academic Press, 2019. 443-470. [pdf]

Conference Publications

  1. Matthew Perez, Aneesha Sampath, Minxue Niu, Emily Mower Provost. “Beyond Binary: Multiclass Paraphasia Detection with Generative Pretrained Transformers and End-to-End Models.” Interspeech. Kos, Greece. September 2024. [coming soon]
  2. Minxue Niu, Mimansa Jaiswal, Emily Mower Provost. “From Text to Emotion: Unveiling the Emotion Annotation Capabilities of LLMs.” Interspeech. Kos, Greece. September 2024. [coming soon]
  3. James Tavernor, Yara El-Tawil, Emily Mower Provost. “The Whole Is Bigger Than the Sum of Its Parts: Modeling Individual Annotators to Capture Emotional Variability.” Interspeech. Kos, Greece. September 2024. [coming soon]
  4. James Tavernor, Matthew Perez, Emily Mower Provost. “Episodic Memory For Domain-Adaptable, Robust Speech Emotion Recognition.” Interspeech. Dublin, Ireland. August 2023. [pdf]
  5. Minxue (Sandy) Niu, Amrit Romana, Mimansa Jaiswal, Melvin McInnis, Emily Mower Provost. “Capturing Mismatch between Textual and Acoustic Emotion Expressions for Mood Identification in Bipolar Disorder.” Interspeech. Dublin, Ireland. August 2023. [pdf]
  6. Amrit Romana, John Bandon, Matthew Perez, Stephanie Gutierrez, Richard Richter, Angela Roberts, Emily Mower Provost. “Enabling Off-the-Shelf Disfluency Detection and Categorization for Pathological Speech.” Interspeech. Incheon, Korea. September 2022. [pdf]
  7. Matthew Perez, Mimansa Jaiswal, Minxue Niu, Cristina Gorrostieta, Matthew Roddy, Kye Taylor, Reza Lotfian, John Kane, Emily Mower Provost. “Mind the Gap: On the Value of Silence Representations to Lexical-Based Speech Emotion Recognition.” Interspeech. Incheon, Korea. September 2022. [pdf]
  8. Alex Wilf and Emily Mower Provost. “Towards Noise Robust Speech Emotion Recognition Using Dynamic Layer Customization.” Affective Computing and Intelligent Interaction (ACII). Tokyo, Japan. September 2021. [pdf]
  9. Amrit Romana, John Bandon, Matthew Perez, Stephanie Gutierrez, Richard Richter, Angela Roberts and Emily Mower Provost. “Automatically Detecting Errors and Disfluencies in Read Speech to Predict Cognitive Impairment in People with Parkinson’s Disease.” Interspeech. Brno, Czech Republic. August 2021. [pdf]
  10. Matthew Perez, Amrit Romana, Angela Roberts, Noelle Carlozzi, Jennifer Ann Miner, Praveen Dayalu and Emily Mower Provost. “Articulatory Coordination for Speech Motor Tracking in Huntington Disease.” Interspeech. Brno, Czech Republic. August 2021. [pdf]
  11. Zakaria Aldeneh, Matthew Perez, Emily Mower Provost. “Learning Paralinguistic Features from Audiobooks through Style Voice Conversions.” Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Mexico City, Mexico. June 2021. [pdf]
  12. Amrit Romana, John Bandon, Noelle Carlozzi, Angela Roberts, Emily Mower Provost. “Classification of Manifest Huntington Disease using Vowel Distortion Measures.” Interspeech. Shanghai, China. October 2020. [Extended/updated pdf]
  13. Matthew Perez, Zakaria Aldeneh, Emily Mower Provost. “Aphasic Speech Recognition using a Mixture of Speech Intelligibility Experts.” Interspeech. Shanghai, China. October 2020. [pdf]
  14. Mimansa Jaiswal, Christian-Paul Bara, Yuanhang Luo, Mihai Burzo, Rada Mihalcea, and Emily Mower Provost, “MuSE: a Multimodal Dataset of Stressed Emotion.” Language Resources and Evaluation Conference (LREC). Marseille, France. May 2020. [pdf]
  15. Mimansa Jaiswal and Emily Mower Provost, “Privacy Enhanced Multimodal Neural Representations for Emotion Recognition,” AAAI. New York, New York. February 2020. [pdf]
  16. Mimansa Jaiswal, Zakaria Aldeneh, Emily Mower Provost, “Using Adversarial Training to Investigate the Effect of Confounders on Multimodal Emotion Classification.” International Conference on Multimodal Interaction (ICMI). Suzhou, Jiangsu, China. October 2019. [pdf]
  17. Katie Matton, Melvin G McInnis, Emily Mower Provost, “Into the Wild: Transitioning from Recognizing Mood in Clinical Interactions to Personal Conversations for Individuals with Bipolar Disorder.” Interspeech. Graz, Austria. September 2019. [pdf]
  18. Zakaria Aldeneh, Mimansa Jaiswal, Michael Picheny, Melvin McInnis, Emily Mower Provost. “Identifying Mood Episodes Using Dialogue Features from Clinical Interviews.” Interspeech. Graz, Austria. September 2019. [pdf]
  19. John Gideon, Heather T Schatten, Melvin G McInnis, Emily Mower Provost. “Emotion Recognition from Natural Phone Conversations in Individuals With and Without Recent Suicidal Ideation.” Interspeech. Graz, Austria. September 2019. [pdf]
  20. Mimansa Jaiswal, Zakaria Aldeneh, Cristian-Paul Bara, Yuanhang Luo, Mihai Burzo, Rada Mihalcea, Emily Mower Provost. “MuSE-ing on the impact of utterance ordering on crowdsourced emotion annotations.” International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Brighton, England. May 2019. [pdf]
  21. Biqiao Zhang, Soheil Khorram, and Emily Mower Provost. “Exploiting Acoustic and Lexical Properties of Phonemes to Recognize Valence from Speech.” International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Brighton, England. May 2019. [pdf]
  22. Soheil Khorram, Melvin McInnis, and Emily Mower Provost. “Trainable Time Warping: aligning time-series in the continuous-time domain.” International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Brighton, England. May 2019. [pdf]
  23. Biqiao Zhang, Yuqing Kong, Georg Essl, Emily Mower Provost. “f-Similarity Preservation Loss for Soft Labels: A Demonstration on Cross-Corpus Speech Emotion Recognition.” AAAI. Hawaii. January 2019. [pdf]
  24. Soheil Khorram, Mimansa Jaiswal, John Gideon, Melvin McInnis, Emily Mower Provost. “The PRIORI Emotion Dataset: Linking Mood to Emotion Detected In-the-Wild.” Interspeech. Hyderabad, India. September 2018. [pdf]
  25. Matthew Perez, Wenyu Jin, Duc Le, Noelle Carlozzi, Praveen Dayalu, Angela Roberts, Emily Mower Provost. “Classification of Huntington’s Disease Using Acoustic and Lexical Features.” Interspeech. Hyderabad, India. September 2018. [pdf]
  26. Zakaria Aldeneh, Dimitrios Dimitriadis, and Emily Mower Provost. “Improving End-of-Turn Detection in Spoken Dialogues by Detecting Speaker Intentions as a Secondary Task.” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Calgary, Canada, April 2018. [pdf]
  27. Biqiao Zhang, Georg Essl, and Emily Mower Provost. “Predicting the Distribution of Emotion Perception: Capturing Inter-Rater Variability.” International Conference on Multimodal Interaction (ICMI). Glasgow, Scotland, November 2017. [Note: full paper, oral presentation] [pdf]
  28. Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost. “Pooling Acoustic and Lexical Features for the Prediction of Valence.” International Conference on Multimodal Interaction (ICMI). Glasgow, Scotland, November 2017. [Note: short paper, oral presentation] [pdf]
  29. Duc Le, Zakaria Aldeneh, and Emily Mower Provost. “Discretized Continuous Speech Emotion Recognition with Multi-Task Deep Recurrent Neural Network.” Interspeech. Stockholm, Sweden, August 2017. [pdf]
  30. Duc Le, Keli Licata, and Emily Mower Provost. “Automatic Paraphasia Detection from Aphasic Speech: A Preliminary Study.” Interspeech. Stockholm, Sweden, August 2017. [pdf]
  31. John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, and Emily Mower Provost. “Progressive Neural Networks for Transfer Learning in Emotion Recognition.” Interspeech. Stockholm, Sweden, August 2017. [pdf]
  32. Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis, Melvin McInnis, and Emily Mower Provost. “Capturing Long-term Temporal Dependencies with Convolutional Networks for Continuous Emotion.” Interspeech. Stockholm, Sweden, August 2017. [pdf]
  33. Zakaria Aldeneh and Emily Mower Provost. “Using Regional Saliency for Speech Emotion Recognition.” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). New Orleans, Louisiana, USA, March 2017. [pdf]
  34. Biqiao Zhang, Georg Essl, and Emily Mower Provost. “Automatic Recognition of Self-Reported and Perceived Emotion: Does Joint Modeling Help?” International Conference on Multimodal Interaction (ICMI). Tokyo, Japan, November 2016. [Note: full paper, oral presentation, best paper honorable mention] [pdf]
  35. Yelin Kim and Emily Mower Provost. “Emotion Spotting: Discovering Regions of Evidence in Audio-Visual Emotion Expressions.” International Conference on Multimodal Interaction (ICMI). Tokyo, Japan, November 2016. [Note: full paper, poster presentation] [pdf]
  36. Soheil Khorram, John Gideon, Melvin McInnis, and Emily Mower Provost. “Recognition of Depression in Bipolar Disorder: Leveraging Cohort and Person-Specific Knowledge.” Interspeech. San Francisco, CA, September 2016. [pdf]
  37. Duc Le and Emily Mower Provost. “Improving Automatic Recognition of Aphasic Speech with AphasiaBank.” Interspeech. San Francisco, CA, September 2016. [pdf]
  38. Biqiao Zhang, Emily Mower Provost, Georg Essl. “Cross-Corpus Acoustic Emotion Recognition From Singing And Speaking: A Multi-Task Learning Approach.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Shanghai, China, March 2016. [pdf]
  39. John Gideon, Emily Mower Provost, Melvin McInnis. “Mood State Prediction From Speech Of Varying Acoustic Quality For Individuals With Bipolar Disorder.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Shanghai, China, March 2016. [pdf]
  40. Duc Le and Emily Mower Provost. “Data Selection for Acoustic Emotion Recognition: Analyzing and Comparing Utterance and Sub-Utterance Selection Strategies.” Affective Computing and Intelligent Interaction (ACII). Xi’an, China, September 2015. [Note: oral presentation] [pdf]
  41. Biqiao Zhang, Georg Essl, and Emily Mower Provost. “Recognizing Emotion from Singing and Speaking Using Shared Models.” Affective Computing and Intelligent Interaction (ACII). Xi’an, China, September 2015. [Note: oral presentation] [pdf]
  42. Yuan (June) Shangguan and Emily Mower Provost. “EmoShapelets: Capturing Local Dynamics of Audiovisual Affective Speech.” Affective Computing and Intelligent Interaction (ACII). Xi’an, China, September 2015. [Note: oral presentation] [pdf]
  43. Yelin Kim and Emily Mower Provost. “Leveraging Inter-rater Agreement for Audio-Visual Emotion Recognition.” Affective Computing and Intelligent Interaction (ACII). Xi’an, China, September 2015. [pdf]
  44. Yelin Kim, Jixu Chen, Ming-Ching Chang, Xin Wang, Emily Mower Provost, Siwei Lyu. “Modeling Transition Patterns Between Events for Temporal Human Action Segmentation and Classification.” IEEE International Conference on Automatic Face and Gesture Recognition (FG). Ljubljana, Slovenia, May, 2015. [Note: oral presentation] [pdf]
  45. Biqiao Zhang, Emily Mower Provost, Robert Swedberg, Georg Essl. “Predicting Emotion Perception Across Domains: A Study of Singing and Speaking.” AAAI. Austin, TX, USA, January 2015. [pdf]
  46. Yelin Kim and Emily Mower Provost. “Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition.” Proceedings of the ACM International Conference on Multimedia. Florida, USA, November, 2014. [Note: winner, best student paper!] [pdf]
  47. Duc Le and Emily Mower Provost. “Modeling Pronunciation, Rhythm, and Intonation for Automatic Assessment of Speech Quality in Aphasia Rehabilitation.” Interspeech. Singapore. September, 2014. [pdf]
  48. Duc Le, Keli Licata, Elizabeth Mercado, Carol Persad, Emily Mower Provost. “Automatic Analysis of Speech Quality for Aphasia Treatment.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Florence, Italy. May 2014. [pdf]
  49. Zahi N Karam, Emily Mower Provost, Satinder Singh, Jennifer Montgomery, Christopher Archer, Gloria Harrington, Melvin Mcinnis. “Ecologically Valid Long-term Mood Monitoring of Individuals with Bipolar Disorder Using Speech.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Florence, Italy. May 2014. [pdf]
  50. Duc Le and Emily Mower Provost. “Emotion Recognition From Spontaneous Speech Using Hidden Markov Models With Deep Belief Networks.” Automatic Speech Recognition and Understanding (ASRU). Olomouc, Czech Republic. December, 2013. [pdf]
  51. Theodora Chaspari, Emily Mower Provost, and Shrikanth S. Narayanan. “Analyzing the Structure of Parent-Moderated Narratives from Children with ASD Using an Entity-Based Approach.” Interspeech. Lyon, France. August, 2013. [pdf]
  52. Emily Mower Provost, Irene Zhu, and Shrikanth Narayanan. “Using Emotional Noise to Uncloud Audio-Visual Emotion Perceptual Evaluation.” International Conference on Multimedia and Expo (ICME). San Jose, CA. July, 2013. [pdf][Examples][Data Mining Workshop Talk]
  53. Yelin Kim and Emily Mower Provost. “Emotion Classification via Utterance-Level Dynamics: A Pattern-Based Approach to Characterizing Affective Expressions.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vancouver, British Columbia, Canada. May, 2013. [pdf]
  54. Yelin Kim, Honglak Lee, and Emily Mower Provost. “Deep Learning for Robust Feature Generation in Audio-Visual Emotion Recognition.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vancouver, British Columbia, Canada. May, 2013. [pdf]
  55. Emily Mower Provost. “Identifying Salient Sub-Utterance Emotion Dynamics Using Flexible Units and Estimates of Affective Flow.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Vancouver, British Columbia, Canada. May, 2013. [pdf]
  56. Emily Mower Provost and Shrikanth Narayanan, “Simplifying Emotion Classification Through Emotion Distillation”, Asia-Pacific Signal and Information Processing Association (APSIPA), Los Angeles, CA, 2012. [pdf]
  57. Theodora Chaspari, Emily Mower Provost, Athanasios Katsamanis and Shrikanth Narayanan, “An Acoustic Analysis of Shared Enjoyment in ECA Interactions of Children with Autism”, Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 2012. [pdf]
  58. Emily Mower, Chi-Chun Lee, James Gibson, Theodora Chaspari, Marian Williams, and Shrikanth Narayanan. “Analyzing the Nature of ECA Interactions in Children with Autism.” International Speech Communication Association (Interspeech). Florence, Italy, August, 2011. [pdf]
  59. Emily Mower, Matthew Black, Elisa Flores, Marian Williams, and Shrikanth S. Narayanan. “Rachel: Design of an Emotionally Targeted Interactive Agent for Children with Autism.” International Conference on Multimedia & Expo (ICME). Barcelona, Spain, July, 2011. [pdf]
  60. Emily Mower and Shrikanth S. Narayanan. “A Hierarchical Static-Dynamic Framework for Emotion Classification.” International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague, Czech Republic, May, 2011. [pdf]
  61. Emily Mower, Maja J Matarić, and Shrikanth S. Narayanan. “Robust Representations for Out-of-Domain Emotions Using Emotion Profiles.” IEEE Workshop on Spoken Language Technology (SLT). Berkeley, CA, December 2010. [pdf]
  62. Emily Mower, Kyu Jeong Han, Sungbok Lee and Shrikanth S. Narayanan. “A Cluster-Profile Representation of Emotion Using Agglomerative Hierarchical Clustering.” International Speech Communication Association (Interspeech). Makuhari, Japan, September 2010. [pdf]
  63. Dongrui Wu, Thomas Parsons, Emily Mower and Shrikanth S. Narayanan. “Speech Emotion Estimation in 3D Space.” IEEE International Conference on Multimedia & Expo (ICME). Singapore, 2010. [pdf]
  64. Emily Mower, Angeliki Metallinou, Chi-Chun Lee, Abe Kazemzadeh, Carlos Busso, Sungbok Lee, Shrikanth Narayanan. “Interpreting Ambiguous Emotional Expressions.” ACII Special Session: Recognition of Non-Prototypical Emotion from Speech: The Final Frontier? (Invited paper). Amsterdam, The Netherlands, September 2009. [pdf]
  65. Emily Mower, Maja J Matarić, Shrikanth Narayanan. “Evaluating Evaluators: A Case Study in Understanding the Benefits and Pitfalls of Multi-Evaluator Modeling.” International Speech Communication Association (Interspeech). Brighton, England, September 2009. [pdf]
  66. Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee and Shrikanth S. Narayanan. “Emotion recognition using a hierarchical binary decision tree approach.” International Speech Communication Association (Interspeech). Brighton, England, September 2009. [Emotion Challenge Winner] [pdf]
  67. Emily Mower, Maja J Matarić, Shrikanth Narayanan. “Selection of Emotionally Salient Audio-Visual Features for Modeling Human Evaluations of Synthetic Character Emotion Displays” IEEE International Symposium on Multimedia (ISM) 2008. Berkeley, California, December 2008. [pdf]
  68. Emily Mower, Sungbok Lee, Maja J Matarić, Shrikanth Narayanan. “Joint-processing of audio-visual signals in human perception of conflicting synthetic character emotions.” IEEE International Conference on Multimedia & Expo (ICME), Hannover, Germany, June 2008. [pdf]
  69. Emily Mower, Sungbok Lee, Maja J Matarić, Shrikanth Narayanan. “Human perception of synthetic character emotions in the presence of conflicting and congruent vocal and facial expressions.” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2008. Las Vegas, Nevada, March-April 2008. [pdf]
  70. Emily K. Mower, David J. Feil-Seifer, Maja J Matarić, and Shrikanth Narayanan. “Investigating Implicit Cues for User State Estimation in Human-Robot Interaction Using Physiological Measurements.” IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), Jeju Island, South Korea, Aug 2007. [pdf]
  71. Michael Grimm, Emily Mower, Kristian Kroschel, and Shrikanth Narayanan. “Combining categorical and primitives-based emotion recognition.” European Signal Processing Conference (EUSIPCO), Florence, Italy, September 2006. [pdf]
  72. Zhou, W., Wu, W., Palmer, N., Mower, E., Daniels, N., Cowen, L., Blumer, A. “Microarray Data Analysis of Survival Times of Patients with Lung Adenocarcinomas Using ADC and K-Medians Clustering.” Critical Assessment of Massive Data Analysis (CAMDA). Durham, North Carolina, November 14, 2003. [pdf]