Instructor: Emily Mower Provost
Degree Info: This course has been approved as a Major Design Experience (MDE) and as a Flexible Technical Elective (if not counted as an MDE)
Lecture Time: T/Th 1:30-3pm
Discussion Time: W 12:30-1:30
Location: 1008 FXB
Intelligent Interactive Systems
Today’s world is becoming increasingly automated. This includes not only explicit interactions with automated systems, but also implicit sensing that accompanies many popular technologies. Explicit interactions include speech-based question answering with Siri and Google Voice. But what can we learn implicitly? How can we take advantage of the wealth of pervasive and ubiquitous computing platforms? How can we leverage distributed sensor environments?
The answer is that these interaction scenarios provide insight into the user, the ultimate target of any human-facing application. We can ask a plethora of questions to build a complete model of our end user. A subset of these questions includes:
- What is a user telling us through their behavior, gestures, speech patterns, and facial expressions?
- How can we understand who a user is?
- How can we intuit what a user needs?
- How can we decide with whom a user should interact?
These are the questions that increasingly underlie Intelligent Interactive Systems (IIS). The focus of this class will be on providing methods that can be used to answer these questions and a semester-long project that ties these questions together through a new interactive technology.
The course will rely on projects as an instructional methodology. The projects begin with the implementation of a basic speech-based emotion recognition system, move to a user-state modeling task focused on autism detection, and end with a semester-long project that leverages the techniques discussed in course lectures and earlier projects.
The course evaluation will include homework, a midterm exam, and a final project.
This course covers the concepts and techniques that underlie successful interactive user environments, drawing on modalities including facial expressions, body gestures, phone-based sensing, environmental sensing, and speech. Topics include speech modeling, recognition, and interactive computing. Fluency in a standard object-oriented programming language is assumed.
Prior experience with speech or other data modeling is neither required nor assumed.
Prerequisites: EECS 281 or graduate standing