Multimodal approaches for detecting attention in Human-Computer Interaction
Build computer systems that can recognize human emotions, and help us learn, work and collaborate in more satisfying ways.
Inspired by the inextricable link between emotions and cognition, the field of Affective Computing (AC) aspires to narrow the communicative gap between the highly emotional human and the emotionally challenged computer by developing computational systems that recognize and respond to the affective states (e.g., moods, emotions) of the user. Affect-sensitive interfaces are being developed in a number of domains, including gaming, mental health, and learning technologies. The basic tenet behind most AC systems is that automatically recognizing and responding to a user's affective states during interactions with a computer can enhance the quality of the interaction, thereby making a computer interface more usable, enjoyable, and effective. For example, an affect-sensitive learning environment that detects and responds to students' frustration is expected to increase motivation and improve learning gains compared to a system that ignores student affect.

Affect detection is the first step towards building computer applications that help us understand more about the human emotions that arise during computer interactions. The multimodal approaches explored in this project would include features extracted from video of the user (e.g., facial expressions) and their context, the user's physiology and voice, posture, and more.

This project will address one of tomorrow's (or today's?) biggest challenges: how to stay focused and on task when so many devices (e.g., phones, email) demand our attention. Among the different affects (mental states), this project will focus on detecting when a user is paying attention. This would allow us to build applications that adapt, calling for the user's attention only when it is needed.
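To illustrate the general idea of combining several modalities into a single attention estimate, here is a minimal, hypothetical sketch of late fusion: each modality (face, voice, posture, etc.) is assumed to produce a probability that the user is attentive, and a weighted average combines them into one decision. The modality names, weights, and threshold are illustrative assumptions, not part of the project specification.

```python
# Hypothetical late-fusion sketch for multimodal attention detection.
# Assumes each modality has already been reduced to a probability in [0, 1]
# that the user is paying attention; weights reflect assumed reliability.

def fuse_attention_scores(scores, weights):
    """Combine per-modality attention probabilities by weighted average.

    scores:  dict mapping modality name -> probability of attention (0..1)
    weights: dict mapping modality name -> relative reliability weight
    """
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

def is_attentive(scores, weights, threshold=0.5):
    """True if the fused estimate says the user's attention is engaged."""
    return fuse_attention_scores(scores, weights) >= threshold

# Example: face and posture suggest attention; voice is ambiguous.
scores = {"face": 0.9, "voice": 0.5, "posture": 0.8}
weights = {"face": 2.0, "voice": 1.0, "posture": 1.0}
print(round(fuse_attention_scores(scores, weights), 3))  # 0.775
```

An application could then use such a decision to defer notifications until the user's attention is free, rather than interrupting a focused work session.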
The candidate should be proficient in software development, with a background in Computer Science or Engineering. Experience with Matlab and either signal processing or computer vision is preferred but not mandatory. Experience designing and running psychology or HCI experiments would be welcome. Students will receive a $7,000/year scholarship in addition to any other funding they may have.
Want to find out more?
The opportunity ID for this research opportunity is: 1368
Other opportunities with Associate Professor Rafael A. Calvo
- Social network visualization in collaborative work and learning
- Multimodal emotion recognition
- Positive computing applications - human-computer interaction to change the world
- Moderator Assistant and Cybermate: Engineering systems to support mental health
- Ubiquitous sensors to improve self-regulation skills in multiple environments
- Gesture detection for improvements of manual operations using video processing