Title

Your Face Says It All: Automated Analysis and Synthesis of Facial Actions

Abstract

The face is a powerful channel for non-verbal communication. Anatomically based facial action units (AUs), both individually and in combination, convey a wide range of facial expressions, emotions, and intentions. Traditional human-observer methods for measuring these actions are labor-intensive, qualitative, and impractical for real-time applications or large datasets, highlighting the need for automated, objective, reliable, and efficient approaches. Synthesizing realistic facial expressions, for example with diffusion models, is likewise essential for generating large, balanced datasets and for training personalized models; however, such generative models have limitations, particularly in training speed and the quality of the generated outputs. In this talk, I will present my work on deep learning-based methods for automated facial action detection and synthesis, with a focus on improving detection performance and generating more realistic images efficiently. I will also discuss applications of our models, including the detection and monitoring of obsessive-compulsive disorder and the analysis of parent-child interactions to study child development and anxiety-related behaviors.

Supervisor(s)

Itir Onal Ertugrul, PhD

Date and Location

3 January 2025, 13:30

Category

Seminar