Aim:
The aim of this study is to develop a robust facial expression recognition system capable of accurately detecting and categorizing facial expressions.
Abstract:
Facial expression recognition is a crucial task in the field of computer vision and human-computer interaction, with applications ranging from affective computing to human behavior analysis. In this study, we propose a method for facial expression recognition utilizing a pre-trained MobileNet model. The MobileNet architecture offers advantages such as computational efficiency and flexibility, making it well-suited for real-time applications on resource-constrained devices. Our approach involves fine-tuning the MobileNet model on a labeled dataset of facial images annotated with corresponding expressions. We preprocess the images to meet the input requirements of the MobileNet model and augment the dataset to improve model generalization. Through a series of experiments, we evaluate the performance of the trained model using metrics such as accuracy, precision, recall, and F1-score. Our results demonstrate the effectiveness of the proposed approach in accurately recognizing expressions from facial images. The trained model shows promising performance, suggesting its potential for practical applications in expression-aware systems, human-computer interaction interfaces, and affective computing platforms.
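The evaluation metrics named above (accuracy, precision, recall, F1-score) can be computed with scikit-learn. The label arrays below are purely illustrative, not results from this study; macro averaging is assumed since expression recognition is a multi-class task.

```python
# Illustrative multi-class metric computation; the labels here are
# hypothetical, not actual outputs of the proposed model.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 2, 1, 0, 2]   # ground-truth expression class labels
y_pred = [0, 1, 2, 1, 1, 0, 2]   # predicted expression class labels

acc = accuracy_score(y_true, y_pred)
# Macro averaging treats each expression class equally, which is a common
# choice when the classes are imbalanced.
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
```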
Existing Method:
The existing method is a facial expression recognition (FER) algorithm that combines a face graph with a graph convolutional network (GCN), linking important facial patches to graph nodes. Node features extracted from the face patches and attention maps are represented as embedding features through a two-layer GCN, and the final facial expressions are classified using a multilayer perceptron (MLP). However, this method still suffers from misrecognition under rapid changes in face pose, occlusions, or poor image quality.
Problem Definition:
Despite advancements in computer vision and deep learning, accurate facial expression recognition remains a challenging task due to the complexity and variability of human facial expressions. Existing methods often struggle to accurately detect and classify expressions from facial images, especially in real-world scenarios with diverse lighting conditions, facial poses, and demographic factors. Furthermore, there is a need for robust facial expression recognition systems that can be deployed across domains such as human-computer interaction, virtual reality, healthcare, and marketing to enhance user experiences, improve emotional understanding, and personalize interactions. Thus, the problem statement revolves around developing a reliable system capable of accurately identifying and categorizing expressions from facial images in diverse contexts, addressing challenges related to variability, robustness, and real-world applicability.
Proposed Method:
The proposed method converts each facial image into a numerical array suitable as input to the MobileNet model; libraries such as OpenCV or PIL can be used to load and preprocess the images. Using transfer learning, a pre-trained MobileNet model is obtained from a deep learning library such as TensorFlow or Keras, and its top (fully connected) layers are replaced with new layers suited to the expression recognition task. The model is then compiled with an appropriate loss function and optimizer and fine-tuned on the labeled expression dataset. The proposed method achieves better accuracy than the existing approach.
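The pipeline above can be sketched in Keras as follows. This is a minimal illustration, not the exact configuration used in the study: the number of expression classes (7), the dropout rate, and the optimizer choice are assumptions, and `weights` is set to `None` so the sketch runs without downloading; in practice it would be set to `"imagenet"` to load the pre-trained weights for transfer learning.

```python
# Sketch of the proposed transfer-learning pipeline. NUM_CLASSES, the
# dropout rate, and the optimizer are assumptions, not values from the study.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7           # assumed: 7 basic expression categories
IMG_SIZE = (224, 224)     # MobileNet's standard input resolution

def preprocess(image_array):
    """Resize an HxWx3 image array and scale it to MobileNet's [-1, 1] range."""
    img = tf.image.resize(image_array, IMG_SIZE)
    return tf.keras.applications.mobilenet.preprocess_input(img)

# Load MobileNet without its ImageNet classification head. Set
# weights="imagenet" to load pre-trained weights (downloads on first use).
base = tf.keras.applications.MobileNet(
    input_shape=(*IMG_SIZE, 3), include_top=False, weights=None)
base.trainable = False    # freeze the backbone; unfreeze later to fine-tune

# Replace the top layers with a new head for expression recognition.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Compile with a loss function and optimizer appropriate for
# integer-labeled multi-class classification.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Training then proceeds with `model.fit(...)` on the preprocessed, augmented expression dataset; freezing the backbone first and unfreezing it for a second, lower-learning-rate pass is a common fine-tuning schedule.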