Face Expression Recognition in Grayscale Images Using Image Segmentation and Deep Learning

Scientific Trends

Abstract

Face recognition enables a computer to identify individuals in a photograph or video. Facial expression recognition, in contrast, helps a computer analyze the emotional state of a single human being, leading to enhanced human-computer interaction. Several salient facial traits, such as the eyes and the shape of the lips, can be used to decipher an individual's emotions: when people grin, their lips curve upward and their eyebrows lower, and similarly characteristic patterns accompany other emotions such as anger, grief, and surprise. This study proposes an approach based on transfer learning and deep learning techniques for human facial expression recognition. The Extended Cohn-Kanade (CK+) dataset is used for the experiments. The proposed approach evaluates four deep learning models: VGG19, Conv2D, VGG16, and DenseNet201. Combined with CNN features, the VGG16 model outperforms all existing approaches as well as the other deep learning models used in this research, with an accuracy of 99.10%. The proposed approach efficiently identifies human emotions from a grayscale image in a very short time.
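Since the backbones named in the abstract (VGG16, VGG19, DenseNet201) are conventionally pretrained on 3-channel RGB images while CK+ frames are grayscale, a typical transfer-learning pipeline first replicates the single channel and rescales intensities. The sketch below illustrates that preprocessing step under stated assumptions (a dependency-free nearest-neighbour resize, a 48x48 target size, and the illustrative function name `prepare_grayscale_batch`); it is not the paper's exact pipeline.

```python
import numpy as np

def prepare_grayscale_batch(images, size=48):
    """Convert a batch of grayscale images (N, H, W) with values 0-255
    into (N, size, size, 3) float32 arrays in [0, 1], suitable as input
    to a 3-channel CNN backbone such as VGG16.

    Resizing uses nearest-neighbour index sampling so the sketch needs
    only numpy; a real pipeline would typically use PIL or OpenCV."""
    images = np.asarray(images, dtype=np.float32)
    n, h, w = images.shape
    rows = np.arange(size) * h // size          # source row indices
    cols = np.arange(size) * w // size          # source column indices
    resized = images[:, rows[:, None], cols[None, :]]  # (N, size, size)
    resized /= 255.0                            # rescale to [0, 1]
    return np.repeat(resized[..., None], 3, axis=-1)   # replicate channel

# Example: a batch of two 96x96 grayscale frames
batch = np.random.randint(0, 256, (2, 96, 96))
out = prepare_grayscale_batch(batch)           # shape (2, 48, 48, 3)
```

The replicated-channel tensor can then be fed to an ImageNet-pretrained backbone whose top classification layer is replaced by a small head over the expression classes.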
