Feed Forward Neural Network – Facial Expression Recognition Using 2D Image Texture

Authors: Wisam A. Qader¹ & Saman Mirza Abdullah² & Musa M. Ameen³ & Muhammed S. Anwar⁴
¹Information Technology Department, Faculty of Science, Tishk International University, Erbil, Iraq
²Software Engineering Department, Koya University, Erbil, Iraq
³Computer Engineering Department, Faculty of Engineering, Tishk International University, Erbil, Iraq
⁴Computer Engineering Department, Faculty of Engineering, Tishk International University, Erbil, Iraq

Abstract: Facial Expression Recognition (FER) is a very active field of study spanning computer vision, pattern recognition, human emotional analysis, and artificial intelligence. FER has received extensive attention because it can be employed in human-computer interaction (HCI), human emotional analysis, interactive video, and image indexing and retrieval. Recognizing human facial expressions is one of the most powerful yet difficult tasks in social communication, since facial expressions are, in general, a natural and direct way for human beings to convey emotions and intentions. In this study, the Gabor Wavelet Transform (GWT) is applied as a preprocessing stage to extract 2D texture features, and the well-known feed-forward propagation algorithm is employed to create and train a neural network for the classification of facial expressions.
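The following is a minimal sketch of the pipeline outlined above: a Gabor wavelet filter bank as the texture-extraction stage, followed by a feed-forward neural network classifier. The library choices (OpenCV, scikit-learn), the filter-bank parameters, the image size, and the placeholder data are illustrative assumptions and not the paper's exact configuration.

```python
# Sketch of a GWT feature-extraction stage followed by a feed-forward
# neural network classifier. Parameters and data are hypothetical.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def gabor_features(image, wavelengths=(4, 8, 16), orientations=8):
    """Filter a grayscale face image with a bank of Gabor kernels and
    return the concatenated mean/std of each response as a feature vector."""
    feats = []
    for lambd in wavelengths:                      # wavelength of the sinusoid
        for k in range(orientations):
            theta = k * np.pi / orientations       # filter orientation
            kernel = cv2.getGaborKernel(
                ksize=(31, 31), sigma=4.0, theta=theta,
                lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(image, cv2.CV_32F, kernel)
            feats.extend([response.mean(), response.std()])
    return np.array(feats)

# Hypothetical data: cropped grayscale face images with expression labels
# (e.g., 0=neutral, 1=happy, ...). Replace with a real FER dataset.
faces = np.random.randint(0, 256, size=(20, 64, 64)).astype(np.uint8)
labels = np.random.randint(0, 7, size=20)

X = np.stack([gabor_features(f) for f in faces])

# Feed-forward (multilayer perceptron) network trained on the GWT features.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, labels)
print("Training accuracy:", clf.score(X, labels))
```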

Keywords: Feature Extraction, Facial Expression Recognition (FER), Classification, Gabor Wavelet Transform (GWT)


Doi: 10.23918/eajse.v8i1p216

Published: August 15, 2022
