TY - GEN
A1 - Jäger, Matthias
A1 - Eberhardt, Jörg
A1 - Cunningham, Douglas W.
T1 - 3D reconstruction of partial foot scans using different state of the art neural network approaches
T2 - Footwear Science
N2 - As online shopping continues to grow in popularity, shoes are increasingly being purchased without being physically tried on. This has led to a significant surge in returns, with both financial and environmental consequences. To tackle this issue, several systems are available to measure foot dimensions accurately, either in-store or at home. With precise foot measurements, individuals can determine their ideal shoe size and avoid unnecessary returns. To make such a system as simple as possible for the user, a single image should suffice to measure the foot. To this end, point clouds of one side of the foot, generated from a depth image, are used. Since these point clouds capture only one side of the foot, the other side must be generated. For this purpose, several existing state-of-the-art networks were tested and compared to determine which architecture is best suited to this task. After implementing, retraining on our own dataset, and testing the different architectures, we conclude that the point/transformer-based network SnowflakeNet is the most efficient for our task.
KW - Foot measurement
KW - Shape completion
KW - Shape reconstruction
KW - GAN
KW - Neural network
KW - Machine learning
KW - CNN
Y1 - 2024
U6 - https://doi.org/10.1080/19424280.2024.2319796
VL - 16
IS - 2
SP - 105
EP - 114
PB - Taylor & Francis
ER -
TY - GEN
A1 - Saadi, Ibtissam
A1 - Hadid, Abdenour
A1 - Cunningham, Douglas W.
A1 - Taleb-Ahmed, Abdelmalik
A1 - El Hillali, Yassin
T1 - PE-CLIP : a parameter-efficient fine-tuning of vision language models for dynamic facial expression recognition
T2 - ACM transactions on multimedia computing, communications, and applications
N2 - The emergence of Vision-Language Models (VLMs) like CLIP (Contrastive Language-Image Pretraining) provides appealing solutions to various vision problems, including Dynamic Facial Expression Recognition (DFER). However, most of the proposed approaches face major challenges, particularly related to inefficient full fine-tuning of the encoders and the complexity of the models. Moreover, some of the proposed methods suffer from suboptimal performance due to (i) poor alignment between textual and visual representations and (ii) ineffective temporal modeling. To address these challenges, we propose PE-CLIP, a parameter-efficient fine-tuning (PEFT) framework that elegantly adapts CLIP for dynamic facial expression recognition, requiring a significantly reduced number of trainable parameters while maintaining high accuracy. At its core, to enhance efficiency and performance, PE-CLIP introduces two specialized adapters, namely a Temporal Dynamic Adapter (TDA) and a Shared Adapter (ShA). The Temporal Dynamic Adapter is a GRU-based module with a dynamic scaling mechanism that captures sequential dependencies while adaptively modulating the contribution of each temporal feature, emphasizing the most informative ones and mitigating irrelevant variations. The Shared Adapter is a lightweight adapter that refines representations within both the textual and visual encoders, ensuring consistent feature processing while maintaining parameter efficiency.
Additionally, we leverage Multi-modal Prompt Learning (MaPLe), which introduces learnable prompts to both the visual and the action-unit-based textual description inputs, further improving the semantic alignment between modalities and enabling the efficient adaptation of CLIP to dynamic tasks. We evaluate the proposed PE-CLIP on three benchmark datasets, namely DFEW, FERV39K, and AFEW, achieving competitive performance compared to state-of-the-art methods while requiring fewer trainable parameters. By striking an optimal balance between parameter efficiency and performance, PE-CLIP sets a new benchmark in resource-efficient DFER. The source code of the proposed PE-CLIP will be publicly available at https://github.com/Ibtissam-SAADI/PE-CLIP.
Y1 - 2026
U6 - https://doi.org/10.1145/3786789
SN - 1551-6857
PB - Association for Computing Machinery (ACM)
CY - New York, NY
ER -
TY - GEN
A1 - Saadi, Ibtissam
A1 - Cunningham, Douglas W.
A1 - Taleb-Ahmed, Abdelmalik
A1 - Hadid, Abdenour
A1 - El Hillali, Yassin
T1 - Driver's facial expression recognition using global context vision transformer
T2 - 2023 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI)
N2 - Driver's facial expression recognition plays a critical role in enhancing driver safety, comfort, and the overall driving experience by proactively mitigating potential road risks. While most existing works in this domain rely on CNN-based approaches, this paper proposes a novel method for driver facial expression recognition using a Global Context Vision Transformer (DFER-GCViT). With the inherent capabilities of transformer-based architectures and global context modeling, the proposed method handles challenges commonly encountered in real-world driving scenarios, including occlusions, head pose variations, and illumination conditions. Our method consists of three modules: preprocessing for face detection and data augmentation, facial feature extraction of local and global features, and expression classification using a modified GC-ViT classifier. To evaluate the performance of DFER-GCViT, extensive experiments are conducted on two benchmark datasets, namely the KMU-FED driver facial expression dataset and the FER2013 general facial expression dataset. The experimental results demonstrate the superiority of DFER-GCViT in accurately recognizing drivers' facial expressions, achieving an average accuracy of 98.27% on the KMU-FED dataset and 73.78% on the FER2013 dataset, outperforming several state-of-the-art methods on these two benchmark datasets.
Y1 - 2023
SN - 979-8-3503-0514-2
U6 - https://doi.org/10.1109/CVMI59935.2023.10464794
SP - 1
EP - 8
PB - IEEE
CY - Piscataway, NJ
ER -
TY - GEN
A1 - Saadi, Ibtissam
A1 - Cunningham, Douglas W.
A1 - Taleb-Ahmed, Abdelmalik
A1 - Hadid, Abdenour
A1 - El Hillali, Yassin
T1 - Driver’s facial expression recognition : a comprehensive survey
T2 - Expert systems with applications
N2 - Driving is an integral part of daily life for millions of people worldwide, and it has a profound impact on road safety and human health. The emotional state of the driver, including feelings of anger, happiness, or fear, can significantly affect their ability to make safe driving decisions.
Recognizing the facial expressions of drivers (DFER) has emerged as a promising technique for improving road safety, as it can provide valuable information about the driver's emotions. This information can be used by intelligent transportation systems (ITS), such as advanced driver assistance systems (ADAS), to take appropriate decisions, such as alerting the driver or intervening in the driving process, to prevent potential risks. This paper presents a comprehensive survey of recent studies, from 2018 to March 2023, that focus on recognizing the driver's facial expressions in the driving context. Specifically, we examine studies that address the recognition of the driver's emotions from facial expressions and explore the challenges that exist in this field, such as illumination conditions, occlusion, and head poses. Our survey includes an analysis of the different techniques and methods used to identify and categorize specific expressions or emotions of the driver. We begin by reviewing and comparing available datasets and summarizing state-of-the-art methods, including machine learning-based methods, deep learning-based methods, and hybrid methods. We also identify limitations and potential areas for improvement. Overall, our survey highlights the importance of recognizing driver facial expressions in improving road safety and provides valuable insights into recent developments and future research directions in this field.
KW - Driver’s facial expression recognition
KW - Driver emotion recognition
KW - Facial expression recognition datasets
KW - Advanced driver assistance systems (ADAS)
KW - Driving safety
Y1 - 2024
U6 - https://doi.org/10.1016/j.eswa.2023.122784
SN - 0957-4174
VL - 242
SP - 1
EP - 34
PB - Elsevier BV
CY - Amsterdam
ER -
TY - CHAP
A1 - Saadi, Ibtissam
A1 - Hadid, Abdenour
A1 - Cunningham, Douglas W.
A1 - Taleb-Ahmed, Abdelmalik
A1 - El Hillali, Yassin
T1 - Leveraging vision language models for facial expression recognition in driving environment
T2 - Sensor-Based Activity Recognition and Artificial Intelligence : 9th International Workshop, iWOAR 2024, Potsdam, Germany, September 26–27, 2024 : proceedings
N2 - We are witnessing an increasing interest in vision-language models (VLMs), as reflected in their impressive results across a large spectrum of tasks. In this context, we introduce in this paper a novel architecture that exploits the capabilities of VLMs for facial expression recognition in the driving environment to enhance road safety. We present an approach called CLIVP-FER, which uses Contrastive Language-Image Pretraining (CLIP) and combines both visual and textual data to overcome environmental challenges and ambiguities in facial expression interpretation. In addition, we apply average pooling to improve accuracy and computational efficiency. The proposed approach is thoroughly evaluated on a benchmark driving dataset called KMU-FED. The experiments showed superior performance compared to state-of-the-art methods, achieving an average accuracy of 97.36%. A cross-database evaluation is also provided, showing good generalization abilities. An ablation study gives further insights into the performance of the proposed architecture. The obtained results confirm the capabilities of vision-language models in vision tasks, demonstrating their promising applications in efficient driver assistance and intervention systems.
We are making the code of this work publicly available for research purposes at https://github.com/Ibtissam-SAADI/CLIVP-FER.
KW - Facial expression recognition
KW - Driver’s emotions
KW - Vision language models
KW - Contrastive language-image pretraining
Y1 - 2025
SN - 978-3-031-80856-2
U6 - https://doi.org/10.1007/978-3-031-80856-2_6
SN - 0302-9743
SP - 81
EP - 93
PB - Springer Nature Switzerland
CY - Cham
ER -