Vision Transformer-Enhanced Multi-Descriptor Approach for Robust Age-Invariant Face Recognition

Justice Kwame Appati, Emmanuel Tsifokor, Daniel Kwame Amissah, David Ebo Adjepon-Yamoah

Research output: Contribution to journal › Article › peer-review

Abstract

This study presents a robust age-invariant face recognition framework, addressing challenges posed by age-related facial variations. Evaluated on the FGNet and Morph II datasets, the system integrates Viola-Jones for face detection, SIFT and LBP for feature extraction, and Vision Transformers (ViTs) for global feature representation. Feature fusion and dimensionality reduction (KPCA, IPCA, UMAP) enhance efficiency while retaining key discriminative information. Using Random Forest, KNN, and XGBoost classifiers, the model achieves 96% accuracy, demonstrating the effectiveness of combining traditional and deep learning techniques in advancing age-invariant face recognition.
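The fusion, reduction, and classification stages outlined in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the random arrays stand in for the SIFT, LBP, and ViT descriptors, and the dimensions and hyperparameters (128-d pooled SIFT, 59-bin LBP histogram, 768-d ViT embedding, 50 KPCA components) are assumptions chosen for the sketch.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for per-image descriptors; in the paper these would come from
# SIFT (local keypoints), LBP (texture histograms), and a ViT (global
# representation). All shapes here are illustrative assumptions.
n_samples = 200
sift_feats = rng.normal(size=(n_samples, 128))  # assumed pooled SIFT descriptor
lbp_feats = rng.normal(size=(n_samples, 59))    # assumed uniform-LBP histogram
vit_feats = rng.normal(size=(n_samples, 768))   # assumed ViT embedding
labels = rng.integers(0, 5, size=n_samples)     # synthetic identity labels

# Feature fusion: concatenate the three descriptor sets per image.
fused = np.hstack([sift_feats, lbp_feats, vit_feats])

# Dimensionality reduction, here KPCA (the paper also evaluates IPCA and UMAP).
reducer = KernelPCA(n_components=50, kernel="rbf")
reduced = reducer.fit_transform(fused)

# Classification with one of the evaluated classifiers (Random Forest).
X_tr, X_te, y_tr, y_te = train_test_split(reduced, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(reduced.shape)  # fused 955-d vectors compressed to 50 components
```

Swapping `KernelPCA` for `IncrementalPCA` or UMAP, and `RandomForestClassifier` for KNN or XGBoost, covers the other configurations the study compares.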

Original language: English
Article number: e70000
Journal: Applied AI Letters
Volume: 6
Issue number: 3
DOIs
Publication status: Published - Oct 2025

Keywords

  • age-invariant face recognition
  • dimensionality reduction
  • feature extraction
  • local binary patterns
  • machine learning
  • scale-invariant feature transform
  • vision transformers
