Daniel Bardsley

A curious mix of personal shenanigans and computer vision research

Gabor Feature Selection for Face Recognition using Improved AdaBoost Learning


Originally published in: IWBRS 2005

Download: Gabor Feature Selection for Face Recognition (pdf)

Though AdaBoost has been widely used for feature selection and classifier learning, many of the selected features, or weak classifiers, are redundant. In this paper we propose an improved boosting algorithm that incorporates mutual information into AdaBoost. The proposed method explicitly examines the redundancy between each candidate classifier and the classifiers already selected, so the resulting classifiers are both accurate and non-redundant. Experimental results show that the strong classifier learned with the proposed algorithm achieves a lower training error rate than standard AdaBoost. The algorithm has also been applied to select discriminative Gabor features for face recognition. Even with a simple correlation distance measure and a 1-NN classifier, the selected Gabor features achieve high recognition accuracy on the FERET database, where both expression and illumination variation are present. Using only 140 features, the selected set reaches 95.5% accuracy, roughly 2.5% higher than that of features selected by standard AdaBoost.
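The abstract only describes the method at a high level; for readers curious about the general idea, the sketch below shows one way a mutual-information redundancy penalty can be folded into AdaBoost-style selection of decision stumps over (for example) Gabor feature responses. This is a rough illustration under my own assumptions, not the paper's algorithm: the function names, the median stump threshold, and the single `mi_penalty` weight are all choices made here for brevity.

```python
import numpy as np

def stump_outputs(values, threshold, polarity):
    """Decision stump on one feature (e.g. a Gabor response): returns +/-1 labels."""
    return np.where(polarity * values < polarity * threshold, 1, -1)

def mutual_information(a, b):
    """MI (in nats) between two +/-1 output sequences, via a 2x2 joint histogram."""
    joint = np.zeros((2, 2))
    for x, y in zip(a, b):
        joint[(x + 1) // 2, (y + 1) // 2] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def select_features(X, y, n_rounds=10, mi_penalty=1.0):
    """AdaBoost-style rounds that penalise candidates by their MI with already-selected stumps.

    X: (n_samples, n_features) feature matrix; y: +/-1 labels.
    Assumes n_rounds <= n_features.
    """
    n_samples, n_features = X.shape
    w = np.full(n_samples, 1.0 / n_samples)              # sample weights
    selected, selected_outputs, alphas = [], [], []
    for _ in range(n_rounds):
        best = None
        for j in range(n_features):
            if j in selected:
                continue
            thr = np.median(X[:, j])                      # crude threshold choice (assumption)
            for pol in (1, -1):
                h = stump_outputs(X[:, j], thr, pol)
                err = np.sum(w[h != y])                   # weighted training error
                # Redundancy: largest MI with any previously selected stump's outputs.
                red = max((mutual_information(h, hs) for hs in selected_outputs), default=0.0)
                score = err + mi_penalty * red            # lower is better
                if best is None or score < best[0]:
                    best = (score, err, j, h)
        _, err, j, h = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)             # standard AdaBoost classifier weight
        w *= np.exp(-alpha * y * h)                       # reweight misclassified samples up
        w /= w.sum()
        selected.append(j)
        selected_outputs.append(h)
        alphas.append(alpha)
    return selected, alphas
```

The only difference from plain AdaBoost in this sketch is the `score = err + mi_penalty * red` line: a candidate stump is chosen not just for low weighted error but also for carrying information not already captured by previously selected stumps, which is the intuition behind selecting features that are both accurate and non-redundant.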
