Funding: funded by the National Natural Science Foundation of China (61991413), the China Postdoctoral Science Foundation (2019M651142), the Natural Science Foundation of Liaoning Province (2021-KF-12-07), and the Natural Science Foundation of Liaoning Province (2023-MS-322).
Abstract: Fusing hand-based features in multi-modal biometric recognition strengthens anti-spoofing capability, and judiciously exploiting the correlation among multimodal features can further improve the robustness and recognition performance of the system. Nevertheless, two issues persist in multi-modal feature-fusion recognition. First, existing fusion methods have not comprehensively exploited the correlations among distinct modalities. Second, improper weight selection during fusion diminishes the salience of crucial modal features and thereby degrades overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multimodal recognition network based on feature-level fusion. The three modalities are stacked into a single three-channel input, analogous to RGB, so that the network can exploit inter-modal correlation across channels. Within the enhanced DenseNet, an Efficient Channel Attention Network (ECA-Net) module dynamically adjusts the weight of each channel to amplify the crucial information in each modal feature, while depthwise separable convolution markedly reduces the number of training parameters and further strengthens feature correlation. Experimental evaluations were conducted on four multimodal databases built from six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. Compared with other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially improves recognition performance, making it suitable for high-security environments. The experiments used a modest database of 200 individuals; extending the method to larger databases is the next step.
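The abstract includes no code, but the two building blocks it names, ECA channel attention and depthwise separable convolution, are standard components. The following is a minimal PyTorch sketch of how three gray-scale hand modalities could be stacked as RGB-like channels and passed through these blocks; module and variable names are ours, not the authors', and the sketch is not the paper's implementation.

```python
import math
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Efficient Channel Attention: reweights channels via a 1-D conv
    over the pooled channel descriptor (Wang et al., ECA-Net)."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Kernel size adapts to the channel count, as in the ECA paper.
        k = int(abs((math.log2(channels) + b) / gamma))
        k = k if k % 2 else k + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        n, c, _, _ = x.shape
        y = self.pool(x).view(n, 1, c)                  # (N, 1, C) descriptor
        y = torch.sigmoid(self.conv(y)).view(n, c, 1, 1)
        return x * y                                    # per-channel reweighting

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv + 1x1 pointwise conv: far fewer parameters
    than a dense 3x3 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Three gray-scale modalities (e.g., palmprint, palm vein, finger vein)
# stacked as channels of one RGB-like input, as the abstract describes.
x = torch.randn(4, 3, 128, 128)
feat = DepthwiseSeparableConv(3, 64)(x)
att = ECABlock(64)(feat)
```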
Funding: supported and funded by the KAU Scientific Endowment, King Abdulaziz University, Jeddah, Saudi Arabia.
Abstract: Human recognition technology based on biometrics has become a fundamental requirement in many aspects of life due to growing concerns about security and privacy. Biometric systems identify or authenticate individuals based on their physiological and behavioral characteristics. Among the viable biometric modalities, the structure of the human ear offers unique and valuable discriminative characteristics for human recognition systems. Most existing ear recognition systems are built on computer vision models and have achieved good results; nevertheless, such models can be sensitive to unconstrained environmental factors, and some traits that are difficult to extract automatically can still be perceived semantically as soft biometrics. This research proposes a new group of semantic features to serve as soft ear biometrics, inspired by the descriptive traits humans naturally use when identifying or describing one another. The study therefore focuses on fusing these soft ear biometric traits with traditional (hard) ear biometric features to investigate their validity and efficacy in improving human identification performance. The proposed framework has two subsystems: a computer vision-based subsystem that extracts traditional (hard) ear biometric traits using principal component analysis (PCA) and local binary patterns (LBP), and a crowdsourcing-based subsystem that derives semantic (soft) ear biometric traits. Several feature-level fusion experiments were conducted on the AMI database to evaluate the proposed algorithm. The identification and verification results show that the proposed soft ear biometric information significantly improves the recognition performance of traditional ear biometrics, by up to 12% for LBP and 5% for PCA descriptors, when all three feature sets (PCA, LBP, and soft traits) are fused using a k-nearest neighbors (KNN) classifier.
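As a rough illustration of the feature-level fusion described above (not the authors' implementation), the scikit-learn/scikit-image sketch below concatenates PCA and LBP descriptors with an assumed, already-encoded soft-trait vector and classifies with 1-NN. The array names and dimensionalities are hypothetical, and the train/test split is omitted for brevity.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fuse_and_classify(images, soft_traits, labels):
    """Hypothetical inputs: ear images (n, h, w), crowd-sourced soft-trait
    scores already encoded as numbers (n, d_soft), identity labels (n,)."""
    flat = images.reshape(len(images), -1)
    pca_feats = PCA(n_components=50).fit_transform(flat)        # hard: PCA
    lbp_feats = np.array([lbp_histogram(im) for im in images])  # hard: LBP
    fused = np.hstack([pca_feats, lbp_feats, soft_traits])      # feature-level fusion
    fused = StandardScaler().fit_transform(fused)               # comparable scales
    return KNeighborsClassifier(n_neighbors=1).fit(fused, labels)
```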
Abstract: With the rapid worldwide spread of the coronavirus epidemic, educational and other institutions are moving toward digitization. In this context, identifying the users of educational e-platforms with ear- and iris-based multi-modal biometric systems is an urgent and interesting research topic for preserving enterprise security, particularly when face masks are worn as a precaution against the virus. This study proposes a multimodal system based on ear and iris biometrics, fused at the feature level, to identify students in electronic examinations (E-exams) during the COVID-19 pandemic. The proposed system comprises four steps. The first step is image preprocessing, which includes enhancing, segmenting, and extracting the regions of interest. The second step is feature extraction, where Haralick texture and shape methods are used for the ear images, and Tamura texture and color histogram methods are used for the iris images. The third step is feature fusion, where the extracted ear and iris features are combined into one sequential fused vector. The fourth step is matching, which is performed using the City Block Distance (CTB) for student identification. The findings indicate that the system achieves a recognition accuracy of 97%, with a 2% False Acceptance Rate (FAR), a 4% False Rejection Rate (FRR), a 94% Correct Recognition Rate (CRR), and a 96% Genuine Acceptance Rate (GAR). The proposed system also achieved higher accuracy than other related systems.
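A minimal sketch of steps three and four, assuming the ear descriptors (Haralick texture and shape) and iris descriptors (Tamura texture and color histogram) have already been extracted as numeric vectors; the function names are hypothetical and the extractors themselves are omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist

def fuse(ear_vec, iris_vec):
    """Step 3, feature fusion: one sequential fused vector per subject."""
    return np.concatenate([ear_vec, iris_vec])

def identify(probe, gallery, ids):
    """Step 4, matching: City Block (L1 / Manhattan) distance against an
    enrolled gallery of fused vectors (n_enrolled, d)."""
    d = cdist(probe[None, :], gallery, metric="cityblock")[0]
    best = int(np.argmin(d))
    return ids[best], d[best]   # claimed identity and its distance score
```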
Abstract: Most present research into facial expression recognition focuses on the visible spectrum, which is sensitive to illumination change. In this paper, we integrate thermal infrared data with visible-spectrum images for spontaneous facial expression recognition. First, active appearance model (AAM) parameters and three defined head-motion features are extracted from the visible-spectrum images, and several thermal statistical features are extracted from the infrared (IR) images. Second, feature selection is performed using the F-test statistic. Third, Bayesian networks (BNs) and support vector machines (SVMs) are proposed for both decision-level and feature-level fusion. Experiments on the Natural Visible and Infrared facial Expression (NVIE) spontaneous database show the effectiveness of the proposed methods and demonstrate the supplementary role of thermal IR images in visible facial expression recognition.
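For the feature-level side, a hedged scikit-learn sketch of the pipeline the abstract outlines: concatenate visible and thermal features, select with the F-test statistic, and classify with an SVM. The feature arrays are assumed inputs, k is an illustrative choice, and the decision-level variant and the BN classifier are omitted.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def feature_level_fusion_svm(visible_feats, thermal_feats, labels, k=40):
    """visible_feats: (n, d_vis) AAM + head-motion features;
    thermal_feats: (n, d_ir) thermal statistical features;
    k must not exceed the fused dimensionality d_vis + d_ir."""
    fused = np.hstack([visible_feats, thermal_feats])  # feature-level fusion
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=k),   # F-test statistic for selection
        SVC(kernel="rbf"),
    )
    return model.fit(fused, labels)
```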