
Dual-Task Learning Residual Channel Attention Mechanism for Handwritten Signature Verification Based on Bidirectional LSTM
Abstract: With the continuous progress of deep learning, network models have significantly improved the performance of online signature verification (OSV) systems. Nevertheless, further improving the accuracy of online handwritten signature verification remains an open problem. To this end, this paper proposes a dual-task learning residual channel attention network based on bidirectional LSTM to improve handwritten signature verification. The model employs a residual channel attention mechanism to adaptively learn the weights of sequence features, addressing the problem of allocating weights across different channels, and integrates a bidirectional long short-term memory (BiLSTM) network to mitigate the vanishing- and exploding-gradient problems that can arise as network depth increases. In addition, multi-task learning, combining supervised learning and deep metric learning, is introduced for better feature learning. Finally, this paper proposes a training method based on multi-task learning that further improves the accuracy of the OSV system. The proposed method achieves an equal error rate of 2.33% and an accuracy of 97.03% on the SVC-2004 dataset. The experimental results show that the proposed method effectively improves the authentication accuracy of OSV systems.
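The full paper is only available via the PDF, but the residual channel attention described in the abstract can be illustrated with a minimal NumPy sketch. This assumes a squeeze-and-excitation-style bottleneck (squeeze by global average pooling over time, excitation by a small two-layer MLP) with an identity shortcut; the function name, weight shapes, and reduction ratio below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_channel_attention(x, w1, w2):
    """Channel attention with a residual shortcut for a signature
    sequence of shape (timesteps, channels).

    w1, w2 form a bottleneck MLP that maps the pooled channel
    descriptor to per-channel weights in (0, 1)."""
    # Squeeze: global average pooling over the time axis -> (channels,)
    s = x.mean(axis=0)
    # Excitation: ReLU bottleneck, then sigmoid gate per channel
    a = sigmoid(np.maximum(s @ w1, 0.0) @ w2)
    # Re-scale each channel, then add the residual shortcut
    return x * a + x

rng = np.random.default_rng(0)
T, C, R = 50, 8, 2            # timesteps, channels, bottleneck width
x = rng.standard_normal((T, C))
w1 = rng.standard_normal((C, R)) * 0.1
w2 = rng.standard_normal((R, C)) * 0.1
y = residual_channel_attention(x, w1, w2)
print(y.shape)
```

Because of the shortcut, the output is `x * (1 + a)` elementwise, so the attention can down-weight a channel relative to the others but never zero it out, which keeps gradients flowing through every channel. In the paper's pipeline this block would feed the BiLSTM encoder, whose pooled features are then trained with the dual-task objective (a supervised classification loss plus a deep metric-learning loss).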
Source: Computer Science and Application, 2024, Issue 3, pp. 159-168 (10 pages).