Abstract
To address the low accuracy and speed of garment image instance segmentation, an instance segmentation method based on an improved YOLACT is proposed. Taking YOLACT as the base model, depthwise separable convolution first replaces standard convolution in the ResNet101 backbone, reducing the model's computation and parameter count and speeding up inference. An efficient channel attention (ECA) module is then introduced after the prototype generation network (protonet) to optimize the output features, capture cross-channel interaction information in garment images, and strengthen the feature extraction ability of the mask branch. Finally, the Leaky ReLU activation function is used during training so that weights continue to be updated during backpropagation, improving the model's ability to extract negative-valued feature information from garment images. Experimental results show that, compared with the original model, the proposed method effectively reduces the number of model parameters while improving both speed and accuracy: speed increases by 4.82 frames per second and average precision by 5.4%.
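The parameter savings from replacing a standard convolution with a depthwise separable one (a depthwise k×k convolution followed by a pointwise 1×1 convolution) can be illustrated with simple counting arithmetic. This is a minimal sketch; the layer sizes below are hypothetical and not taken from the paper's ResNet101 configuration.

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k*k filter per (input, output) channel pair.
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    # Depthwise stage: one k*k filter per input channel.
    # Pointwise stage: a 1x1 convolution mixing channels (c_in * c_out weights).
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 256 -> 256 channels with a 3x3 kernel.
std = conv_params(256, 256, 3)          # 589824 weights
sep = dw_separable_params(256, 256, 3)  # 67840 weights
print(std, sep, round(std / sep, 1))    # roughly an 8.7x reduction
```

For large channel counts the ratio approaches k², which is why swapping in depthwise separable convolutions shrinks both the parameter count and the computation roughly in proportion.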
Authors
GU Meihua, DONG Xiaoxiao, HUA Wei, CUI Lin (School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710048, China)
Source
Basic Sciences Journal of Textile Universities (《纺织高校基础科学学报》), indexed in CAS, 2024, No. 2, pp. 82-91 (10 pages)
Funding
Young Scientists Fund of the National Natural Science Foundation of China (61901347).