Abstract: Copy-move forgery is widely used to conceal information in a digital image: a portion of the genuine image is duplicated and pasted elsewhere in the same image. Copy-move forgery detection is therefore a significant problem and an active research area for verifying image authenticity. In this paper, a system for copy-move forgery detection is proposed. The system consists of two stages: a detection stage and a refined-detection stage. The detection stage uses Speeded-Up Robust Features (SURF) and Binary Robust Invariant Scalable Keypoints (BRISK) for feature detection; in the refined-detection stage, image registration using a non-linear transformation is applied to improve detection accuracy. First, the input image is selected, and SURF and BRISK feature extraction are run in parallel to detect interest keypoints. This yields a sufficient number of interest points and helps ensure that most manipulated regions are found. RANSAC is employed to select the best set of matches and isolate the manipulated parts. A non-linear transformation between the best-matched sets from the two feature extractors is then used as an optimization step to obtain the final matched set and localize the copied regions. Numerical experiments were performed on several benchmark datasets, including CASIA v2.0, MICC-220, MICC-F600, and MICC-F2000. With the proposed algorithm, an overall average detection accuracy of 95.33% is obtained across these databases. Forgery detection achieved a True Positive Rate of 97.4% for tampered images with object translation, different degrees of rotation, and enlargement. Results across datasets show that the proposed algorithm can identify the altered areas with high reliability and can handle multiple cloning.
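The RANSAC match-filtering step described above can be illustrated with a minimal NumPy sketch (not the authors' implementation): it estimates an affine transform between putatively matched keypoint coordinates and keeps the largest inlier set, which for copy-move forgery corresponds to the duplicated region's displacement.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=500, tol=3.0, seed=0):
    """Estimate a 2-D affine transform dst ≈ src @ A.T + t with RANSAC.

    src, dst: (N, 2) arrays of matched keypoint coordinates.
    Returns (A, t, inlier_mask) for the best consensus set found.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    best_A, best_t = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample for affine
        # Solve [x y 1] @ M = [x' y'] for the 3 sampled correspondences.
        X = np.hstack([src[idx], np.ones((3, 1))])
        M, *_ = np.linalg.lstsq(X, dst[idx], rcond=None)
        A, t = M[:2].T, M[2]
        residuals = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_A, best_t = inliers, A, t
    return best_A, best_t, best_inliers

# Synthetic copy-move: a cluster of keypoints translated by (40, -15), plus outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(30, 2))
dst = src + np.array([40.0, -15.0])
dst[25:] = rng.uniform(0, 100, size=(5, 2))          # 5 spurious matches
A, t, inliers = ransac_affine(src, dst)
print(inliers.sum(), np.round(t))                    # most true matches recovered
```

In a full pipeline the matched coordinates would come from SURF/BRISK descriptor matching within the same image; the recovered translation then points from the source patch to its pasted copy.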
Abstract: The repeatability rate is an important measure for evaluating and comparing the performance of keypoint detectors. Several repeatability rate measurements have been used in the literature to assess the effectiveness of keypoint detectors. While these repeatability rates are calculated for pairs of images, the general assumption is that the reference image is known and fixed relative to the other images in the same dataset. These rates are therefore asymmetric, as they are computed in only one direction. In addition, the image domain in which the computations take place substantially affects their values. The presented scatter plots illustrate how these directional repeatability rates vary with the size of the neighboring region in each image pair. Therefore, both directional repeatability rates for the same image pair must be included when comparing different keypoint detectors. This paper first examines several commonly used repeatability rate measures for keypoint detector evaluation. It then proposes computing a two-fold repeatability rate to assess keypoint detector performance on images of similar scenes. Next, a symmetric mean repeatability rate metric is computed from the two-fold repeatability rates. Finally, these measurements are validated using well-known keypoint detectors on image groups with various geometric and photometric attributes.
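As a concrete illustration of the two-fold rate, the following NumPy sketch (a simplification, not the paper's exact formulation; keypoints are assumed already mapped into a common frame, e.g. via the ground-truth homography) computes both directional repeatability rates and their symmetric mean:

```python
import numpy as np

def directional_repeatability(ref_kpts, other_kpts, eps=2.0):
    """Fraction of reference keypoints with a keypoint in the other
    image within eps pixels. This is the one-directional rate, whose
    value depends on which image is chosen as reference."""
    if len(ref_kpts) == 0:
        return 0.0
    d = np.linalg.norm(ref_kpts[:, None, :] - other_kpts[None, :, :], axis=-1)
    return float(np.mean(d.min(axis=1) < eps))

def symmetric_mean_repeatability(kpts_a, kpts_b, eps=2.0):
    """Two-fold rate: average of the A→B and B→A directional rates."""
    r_ab = directional_repeatability(kpts_a, kpts_b, eps)
    r_ba = directional_repeatability(kpts_b, kpts_a, eps)
    return 0.5 * (r_ab + r_ba)

# Asymmetry demo: B contains all of A's keypoints (slightly shifted) plus extras.
kpts_a = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 20.0]])
extras = np.array([[5.0, 90.0], [70.0, 70.0], [30.0, 60.0]])
kpts_b = np.vstack([kpts_a + 0.5, extras])
print(directional_repeatability(kpts_a, kpts_b))     # 1.0 (every A point repeated)
print(directional_repeatability(kpts_b, kpts_a))     # 0.5 (extras unmatched)
print(symmetric_mean_repeatability(kpts_a, kpts_b))  # 0.75
```

The example shows why a single direction can overstate performance: A→B is perfect while B→A is only 0.5, and the symmetric mean reports the balanced value.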
Abstract: This article presents a method for describing key points using simple statistics over regions controlled by neighboring key points, to remedy a gap in existing descriptors. Existing descriptors such as speeded up robust features (SURF), KAZE, binary robust invariant scalable keypoints (BRISK), features from accelerated segment test (FAST), and oriented FAST and rotated BRIEF (ORB) can competently detect, describe, and match images in the presence of artifacts such as blur, compression, and illumination changes. However, the performance and reliability of these descriptors decrease under imaging variations such as viewpoint change, zoom (scale), and rotation. The introduced description method improves image matching under such distortions. It uses a contourlet-based detector to detect the strongest key points within a specified window size. The selected key points and their neighbors control the size and orientation of the surrounding regions, which are mapped onto rectangular shapes using a polar transformation. The resulting rectangular matrices are subjected to two-directional statistical operations: computing the mean and standard deviation. Consequently, the obtained descriptor is invariant to translation, rotation, and scale, owing to the two techniques used in this paper: region extraction and the polar transformation. The description method is tested against well-established descriptors, namely SURF, KAZE, BRISK, FAST, and ORB, on the standard Oxford dataset. The presented methodology demonstrated its ability to improve matching between distorted images compared to other descriptors in the literature.
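A simplified sketch of the polar-mapping-plus-statistics idea follows; the grid sizes and nearest-neighbour sampling are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def polar_stats_descriptor(img, center, radius=8, n_r=8, n_theta=16):
    """Map a circular region around `center` onto a rectangular
    (radius x angle) grid via a polar transform, then describe it with
    row-wise and column-wise means and standard deviations.
    `img` is a 2-D grayscale array; `center` is (row, col)."""
    cy, cx = center
    rs = np.linspace(1.0, radius, n_r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Nearest-neighbour sampling on the polar grid (clipped to the image).
    yy = np.clip(np.rint(cy + rs[:, None] * np.sin(thetas)), 0, img.shape[0] - 1).astype(int)
    xx = np.clip(np.rint(cx + rs[:, None] * np.cos(thetas)), 0, img.shape[1] - 1).astype(int)
    rect = img[yy, xx]                      # shape (n_r, n_theta)
    # Two-directional statistics: along angle (rows) and along radius (cols).
    return np.concatenate([rect.mean(axis=1), rect.std(axis=1),
                           rect.mean(axis=0), rect.std(axis=0)])

# On a radially symmetric pattern, rotating the image only cyclically
# shifts the polar grid's angle axis, so the per-radius statistics
# (mean/std over angle) stay essentially unchanged.
y, x = np.mgrid[0:64, 0:64]
img = np.hypot(y - 32, x - 32)              # distance-from-center image
d = polar_stats_descriptor(img, (32, 32))
print(d.shape)                               # (48,) = 2*n_r + 2*n_theta
```

The rotation-invariance intuition is visible in the test pattern: each polar-grid row samples a circle of constant radius, so its mean and standard deviation are insensitive to in-plane rotation.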
Funding: This work was supported by the National Natural Science Foundation of China (61871046, SM, http://www.nsfc.gov.cn/).
Abstract: Image keypoint detection and description is a popular way to find pixel-level correspondences between images, a basic and critical step in many computer vision tasks. Existing methods are far from optimal in terms of keypoint positioning accuracy and the generation of robust, discriminative descriptors. This paper proposes a new end-to-end, self-supervised deep learning network. The network uses a backbone feature encoder to extract multi-level feature maps, then performs joint keypoint detection and description in a single forward pass. On the one hand, to improve keypoint localization accuracy and preserve local shape structure, the detector detects keypoints on feature maps with the same resolution as the original image. On the other hand, to better perceive local shape details, the network uses multi-level features to generate robust descriptors with rich local shape information. A detailed comparison on HPatches with the traditional feature-based methods Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), as well as with deep learning methods, demonstrates the effectiveness and robustness of the proposed method.
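Detecting keypoints directly on a full-resolution score map, as described above, typically reduces to thresholded local-maximum selection; a minimal NumPy sketch of that step (not the authors' network) is:

```python
import numpy as np

def detect_keypoints(score_map, threshold=0.5, nms_radius=1):
    """Pick keypoints as thresholded local maxima of a full-resolution
    score map. Returns an (N, 2) array of (row, col) positions."""
    h, w = score_map.shape
    r = nms_radius
    padded = np.pad(score_map, r, mode="constant", constant_values=-np.inf)
    # A pixel survives if it equals the max of its (2r+1)^2 neighbourhood.
    windows = np.stack([padded[dy:dy + h, dx:dx + w]
                        for dy in range(2 * r + 1)
                        for dx in range(2 * r + 1)])
    is_max = score_map >= windows.max(axis=0)
    keep = is_max & (score_map > threshold)
    return np.argwhere(keep)

# Toy score map with two clear peaks.
s = np.zeros((16, 16))
s[4, 5] = 0.9
s[10, 12] = 0.8
s[10, 13] = 0.6   # suppressed: weaker neighbour of the (10, 12) peak
print(detect_keypoints(s))  # [[ 4  5] [10 12]]
```

Because the score map has the original image's resolution, the returned (row, col) positions are already pixel-accurate, which is the localization benefit the abstract highlights.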
Funding: National Key Research and Development Program, China (No. 2019YFC1521300).
Abstract: With the development of society, people's requirements for clothing matching are constantly increasing, which must be considered when developing clothing recommendation systems. This requires that the algorithms for understanding clothing images be sufficiently efficient and robust. We therefore detect keypoints in clothing accurately to capture the details of clothing images. Since the joint points of a garment are similar to those of the human body, this paper applies a deep neural network for human pose estimation, the cascaded pyramid network (CPN), to the problem of keypoint detection in clothing. We first introduce the structure and characteristics of this network for keypoint detection. We then evaluate the experimental results and verify the effectiveness of detecting clothing keypoints with CPN, obtaining a normalized error of about 5% to 7%. Finally, we analyze the influence of different backbones on keypoint detection in this network.
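The normalized error quoted above is commonly computed as the mean predicted-to-ground-truth keypoint distance divided by a per-image reference length; under that assumption (the exact normalization used in the paper may differ), a small sketch:

```python
import numpy as np

def normalized_error(pred, gt, norm_len, visible=None):
    """Mean keypoint error normalized by a reference length.
    pred, gt: (N, 2) keypoint coordinates; norm_len: scalar reference
    (e.g. a garment's width); visible: optional boolean mask so that
    occluded keypoints are excluded from the average."""
    d = np.linalg.norm(pred - gt, axis=1)
    if visible is not None:
        d = d[visible]
    return float(d.mean() / norm_len)

gt = np.array([[20.0, 30.0], [60.0, 30.0], [40.0, 90.0]])
pred = gt + np.array([[3.0, 4.0], [0.0, 5.0], [-3.0, -4.0]])   # errors: 5, 5, 5
print(normalized_error(pred, gt, norm_len=100.0))              # 0.05 → 5%
```

Normalizing by a garment-scale length makes the metric comparable across images of different sizes, which is why it is reported as a percentage.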
Funding: Supported by the Hainan Provincial Key Research and Development Program (No. ZDYF2020018), the Hainan Provincial Natural Science Foundation of China (No. 2019RC100), and the Haikou Key Research and Development Program (No. 2020-049).
Abstract: Big data is a comprehensive result of the development of the Internet of Things and information systems. Computer vision requires large amounts of data as the basis for research. Because skeleton data adapt well to dynamic environments and complex backgrounds, they are used in action recognition tasks. In recent years, skeleton-based action recognition has received increasing attention in the field of computer vision. The keypoints of the human skeleton are therefore essential for describing human pose and predicting human actions. This paper proposes a skeleton keypoint extraction method combined with object detection, which focuses the extraction on skeleton keypoints. Extensive experiments show that our model can be combined with object detection for skeleton point extraction, and that detection efficiency is improved.
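One simple way to combine object detection with skeleton keypoint extraction is to restrict candidate keypoints to detected person boxes; the following sketch (a hypothetical helper, not the paper's model) illustrates the idea:

```python
import numpy as np

def keypoints_in_boxes(keypoints, boxes):
    """Assign candidate skeleton keypoints to detected person boxes.
    keypoints: (K, 2) array of (x, y); boxes: (B, 4) array of
    (x1, y1, x2, y2). Returns a list of per-box keypoint index arrays,
    so keypoint extraction can focus on detected people only."""
    assigned = []
    for x1, y1, x2, y2 in boxes:
        inside = ((keypoints[:, 0] >= x1) & (keypoints[:, 0] <= x2) &
                  (keypoints[:, 1] >= y1) & (keypoints[:, 1] <= y2))
        assigned.append(np.flatnonzero(inside))
    return assigned

kpts = np.array([[15.0, 20.0], [90.0, 40.0], [18.0, 35.0], [200.0, 10.0]])
boxes = np.array([[10.0, 10.0, 30.0, 50.0],    # person 1
                  [80.0, 30.0, 110.0, 70.0]])  # person 2
print(keypoints_in_boxes(kpts, boxes))  # [array([0, 2]), array([1])]
```

Restricting extraction to box interiors discards background candidates up front, which is one plausible source of the efficiency gain the abstract reports.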
Funding: The National Natural Science Foundation of China (Grant Number 62076246).
Abstract: Human pose estimation aims to localize the body joints from image or video data. With the development of deep learning, pose estimation has become a hot research topic in computer vision. In recent years, human pose estimation has achieved great success in fields such as animation and sports. However, to obtain accurate positioning results, existing methods may suffer from large model sizes, high parameter counts, and increased complexity, leading to high computing costs. In this paper, we propose a new lightweight feature encoder to construct a high-resolution network that reduces the number of parameters and lowers the computing cost. We also introduce a semantic enhancement module that improves global feature extraction and network performance by combining the channel and spatial dimensions. Furthermore, we propose a densely connected spatial pyramid pooling module to compensate for the decrease in image resolution and the information loss in the network. Our method effectively reduces the number of parameters and the complexity while ensuring high performance. Extensive experiments show that our method achieves competitive performance while dramatically reducing the number of parameters and the operational complexity. Specifically, our method obtains an 89.9% AP score on MPII VAL, while the number of parameters and the complexity of operations are reduced by 41% and 36%, respectively.
Abstract: Depth ambiguity is a major challenge for multi-person 3D pose estimation from a single image, and extracting image context has great potential to alleviate it. Top-down methods mostly model keypoint relations based on human detection; human bounding boxes are coarse-grained and contain a large proportion of background noise, which easily causes keypoint drift or mismatches and also undermines the reliability of absolute depth estimation based on the human scale factor. Bottom-up methods directly detect all human keypoints in the image and then recover the 3D poses one by one; although they can explicitly capture scene context, they are at a disadvantage in relative depth estimation. This paper proposes a new two-branch network: the top-down branch extracts human context based on keypoint region proposals, and the bottom-up branch extracts scene context from 3D space. A noise-suppressing human context extraction method is proposed, which describes human targets by modeling keypoint region proposals and models pose-related dynamic sparse keypoint relations to prune weak connections and reduce noise propagation. A method for extracting scene context from a bird's-eye view is also proposed, which models image depth features and projects them onto the bird's-eye plane to obtain the 3D spatial layout of human positions; a human-scene context fusion network is designed to predict absolute human depth. Experimental results on the public MuPoTS-3D and Human3.6M datasets show that, compared with state-of-the-art models of the same kind, the proposed model HSC-Pose improves relative and absolute 3D keypoint position accuracy by at least 2.2% and 0.5%, respectively, and reduces the mean root keypoint position error by at least 4.2 mm.
Abstract: Diagnosing developmental dysplasia of the hip (DDH) from pelvic X-ray films requires accurate annotation of hip joint keypoints, and deep learning methods can serve as a reliable auxiliary tool. To address the variety of imaging postures and shooting distances in pelvic radiographs, this paper proposes RKD-UNet, based on U-Net, to detect hip joint keypoints. The model uses residual blocks to improve U-Net's convolutional layers and skip-connection paths, and introduces coordinate attention into the encoder to strengthen feature extraction in the neighborhoods of keypoints. On top of the encoder, convolution and ASPP modules form a Bridge block that fuses feature information at different scales with dilation rates of [3, 6, 9] and enlarges the model's receptive field. The model was trained and tested on a dataset containing anteroposterior pelvic films, frog-position films, full-length lower-limb films, and postoperative pelvic films. RKD-UNet achieved a mean keypoint detection error of 3.19 ± 2.19 px and a mean acetabular angle measurement error of 2.83° ± 2.59°. The F1 scores for diagnosing normal, mild, moderate, and severe dislocation cases reached 89.6, 77.1, 57.9, and 94.1, respectively, higher than physicians' manual diagnoses. The experimental results show that RKD-UNet can accurately detect hip joint keypoints and assist physicians in diagnosing DDH.
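The acetabular angle mentioned above is conventionally measured between the Hilgenreiner line (through the triradiate-cartilage points) and the line along the acetabular roof; assuming that convention (the keypoint names below are hypothetical illustrations, not the paper's labels), a small sketch:

```python
import numpy as np

def acetabular_angle(h_left, h_right, roof_medial, roof_lateral):
    """Angle (degrees) between the Hilgenreiner line (h_left→h_right)
    and the acetabular roof line (roof_medial→roof_lateral).
    All points are (x, y) image coordinates from detected keypoints."""
    v_h = np.asarray(h_right, float) - np.asarray(h_left, float)
    v_r = np.asarray(roof_lateral, float) - np.asarray(roof_medial, float)
    cos = np.dot(v_h, v_r) / (np.linalg.norm(v_h) * np.linalg.norm(v_r))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Horizontal Hilgenreiner line; roof rising 30 px over 52 px (~30 degrees).
angle = acetabular_angle((100, 200), (300, 200), (120, 200), (172, 170))
print(round(angle, 1))
```

With detected keypoints feeding this computation, a per-keypoint localization error of a few pixels translates directly into the angle error the paper reports, which is why keypoint accuracy is the quantity being optimized.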