The objective of style transfer is to maintain the content of an image while transferring the style of another image. However, conventional methods face challenges in preserving facial features, especially in Korean portraits where elements like the “Gat” (a traditional Korean hat) are prevalent. This paper proposes a deep learning network designed to perform style transfer that includes the “Gat” while preserving the identity of the face. Unlike traditional style transfer techniques, the proposed method preserves the texture, attire, and the “Gat” of the style image by employing image sharpening and facial landmarks together with a GAN. Color, texture, and intensity are extracted differently according to the characteristics of each block and layer of a pre-trained VGG-16, and only the elements needed during training are preserved using a facial landmark mask. The head area is estimated from the eyebrow region so that the “Gat” can be transferred. Furthermore, the identity of the face is retained, and style correlation is captured through the Gram matrix. To evaluate performance, we introduce a metric based on PSNR and SSIM that emphasizes median values through new weightings tailored to style transfer in Korean portraits. Additionally, we conducted a survey evaluating the content, style, and naturalness of the transferred results; based on this assessment, our method preserves the integrity of the content better than previous research. Our approach, enriched by landmark preservation and diverse loss functions, including those related to the “Gat”, outperforms previous work in facial identity preservation.
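The style-correlation term mentioned above is built on the Gram matrix of pre-trained VGG-16 feature maps. A minimal NumPy sketch of that computation (the function names `gram_matrix` and `style_loss` and the normalization choice are illustrative, not taken from the paper):

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Channel-wise correlations of a feature map with shape (C, H, W)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    # Normalize by the element count so the loss scale does not
    # depend on the feature-map resolution.
    return flat @ flat.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Sum of squared Gram-matrix differences over selected VGG layers."""
    return sum(np.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
               for g, s in zip(gen_feats, style_feats))
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, matching it transfers texture and color statistics while leaving content placement to other loss terms, which is why the paper can mask out face-identity regions separately.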
Predicting disruptions across different tokamaks is necessary for next-generation devices. Future large-scale tokamaks can hardly tolerate disruptions during high-performance discharges, which makes it difficult for current data-driven methods to obtain an acceptable result. A machine learning method capable of transferring a disruption prediction model trained on one tokamak to another is required to solve this problem. The key is a feature extractor that can extract common disruption precursor traces from tokamak diagnostic data and be easily transferred to other tokamaks. Based on these concerns, this paper presents a deep feature extractor, the fusion feature extractor (FFE), designed specifically for extracting disruption precursor features from common diagnostics on tokamaks. Furthermore, an FFE-based disruption predictor on J-TEXT is demonstrated. The feature extractor aims to extract disruption-related precursors and is designed according to the precursors of disruption and their representations in common tokamak diagnostics, introducing a strong inductive bias on tokamak diagnostic data. The paper presents the evolution of the neural network feature extractor and compares it against general deep neural networks, as well as against physics-based feature extraction with a traditional machine learning method. The results demonstrate that the FFE achieves an effect similar to physics-guided manual feature extraction and outperforms other deep learning methods.
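The transfer workflow the abstract describes (a shared feature extractor kept fixed, with only a lightweight predictor retrained on the target machine) can be sketched as follows. This is a schematic stand-in, not the FFE architecture: `extract_features` here is a toy statistical summary per diagnostic channel, and `fit_head` is a plain logistic regression head; all names and the training setup are assumptions for illustration only.

```python
import numpy as np

def extract_features(x: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen feature extractor trained on a source tokamak.
    Input x has shape (channels, time); output is per-channel mean, std,
    and linear trend -- a crude proxy for 'shared precursor features'."""
    t = np.arange(x.shape[-1])
    slope = np.polyfit(t, x.T, 1)[0]          # per-channel linear trend
    return np.concatenate([x.mean(axis=-1), x.std(axis=-1), slope])

def fit_head(F: np.ndarray, y: np.ndarray, lr=0.1, steps=500):
    """Retrain only a logistic classifier head on target-machine data,
    leaving the feature extractor untouched."""
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
        g = p - y                                 # logistic-loss gradient
        w -= lr * F.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

The design point is the split itself: everything machine-specific lives in the small retrained head, so the expensive extractor can be reused where disruptive data are scarce.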
A fuzzy extractor can extract an almost uniform random string from a noisy source with enough entropy, such as biometric data. To reproduce an identical key from repeated readings of biometric data, the fuzzy extractor generates helper data and a random string from the biometric data and uses the helper data to reproduce the random string from a second reading. In 2013, Fuller et al. proposed a computational fuzzy extractor based on the learning with errors problem. Their construction, however, can tolerate only a sub-linear fraction of errors and has an inefficient decoding algorithm, which causes the reproduction time to increase significantly. In 2016, Canetti et al. proposed a fuzzy extractor for inputs from low-entropy distributions based on a strong primitive called a digital locker. However, their construction requires an excessive amount of storage space for the helper data, which is stored on the authentication server. Based on these observations, we propose a new efficient computational fuzzy extractor with small helper data. Our scheme supports reusability and robustness, security notions that must be satisfied for a fuzzy extractor to serve as a secure authentication method in practice. It also reveals no information about the biometric data and, thanks to a new decoding algorithm, can tolerate linear errors. We present a formal security proof for the proposed fuzzy extractor based on the non-uniform learning with errors problem. Furthermore, we analyze the performance of our scheme and provide parameter sets that meet the security requirements. Our implementation and analysis show that our scheme outperforms previous fuzzy extractor schemes in the efficiency of the generation and reproduction algorithms, as well as in the size of the helper data.
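The generate/reproduce interface the abstract refers to can be illustrated with the textbook code-offset construction: the secret is hidden by XOR-ing its error-correcting-code encoding with the biometric reading, and decoding corrects the noise of a second reading. This sketch uses a simple repetition code and is not the paper's (non-uniform) LWE-based scheme; `gen`, `rep`, and `REP` are illustrative names.

```python
import hashlib
import numpy as np

REP = 5  # repetition-code length; corrects up to 2 flipped bits per block

def gen(w: np.ndarray, rng):
    """Gen: from biometric bits w, derive (key, helper) via code-offset."""
    m = len(w) // REP
    k = rng.integers(0, 2, m, dtype=np.uint8)    # random secret bits
    c = np.repeat(k, REP)                        # repetition-code codeword
    helper = c ^ w[:m * REP]                     # helper data: c XOR w
    key = hashlib.sha256(k.tobytes()).hexdigest()
    return key, helper

def rep(w_noisy: np.ndarray, helper: np.ndarray) -> str:
    """Rep: recover the same key from a noisy re-reading w' plus helper."""
    c = helper ^ w_noisy[:len(helper)]           # noisy codeword c XOR (w XOR w')
    blocks = c.reshape(-1, REP)
    k = (blocks.sum(axis=1) > REP // 2).astype(np.uint8)  # majority decode
    return hashlib.sha256(k.tobytes()).hexdigest()
```

A repetition code only tolerates a constant error rate per block and makes the helper data as long as the reading itself, which is exactly the kind of overhead the paper's construction is designed to avoid.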
To address the susceptibility of the visual background extractor (ViBe) algorithm to noise during moving-object detection, this work incorporates two-frame differencing into ViBe's foreground-detection stage and proposes an improved ViBe algorithm that fuses two-frame difference information (ViBe with two-frame differencing, ViBe-TD). First, a single-threshold form of ViBe (S-ViBe) detection is designed to prepare for information fusion; second, a logistic regression model fuses the two-frame difference and S-ViBe detection information at each pixel; finally, the two kinds of detection information are combined to decide whether a pixel belongs to the foreground. Experimental results on videos from four different scenes show that ViBe-TD achieves an average precision of 0.932, an average recall of 0.785, and an average F1 score of 0.842. Compared with the original algorithm, ViBe-TD improves these metrics by 0.158 on average, demonstrating good detection performance.
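The per-pixel fusion step above can be sketched as follows. This is a simplified illustration: the logistic weights are hand-set rather than fitted by regression as in the paper, the "S-ViBe" cue is reduced to a single background sample per pixel (real ViBe keeps a sample set), and all thresholds and names are assumptions.

```python
import numpy as np

def logistic_fusion(diff_cue, vibe_cue, w=(4.0, 4.0), b=-4.0):
    """Fuse two binary per-pixel foreground cues with a logistic model.
    Weights and bias here are illustrative; the paper fits them by
    logistic regression."""
    z = w[0] * diff_cue + w[1] * vibe_cue + b
    return 1.0 / (1.0 + np.exp(-z))

def detect_foreground(frame_prev, frame_curr, background_sample,
                      diff_thresh=15, vibe_thresh=20, p_thresh=0.5):
    # Cue 1: two-frame differencing against the previous frame.
    diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int)) > diff_thresh
    # Cue 2: single-threshold distance to one stored background sample
    # (a stand-in for the S-ViBe decision).
    vibe = np.abs(frame_curr.astype(int) - background_sample.astype(int)) > vibe_thresh
    p = logistic_fusion(diff.astype(float), vibe.astype(float))
    return p > p_thresh
```

With these particular weights the fusion behaves like a soft AND: a pixel is declared foreground only when both cues agree, which is one way such a combination can suppress the single-cue noise the paper targets.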
Funding: supported by the Metaverse Lab Program funded by the Ministry of Science and ICT (MSIT), and the Korea Radio Promotion Association (RAPA).
Funding: Project supported by the National Key R&D Program of China (Grant No. 2022YFE03040004) and the National Natural Science Foundation of China (Grant No. 51821005).
Funding: supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00518, Blockchain privacy preserving techniques based on data encryption).