Funding: Supported by the Incheon National University Research Grant in 2017.
Abstract: Video prediction is the problem of generating future frames by exploiting the spatiotemporal correlation in a past frame sequence. It is one of the crucial issues in computer vision and has many real-world applications, mainly focused on predicting future scenarios in order to avoid undesirable outcomes. However, modeling future image content and object motion is challenging due to the dynamic evolution and complexity of the scene, such as occlusions, camera movements, delay, and illumination changes. Direct frame synthesis and optical-flow estimation are the common approaches, but prior work has mostly relied on only one of them. Both methods have limitations: direct frame synthesis usually produces blurry predictions due to complex pixel distributions in the scene, while optical-flow estimation usually produces artifacts under large object displacements or occlusions in the clip. In this paper, we construct a deep neural network, the Frame Prediction Network (FPNet-OF), with multiple-branch inputs (optical flow and original frames) to predict the future video frame by adaptively fusing the predicted object motion with the future frame generator. The key idea is to jointly optimize direct RGB frame synthesis and dense optical-flow estimation to obtain a superior video prediction network. Using various real-world datasets, we experimentally verify that our proposed framework produces higher-quality video frames than other state-of-the-art frameworks.
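The architecture described above, two input branches (past RGB frames and dense optical flow) whose features are fused before a frame decoder, can be sketched roughly as follows. This is a minimal illustration, not the authors' FPNet-OF: the `FusionPredictor` name, the layer sizes, and the simple concatenation-based fusion are assumptions made for clarity.

```python
# Minimal sketch of a two-branch future-frame predictor (assumed layout, not
# the authors' FPNet-OF): one encoder consumes stacked past RGB frames, the
# other consumes stacked optical-flow fields; their features are fused by
# concatenation and decoded into the next RGB frame.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class FusionPredictor(nn.Module):
    def __init__(self, n_past=4):
        super().__init__()
        # RGB branch: n_past frames * 3 channels
        self.rgb_enc = nn.Sequential(conv_block(3 * n_past, 64), conv_block(64, 128))
        # Flow branch: (n_past - 1) flow fields * 2 channels (u, v)
        self.flow_enc = nn.Sequential(conv_block(2 * (n_past - 1), 64), conv_block(64, 128))
        # Decoder maps the fused features back to a single RGB frame
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(256, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, past_frames, past_flows):
        fused = torch.cat([self.rgb_enc(past_frames), self.flow_enc(past_flows)], dim=1)
        return self.dec(fused)

# Example: predict frame t+1 from 4 past frames and 3 flow fields (128x128).
model = FusionPredictor(n_past=4)
frames = torch.randn(1, 12, 128, 128)  # 4 RGB frames stacked along channels
flows = torch.randn(1, 6, 128, 128)    # 3 flow fields stacked along channels
next_frame = model(frames, flows)      # -> (1, 3, 128, 128)
```

In a joint-optimization setup of this kind, a training loss would typically combine a reconstruction term on the synthesized frame with a term on the estimated flow; the weighting between the two is a design choice not specified here.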
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61811530281 and 61861136009), the Guangdong Regional Joint Foundation (No. 2019B1515120076), and the Fundamental Research Funds for the Central Universities.
Abstract: In this paper, a novel compression framework based on 3D point cloud data is proposed for telepresence. It consists of two parts. The first part removes spatial redundancy: a robust Bayesian framework is designed to track the human motion, and the 3D point cloud data of the human body is acquired using the tracked 2D box. The second part removes temporal redundancy of the 3D point cloud data. The temporal redundancy between point clouds is removed using motion vectors: for each cluster in the current frame, the most similar cluster in the previous frame is found by comparing cluster features, and the cluster in the current frame is replaced by a motion vector to compress the current frame. First, the B-SHOT (binary signature of histograms of orientations) descriptor is applied to represent point features for matching corresponding points between two frames. Second, the K-means algorithm is used to generate clusters, because there are many unsuccessfully matched points in the current frame. The matching operation is then exploited to find the corresponding clusters between the point cloud data of the two frames. Finally, the cluster information in the current frame is replaced by the motion vector to compress the current frame, and the unsuccessfully matched clusters in the current frame, together with the motion vectors, are transmitted to the remote end. In order to reduce the calculation time of the B-SHOT descriptor, we introduce an octree structure into the B-SHOT descriptor. In particular, in order to improve the robustness of the matching operation, we design a cluster feature to estimate the similarity between two clusters. Experimental results show the better performance of the proposed method, with lower calculation time and a higher compression ratio. The proposed method achieves a compression ratio of 8.42 and a delay time of 1228 ms, compared with a compression ratio of 5.99 and a delay time of 2163 ms for the octree-based compression method under similar distortion rates.
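The temporal-redundancy step, clustering the current frame, matching each cluster to the most similar cluster in the previous frame by a compact feature, and transmitting only a motion vector for matched clusters, can be illustrated roughly as below. This sketch substitutes a simple centroid-and-extent feature with scikit-learn K-means for the paper's B-SHOT-based matching; the cluster count, the feature definition, and the distance threshold are all illustrative assumptions.

```python
# Rough sketch of temporal-redundancy removal between two point cloud frames:
# cluster the current cloud, match each cluster to the previous frame's
# clusters via a compact feature, and keep only a motion vector for matched
# clusters. The centroid + extent feature and the matching threshold are
# illustrative assumptions, not the paper's B-SHOT-based design.
import numpy as np
from sklearn.cluster import KMeans

def make_clusters(points, n_clusters=32):
    """Partition an (N, 3) point cloud into clusters and return their member points."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(points)
    return [points[labels == k] for k in range(n_clusters)]

def cluster_feature(cluster):
    """Compact per-cluster feature: centroid and axis-aligned extent (6-D)."""
    return np.concatenate([cluster.mean(axis=0), cluster.max(axis=0) - cluster.min(axis=0)])

def encode_frame(prev_clusters, curr_clusters, max_dist=0.05):
    """For each current cluster, emit a motion vector if a similar previous
    cluster exists; otherwise emit the raw cluster points."""
    prev_feats = np.stack([cluster_feature(c) for c in prev_clusters])
    motion_vectors, raw_clusters = [], []
    for idx, cluster in enumerate(curr_clusters):
        dists = np.linalg.norm(prev_feats - cluster_feature(cluster), axis=1)
        best = int(dists.argmin())
        if dists[best] < max_dist:
            # Matched: the motion vector is the displacement of cluster centroids.
            mv = cluster.mean(axis=0) - prev_clusters[best].mean(axis=0)
            motion_vectors.append((idx, best, mv))
        else:
            # Unmatched: the cluster itself must be transmitted to the remote end.
            raw_clusters.append((idx, cluster))
    return motion_vectors, raw_clusters

# Example on synthetic data: the current frame is the previous frame shifted slightly.
rng = np.random.default_rng(0)
prev_pts = rng.random((2000, 3))
curr_pts = prev_pts + np.array([0.01, 0.0, 0.0])
mv, raw = encode_frame(make_clusters(prev_pts), make_clusters(curr_pts))
print(len(mv), "clusters encoded as motion vectors,", len(raw), "sent raw")
```

The compression gain comes from the fact that a matched cluster is replaced by a 3-D vector instead of its full point set; only unmatched clusters and the motion vectors need to be transmitted, which is consistent with the pipeline the abstract describes.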