Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of Ministry of Education of China (No. 2232020D-33).
Abstract: Deep learning (DL) has shown superior performance on various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is widely used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction capabilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To address this problem, a hypergraph regularized deep autoencoder (HGAE) is proposed for unmixing. First, the traditional AE architecture is adapted into an unsupervised unmixing framework. Second, hypergraph learning is employed to reformulate the loss function, which expresses the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L1/2 norm is further used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method outperforms several state-of-the-art unmixing algorithms.
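The abstract describes a three-term loss: reconstruction error, a hypergraph smoothness term over the abundances, and an L1/2 sparsity penalty. The following is a minimal PyTorch sketch of one plausible formulation; the layer sizes, the softmax abundance constraint, the precomputed hypergraph Laplacian, and the weights mu and lam are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: HGAE specifics (architecture, hypergraph
# construction, hyperparameters) are not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnmixingAE(nn.Module):
    """Autoencoder whose bottleneck is an abundance map.

    The encoder maps each pixel spectrum (n_bands) to n_endmembers
    abundances; softmax enforces non-negativity and sum-to-one. The
    decoder is one bias-free linear layer whose weight matrix serves
    as the estimated endmember matrix.
    """
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, n_endmembers),
        )
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        abundances = F.softmax(self.encoder(x), dim=-1)
        return self.decoder(abundances), abundances

def hgae_loss(x, x_hat, abundances, laplacian, mu=0.1, lam=0.01, eps=1e-8):
    """Reconstruction + hypergraph smoothness + L1/2 sparsity.

    laplacian: (N, N) hypergraph Laplacian built from spatial
    neighborhoods (assumed precomputed; its construction is a design
    choice of the paper not spelled out in the abstract).
    """
    recon = F.mse_loss(x_hat, x)
    # tr(A^T L A): pixels sharing a hyperedge are pushed toward
    # consistent abundances
    smooth = torch.trace(abundances.T @ laplacian @ abundances) / x.shape[0]
    # L1/2 quasi-norm (sum of square roots) promotes sparser
    # abundances than an L1 penalty would
    sparsity = (abundances + eps).sqrt().sum() / x.shape[0]
    return recon + mu * smooth + lam * sparsity
```

After training, the decoder weight matrix would be read off as the endmember estimates, while the encoder outputs give the per-pixel abundances.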
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 61876054) and the National Key Research and Development Program of China (Grant No. 2019YFC0117400).
Abstract: Hyperspectral unmixing aims to recover the pure spectra of distinct substances (endmembers) and their fractional abundances from highly mixed pixels. In this paper, a deep unmixing network framework is designed to deal with noise disturbance. It contains two parts: a denoising three-dimensional convolutional autoencoder (denoising 3D CAE), which recovers data from noisy input, and a restrictive non-negative sparse autoencoder (NNSAE), which incorporates a hypergraph regularizer as well as an l2,1-norm sparsity constraint to improve the unmixing performance. The deep denoising 3D CAE network was constructed for noisy data retrieval and, by training on corrupted data, gained a strong capacity for efficiently extracting the principal and robust local features in the spatial and spectral domains. Furthermore, a part-based non-negative sparse autoencoder with an l2,1-norm penalty was concatenated, and a hypergraph regularizer was carefully designed to represent the similarity of neighboring pixels in the spatial dimensions. Comparative experiments were conducted on synthetic and real-world data, both of which demonstrate the effectiveness and robustness of the proposed network.
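Two ingredients distinguish this second network from the first: denoising training of a 3D convolutional autoencoder on corrupted patches, and an l2,1-norm penalty on the abundance matrix. The toy PyTorch sketch below illustrates both under assumed patch shapes, noise level, and layer sizes; none of these names or hyperparameters come from the paper.

```python
# Toy sketch: a minimal 3D conv autoencoder trained as a denoiser,
# plus the l2,1-norm penalty. All shapes and constants are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def l21_norm(a: torch.Tensor) -> torch.Tensor:
    """l2,1 norm of an (N pixels x P endmembers) abundance matrix:
    an l2 norm over each row, then a sum over rows, which drives a
    pixel's endmember contributions toward zero jointly."""
    return a.norm(dim=1).sum()

# Tiny stand-in for the deep denoising 3D CAE: convolutions act over
# (bands, height, width) so spatial and spectral structure are mixed.
cae = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)

def denoising_step(model, clean_patch, noise_std=0.1):
    """One denoising-training step: corrupt the input with Gaussian
    noise and reconstruct the clean target, so the model learns
    noise-robust spatial-spectral features."""
    noisy = clean_patch + noise_std * torch.randn_like(clean_patch)
    return F.mse_loss(model(noisy), clean_patch)

if __name__ == "__main__":
    patch = torch.rand(4, 1, 32, 8, 8)  # (batch, chan, bands, H, W)
    print(denoising_step(cae, patch).item())
    print(l21_norm(torch.rand(100, 5)).item())  # 100 pixels, 5 endmembers
```

In the paper's cascade, the denoised output of the 3D CAE would feed the NNSAE, whose loss combines the l2,1 penalty above with a hypergraph regularizer of the kind sketched for the first abstract.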