Multimodal medical image fusion has attained immense popularity in recent years because it provides a robust technology for clinical diagnosis. It fuses multiple images into a single image, improving image quality by retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that uses one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarity of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further applies edge-preserving processing that combines linear low-pass filtering with a non-linear technique, enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images and fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method requires less computational complexity and execution time while improving diagnostic accuracy. Owing to the lower complexity of the fusion algorithm, the method is efficient in practical applications. The results reveal that the proposed method surpasses the latest state-of-the-art methods in terms of detailed information, edge contours, and overall contrast.
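The cross-bilateral filtering step described above can be sketched in a few lines: the kernel combines a spatial weight (geometric closeness) with a range weight (gray-level similarity) taken from the other image, and the detail layer is the difference between the original and the filtered result. This is a minimal pure-Python illustration; the function names, window radius, and sigma values are illustrative assumptions, not taken from the paper.

```python
import math

def cross_bilateral_filter(guide, target, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Filter `target` using range weights computed from `guide`:
    geometric closeness and gray-level similarity both enter the kernel,
    so edges present in the guide are not smoothed away."""
    h, w = len(guide), len(guide[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        dr = guide[ny][nx] - guide[y][x]
                        wr = math.exp(-(dr * dr) / (2 * sigma_r ** 2))
                        acc += ws * wr * target[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out

def detail_layer(image, filtered):
    """Detail image = original minus CBF output, as described above."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(image, filtered)]
```

On a constant image pair the filter returns the target unchanged and the detail layer is zero; near an edge in the guide, the range weight suppresses averaging across that edge.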
Recently, there have been many uses for digital image processing, and image fusion has become a prominent application in the domain of image processing. To create one final image that proves more informative and helpful than the original input images, image fusion merges two or more initial images of the same item. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for the sake of human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly line robots, with image quality varying depending on the application. This research paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), IHS (Intensity, Hue, Saturation), wavelet transform, discrete cosine transform (DCT), dual-tree complex wavelet transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in an extremely efficient manner. More precisely, in imaging techniques, the depth-of-field constraint precludes images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. These wavelet decomposition and recomposition techniques enable the method to make use of existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach initially extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Image performance is improved by segmenting the images with the K-Means algorithm. The segmentation method aids in identifying specific regions of interest, using Particle Swarm Optimization (PSO) for trait selection and XGBoost for data classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and providing good objective indicators.
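The wavelet decomposition/recomposition idea behind the multi-focus method can be illustrated with a single-level 1-D Haar transform and a common fusion rule: average the approximation coefficients and keep the larger-magnitude detail coefficient, since stronger details indicate the sharper (in-focus) source. A minimal sketch under that assumption; the paper's actual wavelet and fusion rules may differ.

```python
def haar_decompose(sig):
    """Single-level 1-D Haar transform: pairwise averages and differences."""
    approx = [(sig[2 * i] + sig[2 * i + 1]) / 2 for i in range(len(sig) // 2)]
    detail = [(sig[2 * i] - sig[2 * i + 1]) / 2 for i in range(len(sig) // 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Exact inverse of haar_decompose."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(sig_a, sig_b):
    """Fuse two signals: average approximations, pick the larger-magnitude
    detail coefficient (the sharper source wins at each position)."""
    aA, dA = haar_decompose(sig_a)
    aB, dB = haar_decompose(sig_b)
    aF = [(x + y) / 2 for x, y in zip(aA, aB)]
    dF = [x if abs(x) >= abs(y) else y for x, y in zip(dA, dB)]
    return haar_reconstruct(aF, dF)
```

For example, fusing a signal containing a sharp transition with a smoothed version of it recovers the sharp transition, because the max-magnitude rule keeps the stronger detail coefficients.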
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely splitting two features of the input images: structural and textural information. The detail and base layers are then combined using a sum-based fusion rule that maximizes the noise-filtering contrast level by effectively preserving most of the structural and textural details. NSCT is utilized to further decompose these images into their low- and high-frequency coefficients. These coefficients are then combined independently using a principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results on a publicly accessible dataset, together with comparative studies on three pairs of medical images from different modalities and health conditions, demonstrate the advantage of the proposed technique. Our approach offers better visual and robust performance with better objective measurements, since it excellently preserves significant salient features and precision without producing abnormal information in both qualitative and quantitative analysis.
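The anisotropic-diffusion split into base and detail layers can be illustrated with the classic Perona-Malik scheme on a 1-D signal: diffusion smooths flat regions while the conduction coefficient falls off across strong edges, and the detail layer is the residual. A hedged sketch; the parameter values and step count are illustrative, not from the study.

```python
import math

def perona_malik_step(sig, kappa=2.0, lam=0.2):
    """One explicit Perona-Malik diffusion step on a 1-D signal.
    Conduction c = exp(-(grad/kappa)^2) is near 1 in flat regions
    (smoothing) and near 0 across strong edges (edge preservation)."""
    out = sig[:]
    for i in range(1, len(sig) - 1):
        east = sig[i + 1] - sig[i]
        west = sig[i - 1] - sig[i]
        ce = math.exp(-(east / kappa) ** 2)
        cw = math.exp(-(west / kappa) ** 2)
        out[i] = sig[i] + lam * (ce * east + cw * west)
    return out

def base_detail_split(sig, steps=10):
    """Base layer = diffused signal; detail layer = residual."""
    base = sig
    for _ in range(steps):
        base = perona_malik_step(base)
    detail = [s - b for s, b in zip(sig, base)]
    return base, detail
```

A flat signal is a fixed point of the diffusion, so its detail layer is identically zero; texture shows up in the detail layer while large edges stay mostly in the base layer.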
Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images to increase clinical diagnosis accuracy. This fusion aims to improve image quality and preserve specific features. Medical image fusion methods generally draw on knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to fusing images: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm for fusing multimodal images, based on entropy optimization and the Sobel operator. The wavelet transform is used to split the input images into components over the low- and high-frequency domains. Then, two fusion rules are used to obtain the fused images. The first rule, based on the Sobel operator, is used for the high-frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low-frequency components. The proposed algorithm is applied to images related to central nervous system diseases. The experimental results show that the proposed algorithm outperforms some recent methods in terms of brightness level, contrast, entropy, gradient, visual information fidelity for fusion (VIFF), and Feature Mutual Information (FMI) indices.
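The two fusion rules can be sketched directly: a Sobel-magnitude rule that keeps, at each position, the high-frequency coefficient from the source with the stronger edge response, and the Shannon entropy that the PSO-based rule maximizes on the low-frequency side. A minimal pure-Python illustration; the neighborhood handling and names are assumptions.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of a gray-level histogram (the quantity the
    PSO-based low-frequency rule optimizes)."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def sobel_magnitude(img, y, x):
    """Sobel gradient magnitude at an interior pixel."""
    gx = ((img[y - 1][x + 1] + 2 * img[y][x + 1] + img[y + 1][x + 1]) -
          (img[y - 1][x - 1] + 2 * img[y][x - 1] + img[y + 1][x - 1]))
    gy = ((img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]) -
          (img[y - 1][x - 1] + 2 * img[y - 1][x] + img[y - 1][x + 1]))
    return math.hypot(gx, gy)

def fuse_high_freq(a, b):
    """High-frequency rule: keep the coefficient from the source
    with the stronger Sobel edge response (interior pixels only)."""
    h, w = len(a), len(a[0])
    out = [row[:] for row in a]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if sobel_magnitude(b, y, x) > sobel_magnitude(a, y, x):
                out[y][x] = b[y][x]
    return out
```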
Medical image fusion is the synthesizing technology for fusing multi-modal medical information using mathematical procedures to generate a better visualization of the image content and high-quality image output. Medical image fusion plays an indispensable role in providing solutions for complicated medical predicaments; while recent research results show an enhanced affinity towards preserving medical image details, color distortion and halo artifacts remain unaddressed. This paper proposes a novel method of fusing computed tomography (CT) and magnetic resonance imaging (MRI) using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). This model satisfies the need for precise integration of medical images of different modalities, which is an essential requirement in the diagnostic process for clinical activities and for treating patients accordingly. In the proposed model, the medical image is decomposed using the NSCT, which is an efficient shift-invariant decomposition transform. JSR is exercised to extract the common features of the medical image for the fusion process. The performance analysis of the proposed system proves that the proposed image fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. The comparative analysis proves that the proposed technique exhibits better quality than existing medical image fusion practices.
An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to a person's health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI). CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images alone has little effect on enhancing the accuracy of brain tumor diagnosis. Therefore, machine learning methods have been adopted to diagnose brain tumors in recent years. This paper develops a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts with preprocessing the images to reduce noise. Then, fusion rules are applied to obtain the fused image, and a segmentation algorithm is employed to isolate the tumor region from the background. Finally, a machine learning classifier classifies the brain images into benign and malignant tumors. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes are provided, and the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
Medical image fusion has been developed as an efficient assistive technology in various clinical applications such as medical diagnosis and treatment planning. To address the insufficient protection of image contour and detail information by traditional image fusion methods, a new multimodal medical image fusion method is proposed. This method first uses the non-subsampled shearlet transform to decompose the source image into high- and low-frequency subband coefficients, then uses the latent low-rank representation algorithm to fuse the low-frequency subband coefficients, and applies an improved PAPCNN algorithm to fuse the high-frequency subband coefficients. Finally, based on the automatic setting of parameters, the time decay factor αe is configured by an optimization method. The experimental results show that the proposed method solves the problems of difficult parameter setting and insufficient detail protection in traditional PCNN-based fusion, while achieving great improvement in visual quality and objective evaluation indicators.
This study proposes a road crack detection method based on infrared image fusion technology. By analyzing the characteristics of road crack images, the method uses a variety of infrared image fusion techniques to process different types of images. This allows the detection of road cracks, which not only reduces the professional requirements for inspectors but also improves the accuracy of road crack detection. Building on infrared image processing technology and an in-depth analysis of infrared image features, the proposed method can accurately identify the location, direction, length, and other characteristic information of road cracks. Experiments showed that the method performs well and can meet the requirements of road crack detection.
This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintaining the appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks and directly affects the leaching of useful components. In this study, the pore throat, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images at different resolutions. A multi-scale, multi-mineral digital core model of low-permeability uranium-bearing sandstone was reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex, with multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channel in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements, which increases the difficulty of uranium leaching. Clays and a large amount of ankerite cement fill the primary and secondary pores and pore throats of the low-permeability uranium-bearing sandstone, which significantly reduces the porosity available to movable fluid and results in low overall permeability of the cores. The multi-scale, multi-mineral digital core proposed in this study provides a basis for characterizing the macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and supports a better understanding of its seepage characteristics.
In image fusion, measuring the local character and clarity of an image is called activity measurement. The traditional measurement is determined only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect local clarity. Therefore, this paper proposes a novel construction method for activity measurement. First, it applies wavelet decomposition to the fusion source images and then utilizes the high- and low-frequency wavelet coefficients synthetically, taking the normalized variance as the weight of the high-frequency energy. Second, it calculates the measurement from the weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. To illustrate the superiority of the new method, three kinds of assessment indicators are provided. The experimental results show that, compared with traditional methods, the new method reduces blurring and improves the indicator values. Therefore, it has considerable advantages for practical application.
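One plausible reading of the weighted-energy activity measure is: compute the normalized variance of the low-frequency block (normalizing by the squared mean here is an assumption, not stated in the abstract) and use it to weight the high-frequency energy. A sketch under that reading, with hypothetical function names:

```python
def normalized_variance(block):
    """Variance of a low-frequency block, normalized by the squared mean.
    The normalization choice is an illustrative assumption."""
    vals = [v for row in block for v in row]
    mu = sum(vals) / len(vals)
    var = sum((v - mu) ** 2 for v in vals) / len(vals)
    return var / (mu * mu) if mu else var

def activity_measure(low_block, high_coeffs):
    """Weighted-energy activity: high-frequency energy weighted by the
    normalized variance of the corresponding low-frequency block."""
    energy = sum(h * h for h in high_coeffs)
    return normalized_variance(low_block) * energy
```

The fusion rule would then keep, at each position, the coefficients of the source with the larger activity value.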
The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different information of the details. But in the low-frequency component, very few coefficients lie around the zero value, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so directly fusing the low-frequency component is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detailed information at multiple scales and from diverse directions. The combination of the two methods is conducive to acquiring more characteristics and more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer infrared target fusion results than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and the time consumption is significantly reduced.
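The top-hat step relies on the classic white top-hat transform: the signal minus its morphological opening, which isolates bright features narrower than the structuring element. A minimal 1-D sketch with a flat structuring element (window size is illustrative); the paper's "new type" of top-hat transform presumably refines this baseline.

```python
def erode(sig, k):
    """Grayscale erosion with a flat window of size k (minimum filter)."""
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    """Grayscale dilation with a flat window of size k (maximum filter)."""
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_top_hat(sig, k=3):
    """White top-hat: signal minus its opening (erosion then dilation).
    Extracts bright salient features narrower than the window."""
    opening = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opening)]
```

A single bright spike narrower than the window survives the top-hat intact, while flat background is removed entirely.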
The rise of urban traffic flow highlights the growing importance of traffic safety. To reduce the occurrence rate of traffic accidents and improve the front-vision information available to vehicle drivers, a method to improve the driver's visual information in low-visibility conditions is put forward based on infrared and visible image fusion. The wavelet image fusion algorithm is adopted to decompose the image into low-frequency approximation components and high-frequency detail components. The low-frequency component contains information representing gray-value differences. The high-frequency components contain the detail information of the image, whose quality is frequently assessed by the gray standard deviation. To extract the feature information of the low-frequency and high-frequency components with different emphases, different fusion operators are used for each. In processing the low-frequency component, a fusion rule of weighted regional energy proportion is adopted to improve the brightness of the image, and a fusion rule of weighted regional proportion of standard deviation is used in all three high-frequency components to enhance the image contrast. Experiments on the fusion of infrared and visible-light images demonstrate that this image fusion method can effectively improve image brightness and contrast, and it is suitable for vision enhancement of low-visibility images.
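Both fusion rules above are proportion-weighted averages; only the regional statistic differs (energy for the low-frequency component, standard deviation for the high-frequency components). A minimal sketch, with block handling simplified to whole subbands; names and the exact weighting are illustrative assumptions.

```python
def regional_energy(block):
    """Sum of squared values over a region."""
    return sum(v * v for row in block for v in row)

def regional_std(block):
    """Standard deviation over a region."""
    vals = [v for row in block for v in row]
    mu = sum(vals) / len(vals)
    return (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5

def proportion_fuse(a, b, stat):
    """Weight each source by its share of the regional statistic."""
    sa, sb = stat(a), stat(b)
    wa = sa / (sa + sb) if sa + sb else 0.5
    return [[wa * x + (1 - wa) * y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# low-frequency: weighted regional energy proportion (brightness)
def fuse_low(a, b):
    return proportion_fuse(a, b, regional_energy)

# high-frequency: weighted regional standard-deviation proportion (contrast)
def fuse_high(a, b):
    return proportion_fuse(a, b, regional_std)
```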
Infrared-visible image fusion plays an important role in multi-source data fusion, with the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest in infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, target and background regions are fused using a multi-scale transform. Experimental results obtained on public data for comparison and evaluation demonstrate that the proposed SRF has potential benefits over other methods.
Multimodal medical image fusion is a powerful tool for diagnosing diseases in the medical field. The main objective is to capture the relevant information from the input images in a single output image, which plays an important role in clinical applications. In this paper, an image fusion technique for multimodal medical images is proposed based on the Non-Subsampled Contourlet Transform (NSCT). The proposed technique uses the NSCT to decompose the images into lowpass and highpass subbands. The lowpass and highpass subbands are fused using mean-based and variance-based fusion rules, respectively. The reconstructed image is obtained by taking the Inverse Non-Subsampled Contourlet Transform (INSCT) of the fused subbands. The experimental results on six pairs of medical images are compared in terms of entropy, mean, standard deviation, and Q^(AB/F) as performance parameters. They reveal that the proposed image fusion technique outperforms existing image fusion techniques in terms of both quantitative and qualitative outcomes. Compared with conventional methods on the six pairs of medical images, the percentage improvement of the proposed method is 0%-40% in entropy, 3%-42% in mean, 1%-42% in standard deviation, and 0.4%-48% in Q^(AB/F).
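The mean-based and variance-based subband rules can be sketched simply: average the lowpass subbands pixel-wise and, for the highpass subbands, select the one with the larger variance. This is a hedged, whole-subband simplification; the paper may apply the rules block- or coefficient-wise.

```python
def fuse_lowpass(a, b):
    """Mean rule for lowpass subbands: pixel-wise average."""
    return [[(x + y) / 2 for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_highpass(a, b):
    """Variance rule for highpass subbands: keep the subband with
    the larger variance (more detail activity)."""
    def var(m):
        vals = [v for row in m for v in row]
        mu = sum(vals) / len(vals)
        return sum((v - mu) ** 2 for v in vals) / len(vals)
    return a if var(a) >= var(b) else b
```

The fused subbands would then be passed to the inverse transform (INSCT in the paper) to reconstruct the output image.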
The speed and quality of image fusion always restrain each other, and real-time image fusion is one of the problems that urgently needs to be studied and solved. The windowing processing technology for image fusion proposed in this paper can solve this problem to a certain extent. The windowing rules are put forward, and the applicable scope of windowing fusion and the calculation method for the maximum windowing area are determined. The results of windowing fusion are analyzed, verified, and compared to confirm the feasibility of this technology.
Due to the limited depth of field of digital single-lens reflex cameras, the scene content within a limited distance from the imaging plane remains in focus while objects closer to or further from the point of focus appear blurred (out of focus) in the image. Multi-focus image fusion can be used to reconstruct a fully focused image from two or more partially focused images of the same scene. In this paper, a new Fuzzy Based Hybrid Focus Measure (FBHFM) for multi-focus image fusion is proposed. Selecting the optimal block size is a critical step in multi-focus image fusion, and the Particle Swarm Optimization (PSO) algorithm is used to find the optimal block size for extracting focus-measure features. After finding the optimal blocks, three focus measures (Sum of Modified Laplacian, Gray Level Variance, and Contrast Visibility) are extracted and combined using an intelligent fuzzy technique. Fuzzy-based hybrid intelligent focus values are estimated using the contrast visibility measure to generate the focused image. Different sets of multi-focus images have been used in detailed experimentation, and the results are compared with state-of-the-art existing techniques such as the Genetic Algorithm (GA), Principal Component Analysis (PCA), Laplacian pyramid discrete wavelet transform (DWT), and aDWT for image fusion. The proposed method is found to perform well compared to existing methods.
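Of the three focus measures, the Sum of Modified Laplacian is representative: it sums absolute second differences in both directions, so in-focus (high-contrast) blocks score higher than blurred ones. A minimal sketch over interior pixels; the fuzzy combination of the three measures is not shown here.

```python
def sum_modified_laplacian(img):
    """Sum of Modified Laplacian over interior pixels, a standard
    sharpness (focus) measure: in-focus blocks score higher."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total += (abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1]) +
                      abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]))
    return total
```

A block containing a sharp bright point scores high, while a perfectly flat (defocused-looking) block scores zero, which is what lets the fusion pick the in-focus source per block.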
Melanoma, due to its higher mortality rate, is considered one of the most pernicious types of skin cancer, mostly affecting white populations. It has been reported a number of times, and is now widely accepted, that early detection of melanoma increases the chances of the subject's survival. Computer-aided diagnostic systems help experts diagnose skin lesions at earlier stages using machine learning techniques. In this work, we propose a framework that accurately segments, and later classifies, the lesion using improved image segmentation and fusion methods. The proposed technique passes an image through two methods simultaneously: one is the weighted visual saliency-based method, and the second is improved HDCT-based saliency estimation. The resultant image maps are then fused using the proposed image fusion technique to generate a localized lesion region. The resultant binary image is mapped back to the RGB image and fed into an Inception-ResNet-V2 pre-trained model, trained by applying transfer learning. The simulation results show improved performance compared to several existing methods.
Abstract: Multimodal medical image fusion has gained immense popularity in recent years as a robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further applies edge-preserving processing that combines linear low-pass filtering with a non-linear technique, enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images by estimating the strength of horizontal and vertical details, and these weights are then applied to the original input images to produce the final fusion result. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method offers lower computational complexity and execution time while improving diagnostic accuracy; owing to this lower complexity, the method is efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in providing detailed information, edge contours, and overall contrast.
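A minimal sketch of the cross-bilateral filtering step described above: the spatial weights depend on geometric closeness, while the range (gray-level similarity) weights are computed from the *other* (guide) image, so edges present in the guide are preserved; the detail layer is then the source minus its filtered output. The window radius and sigma values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def cross_bilateral_filter(img, guide, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Cross-bilateral filter: smooths `img`, but gray-level similarity
    weights come from `guide`, preserving the guide's edges."""
    img = img.astype(float)
    guide = guide.astype(float)
    h, w = img.shape
    pad_i = np.pad(img, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    # spatial Gaussian weights over the (2r+1)x(2r+1) window
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            win_i = pad_i[y:y + 2*radius + 1, x:x + 2*radius + 1]
            win_g = pad_g[y:y + 2*radius + 1, x:x + 2*radius + 1]
            # range weights from the guide image, not from img itself
            rng_w = np.exp(-(win_g - guide[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * win_i).sum() / wgt.sum()
    return out

def detail_layer(img, guide, **kw):
    """Detail image = original minus its cross-bilaterally filtered output."""
    return img.astype(float) - cross_bilateral_filter(img, guide, **kw)
```

On a constant image the filter is an identity (detail layer zero), while a strong step edge in the guide survives filtering almost untouched.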
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R346), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Recently, there have been many uses for digital image processing, and image fusion has become a prominent application in the imaging domain. To create one final image that proves more informative and helpful than the original input images, image fusion merges two or more initial images of the same object. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly-line robots, with image quality varying depending on the application. The paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (Hue, Intensity, Saturation), wavelet transform, discrete cosine transform (DCT), dual-tree complex wavelet transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in a highly efficient manner. More precisely, in imaging techniques, the depth-of-field constraint precludes images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. The use of these wavelet decomposition and recomposition techniques enables the method to reuse existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach first extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Image performance is improved by segmenting the images with the K-Means algorithm. The segmentation method aids in identifying specific regions of interest, using Particle Swarm Optimization (PSO) for trait selection and XGBoost for data classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and good objective indicators.
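The wavelet decomposition/recomposition idea can be sketched with a one-level 2-D Haar transform: decompose both sources, average the approximation band, keep the larger-magnitude detail coefficients, and invert. The max-absolute detail rule and the Haar filters are common assumptions here, not necessarily the paper's exact filter choice.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH).
    Assumes even image sides. Uses average/difference filters."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0    # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0    # row-wise difference
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: perfect reconstruction."""
    h2, w2 = ll.shape
    lo = np.empty((2 * h2, w2)); hi = np.empty((2 * h2, w2))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h2, 2 * w2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse_haar(a, b):
    """Average the approximation band; keep larger-magnitude details."""
    ca, cb = haar_dwt2(a), haar_dwt2(b)
    ll = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return haar_idwt2(ll, *details)
```

Because the forward/inverse pair reconstructs exactly, fusing an image with itself returns the image unchanged, which is a useful sanity check for any such pipeline.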
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Abstract: The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely separating two kinds of features: structural and textural information. The detail and base layers are then combined using a sum-based fusion rule that maximizes the noise-filtering contrast level while effectively preserving most of the structural and textural details. NSCT is utilized to further decompose these images into their low- and high-frequency coefficients. These coefficients are then combined independently using a principal component analysis / Karhunen-Loeve (PCA/KL) based fusion rule, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results demonstrate the advantage of the proposed technique on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities and health conditions. Our approach offers better visual and more robust performance with better objective measurements, since it excellently preserves significant salient features and precision without producing abnormal information in both qualitative and quantitative analysis.
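The base/detail split via anisotropic diffusion can be sketched with the classic Perona-Malik scheme: iterative smoothing whose conduction coefficient decays with gradient magnitude, so diffusion slows across strong edges. The iteration count, kappa, and lambda below are illustrative assumptions, and borders are treated periodically (via `np.roll`) for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion (sketch).
    Conduction g(d) = exp(-(d/kappa)^2) preserves strong edges.
    The result serves as the base layer; detail = img - base."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences in the four directions
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction per direction
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # lam <= 0.25 keeps the explicit scheme stable
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

A constant image passes through unchanged, and mild noise is smoothed out (its variance drops), which is exactly the base-layer behaviour the decomposition relies on.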
Abstract: Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images to increase clinical diagnosis accuracy. The fusion aims to improve image quality while preserving specific features. Medical image fusion methods generally draw on knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to image fusion: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm for fusing multimodal images based on entropy optimization and the Sobel operator. The wavelet transform is used to split the input images into components over the low- and high-frequency domains. Then, two fusion rules are used to obtain the fused images. The first rule, based on the Sobel operator, is used for the high-frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low-frequency components. The proposed algorithm is applied to images related to central nervous system diseases. The experimental results show that the proposed algorithm outperforms some recent methods in terms of brightness level, contrast, entropy, gradient, visual information fidelity for fusion (VIFF), and Feature Mutual Information (FMI) indices.
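The Sobel-based high-frequency rule can be sketched as follows: compute a gradient magnitude map for each source band and, at every pixel, keep the coefficient from the source with the stronger local gradient. This per-pixel max-gradient selection is an assumption standing in for the paper's exact rule.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid interior;
    border pixels get magnitude 0)."""
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            patch = img[dy:dy + h - 2, dx:dx + w - 2]
            gx += SOBEL_X[dy, dx] * patch
            gy += SOBEL_Y[dy, dx] * patch
    mag = np.zeros((h, w))
    mag[1:-1, 1:-1] = np.hypot(gx, gy)
    return mag

def fuse_high_freq(ca, cb):
    """High-frequency rule: per pixel, keep the coefficient whose
    source has the stronger Sobel gradient (ties favour ca)."""
    ma, mb = sobel_magnitude(ca), sobel_magnitude(cb)
    return np.where(ma >= mb, ca, cb)
```

Against a featureless band, a band containing an edge wins everywhere, so the edge structure is carried into the fused coefficients.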
Abstract: Medical image fusion is the technology of synthesizing multi-modal medical information using mathematical procedures to generate better visualization of the image content and a high-quality output image. Medical image fusion plays an indispensable role in providing solutions to complicated medical predicaments, yet recent research results, while showing an enhanced affinity towards preserving medical image details, leave color distortion and halo artifacts unaddressed. This paper proposes a novel method of fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). This model gratifies the need for precise integration of medical images of different modalities, an essential requirement in diagnosing and treating patients in clinical practice. In the proposed model, the medical image is decomposed using NSCT, an efficient shift-invariant decomposition transform. JSR is exercised to extricate the common features of the medical image for the fusion process. The performance analysis proves that the proposed image fusion technique is more efficient, provides better results, and offers a high level of distinctness by integrating the advantages of complementary images. The comparative analysis proves that the proposed technique exhibits better quality than existing medical image fusion practices.
Abstract: An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI). CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images alone has little effect on enhancing the accuracy of brain tumor diagnosis; therefore, machine learning methods have been adopted for brain tumor diagnosis in recent years. This paper develops a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts by preprocessing the images to reduce noise. Then, fusion rules are applied to get the fused image, and a segmentation algorithm is employed to isolate the tumor region from the background. Finally, a machine learning classifier classifies the brain images into benign and malignant tumors. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes are provided, and the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
Abstract: Medical image fusion has been developed as an efficient assistive technology in various clinical applications such as medical diagnosis and treatment planning. Aiming at the problem that traditional image fusion methods insufficiently protect image contour and detail information, a new multimodal medical image fusion method is proposed. This method first uses the non-subsampled shearlet transform to decompose the source image into high- and low-frequency subband coefficients, then uses the latent low-rank representation algorithm to fuse the low-frequency subband coefficients, and applies the improved PAPCNN algorithm to fuse the high-frequency subband coefficients. Finally, based on automatic parameter setting, the optimization of the time decay factor αe is carried out. The experimental results show that the proposed method solves the problems of difficult parameter setting and insufficient detail protection in traditional PCNN-based fusion, while achieving a great improvement in visual quality and objective evaluation indicators.
Abstract: This study proposes a road crack detection method based on infrared image fusion technology. By analyzing the characteristics of road crack images, the method uses a variety of infrared image fusion techniques to process different types of images, allowing road cracks to be detected in a way that not only reduces the professional requirements for inspectors but also improves detection accuracy. Building on infrared image processing technology and an in-depth analysis of infrared image features, the proposed method can accurately identify the location, direction, length, and other characteristics of road cracks. Experiments showed that the method performs well and can meet the requirements of road crack detection.
Funding: This work was supported by the National Natural Science Foundation of China (62075169, 62003247, 62061160370) and the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintaining the appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
Funding: This work was supported by the National Natural Science Foundation of China (No. 11775107) and the Key Projects of the Education Department of Hunan Province of China (No. 16A184).
Abstract: In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks, and directly affects the leaching of useful components. In this study, the pore throat, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images at different resolutions. A multi-scale, multi-mineral digital core model of low-permeability uranium-bearing sandstone was reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex, with multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channel in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements, which increases the difficulty of uranium leaching. Clays and a large amount of ankerite cement fill the primary and secondary pores and pore throats of the low-permeability uranium-bearing sandstone, which significantly reduces the movable-fluid porosity and results in low overall permeability of the cores. The multi-scale, multi-mineral digital core proposed in this study provides a basis for characterizing the macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and helps better understand its seepage characteristics.
Funding: Sponsored by the National Natural Science Foundation of China (Grant Nos. 61275010, 61201237) and the Fundamental Research Funds for the Central Universities (Grant Nos. HEUCFZ1129, HEUCF120805).
Abstract: In image fusion, measuring the local character and clarity of an image is called activity measurement. The traditional measurement is determined only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect local clarity. Therefore, this paper proposes a novel construction method for the activity measurement. First, it applies wavelet decomposition to the source images and then utilizes the high- and low-frequency wavelet coefficients jointly, taking the normalized variance as the weight of the high-frequency energy. Second, it calculates the measurement from the weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. To illustrate the superiority of the new method, three kinds of assessment indicators are provided. The experimental results show that, compared with traditional methods, the new method reduces blurring and improves the indicator values, giving it considerable advantages for practical application.
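The weighted-energy activity measure above can be sketched per block: weight the high-frequency energy of a block by the normalized variance of the corresponding low-frequency block, then select the block with the larger activity. The particular normalization (variance over variance plus squared mean) is my assumption; the paper's exact normalization may differ.

```python
import numpy as np

def block_activity(low_block, high_block, eps=1e-12):
    """Activity of one block: high-frequency energy weighted by the
    normalized variance of the matching low-frequency block.
    Normalization var/(var + mean^2) is a hypothetical choice."""
    low = low_block.astype(float)
    high = high_block.astype(float)
    var = low.var()
    norm_var = var / (var + low.mean() ** 2 + eps)
    energy = (high ** 2).sum()
    return norm_var * energy

def select_high(low_a, high_a, low_b, high_b):
    """Keep the high-frequency block from the source with larger activity."""
    if block_activity(low_a, high_a) >= block_activity(low_b, high_b):
        return high_a
    return high_b
```

With identical low-frequency context, the source holding the stronger detail coefficients wins the selection, which is the behaviour the weighted energy is meant to capture.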
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61402368), the Aerospace Support Fund, China (Grant No. 2017-HT-XGD), and the Aerospace Science and Technology Innovation Foundation, China (Grant No. 2017 ZD 53047).
Abstract: The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different detail information. But in the low-frequency component, very few coefficients lie around zero, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so direct fusion of the low-frequency component is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detail information at multiple scales and from diverse directions. The combination of the two methods is conducive to acquiring more characteristics and more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and then different fusion rules are applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer infrared-target fusion results than the traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
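The classical white top-hat transform underlying the feature-extraction step can be sketched in a few lines: subtract the morphological opening (erosion then dilation) from the image, which isolates bright features smaller than the structuring element. The square structuring element and its radius are illustrative assumptions, not the paper's "new type" of top-hat.

```python
import numpy as np

def _windows(img, r):
    """Stack of all shifted views over a (2r+1)x(2r+1) window."""
    p = np.pad(img.astype(float), r, mode='edge')
    h, w = img.shape
    return np.stack([p[dy:dy + h, dx:dx + w]
                     for dy in range(2 * r + 1) for dx in range(2 * r + 1)])

def erode(img, r=1):
    """Grayscale erosion with a square structuring element."""
    return _windows(img, r).min(axis=0)

def dilate(img, r=1):
    """Grayscale dilation with a square structuring element."""
    return _windows(img, r).max(axis=0)

def white_top_hat(img, r=1):
    """White top-hat: image minus its morphological opening.
    Keeps bright features smaller than the structuring element."""
    opening = dilate(erode(img, r), r)
    return img.astype(float) - opening
```

An isolated bright pixel on a flat background survives the top-hat at full contrast while the background is zeroed out, which is what makes the transform useful for salient low-frequency features.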
Funding: Supported by the Science and Technology Development Program of the Beijing Municipal Commission of Education (No. KM201010011002) and the National College Students' Scientific Research and Entrepreneurial Action Plan (SJ201401011).
Abstract: The rise of urban traffic flow highlights the growing importance of traffic safety. To reduce the occurrence rate of traffic accidents and improve the front-vision information available to vehicle drivers, a method to improve the driver's visual information in low-visibility conditions is put forward based on infrared and visible image fusion. The wavelet image fusion algorithm is adopted to decompose the image into low-frequency approximation components and high-frequency detail components. The low-frequency component contains information representing gray-value differences, while the high-frequency component contains the detail information of the image, whose quality is frequently assessed by the gray standard deviation. To extract feature information from the low-frequency and high-frequency components with different emphases, different fusion operators are used for each. In processing the low-frequency component, a fusion rule of weighted regional energy proportion is adopted to improve the brightness of the image, and a fusion rule of weighted regional standard-deviation proportion is used in all three high-frequency components to enhance the image contrast. The experiments on fusion of infrared and visible-light images demonstrate that this image fusion method can effectively improve image brightness and contrast, and it is suitable for vision enhancement of low-visibility images.
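The two regional-proportion rules above can be sketched as soft per-pixel weights: for the low-frequency band, each source's weight is its regional energy divided by the total regional energy; for the high-frequency bands, the regional standard deviation plays the same role. Window radius and the exact statistic definitions are assumptions.

```python
import numpy as np

def regional_map(img, fn, r=1):
    """Per-pixel regional statistic over a (2r+1)x(2r+1) window
    (edge-replicated borders)."""
    p = np.pad(img.astype(float), r, mode='edge')
    h, w = img.shape
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(2 * r + 1) for dx in range(2 * r + 1)])
    return fn(stack, axis=0)

def fuse_low(a, b, eps=1e-12):
    """Low-frequency rule: weights proportional to regional energy."""
    ea = regional_map(a, lambda s, axis: (s ** 2).sum(axis=axis))
    eb = regional_map(b, lambda s, axis: (s ** 2).sum(axis=axis))
    wa = ea / (ea + eb + eps)
    return wa * a + (1.0 - wa) * b

def fuse_high(a, b, eps=1e-12):
    """High-frequency rule: weights proportional to regional std."""
    sa = regional_map(a, np.std)
    sb = regional_map(b, np.std)
    wa = sa / (sa + sb + eps)
    return wa * a + (1.0 - wa) * b
```

A band with all the energy (or all the texture) receives essentially all the weight, so bright regions drive the low-frequency result and high-contrast regions drive the detail bands.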
Funding: Supported by the China Postdoctoral Science Foundation Funded Project (No. 2021M690385) and the National Natural Science Foundation of China (No. 62101045).
Abstract: Infrared-visible image fusion plays an important role in multi-source data fusion, with the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest in infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, the target and background regions are fused using a multi-scale transform. Experimental results are obtained using public data for comparison and evaluation, and they demonstrate that the proposed SRF has potential benefits over other methods.
Abstract: Multimodal medical image fusion is a powerful tool for diagnosing diseases in the medical field. The main objective is to capture the relevant information from the input images in a single output image, which plays an important role in clinical applications. In this paper, an image fusion technique for multimodal medical images is proposed based on the Non-Subsampled Contourlet Transform (NSCT). The proposed technique uses the NSCT to decompose the images into lowpass and highpass subbands, which are fused using mean-based and variance-based fusion rules, respectively. The reconstructed image is obtained by taking the Inverse Non-Subsampled Contourlet Transform (INSCT) of the fused subbands. The experimental results on six pairs of medical images are compared in terms of entropy, mean, standard deviation, and Q<sup>AB/F</sup> as performance parameters. They reveal that the proposed image fusion technique outperforms existing techniques in terms of both quantitative and qualitative outcomes. Compared with conventional methods on the six pairs of medical images, the percentage improvement in entropy is 0% - 40%, in mean 3% - 42%, in standard deviation 1% - 42%, and in Q<sup>AB/F</sup> 0.4% - 48% for the proposed method.
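The mean-based and variance-based subband rules can be sketched independently of the transform itself: average the lowpass subbands pixel-wise, and for each highpass subband keep, per pixel, the coefficient whose local window has the larger variance. The window radius and the hard (rather than weighted) variance selection are assumptions.

```python
import numpy as np

def fuse_lowpass(la, lb):
    """Mean rule for lowpass subbands: pixel-wise average."""
    return (la.astype(float) + lb.astype(float)) / 2.0

def fuse_highpass(ha, hb, r=1):
    """Variance rule for highpass subbands: per pixel, keep the
    coefficient whose local window has larger variance."""
    def local_var(x):
        p = np.pad(x.astype(float), r, mode='edge')
        h, w = x.shape
        stack = np.stack([p[dy:dy + h, dx:dx + w]
                          for dy in range(2 * r + 1)
                          for dx in range(2 * r + 1)])
        return stack.var(axis=0)
    return np.where(local_var(ha) >= local_var(hb), ha, hb)
```

These rules would sit between the forward NSCT decomposition and the INSCT reconstruction in the pipeline described above.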
Abstract: The speed and quality of image fusion always constrain each other, and real-time image fusion is a problem that urgently needs to be studied and solved. The windowing processing technology for image fusion proposed in this paper can solve this problem to a certain extent. Windowing rules are put forward, and the applicable scope of windowing fusion and the calculation method for the maximum windowing area are determined. The results of windowing fusion are analyzed, verified, and compared to confirm the feasibility of the technology.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group No. RG-1438-034 and co-authors K.A. and M.A.