Journal Articles
80 articles found
1. Multimodality Medical Image Fusion Based on Pixel Significance with Edge-Preserving Processing for Clinical Applications
Authors: Bhawna Goyal, Ayush Dogra, Dawa Chyophel Lepcha, Rajesh Singh, Hemant Sharma, Ahmed Alkhayyat, Manob Jyoti Saikia. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4317-4342 (26 pages).
Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve the quality of images by retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both geometric closeness and the gray-level similarities of neighboring pixels of the images without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. It further proposes edge-preserving processing that combines linear lowpass filtering with a non-linear technique that enables the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with meaningfully restored regions to reconstruct the original shape of the edges. In addition, weight computations are performed using these reconstructed images, and these weights are then fused with the original input images to produce a final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method has lower computational complexity and execution time while improving diagnostic computing accuracy. Owing to the lower complexity of the fusion algorithm, its efficiency in practical applications is high. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information, edge contour, and overall contrast.
Keywords: image fusion fractal data analysis BIOMEDICAL diseases research multiresolution analysis numerical analysis
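For readers who want a concrete starting point, the cross-bilateral filtering step described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general CBF idea (spatial weights from geometric closeness, range weights from the gray levels of the other modality), not the authors' implementation; the radius and sigma values below are assumptions chosen for readability.

```python
import numpy as np

def cross_bilateral(img, guide, radius=5, sigma_s=1.8, sigma_r=25.0):
    """Filter `img` with spatial weights plus range weights taken from `guide`.
    Minimal sketch of a cross-bilateral filter; parameters are illustrative only."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode='reflect')
    gpad = np.pad(guide.astype(np.float64), radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))    # geometric closeness
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = gpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(gwin - gpad[i + radius, j + radius])**2 / (2 * sigma_r**2))
            wgt = spatial * rng                               # gray-level similarity from the guide image
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

# Detail layers are then the residuals of the filtering, as the abstract describes:
# detail_A = img_A - cross_bilateral(img_A, guide=img_B)
# detail_B = img_B - cross_bilateral(img_B, guide=img_A)
```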
2. Image Fusion Using Wavelet Transformation and XGBoost Algorithm
Authors: Shahid Naseem, Tariq Mahmood, Amjad Rehman Khan, Umer Farooq, Samra Nawazish, Faten S. Alamri, Tanzila Saba. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 801-817 (17 pages).
Recently, there have been several uses for digital image processing. Image fusion has become a prominent application in the domain of image processing. To create one final image that proves more informative and helpful compared to the original input images, image fusion merges two or more initial images of the same item. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for the sake of human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly line robots, with image quality varying depending on application. The research paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (Hue, Intensity, Saturation), wavelet transform, discrete cosine transform (DCT), dual-tree Complex Wavelet Transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in an extremely efficient manner. More precisely, in imaging techniques, the depth-of-field constraint precludes images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. The use of these wavelet decomposition and recomposition techniques enables this method to make use of existing optimized wavelet code and filter choice. The simulated outcomes provide evidence that the suggested approach initially extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. The performance of images is improved by segmenting them employing the K-Means algorithm. The segmentation method aids in identifying specific regions of interest, using Particle Swarm Optimization (PSO) for trait selection and XGBoost for data classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and providing good objective indicators.
Keywords: image fusion max-min average CWT XGBoost DCT inclusive innovations spatial and frequency domain
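The wavelet decomposition/recomposition step can be prototyped with PyWavelets. The sketch below assumes registered grayscale inputs and uses two common rules for this family of methods: averaging for the approximation band and maximum-magnitude selection for the detail bands; the paper's exact rules, wavelet, and decomposition levels may differ.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet='db2', level=2):
    """Fuse two registered multi-focus images in the wavelet domain.
    Sketch only: average the approximation band, keep the larger-magnitude detail coefficient."""
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                            # approximation band: average
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))            # detail bands: max-abs selection
    return pywt.waverec2(fused, wavelet)                        # recomposition
```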
3. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1441-1461 (21 pages).
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion Res2Net-Transformer infrared image visible image
4. Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform
Authors: Bhawna Goyal, Ayush Dogra, Rahul Khoond, Dawa Chyophel Lepcha, Vishal Goyal, Steven L. Fernandes. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 311-327 (17 pages).
The synthesis of visual information from multiple medical imaging inputs to a single fused image without any loss of detail and distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into their base and detail layers, coarsely splitting two features of the input images: structural and textural information. The detail and base layers are further combined utilizing a sum-based fusion rule which maximizes the noise-filtering contrast level by effectively preserving most of the structural and textural details. NSCT is utilized to further decompose these images into their low- and high-frequency coefficients. These coefficients are then combined utilizing the principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule independently, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results demonstrate the advantage of the proposed technique using a publicly accessible dataset and comparative studies on three pairs of medical images from different modalities and health conditions. Our approach offers better visual and robust performance with better objective measurements for research development since it excellently preserves significant salient features and precision without producing abnormal information in qualitative and quantitative analysis.
Keywords: Anisotropic diffusion BIOMEDICAL medical HEALTH DISEASES adversarial attacks image fusion research and development PRECISION
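The base/detail split via anisotropic diffusion can be illustrated with a standard Perona-Malik iteration: the diffused image is the base layer and the residual is the detail layer. This is a generic sketch under assumed parameter values, not the paper's specific diffusion scheme or its NSCT stage.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik diffusion used to obtain a base layer; detail = img - base.
    Generic sketch; iteration count, kappa, and gamma are illustrative assumptions."""
    base = img.astype(np.float64).copy()
    cond = lambda d: np.exp(-(d / kappa) ** 2)          # edge-stopping conduction coefficient
    for _ in range(n_iter):
        dn = np.roll(base, -1, axis=0) - base           # neighbor differences (wraps at borders)
        ds = np.roll(base, 1, axis=0) - base
        de = np.roll(base, -1, axis=1) - base
        dw = np.roll(base, 1, axis=1) - base
        base += gamma * (cond(dn) * dn + cond(ds) * ds + cond(de) * de + cond(dw) * dw)
    return base

# base_A = anisotropic_diffusion(img_A); detail_A = img_A - base_A
```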
5. Combining Entropy Optimization and Sobel Operator for Medical Image Fusion
Authors: Nguyen Tu Trung, Tran Thi Ngan, Tran Manh Tuan, To Huu Nguyen. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 1, pp. 535-544 (10 pages).
Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images for the purpose of increasing the clinical diagnosis accuracy. This fusion aims to improve the image quality and preserve the specific features. Methods of medical image fusion generally use knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to fusing images: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm to fuse multimodal images. The algorithm is based on entropy optimization and the Sobel operator. The wavelet transform is used to split the input images into components over the low and high frequency domains. Then, two fusion rules are used for obtaining the fused images. The first rule, based on the Sobel operator, is used for the high frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low frequency components. The proposed algorithm is implemented on images related to central nervous system diseases. The experimental results show that the proposed algorithm is better than some recent methods in terms of brightness level, contrast, entropy, gradient, and the visual information fidelity for fusion (VIFF) and Feature Mutual Information (FMI) indices.
Keywords: Medical image fusion WAVELET entropy optimization PSO Sobel operator
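The Sobel-based high-frequency rule can be illustrated as a per-pixel selection driven by gradient magnitude. The snippet below is one plausible reading of such a rule; the paper's actual rule, and its PSO-driven entropy rule for the low-frequency band, are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def sobel_select(high_a, high_b):
    """Keep, at each pixel, the high-frequency coefficient whose source band has the
    larger Sobel gradient magnitude. Illustrative rule only, not the paper's exact form."""
    ga = np.hypot(ndimage.sobel(high_a, axis=0), ndimage.sobel(high_a, axis=1))
    gb = np.hypot(ndimage.sobel(high_b, axis=0), ndimage.sobel(high_b, axis=1))
    return np.where(ga >= gb, high_a, high_b)
```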
6. Non Sub-Sampled Contourlet with Joint Sparse Representation Based Medical Image Fusion
Authors: Kandasamy Kittusamy, Latha Shanmuga Vadivu Sampath Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 1989-2005 (17 pages).
Medical image fusion is the synthesizing technology for fusing multi-modal medical information using mathematical procedures to generate a better visual representation of the image content and a high-quality image output. Medical image fusion plays an indispensable role in providing solutions for complicated medical predicaments, while recent research results, despite an enhanced affinity towards the preservation of medical image details, leave color distortion and halo artifacts unaddressed. This paper proposes a novel method of fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). This model gratifies the need for precise integration of medical images of different modalities, which is an essential requirement in the diagnosing process towards clinical activities and treating patients accordingly. In the proposed model, the medical image is decomposed using NSCT, which is an efficient shift-invariant decomposition transformation method. JSR is exercised to extract the common features of the medical images for the fusion process. The performance analysis of the proposed system proves that the proposed image fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. The comparative analysis proves that the proposed technique exhibits better quality than existing medical image fusion practices.
Keywords: Medical image fusion computer tomography magnetic resonance imaging non sub-sampled contourlet transform (NSCT) joint sparse representation (JSR)
7. Brain Tumor Classification Using Image Fusion and EFPA-SVM Classifier
Authors: P. P. Fathimathul Rajeena, R. Sivakumar. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 2837-2855 (19 pages).
An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to a person's health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI). CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images has little effect on enhancing the accuracy of the diagnosis of brain tumors. Therefore, machine learning methods have been adopted to diagnose brain tumors in recent years. This paper intends to develop a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts with preprocessing the images to reduce the noise. Then, fusion rules are applied to get the fused image, and a segmentation algorithm is employed to isolate the tumor region from the background. Finally, a machine learning classifier classifies the brain images into benign and malignant tumors. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes are provided, and they show that the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
Keywords: Brain tumor classification improved wavelet threshold integer wavelet transform medical image fusion
8. Multimodal Medical Image Fusion Based on Parameter Adaptive PCNN and Latent Low-rank Representation
Authors: WANG Wenyan, ZHOU Xianchun, YANG Liangjian. Instrumentation, 2023, Issue 1, pp. 45-58 (14 pages).
Medical image fusion has been developed as an efficient assistive technology in various clinical applications such as medical diagnosis and treatment planning. Aiming at the problem of insufficient protection of image contour and detail information by traditional image fusion methods, a new multimodal medical image fusion method is proposed. This method first uses the non-subsampled shearlet transform to decompose the source images into high- and low-frequency subband coefficients, then uses the latent low-rank representation algorithm to fuse the low-frequency subband coefficients, and applies the improved PAPCNN algorithm to fuse the high-frequency subband coefficients. Finally, based on the automatic setting of parameters, the time decay factor αe is configured through an optimization method. The experimental results show that the proposed method solves the problems of difficult parameter setting and insufficient detail protection in traditional PCNN-based fusion, and at the same time achieves a great improvement in visual quality and objective evaluation indicators.
Keywords: image fusion Non-subsampled Shearlet Transform Parameter Adaptive PCNN Latent Low-rank Representation
9. Research on Infrared Image Fusion Technology Based on Road Crack Detection
Authors: Guangjun Li, Lin Nan, Lu Zhang, Manman Feng, Yan Liu, Xu Meng. Journal of World Architecture, 2023, Issue 3, pp. 21-26 (6 pages).
This study aimed to propose a road crack detection method based on infrared image fusion technology. By analyzing the characteristics of road crack images, this method uses a variety of infrared image fusion methods to process different types of images. The use of this method allows the detection of road cracks, which not only reduces the professional requirements for inspectors but also improves the accuracy of road crack detection. Based on infrared image processing technology and an in-depth analysis of infrared image features, a road crack detection method is proposed which can accurately identify the road crack location, direction, length, and other characteristic information. Experiments showed that this method has a good effect and can meet the requirements of road crack detection.
Keywords: Road crack detection Infrared image fusion technology Detection quality
10. SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer (Cited 11 times)
Authors: Jiayi Ma, Linfeng Tang, Fan Fan, Jun Huang, Xiaoguang Mei, Yong Ma. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 7, pp. 1200-1217 (18 pages).
This study proposes a novel general image fusion framework based on cross-domain long-range learning and Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintaining the appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
Keywords: Cross-domain long-range learning image fusion Swin transformer
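As a rough illustration of the intensity and texture terms in such a composite loss, the PyTorch fragment below penalizes deviation from the element-wise maximum of the source intensities and from the stronger source gradient. It is a simplified sketch under assumed formulations and weightings; the paper's full loss also includes an SSIM term.

```python
import torch
import torch.nn.functional as F

def intensity_texture_losses(fused, ir, vis):
    """Sketch of max-based intensity and texture (gradient) losses for (N, 1, H, W) tensors.
    Assumed simplification for illustration; not the paper's exact loss."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=fused.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    grad = lambda x: F.conv2d(x, kx, padding=1).abs() + F.conv2d(x, ky, padding=1).abs()
    loss_int = F.l1_loss(fused, torch.maximum(ir, vis))                    # keep salient intensity
    loss_tex = F.l1_loss(grad(fused), torch.maximum(grad(ir), grad(vis)))  # keep sharper texture
    return loss_int, loss_tex
```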
11. 3D characterization of porosity and minerals of low-permeability uranium-bearing sandstone based on multi-resolution image fusion (Cited 5 times)
Authors: Bing Sun, Shan-Shan Hou, Sheng Zeng, Xin Bai, Shu-Wen Zhang, Jing Zhang. Nuclear Science and Techniques (SCIE, CAS, CSCD), 2020, Issue 10, pp. 115-134 (20 pages).
In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks and directly affects the leaching of useful components. In this study, the pore throat, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images with different resolutions. The multi-scale and multi-mineral digital core model of low-permeability uranium-bearing sandstone is reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex and has multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channel in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements, which increases the difficulty of uranium leaching. Clays and a large amount of ankerite cement fill the primary and secondary pores and pore throats of the low-permeability uranium-bearing sandstone, which significantly reduces the porosity of the movable fluid and results in low overall permeability of the cores. The multi-scale and multi-mineral digital core proposed in this study provides a basis for characterizing macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and enables a better understanding of its seepage characteristics.
Keywords: Low-permeability uranium-bearing sandstone Digital core MICRO-CT SEM–EDS image fusion
12. Multi-Focus Image Fusion Based on Wavelet Transformation (Cited 4 times)
Authors: Peng Zhang, Ying-Xun Tang, Yan-Hua Liang, Xu-Bo Liu. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2013, Issue 2, pp. 124-128 (5 pages).
In image fusion, how to measure the local character and clarity is called activity measurement. The traditional measurement is decided only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect the local clarity. Therefore, in this paper, a novel construction method for activity measurement is proposed. Firstly, it applies the wavelet decomposition to the source images and then utilizes the high- and low-frequency wavelet coefficients synthetically. Meanwhile, it takes the normalized variance as the weight of the high-frequency energy. Secondly, it calculates the measurement by the weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. In order to illustrate the superiority of this new method, three kinds of assessment indicators are provided. The experimental results show that, compared with the traditional methods, the new method reduces blurring and improves the indicator values. Therefore, it has many more advantages for practical application.
Keywords: variance MEASURE image fusion wavelet transformation multi-resolution analysis
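One possible reading of the weighted-energy activity measure is sketched below: local high-frequency energy weighted by the normalized local variance computed from the low-frequency band. The window size and the exact combination are assumptions made for illustration, not the paper's formula.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def activity_measure(low_band, high_bands, win=3):
    """Activity-measure sketch: local high-frequency energy weighted by the
    normalized local variance of the low-frequency band (assumed reading)."""
    mean = uniform_filter(low_band, win)
    var = np.maximum(uniform_filter(low_band**2, win) - mean**2, 0.0)
    weight = var / (var.max() + 1e-12)                              # normalized-variance weight
    energy = sum(uniform_filter(hb**2, win) for hb in high_bands)   # local high-frequency energy
    return weight * energy

# A fused coefficient could then be taken from the source with the larger activity:
# fused = np.where(activity_measure(lA, hA) >= activity_measure(lB, hB), coefA, coefB)
```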
13. An infrared and visible image fusion method based upon multi-scale and top-hat transforms (Cited 1 time)
Authors: 何贵青, 张琪琦, 纪佳琪, 董丹丹, 张海曦, 王珺. Chinese Physics B (SCIE, EI, CAS, CSCD), 2018, Issue 11, pp. 340-348 (9 pages).
The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different detail information. But in the low-frequency component, the coefficients around the zero value are very few, so low-frequency image information cannot be represented sparsely. The low-frequency component contains the main energy of the image and depicts its profile, so direct fusion of the low-frequency component is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detailed information in multiple scales and from diverse directions. The combination of the two methods is conducive to the acquisition of more characteristics and more accurate fusion results. For the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and then different fusion rules are applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer infrared target fusion results than the traditional multi-scale transform methods. Compared with the state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and the time consumption is significantly reduced.
Keywords: infrared and visible image fusion multi-scale transform mathematical morphology top-hat transform
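The classical white/black top-hat transforms on which the low-frequency feature extraction builds can be reproduced directly with SciPy's grayscale morphology. The paper's "new type" of top-hat transform modifies the structuring elements, which the sketch below does not attempt; the footprint size is an assumption.

```python
import numpy as np
from scipy import ndimage

def tophat_features(low_freq, size=9):
    """Classical grayscale top-hat transforms: bright (white) and dark (black) salient
    structures of a low-frequency band. Sketch of the basic idea only."""
    footprint = np.ones((size, size))
    white = ndimage.white_tophat(low_freq, footprint=footprint)   # bright features
    black = ndimage.black_tophat(low_freq, footprint=footprint)   # dark features
    return white, black
```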
14. Vision Enhancement Technology of Drivers Based on Image Fusion (Cited 1 time)
Authors: 陈天华, 周爱德, 李会希, 邢素霞. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2015, Issue 5, pp. 495-501 (7 pages).
The rise of urban traffic flow highlights the growing importance of traffic safety. In order to reduce the occurrence rate of traffic accidents and improve the front-vision information available to vehicle drivers, a method to improve the visual information of the vehicle driver in low-visibility conditions is put forward based on the infrared and visible image fusion technique. A wavelet image fusion algorithm is adopted to decompose the images into low-frequency approximation components and high-frequency detail components. The low-frequency component contains information representing gray value differences. The high-frequency component contains the detail information of the image, which is frequently represented by the gray standard deviation to assess image quality. To extract the feature information of the low-frequency and high-frequency components with different emphases, different fusion operators are used for each. In processing the low-frequency component, the fusion rule of weighted regional energy proportion is adopted to improve the brightness of the image, and the fusion rule of weighted regional proportion of standard deviation is used in all three high-frequency components to enhance the image contrast. Experiments on fusing infrared and visible light images demonstrate that this image fusion method can effectively improve image brightness and contrast, and it is suitable for vision enhancement of low-visibility images.
Keywords: image fusion vision enhancement infrared image processing wavelet transform (WT)
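The two regional fusion rules can be prototyped with local statistics, as in the sketch below: energy-proportional weights for the low-frequency band and standard-deviation-proportional weights for the high-frequency bands. The window size and box-filter choice are assumptions; the paper's regional definitions may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(low_ir, low_vis, win=5):
    """Low-frequency rule sketch: weights proportional to local (regional) energy."""
    e_ir = uniform_filter(low_ir**2, win)
    e_vis = uniform_filter(low_vis**2, win)
    w = e_ir / (e_ir + e_vis + 1e-12)
    return w * low_ir + (1 - w) * low_vis

def fuse_high(h_ir, h_vis, win=5):
    """High-frequency rule sketch: weights proportional to local standard deviation."""
    def local_std(x):
        m = uniform_filter(x, win)
        return np.sqrt(np.maximum(uniform_filter(x**2, win) - m**2, 0.0))
    s_ir, s_vis = local_std(h_ir), local_std(h_vis)
    w = s_ir / (s_ir + s_vis + 1e-12)
    return w * h_ir + (1 - w) * h_vis
```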
15. Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation (Cited 1 time)
Authors: Yexin Liu, Ben Xu, Mengmeng Zhang, Wei Li, Ran Tao. Journal of Beijing Institute of Technology (EI, CAS), 2022, Issue 6, pp. 535-550 (16 pages).
Infrared-visible image fusion plays an important role in multi-source data fusion, which has the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest in the infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, the target and background regions are fused using a multi-scale transform. Experimental results are obtained using public data for comparison and evaluation, and they demonstrate that the proposed SRF has potential benefits over other methods.
Keywords: image fusion infrared image visible image multi-scale transform
16. Multimodal Medical Image Fusion in Non-Subsampled Contourlet Transform Domain (Cited 3 times)
Authors: Periyavattam Shanmugam Gomathi, Bhuvanesh Kalaavathi. Circuits and Systems, 2016, Issue 8, pp. 1598-1610 (13 pages).
Multimodal medical image fusion is a powerful tool for diagnosing diseases in the medical field. The main objective is to capture the relevant information from the input images into a single output image, which plays an important role in clinical applications. In this paper, an image fusion technique for multimodal medical images is proposed based on the Non-Subsampled Contourlet Transform. The proposed technique uses the Non-Subsampled Contourlet Transform (NSCT) to decompose the images into lowpass and highpass subbands. The lowpass and highpass subbands are fused by using mean-based and variance-based fusion rules, respectively. The reconstructed image is obtained by taking the Inverse Non-Subsampled Contourlet Transform (INSCT) of the fused subbands. The experimental results on six pairs of medical images are compared in terms of entropy, mean, standard deviation, and QAB/F as performance parameters. They reveal that the proposed image fusion technique outperforms the existing image fusion techniques in terms of quantitative and qualitative outcomes. The percentage improvement of the proposed method over the conventional methods for the six pairs of medical images is 0%-40% in entropy, 3%-42% in mean, 1%-42% in standard deviation, and 0.4%-48% in QAB/F.
Keywords: image fusion Non-Subsampled Contourlet Transform (NSCT) Medical Imaging fusion Rules
17. Research on Windowing Image Fusion
Authors: 田思, 张俊举, 袁轶慧, 常本康. Defence Technology (SCIE, EI, CAS), 2010, Issue 4, pp. 279-283 (5 pages).
The speed and quality of image fusion always restrain each other, and real-time image fusion is one of the problems that urgently needs to be studied and solved. The windowing processing technology for image fusion proposed in this paper can solve this problem to a certain extent. The windowing rules were put forward, and the applicable scope of windowing fusion and the calculation method for the maximum windowing area were determined. The results of the windowing fusion were analyzed, verified, and compared to confirm the feasibility of this technology.
Keywords: OPTICS image fusion WINDOWING real-time fusion
18. Fuzzy Based Hybrid Focus Value Estimation for Multi Focus Image Fusion
Authors: Muhammad Ahmad, M. Arfan Jaffar, Fawad Nasim, Tehreem Masood, Sheeraz Akram. Computers, Materials & Continua (SCIE, EI), 2022, Issue 4, pp. 735-752 (18 pages).
Due to the limited depth-of-field of digital single-lens reflex cameras, the scene content within a limited distance from the imaging plane remains in focus while other objects closer to or further away from the point of focus appear blurred (out-of-focus) in the image. Multi-focus image fusion can be used to reconstruct a fully focused image from two or more partially focused images of the same scene. In this paper, a new Fuzzy Based Hybrid Focus Measure (FBHFM) for multi-focus image fusion has been proposed. Choosing an optimal block size is a very critical step for multi-focus image fusion, so the Particle Swarm Optimization (PSO) algorithm has been used to find the optimal block size for extraction of focus measure features. After finding the optimal blocks, three focus measures (Sum of Modified Laplacian, Gray Level Variance, and Contrast Visibility) are extracted and combined using an intelligent fuzzy technique. Fuzzy based hybrid intelligent focus values were estimated using the contrast visibility measure to generate the focused image. Different sets of multi-focus images have been used in detailed experimentation, and the results have been compared with state-of-the-art existing techniques such as the Genetic Algorithm (GA), Principal Component Analysis (PCA), Laplacian Pyramid, discrete wavelet transform (DWT), and aDWT for image fusion. It has been found that the proposed method performs well compared to existing methods.
Keywords: Fuzzy logic multi-focus image fusion DEFOCUS FOCUS contrast visibility focus measure
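Two of the three block-level focus measures named above, Sum of Modified Laplacian and Gray Level Variance, have standard discrete forms that can be written directly. The versions below are those standard forms, assumed here because the abstract does not spell them out; the fuzzy combination and the PSO block-size search are omitted.

```python
import numpy as np

def sum_modified_laplacian(block):
    """Sum of Modified Laplacian (SML) focus measure for one image block
    (standard discretization, assumed; higher values indicate sharper focus)."""
    b = block.astype(np.float64)
    ml = (np.abs(2 * b[1:-1, 1:-1] - b[:-2, 1:-1] - b[2:, 1:-1]) +
          np.abs(2 * b[1:-1, 1:-1] - b[1:-1, :-2] - b[1:-1, 2:]))
    return float(ml.sum())

def gray_level_variance(block):
    """Gray Level Variance focus measure for one image block."""
    return float(np.var(block))
```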
19. A Saliency Based Image Fusion Framework for Skin Lesion Segmentation and Classification
Authors: Javaria Tahir, Syed Rameez Naqvi, Khursheed Aurangzeb, Musaed Alhussein. Computers, Materials & Continua (SCIE, EI), 2022, Issue 2, pp. 3235-3250 (16 pages).
Melanoma, due to its higher mortality rate, is considered one of the most pernicious types of skin cancer, mostly affecting white populations. It has been reported a number of times, and is now widely accepted, that early detection of melanoma increases the chances of the subject's survival. Computer-aided diagnostic systems help the experts in diagnosing the skin lesion at earlier stages using machine learning techniques. In this work, we propose a framework that accurately segments, and later classifies, the lesion using improved image segmentation and fusion methods. The proposed technique takes an image and passes it through two methods simultaneously; one is the weighted visual saliency-based method, and the second is improved HDCT-based saliency estimation. The resultant image maps are later fused using the proposed image fusion technique to generate a localized lesion region. The resultant binary image is then mapped back to the RGB image and fed into the Inception-ResNet-V2 pre-trained model, trained by applying transfer learning. The simulation results show improved performance compared to several existing methods.
Keywords: Skin lesion segmentation image fusion saliency detection skin lesion classification deep neural networks transfer learning