Journal Articles
31 articles found
1. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 1103-1128 (26 pages).
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, there are limitations in the current integration of CNN and Transformer technology in two key aspects. Firstly, most methods either overlook or fail to fully incorporate the complementary nature of local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded in methods that combine CNN and Transformer. To address these issues, we present a groundbreaking dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of Swin Transformer and CNN to generate complementary global and local features. We then designed the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) serves the purpose of aggregating multi-scale features, and the Feature Fusion Module (FFM) is employed to effectively aggregate dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) aims to emphasize the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet exhibits enhanced accuracy in segmentation performance. Compared to other state-of-the-art (SOTA) methods, our segmentation framework exhibits a superior level of competitiveness. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
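The channel-attention idea described in this abstract can be illustrated with a minimal PyTorch block that re-weights decoder channels using a fused cue; the class name ChannelAttentionBlock, the reduction ratio, and the fusion rule are assumptions for illustration, not the authors' CAB implementation.

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    """Minimal sketch of a channel-attention fusion step: re-weight the
    channels of a decoder feature map using statistics of a skip feature."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, upsampled: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # global average pooling over the spatial dims of the combined cue
        b, c, _, _ = upsampled.shape
        pooled = (upsampled + skip).mean(dim=(2, 3))      # (B, C)
        weights = self.mlp(pooled).view(b, c, 1, 1)       # one weight per channel
        return upsampled * weights + skip                 # re-weighted fusion

x_up = torch.randn(2, 64, 32, 32)     # up-sampled decoder features
x_skip = torch.randn(2, 64, 32, 32)   # features from the fusion module
print(ChannelAttentionBlock(64)(x_up, x_skip).shape)   # torch.Size([2, 64, 32, 32])
```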
Keywords: convolutional neural networks; Swin Transformer; dual branch; medical image segmentation; feature cross fusion
2. ATFF: Advanced Transformer with Multiscale Contextual Fusion for Medical Image Segmentation
Authors: Xinping Guo, Lei Wang, Zizhen Huang, Yukun Zhang, Yaolong Han. Journal of Computer and Communications, 2024, Issue 3, pp. 238-251 (14 pages).
Deep convolutional neural networks (CNNs) have greatly promoted the automatic segmentation of medical images. However, due to the inherent properties of convolution operations, CNNs usually cannot establish long-distance interdependence, which limits segmentation performance. The Transformer has been successfully applied to various computer vision tasks, using the self-attention mechanism to model long-distance interaction and thereby capture global information. However, self-attention lacks spatial position information and is computationally expensive. To solve these problems, we develop a new medical Transformer with a multi-scale context fusion function that can be used for medical image segmentation. The proposed model combines convolution operations and attention mechanisms to form a U-shaped framework that can capture both local and global information. First, the traditional Transformer module is improved to an advanced Transformer module, which uses post-layer normalization to obtain mild activation values and scaled cosine attention with a moving window to obtain accurate spatial information. Second, we introduce a deep supervision strategy to guide the model to fuse multi-scale feature information. This further enables the proposed model to effectively propagate feature information across layers and achieve better segmentation performance while being more robust and efficient. The proposed model is evaluated on multiple medical image segmentation datasets. Experimental results demonstrate that it achieves better performance on a challenging dataset (ETIS) compared with existing methods that rely only on convolutional neural networks, Transformers, or a combination of both. The mDice and mIoU indicators increased by 2.74% and 3.3%, respectively.
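A minimal sketch of the scaled cosine attention mentioned above, in which similarities are cosine similarities divided by a learnable temperature (as in Swin Transformer V2); the window handling and shapes here are simplified assumptions, not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledCosineAttention(nn.Module):
    """Minimal sketch of scaled cosine attention: attention logits are cosine
    similarities scaled by a learnable per-head temperature."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # one learnable temperature per head, kept positive via exp()
        self.logit_scale = nn.Parameter(torch.zeros(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, C)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)               # each (B, H, N, d)
        # cosine similarity = dot product of L2-normalised vectors
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        attn = (attn * self.logit_scale.exp()).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

tokens = torch.randn(2, 49, 64)                   # e.g. a 7x7 window of 64-dim tokens
print(ScaledCosineAttention(64)(tokens).shape)    # torch.Size([2, 49, 64])
```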
Keywords: medical image segmentation; advanced Transformer; deep supervision; attention mechanism
3. Application of U-Net and Optimized Clustering in Medical Image Segmentation: A Review (Citations: 2)
Authors: Jiaqi Shao, Shuwen Chen, Jin Zhou, Huisheng Zhu, Ziyi Wang, Mackenzie Brown. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 9, pp. 2173-2219 (47 pages).
As a mainstream research direction in the field of image segmentation, medical image segmentation plays a key role in the quantification of lesions, three-dimensional reconstruction, region-of-interest extraction, and so on. Compared with natural images, medical images come in a variety of modalities, and the emphasis of the information conveyed by images of different modalities is quite different. Because manual segmentation of medical images by professional and experienced doctors is time-consuming and inefficient, large quantities of automated medical image segmentation methods have been developed. However, researchers have not yet developed a universal method for all types of medical image segmentation. This paper reviews the literature on segmentation techniques that have produced major breakthroughs in recent years. Among the many medical image segmentation methods, this paper mainly discusses two categories: improved strategies based on traditional clustering methods, and research progress on improved segmentation network structures based on U-Net. The evidence shows that the performance of deep learning-based methods is significantly better than that of traditional methods. This paper discusses the advantages and disadvantages of different algorithms, details how these methods can be used for the segmentation of lesions or other organs and tissues, and outlines possible technical trends for future work.
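As a minimal illustration of the clustering branch discussed in this review (not code from any surveyed paper), intensity-based k-means segmentation of a grayscale image might look like this; scikit-learn's KMeans is assumed to be available.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Cluster pixels of a 2D grayscale image by intensity and return a label map."""
    intensities = image.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(intensities)
    return labels.reshape(image.shape)

# toy example: a synthetic image with three intensity regions plus noise
rng = np.random.default_rng(0)
img = np.concatenate([np.full((32, 96), v) for v in (0.1, 0.5, 0.9)], axis=0)
img += rng.normal(scale=0.05, size=img.shape)
seg = kmeans_segment(img, n_clusters=3)
print(seg.shape, np.unique(seg))   # (96, 96) [0 1 2]
```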
Keywords: medical image segmentation; clustering algorithm; U-Net
4. TC-Fuse: A Transformers Fusing CNNs Network for Medical Image Segmentation
Authors: Peng Geng, Ji Lu, Ying Zhang, Simin Ma, Zhanzhong Tang, Jianhua Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 11, pp. 2001-2023 (23 pages).
In medical image segmentation tasks, convolutional neural networks (CNNs) have difficulty capturing long-range dependencies, whereas Transformers can model long-range dependencies effectively. However, Transformers have a flexible structure and seldom assume structural bias about the input data, so it is difficult for them to learn positional encoding of medical images when trained on fewer images. To solve these problems, a dual-branch structure is proposed. In one branch, a Mix-Feed-Forward Network (Mix-FFN) and axial attention are adopted to capture long-range dependencies and keep the translation invariance of the model. Mix-FFN, whose depth-wise convolutions provide position information, is better than ordinary positional encoding. In the other branch, traditional convolutional neural networks (CNNs) are used to extract different features from fewer medical images. In addition, the attention fusion module BiFusion is used to effectively integrate the information from the CNN branch and the Transformer branch, and the fused features can effectively capture the global and local context of the current spatial resolution. On the public standard datasets Gland Segmentation (GlaS), Colorectal Adenocarcinoma Gland (CRAG), and COVID-19 CT Images Segmentation, the F1-score, Intersection over Union (IoU), and parameter counts of the proposed TC-Fuse are superior to those of Axial Attention U-Net, U-Net, Medical Transformer, and other methods. The F1-score increased by 2.99%, 3.42%, and 3.95%, respectively, compared with Medical Transformer.
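A minimal sketch of a Mix-FFN-style block, in which a 3x3 depth-wise convolution inside the feed-forward path supplies implicit position information; layer sizes and names are illustrative assumptions, not the TC-Fuse implementation.

```python
import torch
import torch.nn as nn

class MixFFN(nn.Module):
    """Minimal sketch of a Mix-FFN: a feed-forward block whose 3x3 depth-wise
    convolution leaks zero-padding information, acting as implicit positional
    encoding for the tokens."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:  # x: (B, N, C)
        b, n, _ = x.shape
        x = self.fc1(x)                                  # (B, N, hidden)
        x = x.transpose(1, 2).reshape(b, -1, h, w)       # back to a 2D feature map
        x = self.dwconv(x)                               # depth-wise conv adds position cues
        x = x.flatten(2).transpose(1, 2)                 # (B, N, hidden)
        return self.fc2(self.act(x))

tokens = torch.randn(2, 16 * 16, 64)                     # flattened 16x16 feature map
print(MixFFN(64, 256)(tokens, 16, 16).shape)             # torch.Size([2, 256, 64])
```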
Keywords: Transformers; convolutional neural networks; fusion; medical image segmentation; axial attention
5. AF-Net: A Medical Image Segmentation Network Based on Attention Mechanism and Feature Fusion (Citations: 4)
Authors: Guimin Hou, Jiaohua Qin, Xuyu Xiang, Yun Tan, Neal N. Xiong. Computers, Materials & Continua (SCIE, EI), 2021, Issue 11, pp. 1877-1891 (15 pages).
Medical image segmentation is an important application field of computer vision in medical image processing. Due to the close location and high similarity of different organs in medical images, current segmentation algorithms suffer from mis-segmentation and poor edge segmentation. To address these challenges, we propose a medical image segmentation network (AF-Net) based on an attention mechanism and feature fusion, which can effectively capture global information while focusing the network on the object area. In this approach, we add dual attention blocks (DA-block) to the backbone network, which comprise parallel channel and spatial attention branches, to adaptively calibrate and weigh features. Secondly, the multi-scale feature fusion block (MFF-block) is proposed to obtain feature maps of different receptive fields and gather multi-scale information with less computational consumption. Finally, to restore the locations and shapes of organs, we adopt global feature fusion blocks (GFF-block) to fuse high-level and low-level information, which provides accurate pixel positioning. We evaluate our method on multiple datasets (the aorta and lungs datasets), and the experimental results achieve 94.0% in mIoU and 96.3% in DICE, showing that our approach performs better than U-Net and other state-of-the-art methods.
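A rough sketch of parallel channel and spatial attention branches in the spirit of the DA-block described above; the exact branch design, kernel sizes, and combination rule are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    """Minimal sketch of parallel channel and spatial attention branches that
    re-weight a feature map (illustrative only; not the AF-Net code)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # channel branch: squeeze spatial dims, predict one weight per channel
        ch_w = self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        # spatial branch: pool across channels, predict one weight per pixel
        sp_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        sp_w = self.spatial_conv(sp_in)
        return x * ch_w + x * sp_w          # combine the two calibrated branches

x = torch.randn(2, 32, 64, 64)
print(DualAttentionBlock(32)(x).shape)      # torch.Size([2, 32, 64, 64])
```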
Keywords: deep learning; medical image segmentation; feature fusion; attention mechanism
6. Mu-Net: Multi-Path Upsampling Convolution Network for Medical Image Segmentation (Citations: 2)
Authors: Jia Chen, Zhiqiang He, Dayong Zhu, Bei Hui, Rita Yi Man Li, Xiao-Guang Yue. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 4, pp. 73-95 (23 pages).
Medical image segmentation plays an important role in clinical diagnosis, quantitative analysis, and the treatment process. Since 2015, U-Net-based approaches have been widely used for medical image segmentation. The purpose of the U-Net expansive path is to map low-resolution encoder feature maps to full input-resolution feature maps. However, the consecutive deconvolution and convolutional operations in the expansive path lead to the loss of some high-level information. More high-level information can make the segmentation more accurate. In this paper, we propose MU-Net, a novel multi-path upsampling convolution network, to retain more high-level information. MU-Net mainly consists of three parts: a contracting path, skip connections, and multi-expansive paths. The proposed MU-Net architecture is evaluated on three different medical imaging datasets. Our experiments show that MU-Net improves the segmentation performance of U-Net-based methods on different datasets. At the same time, computational efficiency is significantly improved by reducing the number of parameters by more than half.
Keywords: medical image segmentation; MU-Net (multi-path upsampling convolution network); U-Net; clinical diagnosis; encoder-decoder networks
7. Improved Medical Image Segmentation Model Based on 3D U-Net (Citations: 1)
Authors: LIN Wei, FAN Hong, HU Chenxi, YANG Yi, YU Suping, NI Lin. Journal of Donghua University (English Edition) (CAS), 2022, Issue 4, pp. 311-316 (6 pages).
With the widespread application of deep learning in the field of computer vision, gradually allowing medical image technology to assist doctors in making diagnoses has great practical and research significance. Aiming at the shortcomings of the traditional U-Net model in 3D spatial information extraction, model over-fitting, and low degree of semantic information fusion, an improved medical image segmentation model is used to achieve more accurate segmentation of medical images. In this model, we make full use of the residual network (ResNet) to solve the over-fitting problem. In order to process and aggregate data at different scales, an inception network is used instead of the traditional convolutional layer, and dilated convolution is used to increase the receptive field. The conditional random field (CRF) completes the contour-refinement work. Compared with the traditional 3D U-Net network, the segmentation accuracy of the improved model on liver and tumor images increases by 2.89% and 7.66%, respectively. As a part of the image processing pipeline, the method in this paper can not only be used for medical image segmentation, but also lay the foundation for subsequent 3D image reconstruction work.
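A hedged sketch of an inception-style 3D block that uses parallel dilated convolutions to enlarge the receptive field, as the abstract describes; the branch layout and channel split are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DilatedInception3D(nn.Module):
    """Minimal sketch of an inception-style 3D block: parallel branches with
    different dilation rates are concatenated to aggregate multi-scale context."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        branch_ch = out_ch // 3
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 3)
        ])
        # 1x1x1 conv fuses the concatenated branches back to out_ch channels
        self.fuse = nn.Conv3d(branch_ch * 3, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(y))

vol = torch.randn(1, 16, 32, 64, 64)              # (B, C, D, H, W) CT patch
print(DilatedInception3D(16, 24)(vol).shape)      # torch.Size([1, 24, 32, 64, 64])
```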
Keywords: medical image segmentation; 3D U-Net; residual network (ResNet); inception model; conditional random field (CRF)
8. SVM for density estimation and application to medical image segmentation
Authors: ZHANG Zhao, ZHANG Su, ZHANG Chen-xi, CHEN Ya-zhu. Journal of Zhejiang University-Science B (Biomedicine & Biotechnology) (SCIE, CAS, CSCD), 2006, Issue 5, pp. 365-372 (8 pages).
A method of medical image segmentation based on support vector machine (SVM) density estimation is presented. We used this estimator to construct a prior model of the image intensity and curvature profile of the structure from training images. When segmenting a novel image similar to the training images, the narrow-band level set technique is used. The higher-dimensional surface evolution metric is defined by the prior model instead of by an energy minimization function. This method offers several advantages. First, SVM for density estimation is consistent and its solution is sparse. Second, compared with traditional level set methods, this method incorporates shape information about the object to be segmented into the segmentation process. Segmentation results are demonstrated on synthetic images, MR images, and ultrasound images.
Keywords: support vector machine (SVM); density estimation; medical image segmentation; level set method
9. MEDICAL IMAGE SEGMENTATION BASED ON A MODIFIED LEVEL SET ALGORITHM
Authors: 杨勇, 林盘, 郑崇勋, 顾建文. Journal of Pharmaceutical Analysis (SCIE, CAS), 2005, Issue 1, pp. 29-32, 56 (5 pages).
Objective: To present a novel modified level set algorithm for medical image segmentation. Methods: The algorithm is developed by substituting the speed function of the level set algorithm with the region and gradient information of the image instead of the conventional gradient information alone. The new algorithm has been tested on a series of medical images of different modalities. Results: We present various examples and also evaluate and compare the performance of our method with the classical level set method on weak boundaries and noisy images. Conclusion: Experimental results show the proposed algorithm is effective and robust.
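A toy sketch of a level-set evolution whose speed function mixes region and gradient information, in the spirit of this abstract; the specific update rule, parameters, and toy image are assumptions, not the published algorithm.

```python
import numpy as np

def evolve_level_set(image, phi, iters=200, dt=0.1, lam=1.0):
    """Minimal sketch of a level-set update whose speed combines a region term
    (intensity relative to the inside/outside means) with an edge-stopping term
    derived from the image gradient. Illustrative only."""
    gy, gx = np.gradient(image)
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)           # gradient-based stopping function
    for _ in range(iters):
        inside = phi < 0
        c_in, c_out = image[inside].mean(), image[~inside].mean()
        region = image - 0.5 * (c_in + c_out)     # >0 pushes the boundary outward
        py, px = np.gradient(phi)
        grad_mag = np.sqrt(px ** 2 + py ** 2) + 1e-8
        phi = phi - dt * lam * g * region * grad_mag   # explicit speed-function update
    return phi

# toy example: a bright disk on a dark background, initial contour as a small circle
y, x = np.mgrid[0:64, 0:64]
img = ((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2).astype(float)
phi0 = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2) - 5.0    # signed distance to a circle
seg = evolve_level_set(img, phi0) < 0
print(seg.sum(), "pixels inside the final contour")
```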
Keywords: medical image segmentation; level set; speed function; region information
10. Designing a High-Performance Deep Learning Theoretical Model for Biomedical Image Segmentation by Using Key Elements of the Latest U-Net-Based Architectures
Authors: Andreea Roxana Luca, Tudor Florin Ursuleanu, Liliana Gheorghe, Roxana Grigorovici, Stefan Iancu, Maria Hlusneac, Cristina Preda, Alexandru Grigorovici. Journal of Computer and Communications, 2021, Issue 7, pp. 8-20 (13 pages).
Deep learning (DL) has experienced exponential development in recent years, with major impact in many medical fields, especially medical imaging and, as a specific task, medical image segmentation. We aim to create a computer-assisted diagnostic method, optimized by the use of deep learning and validated by a randomized controlled clinical trial, as a highly automated tool for diagnosing and staging precancerous lesions, cervical cancer, and thyroid cancers. We aim to design a high-performance deep learning model, combined from convolutional neural network (U-Net)-based architectures, for segmentation of medical images that is independent of the type of organs/tissues, dimensions, or type of image (2D/3D), and to validate the DL model in a randomized controlled clinical trial. As a methodology, we primarily analyzed U-Net-based architectures to identify the key elements that we considered important in the design and optimization of our combined DL model. Secondly, we will validate the performance of the DL model through a randomized controlled clinical trial. The DL model designed by us will be a highly automated tool for diagnosing and staging precancers, cervical cancer, and thyroid cancers. The combined model we designed takes into account the key features of each of the following architectures: the Overcomplete Convolutional Network Kite-Net (Kite-Net), Attention U-Net (which adds an attention-gate mechanism to the convolutional architecture for fast and precise segmentation of images), and the Harmony Densely Connected Network for Medical Image Segmentation (HarDNet-MSEG). In this regard, we will create a comprehensive computer-assisted diagnostic methodology validated by a randomized controlled clinical trial. The model will be a highly automated tool for diagnosing and staging precancers, cervical cancer, and thyroid cancers. This would help drastically minimize the time and effort that specialists put into analyzing medical images, help achieve a better therapeutic plan, and provide a "second opinion" of computer-assisted diagnosis.
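One of the key elements cited above, the attention gate of Attention U-Net, can be sketched roughly as follows; this is a simplified single-scale version where channel sizes and names are assumptions, not the authors' combined model.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Minimal sketch of an attention gate: a coarse gating signal suppresses
    irrelevant regions of a skip-connection feature map before it is fused."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # gate comes from a coarser decoder level, so upsample it to match skip
        g = self.up(self.phi(gate))
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + g)))
        return skip * alpha                       # attention-weighted skip features

skip = torch.randn(1, 64, 56, 56)                 # encoder skip features
gate = torch.randn(1, 128, 28, 28)                # coarser decoder features
print(AttentionGate(64, 128, 32)(skip, gate).shape)   # torch.Size([1, 64, 56, 56])
```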
Keywords: combined model of U-Net-based architectures; medical image segmentation; 2D/3D/CT/RMN images
11. Rethinking the Encoder-decoder Structure in Medical Image Segmentation from Releasing Decoder Structure
Authors: Jiajia Ni, Wei Mu, An Pan, Zhengming Chen. Journal of Bionic Engineering (SCIE, EI, CSCD), 2024, Issue 3, pp. 1511-1521 (11 pages).
Medical image segmentation has witnessed rapid advancements with the emergence of encoder-decoder based methods. In the encoder-decoder structure, the primary goal of the decoding phase is not only to restore feature map resolution, but also to mitigate the loss of feature information incurred during the encoding phase. However, this approach gives rise to a challenge: multiple up-sampling operations in the decoder result in the loss of feature information. To address this challenge, we propose a novel network that removes the decoding structure to reduce feature information loss (CBL-Net). In particular, we introduce a Parallel Pooling Module (PPM) to counteract the feature information loss stemming from convolutional and pooling operations during the encoding stage. Furthermore, we incorporate a Multiplexed Dilation Convolution (MDC) module to expand the network's receptive field. Although we have removed the decoding stage, we still need to recover the feature map resolution; therefore, we introduce the Global Feature Recovery (GFR) module, which uses an attention mechanism to recover the feature map resolution and effectively reduce the loss of feature information. We conduct extensive experimental evaluations on three publicly available medical image segmentation datasets: the DRIVE, CHASEDB, and MoNuSeg datasets. Experimental results show that our proposed network outperforms state-of-the-art methods in medical image segmentation. In addition, it achieves higher efficiency than current encoder-decoder networks by eliminating the decoding component.
Keywords: medical image segmentation; encoder-decoder architecture; attention mechanisms; releasing decoder architecture; neural network
12. Multi-rater Prism: Learning self-calibrated medical image segmentation from multiple raters
Authors: Junde Wu, Huihui Fang, Jiayuan Zhu, Yu Zhang, Xiang Li, Yuanpei Liu, Huiying Liu, Yueming Jin, Weimin Huang, Qi Liu, Cen Chen, Yanfei Liu, Lixin Duan, Yanwu Xu, Li Xiao, Weihua Yang, Yue Liu. Science Bulletin (SCIE, EI, CAS, CSCD), 2024, Issue 18, pp. 2906-2919 (14 pages).
In medical image segmentation, it is often necessary to collect opinions from multiple experts to make the final decision. This clinical routine helps to mitigate individual bias. However, when data are annotated by multiple experts, standard deep learning models are often not applicable. In this paper, we propose a novel neural network framework called Multi-rater Prism (MrPrism) to learn medical image segmentation from multiple labels. Inspired by iterative half-quadratic optimization, MrPrism combines the tasks of assigning multi-rater confidences and calibrated segmentation in a recurrent manner. During this process, MrPrism learns inter-observer variability while taking into account the image's semantic properties, and finally converges to a self-calibrated segmentation result reflecting inter-observer agreement. Specifically, we propose the Converging Prism (ConP) and Diverging Prism (DivP) to iteratively process the two tasks. ConP learns calibrated segmentation based on multi-rater confidence maps estimated by DivP, and DivP generates multi-rater confidence maps based on segmentation masks estimated by ConP. Experimental results show that the two tasks can mutually improve each other through this recurrent process. The final converged segmentation result of MrPrism outperforms state-of-the-art (SOTA) methods for a wide range of medical image segmentation tasks. The code is available at https://github.com/WuJunde/MrPrism.
Keywords: medical image segmentation; multiple raters; self-calibration; half-quadratic algorithm
13. A review of medical ocular image segmentation
Authors: Lai WEI, Menghan HU. Virtual Reality & Intelligent Hardware (EI), 2024, Issue 3, pp. 181-202 (22 pages).
Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in the field of deep neural networks for medical image segmentation since the notable success of U-Net in 2015. However, applying deep learning models to ocular medical image segmentation poses unique challenges, especially compared with other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article provides a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation to ocular imaging. Initially, the article gives an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
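For the performance evaluation metrics mentioned in this review, a small helper computing the Dice coefficient and IoU of binary masks might look like this (a generic sketch, not tied to any surveyed method).

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """Dice coefficient and IoU for binary masks, two evaluation metrics
    commonly reported in segmentation reviews."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# toy example: two overlapping square masks
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True
print(dice_and_iou(a, b))   # roughly (0.44, 0.29)
```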
Keywords: medical image segmentation; orbit; tumor; U-Net; Transformer
14. Interactive medical image segmentation with self-adaptive confidence calibration
Authors: Chuyun SHEN, Wenhao LI, Qisen XU, Bin HU, Bo JIN, Haibin CAI, Fengping ZHU, Yuxin LI, Xiangfeng WANG. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, Issue 9, pp. 1332-1348 (17 pages).
Interactive medical image segmentation based on human-in-the-loop machine learning is a novel paradigm that draws on human expert knowledge to assist medical image segmentation. However, existing methods often fall into what we call interactive misunderstanding, the essence of which is the dilemma in trading off short- and long-term interaction information. To better use the interaction information at various timescales, we propose an interactive segmentation framework, called interactive MEdical image segmentation with self-adaptive Confidence CAlibration (MECCA), which combines action-based confidence learning and multi-agent reinforcement learning. A novel confidence network is learned by predicting the alignment level of the action with short-term interaction information. A confidence-based reward-shaping mechanism is then proposed to explicitly incorporate confidence into the policy gradient calculation, thus directly correcting the model's interactive misunderstanding. MECCA also enables user-friendly interactions by reducing the interaction intensity and difficulty via label generation and interaction guidance, respectively. Numerical experiments on different segmentation tasks show that MECCA can significantly improve short- and long-term interaction information utilization efficiency with remarkably fewer labeled samples. The demo video is available at https://bit.ly/mecca-demo-video.
Keywords: medical image segmentation; interactive segmentation; multi-agent reinforcement learning; confidence learning; semi-supervised learning
15. A network lightweighting method for difficult segmentation of 3D medical images
Authors: KANG Li, 龚智鑫, 黄建军, ZHOU Ziqi. Chinese Journal of Stereology and Image Analysis, 2023, Issue 4, pp. 390-400 (11 pages).
Currently, deep learning is widely used in medical image segmentation and has achieved good results. However, 3D medical image segmentation tasks with diverse lesion characteristics, blurred edges, and unstable positions require complex networks with a large number of parameters. This is computationally expensive and results in high equipment requirements, making it hard to deploy the network in hospitals. In this work, we propose a method for network lightweighting and apply it to a 3D CNN-based network, experimenting on a COVID-19 lesion segmentation dataset. Specifically, we use three cascaded one-dimensional convolutions to replace a 3D convolution, and integrate instance normalization with the previous layer of one-dimensional convolutions to accelerate network inference. In addition, we simplify test-time augmentation and deep supervision of the network. Experiments show that the lightweight network can reduce the prediction time for each sample and the memory usage by 50%, and reduce the number of parameters by 60% compared with the original network. The training time for one epoch is also reduced by 50%, with the segmentation accuracy dropping within an acceptable range.
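The core lightweighting step described above, replacing one 3D convolution with three cascaded one-dimensional convolutions followed by instance normalization, can be sketched as follows; channel counts and the activation choice are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Factorized3DConv(nn.Module):
    """Minimal sketch of replacing one k x k x k 3D convolution with three
    cascaded one-dimensional convolutions (along D, H, W), which cuts the
    parameter count roughly from C*C*k^3 to 3*C*C*k."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        self.conv_d = nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1), padding=(p, 0, 0))
        self.conv_h = nn.Conv3d(out_ch, out_ch, kernel_size=(1, k, 1), padding=(0, p, 0))
        self.conv_w = nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, k), padding=(0, 0, p))
        self.norm = nn.InstanceNorm3d(out_ch)
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv_w(self.conv_h(self.conv_d(x)))
        return self.act(self.norm(x))

full = nn.Conv3d(32, 32, kernel_size=3, padding=1)
lite = Factorized3DConv(32, 32)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(full), "vs", count(lite))           # the factorized block is much smaller
vol = torch.randn(1, 32, 16, 64, 64)
print(lite(vol).shape)                          # torch.Size([1, 32, 16, 64, 64])
```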
Keywords: 3D medical image segmentation; 3D U-Net; lightweight network; COVID-19 lesion segmentation
16. Medical Image Segmentation using PCNN based on Multi-feature Grey Wolf Optimizer Bionic Algorithm (Citations: 7)
Authors: Xue Wang, Zhanshan Li, Heng Kang, Yongping Huang, Di Gai. Journal of Bionic Engineering (SCIE, EI, CSCD), 2021, Issue 3, pp. 711-720 (10 pages).
Medical image segmentation is a challenging task, especially in multimodality medical image analysis. In this paper, an improved pulse-coupled neural network based on a multiple hybrid features grey wolf optimizer (MFGWO-PCNN) is proposed for multimodality medical image segmentation. Specifically, a two-stage medical image segmentation method based on a bionic algorithm is presented, comprising image fusion and image segmentation. The image fusion stage fuses rich information from different modalities by utilizing a multimodality medical image fusion model based on maximum energy regions. In the image segmentation stage, an improved PCNN model based on MFGWO is proposed, which can adaptively set the parameters of the PCNN according to the features of the image. Two modalities, FLAIR and T1C brain MRIs, are applied to verify the effectiveness of the proposed MFGWO-PCNN algorithm. The experimental results demonstrate that the proposed method outperforms the other seven algorithms in subjective vision and objective evaluation indicators.
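A simplified pulse-coupled neural network iteration, the model whose parameters the optimizer tunes in this paper, might look like the following sketch; the linking kernel and decay parameters are illustrative assumptions rather than the MFGWO-selected values.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(image: np.ndarray, iters: int = 20, beta: float = 0.2,
                    v_theta: float = 20.0, a_theta: float = 0.3) -> np.ndarray:
    """Minimal sketch of a simplified pulse-coupled neural network (PCNN):
    each neuron fires when its internal activity exceeds a decaying threshold,
    and firing neighbours pull similar pixels into firing at the same step.
    Pixels sharing a firing time form a segment."""
    s = image.astype(np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)      # normalised stimulus
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    y = np.zeros_like(s)                                # pulse output
    theta = np.ones_like(s)                             # dynamic threshold
    fire_time = np.full(s.shape, -1, dtype=int)         # -1 = never fired
    for t in range(iters):
        link = convolve(y, kernel, mode="constant")     # linking input from neighbours
        u = s * (1.0 + beta * link)                     # internal activity
        y = (u > theta).astype(np.float64)
        theta = theta * np.exp(-a_theta) + v_theta * y  # big jump where a neuron fired
        fire_time[(y > 0) & (fire_time < 0)] = t
    return fire_time

img = np.outer(np.linspace(0, 1, 64), np.ones(64))      # toy intensity-ramp image
print(np.unique(pcnn_firing_map(img)))                  # a handful of firing times
```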
Keywords: grey wolf optimizer; pulse coupled neural network; bionic algorithm; medical image segmentation
17. Intelligent Beetle Antenna Search with Deep Transfer Learning Enabled Medical Image Classification Model
Authors: Mohamed Ibrahim Waly. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 3159-3174 (16 pages).
Recently, computer-assisted diagnosis (CAD) model creation has become more dependent on medical image classification. It is often used to identify several conditions, including brain disorders, diabetic retinopathy, and skin cancer. Most traditional CAD methods relied on textures, colours, and shapes. Because many models are issue-oriented, they need a more substantial capacity to generalize and cannot capture high-level problem-domain notions. Recent deep learning (DL) models have been published, providing a practical way to develop models specifically for classifying input medical images. This paper offers an intelligent beetle antenna search with deep transfer learning (IBAS-DTL) method for classifying medical images. The IBAS-DTL model aims to recognize and classify medical images into various groups. In order to segment the medical images, the IBAS-DTL model first develops an entropy-based weighting and first-order cumulative moment (EWFCM) approach. Additionally, the DenseNet-121 technique is used as a feature extraction module. A BAS with an extreme learning machine (ELM) model is used to classify the medical images. A wide variety of tests were carried out using a benchmark medical imaging dataset to demonstrate the IBAS-DTL model's noteworthy performance. The results obtained indicated the superiority of the IBAS-DTL model over existing techniques.
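The classification stage described above pairs deep features with an extreme learning machine; a minimal ELM sketch operating on pre-extracted feature vectors could look like this (hidden size, regularization, and the toy data are assumptions).

```python
import numpy as np

class ExtremeLearningMachine:
    """Minimal sketch of an extreme learning machine (ELM) classifier:
    random hidden-layer weights are fixed and only the output weights are
    solved in closed form by regularised least squares. Feature vectors
    (e.g. DenseNet-121 embeddings) are assumed to be provided."""
    def __init__(self, n_hidden: int = 256, reg: float = 1e-3, seed: int = 0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def _hidden(self, x):
        return np.tanh(x @ self.w + self.b)              # random nonlinear projection

    def fit(self, x, y):
        n_classes = int(y.max()) + 1
        self.w = self.rng.normal(size=(x.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        h = self._hidden(x)
        t = np.eye(n_classes)[y]                          # one-hot targets
        # ridge-regularised normal equations for the output weights
        self.beta = np.linalg.solve(h.T @ h + self.reg * np.eye(self.n_hidden), h.T @ t)
        return self

    def predict(self, x):
        return np.argmax(self._hidden(x) @ self.beta, axis=1)

# toy example with random "deep features" for two classes
rng = np.random.default_rng(1)
x = np.vstack([rng.normal(0, 1, (50, 128)), rng.normal(2, 1, (50, 128))])
y = np.array([0] * 50 + [1] * 50)
clf = ExtremeLearningMachine().fit(x, y)
print((clf.predict(x) == y).mean())                      # training accuracy, close to 1.0
```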
Keywords: medical image segmentation; image classification; decision making; computer aided diagnosis; deep learning
18. A Multiscale Approach to Automatic Medical Image Segmentation Using Self-Organizing Map (Citations: 1)
Authors: 马峰, 夏绍玮. Journal of Computer Science & Technology (SCIE, EI, CSCD), 1998, Issue 5, pp. 402-409 (8 pages).
In this paper, a new medical image classification scheme is proposed using a self-organizing map (SOM) combined with a multiscale technique. It addresses the problem of handling edge pixels in traditional multiscale SOM classifiers. First, to solve the difficulty of manually selecting edge pixels, a multiscale edge detection algorithm based on the wavelet transform is proposed. The detected edge pixels are then added to the training set as a new class, and a multiscale SOM classifier is trained on this set. In this new scheme, the SOM classifier can perform both classification of the entire image and edge detection simultaneously. Moreover, the misclassification of the traditional multiscale SOM classifier in regions near edges is greatly reduced, and correct classification is improved at the same time.
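A minimal self-organizing map training loop on pixel features, illustrating the kind of classifier underlying this scheme; the 1-D map topology, learning-rate schedule, and toy data are assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data: np.ndarray, n_units: int = 16, epochs: int = 20,
              lr0: float = 0.5, sigma0: float = 4.0, seed: int = 0) -> np.ndarray:
    """Minimal sketch of a 1-D self-organizing map (SOM) trained on pixel
    feature vectors; each pixel is later classified by its best-matching unit."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    grid = np.arange(n_units)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)     # pull neighbourhood towards x
    return weights

# toy example: 1-D intensity features drawn from three tissue-like clusters
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(m, 0.03, 300) for m in (0.2, 0.5, 0.8)])[:, None]
som = train_som(pixels)
labels = np.argmin(np.abs(pixels - som[:, 0]), axis=1)     # best-matching unit per pixel
print(som[:, 0].round(2))                                  # unit prototypes span the clusters
print(np.unique(labels).size, "map units used")
```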
Keywords: medical image segmentation; multiscale self-organizing map; multiscale edge detection algorithm; wavelet transform
19. Guided-YNet: Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network
Authors: Tao Zhou, Yunfeng Pan, Huiling Lu, Pei Dang, Yujie Guo, Yaxing Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4813-4832 (20 pages).
Multimodal lung tumor medical images, such as Positron Emission Computed Tomography (PET), Computed Tomography (CT), and PET-CT, can provide anatomical and functional information for the same lesion. How to utilize the lesion's anatomical and functional information effectively and improve network segmentation performance are key questions. To solve this problem, the Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (Guide-YNet) is proposed in this paper. Firstly, a double-encoder single-decoder U-Net is used as the backbone in this model, a single-encoder single-decoder U-Net is used to generate a saliency-guided feature from the PET image and transmit it into the skip connections of the backbone, and the high sensitivity of PET images to tumors is used to guide the network to accurately locate lesions. Secondly, a Cross-Scale Feature Enhancement Module (CSFEM) is designed to extract multi-scale fusion features after downsampling. Thirdly, a Cross-Layer Interactive Feature Enhancement Module (CIFEM) is designed in the encoder to enhance spatial position information and semantic information. Finally, a Cross-Dimension Cross-Layer Feature Enhancement Module (CCFEM) is proposed in the decoder, which effectively extracts multimodal image features through global attention and multi-dimension local attention. The proposed method is verified on lung multimodal medical image datasets, and the results show that the Mean Intersection over Union (MIoU), Accuracy (Acc), Dice Similarity Coefficient (Dice), Volumetric Overlap Error (Voe), and Relative Volume Difference (Rvd) of the proposed method on lung lesion segmentation are 87.27%, 93.08%, 97.77%, 95.92%, 89.28%, and 88.68%, respectively. It is of great significance for computer-aided diagnosis.
Keywords: medical image segmentation; U-Net; saliency feature guidance; cross-modal feature enhancement; cross-dimension feature enhancement
20. Artificial intelligence-based medical image segmentation for 3D printing and naked eye 3D visualization
Authors: Guang Jia, Xunan Huang, Sen Tao, Xianghuai Zhang, Yue Zhao, Hongcai Wang, Jie He, Jiaxue Hao, Bo Liu, Jiejing Zhou, Tanping Li, Xiaoling Zhang, Jinglong Gao. Intelligent Medicine, 2022, Issue 1, pp. 48-53 (6 pages).
Image segmentation for 3D printing and 3D visualization has become an essential component in many fields of medical research, teaching, and clinical practice. Medical image segmentation requires sophisticated computerized quantification and visualization tools. Recently, with the development of artificial intelligence (AI) technology, tumors and organs can be quickly and accurately detected and automatically contoured from medical images. This paper introduces a platform-independent, multi-modality image registration, segmentation, and 3D visualization program, named Artificial Intelligence-based Medical Image Segmentation for 3D Printing and Naked Eye 3D Visualization (AIMIS3D). The YOLOv3 algorithm was used to recognize the prostate from T2-weighted MRI images with proper training. Prostate cancer and bladder cancer were segmented from MRI images based on U-Net. CT images of osteosarcoma were loaded into the platform for segmentation of the lumbar spine, osteosarcoma, vessels, and local nerves for 3D printing. Breast displacement during each radiation therapy session was quantitatively evaluated by automatically identifying the position of the 3D-printed plastic breast bra. Brain vessels were segmented from multimodality MRI images using model-based transfer learning for 3D printing and naked-eye 3D visualization in the AIMIS3D platform.
Keywords: medical image segmentation; artificial intelligence; tumor segmentation; 3D printing; voice recognition; gesture recognition