Journal Articles
185 articles found
1. Investigation of feature contribution to shield tunneling-induced settlement using Shapley additive explanations method (Cited by: 9)
Authors: K.K. Pabodha M. Kannangara, Wanhuan Zhou, Zhi Ding, Zhehao Hong. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2022, No. 4, pp. 1052-1063 (12 pages)
Accurate prediction of shield tunneling-induced settlement is a complex problem that requires consideration of many influential parameters. Recent studies reveal that machine learning (ML) algorithms can predict the settlement caused by tunneling. However, well-performing ML models are usually less interpretable. Irrelevant input features decrease the performance and interpretability of an ML model. Nonetheless, feature selection, a critical step in the ML pipeline, is usually ignored in most studies that focused on predicting tunneling-induced settlement. This study applies four techniques, i.e. the Pearson correlation method, sequential forward selection (SFS), sequential backward selection (SBS) and the Boruta algorithm, to investigate the effect of feature selection on the model's performance when predicting the tunneling-induced maximum surface settlement (S_max). The data set used in this study was compiled from two metro tunnel projects excavated in Hangzhou, China using earth pressure balance (EPB) shields and consists of 14 input features and a single output (i.e. S_max). The ML model that is trained on features selected from the Boruta algorithm demonstrates the best performance in both the training and testing phases. The relevant features chosen from the Boruta algorithm further indicate that tunneling-induced settlement is affected by parameters related to tunnel geometry, geological conditions and shield operation. The recently proposed Shapley additive explanations (SHAP) method explores how the input features contribute to the output of a complex ML model. It is observed that larger settlements are induced during shield tunneling in silty clay. Moreover, the SHAP analysis reveals that low magnitudes of face pressure at the top of the shield increase the model's output.
Keywords: feature selection; shield operational parameters; Pearson correlation method; Boruta algorithm; Shapley additive explanations (SHAP) analysis
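A minimal sketch of the kind of workflow this abstract describes: Boruta-based feature selection followed by SHAP attribution on a tree-based regressor. It is not the paper's actual model or data; the synthetic features stand in for the 14 tunnel, geology and shield-operation inputs, and the `boruta` and `shap` Python packages are assumed to be available.

```python
import numpy as np
import shap
from boruta import BorutaPy
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Stand-in for the 14 candidate features (tunnel geometry, geology, shield operation) and S_max
X, y = make_regression(n_samples=300, n_features=14, n_informative=6, noise=1.0, random_state=0)

# 1) Boruta keeps only features that beat their shadow (shuffled) copies
rf = RandomForestRegressor(n_estimators=300, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)
X_sel = X[:, selector.support_]

# 2) Retrain on the selected features and attribute predictions with SHAP
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_sel, y)
shap_values = shap.TreeExplainer(model).shap_values(X_sel)
shap.summary_plot(shap_values, X_sel)   # global view of feature contributions to the settlement output
```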
2. CONSORT 2010 checklist of information to include when reporting a randomised trial and further explanations
Neural Regeneration Research (SCIE, CAS, CSCD), 2011, No. 28, pp. 2237-2240 (4 pages)
Keywords: WHEN; CONSORT 2010 checklist of information to include when reporting a randomised trial and further explanations; 2010
3. Review on "Gesture and Speech in the Vocabulary Explanations of One ESL Teacher: A Microanalytic Inquiry" by Anne Lazaraton
Author: ZHANG Zi-hong. Sino-US English Teaching, 2011, No. 12, pp. 747-753 (7 pages)
This paper takes a microanalytic perspective on the speech and gestures used by one teacher of ESL (English as a Second Language) in an intensive English program classroom. Videotaped excerpts from her intermediate-level grammar course were transcribed to represent the speech, gesture and other non-verbal behavior that accompanied unplanned explanations of vocabulary that arose during three focus-on-form lessons. The gesture classification system of McNeill (1992), which delineates different types of hand movements (iconics, metaphorics, deictics, beats), was used to understand the role the gestures played in these explanations. Results suggest that gestures and other non-verbal behavior are forms of input to classroom second language learners that must be considered a salient factor in classroom-based SLA (Second Language Acquisition) research.
Keywords: speech and gestures; vocabulary explanations; ESL (English as a Second Language); Anne Lazaraton
4. Explaining How: The Intelligibility of Mechanical Explanations in Boyle
Author: Jan-Erik Jones. Journal of Philosophy Study, 2012, No. 5, pp. 337-346 (10 pages)
In this paper I examine the following claims by William Eaton in his monograph Boyle on Fire: (i) that Boyle's religious convictions led him to believe that the world was not completely explicable, and this shows that there is a shortcoming in the power of mechanical explanations; (ii) that mechanical explanations offer only sufficient, not necessary explanations, and this too was taken by Boyle to be a limit in the explanatory power of mechanical explanations; (iii) that the mature Boyle thought that there could be more intelligible explanatory models than mechanism; and (iv) that what Boyle says at any point in his career is incompatible with the statement of Maria Boas-Hall, i.e., that the mechanical hypothesis can explicate all natural phenomena. Since all four of these claims are part of Eaton's developmental argument, my rejection of them will not only show how the particular developmental story Eaton diagnoses is inaccurate, but will also explain what limits there actually are in Boyle's account of the intelligibility of mechanical explanations. My account will also show why important philosophers like Locke and Leibniz should be interested in Boyle's philosophical work.
Keywords: Robert Boyle; William Eaton; Maria Boas-Hall; mechanism; explanation; intelligibility
5. Transfer learning-based encoder-decoder model with visual explanations for infrastructure crack segmentation: New open database and comprehensive evaluation
Authors: Fangyu Liu, Wenqi Ding, Yafei Qiao, Linbing Wang. Underground Space (SCIE, EI, CSCD), 2024, No. 4, pp. 60-81 (22 pages)
Contemporary demands necessitate the swift and accurate detection of cracks in critical infrastructures, including tunnels and pavements. This study proposed a transfer learning-based encoder-decoder method with visual explanations for infrastructure crack segmentation. Firstly, a vast dataset containing 7089 images was developed, comprising diverse conditions: simple and complex crack patterns as well as clean and rough backgrounds. Secondly, leveraging transfer learning, an encoder-decoder model with visual explanations was formulated, utilizing varied pre-trained convolutional neural networks (CNNs) as the encoder. Visual explanations were achieved through gradient-weighted class activation mapping (Grad-CAM) to interpret the CNN segmentation model. Thirdly, accuracy, complexity (computation and model), and memory usage assessed CNN feasibility in practical engineering. Model performance was gauged via prediction and visual explanation. The investigation encompassed hyperparameters, data augmentation, deep learning from scratch vs. transfer learning, segmentation model architectures, segmentation model encoders, and encoder pre-training strategies. Results underscored transfer learning's potency in enhancing CNN accuracy for crack segmentation, surpassing deep learning from scratch. Notably, encoder classification accuracy bore no significant correlation with CNN segmentation accuracy. Among all tested models, UNet-EfficientNet_B7 excelled in crack segmentation, harmonizing accuracy, complexity, memory usage, prediction, and visual explanation.
Keywords: crack segmentation; transfer learning; visual explanation; infrastructure; database
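The interpretability step named in this abstract, Grad-CAM, can be sketched in a few lines of Keras code. The function below is a generic Grad-CAM heatmap for a CNN with a classification-style output, not the paper's encoder-decoder segmentation network; the convolutional layer name is a placeholder the caller must supply.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name):
    """Generic Grad-CAM heatmap for one image of shape (H, W, C), scaled to [0, 1]."""
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)              # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1) # weighted sum of feature maps
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```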
6. Early Detection of Colletotrichum Kahawae Disease in Coffee Cherry Based on Computer Vision Techniques
Authors: Raveena Selvanarayanan, Surendran Rajendran, Youseef Alotaibi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 759-782 (24 pages)
Colletotrichum kahawae (coffee berry disease, CBD) spreads through spores that can be carried by wind, rain, and insects, affecting coffee plantations and causing 80% yield losses and poor-quality coffee beans. The deadly disease is hard to control because wind, rain, and insects carry spores. Colombian researchers utilized a deep learning system to identify CBD in coffee cherries at three growth stages and classify photographs of infected and uninfected cherries with 93% accuracy using a random forest method. If the dataset is too small and noisy, the algorithm may not learn data patterns and generate accurate predictions. To overcome the existing challenge, early detection of Colletotrichum kahawae disease in coffee cherries requires automated processes, prompt recognition, and accurate classification. The proposed methodology selects CBD image datasets through four different stages for training and testing. XGBoost is used to train a model on datasets of coffee berries, with each image labeled as healthy or diseased. Once the model is trained, the SHAP algorithm is used to figure out which features were essential for making predictions with the proposed model. Some of these characteristics were the cherry's colour, whether it had spots or other damage, and how big the lesions were. Visualization is important for classification, showing how the colour of the berry is correlated with the presence of disease. To evaluate the model's performance and mitigate overfitting, a 10-fold cross-validation approach is employed. This involves partitioning the dataset into ten subsets, training the model on each subset, and evaluating its performance. In comparison to other contemporary methodologies, the model put forth achieved an accuracy of 98.56%.
Keywords: computer vision; coffee berry disease; Colletotrichum kahawae; XGBoost; Shapley additive explanations
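A compact sketch of the training-and-explanation loop this abstract outlines: an XGBoost classifier on image-derived descriptors, 10-fold cross-validation, and SHAP attribution. The synthetic feature matrix stands in for the paper's per-image colour, spot and lesion descriptors, which are not reproduced here.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in for per-image descriptors (colour, spot/lesion statistics); y: 0 = healthy, 1 = diseased
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)

clf = xgb.XGBClassifier(n_estimators=400, max_depth=4, learning_rate=0.1, eval_metric="logloss")

# 10-fold cross-validation to estimate performance and guard against overfitting
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
print("CV accuracy: %.4f" % cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean())

# Fit on all data, then rank descriptors by mean absolute SHAP value
clf.fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)
print(np.abs(shap_values).mean(axis=0))
```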
7. Landslide susceptibility mapping (LSM) based on different boosting and hyperparameter optimization algorithms: A case of Wanzhou District, China
Authors: Deliang Sun, Jing Wang, Haijia Wen, YueKai Ding, Changlin Mi. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 8, pp. 3221-3232 (12 pages)
Boosting algorithms have been widely utilized in the development of landslide susceptibility mapping (LSM) studies. However, these algorithms possess distinct computational strategies and hyperparameters, making it challenging to propose an ideal LSM model. To investigate the impact of different boosting algorithms and hyperparameter optimization algorithms on LSM, this study constructed a geospatial database comprising 12 conditioning factors, such as elevation, stratum, and annual average rainfall. The XGBoost (XGB), LightGBM (LGBM), and CatBoost (CB) algorithms were employed to construct the LSM model. Furthermore, the Bayesian optimization (BO), particle swarm optimization (PSO), and Hyperband optimization (HO) algorithms were applied to optimize the LSM model. The boosting algorithms exhibited varying performances, with CB demonstrating the highest precision, followed by LGBM, and XGB showing poorer precision. Additionally, the hyperparameter optimization algorithms displayed different performances, with HO outperforming PSO and BO showing poorer performance. The HO-CB model achieved the highest precision, boasting an accuracy of 0.764, an F1-score of 0.777, an area under the curve (AUC) value of 0.837 for the training set, and an AUC value of 0.863 for the test set. The model was interpreted using SHapley Additive exPlanations (SHAP), revealing that slope, curvature, topographic wetness index (TWI), degree of relief, and elevation significantly influenced landslides in the study area. This study offers a scientific reference for LSM and disaster prevention research. It examines the utilization of various boosting algorithms and hyperparameter optimization algorithms in Wanzhou District and proposes the HO-CB-SHAP framework as an effective approach to accurately forecast landslide disasters and interpret LSM models. However, limitations exist concerning the generalizability of the model and the data processing, which require further exploration in subsequent studies.
Keywords: landslide susceptibility; hyperparameter optimization; boosting algorithms; SHapley Additive exPlanations (SHAP)
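The model-comparison step described here can be sketched as a cross-validated benchmark of the three boosters on the conditioning factors. The Bayesian/PSO/Hyperband tuning is omitted for brevity, and the synthetic data stands in for the study's geospatial database; the snippet only shows how the candidate models and the AUC criterion would be wired together.

```python
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Stand-in for the 12 conditioning factors (elevation, stratum, rainfall, ...) and landslide labels
X, y = make_classification(n_samples=1500, n_features=12, n_informative=8, random_state=0)

models = {
    "XGB":  XGBClassifier(n_estimators=300, eval_metric="logloss"),
    "LGBM": LGBMClassifier(n_estimators=300),
    "CB":   CatBoostClassifier(iterations=300, verbose=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```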
8. Dynamic Forecasting of Traffic Event Duration in Istanbul: A Classification Approach with Real-Time Data Integration
Authors: Mesut Ulu, Yusuf Sait Türkan, Kenan Menguc, Ersin Namlı, Tarık Kucukdeniz. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2259-2281 (23 pages)
Today, urban traffic, growing populations, and dense transportation networks are contributing to an increase in traffic incidents. These incidents include traffic accidents, vehicle breakdowns, fires, and traffic disputes, resulting in long waiting times, high carbon emissions, and other undesirable situations. It is vital to estimate incident response times quickly and accurately after traffic incidents occur for the success of incident-related planning and response activities. This study presents a model for forecasting the duration of traffic events with high precision. The proposed model goes through a four-stage process using various features to predict the duration of four different traffic events and presents a feature reduction approach to enable real-time data collection and prediction. In the first stage, the dataset, consisting of 24,431 data points and 75 variables, is prepared by data collection, merging, missing data processing and data cleaning. In the second stage, models such as Decision Trees (DT), K-Nearest Neighbour (KNN), Random Forest (RF) and Support Vector Machines (SVM) are used, and hyperparameter optimisation is performed with GridSearchCV. In the third stage, feature selection and reduction are performed and real-time data are used. In the last stage, model performance with 14 variables is evaluated with metrics such as accuracy, precision, recall, F1-score, MCC, the confusion matrix and SHAP. The RF model outperforms other models with an accuracy of 98.5%. The study's prediction results demonstrate that the proposed dynamic prediction model can achieve a high level of success.
Keywords: traffic event duration forecasting; machine learning; feature reduction; Shapley additive explanations (SHAP)
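The second-stage tuning step mentioned in this abstract, hyperparameter optimisation with GridSearchCV, looks roughly like the following for the random forest candidate. The parameter grid, data layout and scoring choice are illustrative assumptions, not the study's actual search space.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for the prepared incident features (75 variables) and four duration classes
X, y = make_classification(n_samples=3000, n_features=75, n_informative=20,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

param_grid = {                       # illustrative search space, not the study's actual grid
    "n_estimators": [200, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid,
                      cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
print("held-out accuracy:", search.score(X_test, y_test))
```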
9. A Lightweight IoT Malware Detection and Family Classification Method
Authors: Changguang Wang, Ziqi Ma, Qingru Li, Dongmei Zhao, Fangwei Wang. Journal of Computer and Communications, 2024, No. 4, pp. 201-227 (27 pages)
A lightweight malware detection and family classification system for the Internet of Things (IoT) was designed to solve the difficulty of deploying defense models caused by the limited computing and storage resources of IoT devices. By training complex models with IoT software gray-scale images and utilizing the gradient-weighted class-activated mapping technique, the system can identify key codes that influence model decisions. This allows for the reconstruction of gray-scale images to train a lightweight model called LMDNet for malware detection. Additionally, the multi-teacher knowledge distillation method is employed to train KD-LMDNet, which focuses on classifying malware families. The results indicate that the model's identification speed surpasses that of traditional methods by 23.68%. Moreover, the accuracy achieved on the Malimg dataset for family classification is an impressive 99.07%. Furthermore, with a model size of only 0.45 M, it appears to be well-suited for the IoT environment. Thus, the presented approach can address the challenges associated with malware detection and family classification in IoT devices.
Keywords: IoT security; visual explanations; multi-teacher knowledge distillation; lightweight CNN
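The multi-teacher distillation idea in this abstract can be expressed as a loss that mixes the hard-label cross-entropy with a soft-target term averaged over several teacher networks. The sketch below is a generic TensorFlow formulation, not the paper's KD-LMDNet training code; the temperature and weighting values are illustrative assumptions.

```python
import tensorflow as tf

def multi_teacher_kd_loss(labels, student_logits, teacher_logits_list, T=4.0, alpha=0.7):
    """Hard-label cross-entropy plus a soft term against the averaged teacher distributions."""
    hard = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(labels, student_logits, from_logits=True))
    # Average the temperature-softened teacher outputs into one soft target distribution
    soft_targets = tf.reduce_mean(
        tf.stack([tf.nn.softmax(t / T, axis=-1) for t in teacher_logits_list]), axis=0)
    soft = tf.reduce_mean(
        tf.keras.losses.categorical_crossentropy(soft_targets, student_logits / T, from_logits=True))
    return alpha * (T ** 2) * soft + (1.0 - alpha) * hard
```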
10. Parallel Vision ■ Image Synthesis/Augmentation (Cited by: 1)
Authors: Wenwen Zhang, Wenbo Zheng, Qiang Li, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 3, pp. 782-784 (3 pages)
Dear Editor, Scene understanding is an essential task in computer vision. The ultimate objective of scene understanding is to instruct computers to understand and reason about scenes as humans do. Parallel vision is a research framework that unifies the explanation and perception of dynamic and complex scenes.
Keywords: instru; explanation; computer
11. Explainable machine learning model for predicting molten steel temperature in the LF refining process
Authors: Zicheng Xin, Jiangshan Zhang, Kaixiang Peng, Junguo Zhang, Chunhui Zhang, Jun Wu, Bo Zhang, Qing Liu. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, No. 12, pp. 2657-2669 (13 pages)
Accurate prediction of molten steel temperature in the ladle furnace (LF) refining process has an important influence on the quality of molten steel and the control of steelmaking cost. Extensive research on establishing models to predict molten steel temperature has been conducted. However, most researchers focus solely on improving the accuracy of the model, neglecting its explainability. The present study aims to develop a high-precision and explainable model with improved reliability and transparency. The eXtreme gradient boosting (XGBoost) and light gradient boosting machine (LGBM) were utilized, along with Bayesian optimization and grey wolf optimization (GWO), to establish the prediction model. Different performance evaluation metrics and graphical representations were applied to compare the optimal XGBoost and LGBM models obtained through varying hyperparameter optimization methods with the other models. The findings indicated that the GWO-LGBM model outperformed other methods in predicting molten steel temperature, with a high prediction accuracy of 89.35% within the error range of ±5°C. The model's learning/decision process was revealed, and the influence degree of different variables on the molten steel temperature was clarified using tree structure visualization and SHapley Additive exPlanations (SHAP) analysis. Consequently, the explainability of the optimal GWO-LGBM model was enhanced, providing reliable support for prediction results.
Keywords: ladle furnace refining; molten steel temperature; eXtreme gradient boosting; light gradient boosting machine; grey wolf optimization; SHapley Additive exPlanation
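The headline metric in this abstract, the share of predictions falling within ±5°C of the measured temperature, together with the SHAP step, reduces to a few lines once a gradient-boosting regressor is trained. The snippet below uses synthetic data in place of the LF process variables and skips the GWO tuning entirely; it is a sketch of the evaluation and attribution, not the paper's model.

```python
import numpy as np
import shap
from lightgbm import LGBMRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

def hit_ratio(y_true, y_pred, tol=5.0):
    """Fraction of predictions within +/- tol (degrees C) of the measured value."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= tol))

# Stand-in for the LF process variables and measured temperatures
X, y = make_regression(n_samples=2000, n_features=10, noise=3.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LGBMRegressor(n_estimators=500, learning_rate=0.05).fit(X_train, y_train)
print("hit ratio within +/-5 C:", hit_ratio(y_test, model.predict(X_test)))

# Rank process variables by mean |SHAP value|
shap_values = shap.TreeExplainer(model).shap_values(X_test)
print(np.abs(shap_values).mean(axis=0))
```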
12. What-If XAI Framework (WiXAI): From Counterfactuals towards Causal Understanding
Authors: Neelabh Kshetry, Mehmed Kantardzic. Journal of Computer and Communications, 2024, No. 6, pp. 169-198 (30 pages)
People learn causal relations since childhood using counterfactual reasoning. Counterfactual reasoning uses counterfactual examples which take the form of "what if this had happened differently". Counterfactual examples are also the basis of counterfactual explanation in explainable artificial intelligence (XAI). However, a framework that relies solely on optimization algorithms to find and present counterfactual samples cannot help users gain a deeper understanding of the system. Without a way to verify their understanding, the users can even be misled by such explanations. Such limitations can be overcome through an interactive and iterative framework that allows the users to explore their desired "what-if" scenarios. The purpose of our research is to develop such a framework. In this paper, we present our "what-if" XAI framework (WiXAI), which visualizes the artificial intelligence (AI) classification model from the perspective of the user's sample and guides their "what-if" exploration. We also formulated how to use the WiXAI framework to generate counterfactuals and understand the feature-feature and feature-output relations in depth for a local sample. These relations help move the users toward causal understanding.
Keywords: XAI; AI; WiXAI; causal understanding; counterfactuals; counterfactual explanation
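A user-driven "what-if" exploration of the kind described here boils down to re-querying the classifier while one feature of a local sample is varied. The helpers below are a minimal, framework-agnostic sketch for any scikit-learn-style classifier; they are not the WiXAI implementation, and the function names are invented for illustration.

```python
import numpy as np

def what_if_sweep(model, x, feature_idx, candidate_values):
    """Predicted class probabilities as one feature of sample x is swept over candidate values."""
    rows = np.tile(np.asarray(x, dtype=float), (len(candidate_values), 1))
    rows[:, feature_idx] = candidate_values
    return model.predict_proba(rows)

def first_counterfactual(model, x, feature_idx, candidate_values, target_class):
    """First swept value whose prediction flips to target_class (None if no flip occurs)."""
    rows = np.tile(np.asarray(x, dtype=float), (len(candidate_values), 1))
    rows[:, feature_idx] = candidate_values
    hits = np.where(model.predict(rows) == target_class)[0]
    return candidate_values[hits[0]] if len(hits) else None
```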
13. Explainable Artificial Intelligence-Based Model Drift Detection Applicable to Unsupervised Environments
Authors: Yongsoo Lee, Yeeun Lee, Eungyu Lee, Taejin Lee. Computers, Materials & Continua (SCIE, EI), 2023, No. 8, pp. 1701-1719 (19 pages)
Cybersecurity increasingly relies on machine learning (ML) models to respond to and detect attacks. However, the rapidly changing data environment makes model life-cycle management after deployment essential. Real-time detection of drift signals from various threats is fundamental for effectively managing deployed models. However, detecting drift in unsupervised environments can be challenging. This study introduces a novel approach leveraging Shapley additive explanations (SHAP), a widely recognized explainability technique in ML, to address drift detection in unsupervised settings. The proposed method incorporates a range of plots and statistical techniques to enhance drift detection reliability and introduces a drift suspicion metric that considers the explanatory aspects absent in the current approaches. To validate the effectiveness of the proposed approach in a real-world scenario, we applied it to an environment designed to detect domain generation algorithms (DGAs). The dataset was obtained from various types of DGAs provided by NetLab. Based on this dataset composition, we sought to validate the proposed SHAP-based approach through drift scenarios that occur when a previously deployed model detects new data types in an environment that detects real-world DGAs. The results revealed that more than 90% of the drift data exceeded the threshold, demonstrating the high reliability of the approach to detect drift in an unsupervised environment. The proposed method distinguishes itself from existing approaches by employing explainable artificial intelligence (XAI)-based detection, which is not limited by model or system environment constraints. In conclusion, this paper proposes a novel approach to detect drift in unsupervised ML settings for cybersecurity. The method employs SHAP-based XAI and a drift suspicion metric to improve drift detection reliability, and it is versatile and suitable for various real-time data analysis contexts beyond DGA detection environments. This study contributes to the ML community by addressing the critical issue of managing ML models in real-world cybersecurity settings; the approach can be applied in critical domains that require adaptation to continuous changes, and it is anticipated to emerge as a versatile drift detection technique and a new means of protecting essential systems and infrastructures from attacks.
Keywords: cybersecurity; machine learning (ML); model life-cycle management; drift detection; unsupervised environments; Shapley additive explanations (SHAP); explainability
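One simple way to realize the idea this abstract describes, watching for drift through the lens of SHAP values rather than raw inputs, is to compare the per-feature SHAP distributions of a reference window against new data and flag features whose distributions diverge. The sketch below uses a two-sample Kolmogorov-Smirnov statistic as the divergence measure; this is an illustrative stand-in, not the paper's drift suspicion metric, and the threshold shown is an assumption.

```python
import numpy as np
import shap
from scipy.stats import ks_2samp

def shap_drift_scores(model, X_ref, X_new):
    """Per-feature KS statistic between reference and new SHAP value distributions.

    Assumes a tree-based model whose TreeExplainer output is an (n_samples, n_features)
    array (e.g. a regressor, or a binary classifier explained on its margin output).
    """
    explainer = shap.TreeExplainer(model)
    s_ref, s_new = explainer.shap_values(X_ref), explainer.shap_values(X_new)
    return np.array([ks_2samp(s_ref[:, j], s_new[:, j]).statistic
                     for j in range(s_ref.shape[1])])

# Example usage (deployed_model, X_reference, X_incoming assumed to exist):
# scores = shap_drift_scores(deployed_model, X_reference, X_incoming)
# drifted_features = np.where(scores > 0.3)[0]   # 0.3 is an illustrative threshold
```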
14. Detecting Deepfake Images Using Deep Learning Techniques and Explainable AI Methods
Authors: Wahidul Hasan Abir, Faria Rahman Khanam, Kazi Nabiul Alam, Myriam Hadjouni, Hela Elmannai, Sami Bourouis, Rajesh Dey, Mohammad Monirujjaman Khan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 2151-2169 (19 pages)
Nowadays, deepfake is wreaking havoc on society. Deepfake content is created with the help of artificial intelligence and machine learning to replace one person's likeness with another person in pictures or recorded videos. Although visual media manipulations are not new, the introduction of deepfakes has marked a breakthrough in creating fake media and information. These manipulated pictures and videos will undoubtedly have an enormous societal impact. Deepfake uses the latest technology like Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) to construct automated methods for creating fake content that is becoming increasingly difficult to detect with the human eye. Therefore, automated solutions employed by DL can be an efficient approach for detecting deepfake. Though the "black-box" nature of the DL system allows for robust predictions, they cannot be completely trustworthy. Explainability is the first step toward achieving transparency, but the existing incapacity of DL to explain its own decisions to human users limits the efficacy of these systems. Explainable Artificial Intelligence (XAI) can solve this problem by interpreting the predictions of these systems. This work proposes to provide a comprehensive study of deepfake detection using the DL method and analyze the result of the most effective algorithm with Local Interpretable Model-Agnostic Explanations (LIME) to assure its validity and reliability. This study identifies real and deepfake images using different Convolutional Neural Network (CNN) models to get the best accuracy. It also explains which part of the image caused the model to make a specific classification using the LIME algorithm. To apply the CNN models, the dataset is taken from Kaggle, which includes 70k real images from the Flickr dataset collected by Nvidia and 70k fake faces generated by StyleGAN at a size of 256 px. For experimental results, Jupyter Notebook, TensorFlow, NumPy, and Pandas were used as software; InceptionResNetV2, DenseNet201, InceptionV3, and ResNet152V2 were used as CNN models. All these models' performances were good enough: InceptionV3 gained 99.68% accuracy, ResNet152V2 got an accuracy of 99.19%, and DenseNet201 performed with 99.81% accuracy. However, InceptionResNetV2 achieved the highest accuracy of 99.87%, which was verified later with the LIME algorithm for XAI, where the proposed method performed the best. The obtained results and dependability demonstrate its preference for detecting deepfake images effectively.
Keywords: deepfake; deep learning; explainable artificial intelligence (XAI); convolutional neural network (CNN); local interpretable model-agnostic explanations (LIME)
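The LIME step described in this abstract, highlighting the image regions that drove a real/fake decision, uses the `lime` package's image explainer. The sketch below assumes a trained Keras classifier `model` whose `predict` takes a batch of RGB images and a single preprocessed `image` array scaled to [0, 1]; both are placeholders, as are the parameter values.

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

# model: trained CNN returning class probabilities; image: one RGB face image in [0, 1] (assumed)
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"),      # the instance to explain
    model.predict,               # batch prediction function
    top_labels=1, hide_color=0, num_samples=1000)

# Keep the superpixels that most supported the predicted label
overlay, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
highlighted = mark_boundaries(overlay, mask)   # image with the influential regions outlined
```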
15. Explainable prediction of loan default based on machine learning models
Authors: Xu Zhu, Qingyong Chu, Xinchang Song, Ping Hu, Lu Peng. Data Science and Management, 2023, No. 3, pp. 123-133 (11 pages)
Owing to the convenience of online loans, an increasing number of people are borrowing money on online platforms. With the emergence of machine learning technology, predicting loan defaults has become a popular topic. However, machine learning models have a black-box problem that cannot be disregarded. To make the prediction model rules more understandable and thereby increase the user's faith in the model, an explanatory model must be used. Logistic regression, decision tree, XGBoost, and LightGBM models are employed to predict a loan default. The prediction results show that LightGBM and XGBoost outperform logistic regression and decision tree models in terms of predictive ability. The area under the curve for LightGBM is 0.7213. The accuracies of LightGBM and XGBoost exceed 0.8, and their precisions exceed 0.55. Simultaneously, we employed the local interpretable model-agnostic explanations approach to undertake an explainable analysis of the prediction findings. The results show that factors such as the loan term, loan grade, credit rating, and loan amount affect the predicted outcomes.
Keywords: explainable prediction; machine learning; loan default; local interpretable model-agnostic explanations
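For tabular credit data like this, the LIME analysis the abstract mentions is usually run with `LimeTabularExplainer`. The snippet assumes a fitted LightGBM classifier `clf` and pandas feature matrices `X_train`/`X_test`; those objects, the class names, and the example output are placeholders, not the study's variables or results.

```python
from lime.lime_tabular import LimeTabularExplainer

# X_train, X_test: DataFrames of loan features; clf: fitted LGBMClassifier (all assumed to exist)
explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    class_names=["repaid", "default"],
    mode="classification")

# Explain one applicant: which features pushed the prediction toward "default"?
exp = explainer.explain_instance(X_test.values[0], clf.predict_proba, num_features=8)
print(exp.as_list())   # e.g. [("loan_term > 36", 0.12), ...] -- illustrative output only
```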
16. Visualization for Explanation of Deep Learning-Based Defect Detection Model Using Class Activation Map (Cited by: 1)
Authors: Hyunkyu Shin, Yonghan Ahn, Mihwa Song, Heungbae Gil, Jungsik Choi, Sanghyo Lee. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 4753-4766 (14 pages)
Recently, convolutional neural network (CNN)-based visual inspection has been developed to detect defects on building surfaces automatically. The CNN model demonstrates remarkable accuracy in image data analysis; however, the predicted results have uncertainty in providing accurate information to users because of the "black box" problem in the deep learning model. Therefore, this study proposes a visual explanation method to overcome the uncertainty limitation of CNN-based defect identification. The visually representative gradient-weighted class activation mapping (Grad-CAM) method is adopted to provide visually explainable information. A visualizing evaluation index is proposed to quantitatively analyze visual representations; this index reflects a rough estimate of the concordance rate between the visualized heat map and intended defects. In addition, an ablation study, adopting three-branch combinations with the VGG16, is implemented to identify performance variations by visualizing predicted results. Experiments reveal that the proposed model, combined with hybrid pooling, batch normalization, and multi-attention modules, achieves the best performance with an accuracy of 97.77%, corresponding to an improvement of 2.49% compared with the baseline model. Consequently, this study demonstrates that reliable results from an automatic defect classification model can be provided to an inspector through the visual representation of the predicted results using CNN models.
Keywords: defect detection; visualization; class activation map; deep learning; explanation; visualizing evaluation index
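The "concordance rate between the visualized heat map and intended defects" can be approximated with a simple overlap score between a thresholded class-activation map and a ground-truth defect mask. The function below is an IoU-style proxy written purely for illustration; the paper's actual visualizing evaluation index may be defined differently, and the threshold is an assumption.

```python
import numpy as np

def heatmap_defect_concordance(heatmap, defect_mask, threshold=0.5):
    """Overlap (intersection over union) between the activated heatmap region and the defect mask.

    heatmap:     float array in [0, 1], e.g. a Grad-CAM output resized to the image size
    defect_mask: boolean array of the same shape marking annotated defect pixels
    """
    activated = heatmap >= threshold
    intersection = np.logical_and(activated, defect_mask).sum()
    union = np.logical_or(activated, defect_mask).sum()
    return float(intersection) / union if union else 0.0
```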
17. Bike-sharing demand forecasting considering interaction effects of the built environment
Authors: Wei Jin, An Shi, Zhang Yantang. Science Technology and Engineering (科学技术与工程, Peking University Core Journal), 2023, No. 26, pp. 11424-11430 (7 pages)
The growth of bike sharing supports energy-saving, low-emission, green transportation. The built environment is an important factor influencing bike-sharing travel demand, yet few studies have considered the interactions among its components. To analyze these interaction effects accurately and thereby predict bike-sharing demand precisely, this study used multi-source data from Shenzhen, including bike-sharing trip data, point-of-interest (POI) data, road network data, and bus route data. A gradient boosting decision tree (GBDT) model was adopted to predict bike-sharing demand, and its results were compared with those of a back-propagation (BP) neural network. Finally, the SHAP (Shapley additive explanation) method was used to interpret how the factors in the GBDT model affect bike-sharing demand and to analyze each factor and its interactions. The results show that the GBDT model achieved a mean absolute error of 0.683 and a root mean square error of 0.728, outperforming the BP neural network in prediction accuracy. The SHAP analysis revealed that transportation-related factors such as bicycle lane density and the number of bus stops have a pronounced effect on bike-sharing demand, that land-use mix does not act in a simple linear way, and that complex interactions exist among different POI types. The GBDT model combined with the SHAP method can therefore be used for bike-sharing demand prediction and factor analysis, providing suggestions for improving bike-sharing development.
Keywords: bike sharing; demand forecasting; POI data; gradient boosting decision tree; SHAP (Shapley additive explanation)
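The interaction analysis described in this abstract maps naturally onto SHAP's interaction values for a tree model. The sketch below trains a GBDT-style regressor (XGBoost here, as a stand-in for the study's GBDT) on synthetic data and inspects pairwise interactions; the built-environment column names are invented placeholders, not the study's variables.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

# Synthetic stand-in for zone-level built-environment features and bike-sharing trip demand
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "bike_lane_density": rng.random(500),
    "bus_stop_count": rng.integers(0, 30, 500).astype(float),
    "land_use_mix": rng.random(500),
})
y = 3 * X["bike_lane_density"] + 0.1 * X["bus_stop_count"] * X["land_use_mix"] + rng.normal(0, 0.1, 500)

model = XGBRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                   # main effect of each feature
interactions = explainer.shap_interaction_values(X)      # (n_samples, n_features, n_features)
pair_strength = np.abs(interactions).mean(axis=0)        # average pairwise interaction strength

# How one feature's contribution varies with another (column names are illustrative)
shap.dependence_plot("bike_lane_density", shap_values, X, interaction_index="bus_stop_count")
```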
18. Improving Ultrasonic Testing by Using Machine Learning Framework Based on Model Interpretation Strategy
Authors: Siqi Shi, Shijie Jin, Donghui Zhang, Jingyu Liao, Dongxin Fu, Li Lin. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2023, No. 5, pp. 174-186 (13 pages)
Ultrasonic testing (UT) is increasingly combined with machine learning (ML) techniques for intelligently identifying damage. Extracting significant features from UT data is essential for efficient defect characterization. Moreover, the hidden physics behind ML is unexplained, reducing the generalization capability and versatility of ML methods in UT. In this paper, a generally applicable ML framework based on the model interpretation strategy is proposed to improve the detection accuracy and computational efficiency of UT. Firstly, multi-domain features are extracted from the UT signals with signal processing techniques to construct an initial feature space. Subsequently, a feature selection method based on a model interpretable strategy (FS-MIS) is innovatively developed by integrating Shapley additive explanation (SHAP), the filter method, the embedded method and the wrapper method. The most effective ML model and the optimal feature subset with better correlation to the target defects are determined self-adaptively. The proposed framework is validated by identifying and locating side-drilled holes (SDHs) with 0.5λ central distance and different depths. An ultrasonic array probe is adopted to acquire FMC datasets from several aluminum alloy specimens containing two SDHs by experiments. The optimal feature subset selected by FS-MIS is set as the input of the chosen ML model to train and predict the times of arrival (ToAs) of the scattered waves emitted by adjacent SDHs. The experimental results demonstrate that the relative errors of the predicted ToAs are all below 3.67% with an average error of 0.25%, significantly improving the time resolution of UT signals. On this basis, the predicted ToAs are assigned to the corresponding original signals for decoupling overlapped pulse-echoes and reconstructing high-resolution FMC datasets. The imaging resolution is enhanced to 0.5λ by implementing the total focusing method (TFM). The relative errors of hole depths and central distance are no more than 0.51% and 3.57%, respectively. Finally, the superior performance of the proposed FS-MIS is validated by comparing it with the initial feature space and conventional dimensionality reduction techniques.
Keywords: ultrasonic testing; machine learning; feature extraction; feature selection; Shapley additive explanation
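The core idea behind FS-MIS, combining a filter-style relevance measure with SHAP-based model attribution when ranking candidate features, can be illustrated with the toy ranking function below. The rank-averaging rule is a deliberate simplification for illustration; it is not the paper's actual selection algorithm, and the function name is invented.

```python
import numpy as np
import shap
from sklearn.feature_selection import mutual_info_regression

def combined_feature_ranking(model, X, y):
    """Rank features by averaging a filter score (mutual information) with a SHAP score.

    model: a fitted tree-based regressor (e.g. predicting times of arrival from UT features)
    X, y:  numpy arrays of candidate multi-domain features and the regression target
    """
    mi = mutual_info_regression(X, y)                                   # filter view
    sv = np.abs(shap.TreeExplainer(model).shap_values(X)).mean(axis=0)  # model-interpretation view
    rank_mi = np.argsort(np.argsort(-mi))     # 0 = most relevant by mutual information
    rank_sv = np.argsort(np.argsort(-sv))     # 0 = most relevant by mean |SHAP|
    return np.argsort(rank_mi + rank_sv)      # feature indices, best combined rank first
```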
19. Gas liquid cylindrical cyclone flow regime identification using machine learning combined with experimental mechanism explanation
Authors: Zhao-Ming Yang, Yu-Xuan He, Qi Xiang, Enrico Zio, Li-Min He, Xiao-Ming Luo, Huai Su, Ji Wang, Jin-Jun Zhang. Petroleum Science (SCIE, EI, CAS, CSCD), 2023, No. 1, pp. 540-558 (19 pages)
The flow regimes of a GLCC with a horizontal inlet and a vertical pipe are investigated in experiments, and the velocity and pressure drop data labeled with the corresponding flow regimes are collected. Combined with the flow regime data for other GLCC positions from the existing literature, the gas and liquid superficial velocities and the pressure drops are used, respectively, as the inputs of the machine learning algorithms applied to identify the flow regimes. The choice of input data types takes into consideration the availability of data in practical industrial fields, and twelve machine learning algorithms are chosen from the classical and popular classification algorithms, including typical ensemble models, SVM, KNN, Bayesian models and MLP. The results of flow regime identification show that gas and liquid superficial velocities are the ideal type of input data for flow regime identification by machine learning. Most of the ensemble models can identify the flow regimes of the GLCC from gas and liquid velocities with an accuracy of 0.99 or more. Pressure drops as the input are not as suitable as gas and liquid velocities, and only XGBoost and Bagging Tree can identify the GLCC flow regimes accurately. The success and confusion of each algorithm are analyzed and explained based on the experimental phenomena of flow regime evolution processes, the flow regime map, and the principles of the algorithms. The applicability and feasibility of each algorithm for GLCC flow regime identification, according to the different types of data, are proposed.
Keywords: gas liquid cylindrical cyclone; machine learning; flow regime identification; mechanism explanation; algorithms
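The identification task itself is a small multi-class classification problem on two inputs, the gas and liquid superficial velocities. The snippet below benchmarks two of the ensemble learners named in the abstract on such data; the CSV file and column names are assumptions, not the authors' dataset.

```python
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

# Assumed layout: one row per operating point with two superficial velocities and a regime label
df = pd.read_csv("glcc_flow_regimes.csv")                 # hypothetical file
X = df[["v_sg", "v_sl"]]                                  # gas / liquid superficial velocities (placeholders)
y = LabelEncoder().fit_transform(df["regime"])            # flow regime classes encoded as integers

for name, clf in {
    "XGBoost": XGBClassifier(n_estimators=200),
    "Bagging Tree": BaggingClassifier(DecisionTreeClassifier(), n_estimators=200),
}.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")
```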
20. On fine-grained visual explanation in convolutional neural networks
Authors: Xia Lei, Yongkai Fan, Xiong-Lin Luo. Digital Communications and Networks (SCIE, CSCD), 2023, No. 5, pp. 1141-1147 (7 pages)
Existing explanation methods for Convolutional Neural Networks (CNNs) lack pixel-level visualization explanations that generate reliable fine-grained decision features. Since there are inconsistencies between the explanation and the actual behavior of the model to be interpreted, we propose a Fine-Grained Visual Explanation for CNNs, namely F-GVE, which produces a fine-grained explanation with higher consistency to the decision of the original model. The exact backward class-specific gradients with respect to the input image are obtained to highlight the object-related pixels the model uses to make its prediction. In addition, for better visualization and less noise, F-GVE selects an appropriate threshold to filter the gradient during the calculation, and the explanation map is obtained by element-wise multiplying the gradient and the input image to show fine-grained classification decision features. Experimental results demonstrate that F-GVE has good visual performance and highlights the importance of fine-grained decision features. Moreover, the faithfulness of the explanation is high, and the method is effective and practical for troubleshooting and debugging detection.
Keywords: convolutional neural network; explanation; class-specific gradient; fine-grained
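The core operation this abstract describes, taking the class-specific input gradient, thresholding it to suppress noise, and multiplying it element-wise with the image, is easy to reproduce for any Keras CNN. The snippet below is a generic gradient-times-input saliency sketch, not the authors' F-GVE code; the threshold value is an assumption.

```python
import tensorflow as tf

def gradient_times_input(model, image, class_index, grad_threshold=1e-4):
    """Fine-grained saliency: thresholded class gradient multiplied element-wise by the input."""
    x = tf.convert_to_tensor(image[tf.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        class_score = model(x)[:, class_index]        # class-specific output
    grad = tape.gradient(class_score, x)[0]           # d(score) / d(input pixels)
    grad = tf.where(tf.abs(grad) >= grad_threshold, grad, 0.0)   # filter small, noisy gradients
    saliency = grad * tf.cast(image, tf.float32)      # element-wise gradient x input
    return tf.reduce_max(tf.abs(saliency), axis=-1).numpy()      # collapse channels for display
```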