Electrocatalytic nitrogen reduction to ammonia has garnered significant attention with the rapid rise of single-atom catalysts (SACs), showcasing their potential for sustainable and energy-efficient ammonia production. However, cost-effectively designing and screening efficient electrocatalysts remains a challenge. In this study, we have successfully established interpretable machine learning (ML) models to evaluate the catalytic activity of SACs by directly and accurately predicting reaction Gibbs free energy. Our models were trained using non-density functional theory (DFT) calculated features from a dataset comprising 90 graphene-supported SACs. Our results underscore the superior prediction accuracy of the gradient boosting regression (GBR) model for both ΔG(N2→NNH) and ΔG(NH2→NH3), with coefficient of determination (R^2) scores of 0.972 and 0.984 and root mean square errors (RMSE) of 0.051 and 0.085 eV, respectively. Moreover, feature importance analysis elucidates that the high accuracy of the GBR model stems from its adept capture of characteristics pertinent to the active center and coordination environment, unveiling the significance of elementary descriptors, with the covalent radius playing a dominant role. Additionally, Shapley additive explanations (SHAP) analysis provides global and local interpretation of the working mechanism of the GBR model. Our analysis identifies that a pyrrole-type coordination (flag = 0), d orbitals with moderate occupation (N_d = 5), and a moderate difference in covalent radius (r_TM-ave near 140 pm) are conducive to achieving high activity. Furthermore, we extend the prediction of activity to more catalysts without additional DFT calculations, validating the reliability of our feature engineering, model training, and design strategy. These findings not only highlight new opportunities for accelerating catalyst design using non-DFT calculated features but also shed light on the working mechanism of "black-box" ML models. Moreover, the model provides valuable guidance for catalytic material design in multiple proton-electron coupling reactions, particularly in driving sustainable CO2, O2, and N2 conversion.
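As an illustration of the kind of workflow this abstract describes, the sketch below fits a gradient boosting regressor on tabular descriptors and inspects it with SHAP. It is a minimal, assumption-laden example, not the authors' code: the feature matrix, target values, and hyperparameters are synthetic placeholders standing in for the 90-SAC dataset and its elementary descriptors.

```python
# Minimal sketch (not the authors' code): fit a gradient boosting regressor on
# tabular, non-DFT descriptors and inspect it with SHAP. X and y are placeholders
# for the 90 SACs' elementary descriptors and DFT-computed reaction free energies.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 6))  # placeholder: 90 catalysts x 6 descriptors
y = 0.4 * X[:, 0] - 0.2 * X[:, 2] + rng.normal(scale=0.05, size=90)  # placeholder ΔG (eV)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
gbr = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
gbr.fit(X_train, y_train)

pred = gbr.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE (eV):", mean_squared_error(y_test, pred) ** 0.5)

# Global and local interpretation: SHAP values for each descriptor of each catalyst.
explainer = shap.TreeExplainer(gbr)
shap_values = explainer.shap_values(X_test)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```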
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
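A minimal sketch of the early-fusion idea behind such a multimodal model is given below, assuming image-derived embeddings and sensor measurements are already available as arrays; it is not the released GitHub code, and all data and dimensions are placeholders.

```python
# Minimal sketch: fuse image-derived features with process sensor data and fit a
# random forest to forecast aeration quantity, scored with MAPE and R2. The arrays
# are synthetic placeholders for visual-model embeddings and plant measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
image_features = rng.normal(size=(n, 16))   # placeholder embeddings from a visual model
sensor_features = rng.normal(size=(n, 8))   # placeholder influent/DO/flow measurements
aeration = 100 + 5 * sensor_features[:, 0] + 3 * image_features[:, 0] + rng.normal(scale=2, size=n)

X = np.hstack([sensor_features, image_features])  # simple early-fusion multimodal input
X_tr, X_te, y_tr, y_te = train_test_split(X, aeration, test_size=0.2, random_state=1)

rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("MAPE:", mean_absolute_percentage_error(y_te, pred))
print("R2:", r2_score(y_te, pred))
```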
Understanding the relationship between attribute performance (AP) and customer satisfaction (CS) is crucial for the hospitality industry. However, accurately modeling this relationship remains challenging. To address this issue, we propose an interpretable machine learning-based dynamic asymmetric analysis (IML-DAA) approach that leverages interpretable machine learning (IML) to improve traditional relationship analysis methods. The IML-DAA employs extreme gradient boosting (XGBoost) and SHapley Additive exPlanations (SHAP) to construct relationships and explain the significance of each attribute. Following this, an improved version of penalty-reward contrast analysis (PRCA) is used to classify attributes, whereas asymmetric impact-performance analysis (AIPA) is employed to determine the attribute improvement priority order. A total of 29,724 user ratings in New York City collected from TripAdvisor were investigated. The results suggest that IML-DAA can effectively capture non-linear relationships and that there is a dynamic asymmetric effect between AP and CS, as identified by the dynamic AIPA model. This study enhances our understanding of the relationship between AP and CS and contributes to the literature on the hotel service industry.
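The penalty-reward contrast underlying PRCA can be illustrated with a short sketch: each attribute rating is recoded into a low-performance (penalty) dummy and a high-performance (reward) dummy, and overall satisfaction is regressed on these dummies so the two coefficients expose asymmetric impacts. This is a textbook-style illustration with synthetic ratings, not the improved PRCA or the XGBoost-SHAP pipeline used in the paper.

```python
# Minimal sketch of the classical penalty-reward contrast idea with synthetic ratings.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
ratings = pd.DataFrame(rng.integers(1, 6, size=(1000, 3)),
                       columns=["cleanliness", "location", "service"])
overall = (ratings.mean(axis=1) + rng.normal(scale=0.3, size=1000)).clip(1, 5)

dummies = {}
for col in ratings.columns:
    dummies[f"{col}_penalty"] = (ratings[col] <= 2).astype(int)  # low-performance dummy
    dummies[f"{col}_reward"] = (ratings[col] >= 5).astype(int)   # high-performance dummy
X = pd.DataFrame(dummies)

model = LinearRegression().fit(X, overall)
for name, coef in zip(X.columns, model.coef_):
    print(f"{name}: {coef:+.3f}")  # penalty vs. reward coefficients reveal asymmetry
```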
An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning and is demonstrated via optimization of shale gas development. InterOpt consists of three parts: a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space (i.e., a virtual environment); the Shapley value method in interpretable machine learning is applied to analyze the impact of geological and operational parameters in each well (i.e., single-well feature impact analysis); and ensemble randomized maximum likelihood (EnRML) is conducted to optimize the operational parameters so as to comprehensively improve the efficiency of shale gas development and reduce the average cost. In the experiment, InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions, and finally achieves an average cost reduction of 9.7% for a case study with 104 wells.
Defining the structural characteristics of amorphous materials is one of the fundamental problems that urgently need to be solved in complex materials research because of their complex structure and long-range disorder. In this study, we develop an interpretable deep learning model capable of accurately classifying amorphous configurations and characterizing their structural properties. The results demonstrate that the multi-dimensional hybrid convolutional neural network can classify two-dimensional (2D) liquids and amorphous solids from molecular dynamics simulations. The classification process does not make a priori assumptions about the amorphous particle environment, and the accuracy is 92.75%, which is better than that of other convolutional neural networks. Moreover, our model utilizes a gradient-weighted activation-like mapping method, which generates activation-like heat maps that can precisely identify important structures in the amorphous configuration maps. We obtain an order parameter from the heat map and conduct a finite scale analysis of this parameter. Our findings demonstrate that the order parameter effectively captures the amorphous phase transition process across various systems. These results hold significant scientific implications for the study of amorphous structural characteristics via deep learning.
Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we proposed a two-stage machine learning framework with physical interpretability incorporating domain knowledge to rapidly calculate high/low thermal conductivity. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on these physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the open quantum materials database (OQMD, https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation materials and thermoelectric materials.
To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention-convolution-gate recurrent unit (LACG) architecture with three sub-modules (a basic module, a brand-new light attention module, and a residue module) that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolutional neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling, paving a route to intelligent manufacturing.
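The Optuna search loop mentioned above can be sketched as follows. The LACG architecture itself is not reproduced; a stand-in scikit-learn MLP regressor and synthetic process data only illustrate the objective/study pattern used for hyperparameter optimization.

```python
# Minimal sketch of an Optuna hyperparameter search, with a stand-in regressor and
# synthetic data in place of the LACG model and the DCS process measurements.
import numpy as np
import optuna
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 10))                                   # placeholder process variables
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=400)    # placeholder discharge flowrate

def objective(trial):
    hidden = trial.suggest_int("hidden_units", 16, 128)
    lr = trial.suggest_float("learning_rate_init", 1e-4, 1e-2, log=True)
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         max_iter=500, random_state=0)
    # Maximize cross-validated R2 over the candidate hyperparameters.
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, study.best_value)
```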
Roof falls due to geological conditions are major hazards in the mining industry, causing work time loss, injuries, and fatalities. Roof fall problems caused by high horizontal stress occur in several large-opening limestone mines in the eastern and midwestern United States. The typical hazard management approach for this type of roof fall hazard relies heavily on visual inspections and expert knowledge. In this context, we proposed a deep learning system for the detection of roof fall hazards caused by high horizontal stress. We used images depicting hazardous and non-hazardous roof conditions to develop a convolutional neural network (CNN) for autonomous detection of hazardous roof conditions. To compensate for limited input data, we utilized a transfer learning approach, in which an already-trained network is used as a starting point for classification in a similar domain. Results show that this approach works well for classifying roof conditions as hazardous or safe, achieving a statistical accuracy of 86.4%. This result is also compared with a random forest classifier, and the deep learning approach is more successful at classifying roof conditions. However, accuracy alone is not enough to ensure a reliable hazard management system. System constraints and reliability are improved when the features used by the network are understood. Therefore, we used a deep learning interpretation technique called integrated gradients to identify the important geological features in each image for prediction. The analysis of integrated gradients shows that the system uses the same roof features as the experts do in roof fall hazard detection. The system developed in this paper demonstrates the potential of deep learning in geotechnical hazard management to complement human experts, and it is likely to become an essential part of autonomous operations in cases where hazard identification heavily depends on expert knowledge. Moreover, deep learning-based systems reduce expert exposure to hazardous conditions.
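A minimal sketch of such a transfer-learning setup is shown below, assuming a ResNet-18 backbone pretrained on ImageNet and a two-class head (hazardous vs. non-hazardous); the mine-roof image dataset is not public here, so the data-loading step is left as a commented placeholder.

```python
# Minimal sketch of transfer learning for binary roof-condition classification.
# Assumes torchvision >= 0.13; the dataset path and loader are placeholders.
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Start from a network already trained on ImageNet and reuse its features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: hazardous vs. non-hazardous
model = model.to(device)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A training loop over a DataLoader of labeled roof images (e.g., built from
# torchvision.datasets.ImageFolder) would go here:
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images.to(device)), labels.to(device))
#     loss.backward()
#     optimizer.step()
```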
Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain. Interpretability makes it easy for stakeholders to understand the working of these models, and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions. Recently, some models in learning analytics have been constructed with interpretability in mind, but their interpretability has not been quantified, and adaptability has not been specifically considered in this domain. This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations. It also provides explainability in the form of rules describing the reasoning behind a particular output. The paper also discusses the system evaluation on a benchmark dataset, showing promising results. The measure of explainability, the fuzzy index, shows that the model is highly interpretable. The system achieves more than 82% recall in both the classification and the context adaptation stages.
Facing the escalating effects of climate change, it is critical to improve the prediction and understanding of the hurricane evacuation decisions made by households in order to enhance emergency management. Current studies in this area have often relied on psychology-driven linear models, which frequently exhibited limitations in practice. The present study proposed a novel interpretable machine learning approach to predict household-level evacuation decisions by leveraging easily accessible demographic and resource-related predictors, in contrast to existing models that mainly rely on psychological factors. An enhanced logistic regression model (that is, an interpretable machine learning approach) was developed for accurate predictions by automatically accounting for nonlinearities and interactions (that is, univariate and bivariate threshold effects). Specifically, nonlinearity and interaction detection were enabled by low-depth decision trees, which offer a transparent model structure and robustness. A survey dataset collected in the aftermath of Hurricanes Katrina and Rita, two of the most intense tropical storms of the last two decades, was employed to test the new methodology. The findings show that, when predicting households' evacuation decisions, the enhanced logistic regression model outperformed previous linear models in terms of both model fit and predictive capability. This outcome suggests that our proposed methodology could provide a new tool and framework for emergency management authorities to improve the prediction of evacuation traffic demands in a timely and accurate manner.
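The enhancement described, threshold effects detected by low-depth decision trees and fed into a logistic regression, can be sketched as follows on synthetic data; variable names and the data-generating process are assumptions for illustration only.

```python
# Minimal sketch: a shallow decision tree detects a univariate threshold, and the
# resulting indicator feature augments an ordinary logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 2000
income = rng.normal(50, 15, n)                 # hypothetical resource-related predictor
household_size = rng.integers(1, 7, n)         # hypothetical demographic predictor
# Placeholder evacuation decision with a built-in threshold effect on income.
evacuate = ((income > 60).astype(int) + (household_size <= 2).astype(int)
            + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_raw = np.column_stack([income, household_size])

# Use a low-depth tree (a stump) to find a univariate threshold on income.
stump = DecisionTreeClassifier(max_depth=1).fit(income.reshape(-1, 1), evacuate)
threshold = stump.tree_.threshold[0]
print("Detected income threshold:", round(threshold, 1))

# Augment the linear model with the detected threshold indicator.
X_aug = np.column_stack([X_raw, (income > threshold).astype(int)])
logit = LogisticRegression(max_iter=1000).fit(X_aug, evacuate)
print("Coefficients:", logit.coef_)
```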
Traffic flow forecasting constitutes a crucial component of intelligent transportation systems (ITSs). Numerous studies have been conducted on traffic flow forecasting during the past decades. However, most existing studies have concentrated on developing advanced algorithms or models to attain state-of-the-art forecasting accuracy. For real-world ITS applications, the interpretability of the developed models is extremely important but has largely been ignored. This study presents an interpretable traffic flow forecasting framework based on popular tree-ensemble algorithms. The framework comprises multiple key components integrated into a highly flexible and customizable multi-stage pipeline, enabling the seamless incorporation of various algorithms and tools. To evaluate the effectiveness of the framework, the developed tree-ensemble models and another three typical categories of baseline models, including statistical time series, shallow learning, and deep learning models, were compared on three datasets collected from different types of roads (i.e., arterial, expressway, and freeway). Further, the study delves into an in-depth interpretability analysis of the most competitive tree-ensemble models using six categories of interpretable machine learning methods. Experimental results highlight the potential of the proposed framework. The tree-ensemble models developed within this framework achieve competitive accuracy while maintaining high inference efficiency similar to that of statistical time series and shallow learning models. Meanwhile, these tree-ensemble models offer interpretability from multiple perspectives via interpretable machine learning techniques. The proposed framework is anticipated to provide reliable and trustworthy decision support across various ITS applications.
Most of the existing machine learning studies in log interpretation do not consider the data distribution discrepancy issue, so the trained model cannot generalize well to unseen data without calibrating the logs. In this paper, we formulate the geophysical log calibration problem and give its statistical explanation, and then present an interpretable machine learning method, i.e., Unilateral Alignment, which can align the logs from one well to another without losing their physical meanings. The UA method is an unsupervised feature domain adaptation method, so it does not rely on any labels from cores. Experiments on 3 wells and 6 tasks demonstrate its effectiveness and interpretability from multiple views.
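The Unilateral Alignment method itself is not reproduced here, but the general idea of unsupervised, label-free log alignment can be sketched with a simple moment-matching transform that shifts one well's log statistics toward another's; array names and shapes are assumptions.

```python
# Generic moment-matching sketch of unsupervised log alignment: shift one well's
# log features toward another well's statistics without using any core labels.
import numpy as np

rng = np.random.default_rng(5)
well_a_logs = rng.normal(loc=2.5, scale=0.8, size=(300, 4))   # reference well, 4 log curves
well_b_logs = rng.normal(loc=3.1, scale=1.2, size=(280, 4))   # well to be aligned

def align_to_reference(target, reference):
    """Match the per-curve mean and standard deviation of `target` to `reference`."""
    t_mean, t_std = target.mean(axis=0), target.std(axis=0)
    r_mean, r_std = reference.mean(axis=0), reference.std(axis=0)
    return (target - t_mean) / t_std * r_std + r_mean

well_b_aligned = align_to_reference(well_b_logs, well_a_logs)
print("Means after alignment:", well_b_aligned.mean(axis=0).round(2))
```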
The present study extracts human-understandable insights from machine learning (ML)-based mesoscale closure in fluid-particle flows via several novel data-driven analysis approaches, i.e., the maximal information coefficient (MIC), interpretable ML, and automated ML. It was previously shown that the solid volume fraction has the greatest effect on the drag force. The present study aims to quantitatively investigate the influence of flow properties on the mesoscale drag correction (H_d). The MIC results show strong correlations between the features, i.e., slip velocity (u*_sy) and particle volume fraction (ε_s), and the label H_d. The interpretable ML analysis confirms this conclusion and quantifies the contributions of u*_sy, ε_s, and the gas pressure gradient to the model as 71.9%, 27.2%, and 0.9%, respectively. Automated ML, without the need to select the model structure and hyperparameters, is used for modeling, improving the prediction accuracy over our previous model (Zhu et al., 2020; Ouyang, Zhu, Su, & Luo, 2021).
The identification of factors that may be forcing ecological observations to approach the upper boundary provides insight into potential mechanisms affecting driver-response relationships, and can help inform ecosystem management, but has rarely been explored. In this study, we propose a novel framework integrating quantile regression with interpretable machine learning. In the first stage of the framework, we estimate the upper boundary of a driver-response relationship using quantile regression. Next, we calculate "potentials" of the response variable depending on the driver, which are defined as vertical distances from the estimated upper boundary of the relationship to the observations in the driver-response scatter plot. Finally, we identify key factors impacting the potential using a machine learning model. We illustrate the necessary steps to implement the framework using the total phosphorus (TP)-chlorophyll a (CHL) relationship in lakes across the continental US. We found that the nitrogen to phosphorus ratio (N:P), annual average precipitation, total nitrogen (TN), and summer average air temperature were key factors impacting the potential of CHL depending on TP. We further revealed important implications of our findings for lake eutrophication management. The important role of N:P and TN on the potential highlights the co-limitation of phosphorus and nitrogen and indicates the need for dual nutrient criteria. Future wetter and/or warmer climate scenarios can decrease the potential, which may reduce the efficacy of lake eutrophication management. The novel framework advances the application of quantile regression to identify factors driving observations to approach the upper boundary of driver-response relationships.
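The three stages of the framework map naturally onto a short sketch: an upper-quantile regression estimates the boundary, vertical distances to that boundary give the potentials, and a machine learning model relates the potentials to candidate factors. All data below are synthetic placeholders, not the US lake dataset.

```python
# Minimal sketch of the two-stage (plus explanation) framework described above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(6)
n = 1000
tp = rng.lognormal(mean=3.0, sigma=0.6, size=n)                    # driver, e.g. TP
np_ratio = rng.uniform(5, 60, size=n)                              # candidate factor, e.g. N:P
chl = 0.5 * tp * rng.uniform(0.1, 1.0, size=n) * (np_ratio / 60)   # response, e.g. CHL

# Stage 1: upper boundary via a 95th-percentile quantile regression on the driver.
boundary = GradientBoostingRegressor(loss="quantile", alpha=0.95)
boundary.fit(tp.reshape(-1, 1), chl)

# Stage 2: potential = estimated upper boundary minus the observation.
potential = boundary.predict(tp.reshape(-1, 1)) - chl

# Stage 3: explain the potential with candidate factors.
factors = np.column_stack([np_ratio])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(factors, potential)
print("Factor importance (N:P):", rf.feature_importances_)
```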
Childhood asthma is one of the most common respiratory diseases, with rising mortality and morbidity. Multi-omics data provide a new chance to explore collaborative biomarkers and corresponding diagnostic models of childhood asthma. To capture the nonlinear associations in multi-omics data and improve the interpretability of the diagnostic model, we proposed a novel deep association model (DAM) and a corresponding efficient analysis framework. First, Deep Subspace Reconstruction was used to fuse the omics data and diagnostic information, thereby correcting the distribution of the original omics data and reducing the influence of unnecessary data noise. Second, Joint Deep Semi-Negative Matrix Factorization was applied to identify different latent sample patterns and extract biomarkers from different omics data levels. Third, our newly proposed Deep Orthogonal Canonical Correlation Analysis can rank features in the collaborative module, making it possible to construct the diagnostic model while considering the nonlinear correlation between different omics data levels. Using DAM, we deeply analyzed the transcriptome and methylation data of childhood asthma. The effectiveness of DAM is verified, from the perspectives of algorithm performance and biological significance, on an independent test dataset by ablation experiments and comparison with many baseline methods from clinical and biological studies. The DAM-induced diagnostic model achieves a prediction AUC of 0.912, which is higher than that of many other alternative methods. Meanwhile, relevant pathways and biomarkers of childhood asthma are also recognized to be collectively altered at the gene expression and methylation levels. As an interpretable machine learning approach, DAM simultaneously considers the nonlinear associations among samples and those among biological features, which should help explore interpretable biomarker candidates and efficient diagnostic models from multi-omics data analysis for human complex diseases.
Landslide inventory is an indispensable output variable of landslide susceptibility prediction (LSP) modelling. However, the influence of landslide inventory incompleteness on LSP and the rules by which the resulting errors are transferred within the model have not been explored. Adopting Xunwu County, China, as an example, the existing landslide inventory is first obtained and assumed to contain all landslide inventory samples under ideal conditions, after which different conditions of missing landslide inventory samples are simulated by random sampling. These include conditions in which landslide inventory samples across the whole study area are missing randomly at proportions of 10%, 20%, 30%, 40%, and 50%, as well as a condition in which the landslide inventory samples in the south of Xunwu County are missing in an aggregated manner. Then, five machine learning models, including Random Forest (RF) and Support Vector Machine (SVM), are used to perform LSP. Finally, the LSP results are evaluated to analyze the LSP uncertainties under the various conditions. In addition, this study introduces various interpretability methods for machine learning models to explore the changes in the decision basis of the RF model under the various conditions. Results show that (1) randomly missing landslide inventory samples at certain proportions (10%-50%) may affect the LSP results for local areas; (2) aggregated missing landslide inventory samples may cause significant biases in LSP, particularly in areas where samples are missing; and (3) when 50% of landslide samples are missing (either randomly or in aggregation), the changes in the decision basis of the RF model are mainly manifested in two aspects: first, the importance ranking of environmental factors differs slightly; second, for LSP modelling in the same test grid unit, the weights of individual model factors may vary drastically.
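The sample-missing experiment can be sketched as below: a given proportion of landslide (positive) training samples is randomly dropped, the classifier is retrained, and test performance is compared across proportions. The environmental-factor matrix is a synthetic placeholder, not the Xunwu County data.

```python
# Minimal sketch of the random-missing experiment with a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 10))                                   # placeholder environmental factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=3000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

for missing in [0.0, 0.1, 0.3, 0.5]:
    # Randomly remove the given proportion of positive (landslide) training samples.
    pos_idx = np.where(y_tr == 1)[0]
    drop = rng.choice(pos_idx, size=int(missing * len(pos_idx)), replace=False)
    keep = np.setdiff1d(np.arange(len(y_tr)), drop)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_tr[keep], y_tr[keep])
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"missing={missing:.0%}  AUC={auc:.3f}")
```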
This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human-computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, the learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
To extract strong correlations between different energy loads and improve the interpretability and accuracy of load forecasting for a regional integrated energy system (RIES), an explainable framework for load forecasting of an RIES is proposed. This includes the load forecasting model of the RIES and its interpretation. A coupled feature extracting strategy is adopted to construct coupled features between loads as the input variables of the model. It is designed based on multi-task learning (MTL) with a long short-term memory (LSTM) model as the sharing layer. Based on SHapley Additive exPlanations (SHAP), this explainable framework combines global and local interpretations to improve the interpretability of load forecasting of the RIES. In addition, an input variable selection strategy based on the global SHAP value is proposed to select the input feature variables of the model. A case study is given to verify the effectiveness of the proposed model, the constructed coupled features, and the input variable selection strategy. The results show that the explainable framework intuitively improves the interpretability of the prediction model.
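A minimal sketch of the hard-parameter-sharing pattern described here (a shared LSTM layer with one output head per energy load) is given below; layer sizes, the number of loads, and the dummy inputs are assumptions, not the paper's model.

```python
# Minimal sketch: a multi-task network with a shared LSTM layer and one head per load.
import torch
import torch.nn as nn

class SharedLSTMForecaster(nn.Module):
    def __init__(self, n_features, hidden_size=64, n_tasks=3):
        super().__init__()
        self.shared_lstm = nn.LSTM(n_features, hidden_size, batch_first=True)  # sharing layer
        self.heads = nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(n_tasks)])

    def forward(self, x):
        out, _ = self.shared_lstm(x)                 # x: (batch, time, features)
        last = out[:, -1, :]                         # hidden state at the last time step
        return [head(last) for head in self.heads]   # one forecast per energy load

model = SharedLSTMForecaster(n_features=8)
dummy = torch.randn(32, 24, 8)                       # batch of 24-step windows of coupled features
forecasts = model(dummy)
loss = sum(nn.functional.mse_loss(f, torch.zeros_like(f)) for f in forecasts)  # joint MTL loss
loss.backward()
print([f.shape for f in forecasts])
```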
Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and the associated toolkits of interpretable machine learning. This study demonstrates the usefulness of several such methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, along with soil, climate, and management variables. The data analysis discovered that no-tillage management can increase maize crop yield where the yield under conventional tillage is below 5000 kg/ha and the maximum temperature is higher than 32°. These methods are useful for answering (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what the reasons are underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that in current practice the goodness of model fit is over-evaluated with model performance measures while these questions remain unanswered. XAI and interpretable machine learning can enhance trust and explainability in AI.
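Two of the model-agnostic tools discussed here, permutation importance (which variables matter) and partial dependence (how a variable is associated with the response), can be sketched with scikit-learn on a synthetic stand-in for the tillage dataset; the data-generating process below merely mimics the reported thresholds and is not the original data.

```python
# Minimal sketch of permutation importance and partial dependence on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 1500
baseline_yield = rng.uniform(2000, 9000, n)      # yield under conventional tillage (kg/ha)
max_temp = rng.uniform(20, 40, n)                # maximum temperature
# Placeholder response: larger no-till benefit at low baseline yield and high temperature.
yield_ratio = 1.0 + 0.1 * (baseline_yield < 5000) + 0.05 * (max_temp > 32) + rng.normal(0, 0.03, n)

X = np.column_stack([baseline_yield, max_temp])
X_tr, X_te, y_tr, y_te = train_test_split(X, yield_ratio, random_state=8)
model = RandomForestRegressor(n_estimators=300, random_state=8).fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=8)
print("Permutation importance:", imp.importances_mean)

pd_result = partial_dependence(model, X_te, features=[0])  # dependence on baseline yield
print("Partial dependence grid size:", pd_result["average"].shape)
```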
Geometric and working condition uncertainties are inevitable in a compressor, deviating the compressor performance from the design value. It is necessary to explore the influence of geometric uncertainty on performance deviation under different working conditions. In this paper, the influence of geometric uncertainty at near-stall, peak-efficiency, and near-choke conditions under design speed and low speed is investigated. Firstly, manufacturing geometric uncertainties are analyzed. Next, correlation models between geometry and performance under different working conditions are constructed based on a neural network. Then the Shapley additive explanations (SHAP) method is introduced to explain the output of the neural network. Results show that under real manufacturing uncertainty, the efficiency deviation range is small under the near-stall and peak-efficiency conditions. However, under the near-choke condition, efficiency is highly sensitive to flow capacity changes caused by geometric uncertainty, leading to a significant increase in the efficiency deviation amplitude, up to a magnitude of -3.6%. Moreover, the tip leading-edge radius and tip thickness are the two main factors affecting efficiency deviation. Therefore, to reduce efficiency uncertainty, a compressor should avoid working near the choke condition, and the tolerances of the tip leading-edge radius and tip thickness should be strictly controlled.
基金supported by the Research Grants Council of Hong Kong (City U 11305919 and 11308620)the NSFC/RGC Joint Research Scheme N_City U104/19The Hong Kong Research Grant Council Collaborative Research Fund:C1002-21G and C1017-22G。
文摘Electrocatalytic nitrogen reduction to ammonia has garnered significant attention with the blooming of single-atom catalysts(SACs),showcasing their potential for sustainable and energy-efficient ammonia production.However,cost-effectively designing and screening efficient electrocatalysts remains a challenge.In this study,we have successfully established interpretable machine learning(ML)models to evaluate the catalytic activity of SACs by directly and accurately predicting reaction Gibbs free energy.Our models were trained using non-density functional theory(DFT)calculated features from a dataset comprising 90 graphene-supported SACs.Our results underscore the superior prediction accuracy of the gradient boosting regression(GBR)model for bothΔg(N_(2)→NNH)andΔG(NH_(2)→NH_(3)),boasting coefficient of determination(R^(2))score of 0.972 and 0.984,along with root mean square error(RMSE)of 0.051 and 0.085 eV,respectively.Moreover,feature importance analysis elucidates that the high accuracy of GBR model stems from its adept capture of characteristics pertinent to the active center and coordination environment,unveilling the significance of elementary descriptors,with the colvalent radius playing a dominant role.Additionally,Shapley additive explanations(SHAP)analysis provides global and local interpretation of the working mechanism of the GBR model.Our analysis identifies that a pyrrole-type coordination(flag=0),d-orbitals with a moderate occupation(N_(d)=5),and a moderate difference in covalent radius(r_(TM-ave)near 140 pm)are conducive to achieving high activity.Furthermore,we extend the prediction of activity to more catalysts without additional DFT calculations,validating the reliability of our feature engineering,model training,and design strategy.These findings not only highlight new opportunity for accelerating catalyst design using non-DFT calculated features,but also shed light on the working mechanism of"black box"ML model.Moreover,the model provides valuable guidance for catalytic material design in multiple proton-electron coupling reactions,particularly in driving sustainable CO_(2),O_(2),and N_(2) conversion.
基金the financial support by the National Natural Science Foundation of China(52230004 and 52293445)the Key Research and Development Project of Shandong Province(2020CXGC011202-005)the Shenzhen Science and Technology Program(KCXFZ20211020163404007 and KQTD20190929172630447).
文摘The potential for reducing greenhouse gas(GHG)emissions and energy consumption in wastewater treatment can be realized through intelligent control,with machine learning(ML)and multimodality emerging as a promising solution.Here,we introduce an ML technique based on multimodal strategies,focusing specifically on intelligent aeration control in wastewater treatment plants(WWTPs).The generalization of the multimodal strategy is demonstrated on eight ML models.The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control,exhibiting exceptional performance and interpretability.Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity in multimodal models,with a mean absolute percentage error of 4.4%and a coefficient of determination of 0.948.Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8%compared to traditional fuzzy control methods.The potential application of these strategies in critical water science domains is discussed.To foster accessibility and promote widespread adoption,the multimodal ML models are freely available on GitHub,thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
基金National Key R&D Program of China(Grant No.:2022YFF0903000)National Natural Science Foundation of China(Grant Nos.:72101197 and 71988101).
文摘Understanding the relationship between attribute performance(AP)and customer satisfaction(CS)is crucial for the hospitality industry.However,accurately modeling this relationship remains challenging.To address this issue,we propose an interpretable machine learning-based dynamic asymmetric analysis(IML-DAA)approach that leverages interpretable machine learning(IML)to improve traditional relationship analysis methods.The IML-DAA employs extreme gradient boosting(XGBoost)and SHapley Additive exPlanations(SHAP)to construct relationships and explain the significance of each attribute.Following this,an improved version of penalty-reward contrast analysis(PRCA)is used to classify attributes,whereas asymmetric impact-performance analysis(AIPA)is employed to determine the attribute improvement priority order.A total of 29,724 user ratings in New York City collected from TripAdvisor were investigated.The results suggest that IML-DAA can effectively capture non-linear relationships and that there is a dynamic asymmetric effect between AP and CS,as identified by the dynamic AIPA model.This study enhances our understanding of the relationship between AP and CS and contributes to the literature on the hotel service industry.
文摘An algorithm named InterOpt for optimizing operational parameters is proposed based on interpretable machine learning,and is demonstrated via optimization of shale gas development.InterOpt consists of three parts:a neural network is used to construct an emulator of the actual drilling and hydraulic fracturing process in the vector space(i.e.,virtual environment);:the Sharpley value method in inter-pretable machine learning is applied to analyzing the impact of geological and operational parameters in each well(i.e.,single well feature impact analysis):and ensemble randomized maximum likelihood(EnRML)is conducted to optimize the operational parameters to comprehensively improve the efficiency of shale gas development and reduce the average cost.In the experiment,InterOpt provides different drilling and fracturing plans for each well according to its specific geological conditions,and finally achieves an average cost reduction of 9.7%for a case study with 104 wells.
基金National Natural Science Foundation of China(Grant No.11702289)the Key Core Technology and Generic Technology Research and Development Project of Shanxi Province,China(Grant No.2020XXX013)the National Key Research and Development Project of China。
文摘Defining the structure characteristics of amorphous materials is one of the fundamental problems that need to be solved urgently in complex materials because of their complex structure and long-range disorder.In this study,we develop an interpretable deep learning model capable of accurately classifying amorphous configurations and characterizing their structural properties.The results demonstrate that the multi-dimensional hybrid convolutional neural network can classify the two-dimensional(2D)liquids and amorphous solids of molecular dynamics simulation.The classification process does not make a priori assumptions on the amorphous particle environment,and the accuracy is 92.75%,which is better than other convolutional neural networks.Moreover,our model utilizes the gradient-weighted activation-like mapping method,which generates activation-like heat maps that can precisely identify important structures in the amorphous configuration maps.We obtain an order parameter from the heatmap and conduct finite scale analysis of this parameter.Our findings demonstrate that the order parameter effectively captures the amorphous phase transition process across various systems.These results hold significant scientific implications for the study of amorphous structural characteristics via deep learning.
基金support of the National Natural Science Foundation of China(Grant Nos.12104356 and52250191)China Postdoctoral Science Foundation(Grant No.2022M712552)+2 种基金the Opening Project of Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology(Grant No.Ammt2022B-1)the Fundamental Research Funds for the Central Universitiessupport by HPC Platform,Xi’an Jiaotong University。
文摘Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we proposed a two-stage machine learning framework with physical interpretability incorporating domain knowledge to calculate high/low thermal conductivity rapidly. Specifically, crystal graph convolutional neural network(CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on the above physical parameters, an interpretable machine learning model–sure independence screening and sparsifying operator(SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the open quantum materials database(OQMD)(https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultralow lattice thermal conductivity and promotes the development of new thermal insulation materials and thermoelectric materials.
基金support provided by the National Natural Science Foundation of China(22122802,22278044,and 21878028)the Chongqing Science Fund for Distinguished Young Scholars(CSTB2022NSCQ-JQX0021)the Fundamental Research Funds for the Central Universities(2022CDJXY-003).
文摘To equip data-driven dynamic chemical process models with strong interpretability,we develop a light attention–convolution–gate recurrent unit(LACG)architecture with three sub-modules—a basic module,a brand-new light attention module,and a residue module—that are specially designed to learn the general dynamic behavior,transient disturbances,and other input factors of chemical processes,respectively.Combined with a hyperparameter optimization framework,Optuna,the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process.The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models,including the feedforward neural network,convolution neural network,long short-term memory(LSTM),and attention-LSTM.Moreover,compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1,the LACG parameters are demonstrated to be interpretable,and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM.This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling,paving a route to intelligent manufacturing.
基金partially supported by the National Institute for Occupational Safety and Health,contract number 0000HCCR-2019-36403。
文摘Roof falls due to geological conditions are major hazards in the mining industry,causing work time loss,injuries,and fatalities.There are roof fall problems caused by high horizontal stress in several largeopening limestone mines in the eastern and midwestern United States.The typical hazard management approach for this type of roof fall hazards relies heavily on visual inspections and expert knowledge.In this context,we proposed a deep learning system for detection of the roof fall hazards caused by high horizontal stress.We used images depicting hazardous and non-hazardous roof conditions to develop a convolutional neural network(CNN)for autonomous detection of hazardous roof conditions.To compensate for limited input data,we utilized a transfer learning approach.In the transfer learning approach,an already-trained network is used as a starting point for classification in a similar domain.Results show that this approach works well for classifying roof conditions as hazardous or safe,achieving a statistical accuracy of 86.4%.This result is also compared with a random forest classifier,and the deep learning approach is more successful at classification of roof conditions.However,accuracy alone is not enough to ensure a reliable hazard management system.System constraints and reliability are improved when the features used by the network are understood.Therefore,we used a deep learning interpretation technique called integrated gradients to identify the important geological features in each image for prediction.The analysis of integrated gradients shows that the system uses the same roof features as the experts do on roof fall hazards detection.The system developed in this paper demonstrates the potential of deep learning in geotechnical hazard management to complement human experts,and likely to become an essential part of autonomous operations in cases where hazard identification heavily depends on expert knowledge.Moreover,deep learning-based systems reduce expert exposure to hazardous conditions.
文摘Major issues currently restricting the use of learning analytics are the lack of interpretability and adaptability of the machine learning models used in this domain.Interpretability makes it easy for the stakeholders to understand the working of these models and adaptability makes it easy to use the same model for multiple cohorts and courses in educational institutions.Recently,some models in learning analytics are constructed with the consideration of interpretability but their interpretability is not quantified.However,adaptability is not specifically considered in this domain.This paper presents a new framework based on hybrid statistical fuzzy theory to overcome these limitations.It also provides explainability in the form of rules describing the reasoning behind a particular output.The paper also discusses the system evaluation on a benchmark dataset showing promising results.The measure of explainability,fuzzy index,shows that the model is highly interpretable.This system achieves more than 82%recall in both the classification and the context adaptation stages.
基金supported by the National Science Foundation under Grant Nos.2303578,2303579, 05 27699,0838654,and 1212790by an Early-Career Research Fellowship from the Gulf Research Program of the National Academies of Sciences,Engineering,and Medicine
文摘Facing the escalating effects of climate change,it is critical to improve the prediction and understanding of the hurricane evacuation decisions made by households in order to enhance emergency management.Current studies in this area often have relied on psychology-driven linear models,which frequently exhibited limitations in practice.The present study proposed a novel interpretable machine learning approach to predict household-level evacuation decisions by leveraging easily accessible demographic and resource-related predictors,compared to existing models that mainly rely on psychological factors.An enhanced logistic regression model(that is,an interpretable machine learning approach) was developed for accurate predictions by automatically accounting for nonlinearities and interactions(that is,univariate and bivariate threshold effects).Specifically,nonlinearity and interaction detection were enabled by low-depth decision trees,which offer transparent model structure and robustness.A survey dataset collected in the aftermath of Hurricanes Katrina and Rita,two of the most intense tropical storms of the last two decades,was employed to test the new methodology.The findings show that,when predicting the households’ evacuation decisions,the enhanced logistic regression model outperformed previous linear models in terms of both model fit and predictive capability.This outcome suggests that our proposed methodology could provide a new tool and framework for emergency management authorities to improve the prediction of evacuation traffic demands in a timely and accurate manner.
基金funded by the National Key R&D Program of China(Grant No.2023YFE0106800)the Humanity and Social Science Youth Foundation of Ministry of Education of China(Grant No.22YJC630109).
文摘Traffic flow forecasting constitutes a crucial component of intelligent transportation systems(ITSs).Numerous studies have been conducted for traffic flow forecasting during the past decades.However,most existing studies have concentrated on developing advanced algorithms or models to attain state-of-the-art forecasting accuracy.For real-world ITS applications,the interpretability of the developed models is extremely important but has largely been ignored.This study presents an interpretable traffic flow forecasting framework based on popular tree-ensemble algorithms.The framework comprises multiple key components integrated into a highly flexible and customizable multi-stage pipeline,enabling the seamless incorporation of various algorithms and tools.To evaluate the effectiveness of the framework,the developed tree-ensemble models and another three typical categories of baseline models,including statistical time series,shallow learning,and deep learning,were compared on three datasets collected from different types of roads(i.e.,arterial,expressway,and freeway).Further,the study delves into an in-depth interpretability analysis of the most competitive tree-ensemble models using six categories of interpretable machine learning methods.Experimental results highlight the potential of the proposed framework.The tree-ensemble models developed within this framework achieve competitive accuracy while maintaining high inference efficiency similar to statistical time series and shallow learning models.Meanwhile,these tree-ensemble models offer interpretability from multiple perspectives via interpretable machine-learning techniques.The proposed framework is anticipated to provide reliable and trustworthy decision support across various ITS applications.
基金Supported in part by the National Natural Science Foundation of China under Grant 61903353in part by the SINOPEC Programmes for Science and Technology Development under Grant PE19008-8.
文摘Most of the existing machine learning studies in logs interpretation do not consider the data distribution discrepancy issue,so the trained model cannot well generalize to the unseen data without calibrating the logs.In this paper,we formulated the geophysical logs calibration problem and give its statistical explanation,and then exhibited an interpretable machine learning method,i.e.,Unilateral Alignment,which could align the logs from one well to another without losing the physical meanings.The involved UA method is an unsupervised feature domain adaptation method,so it does not rely on any labels from cores.The experiments in 3 wells and 6 tasks showed the effectiveness and interpretability from multiple views.
基金This work was supported by the National Natural ScienceFoundation of China(No.U1862201,91834303 and 22208208)the China Postdoctoral Science Foundation(No.2022M712056)the China National Postdoctoral Program for Innovative Talents(No.BX20220205).
文摘The present study extracts human-understandable insights from machine learning(ML)-based mesoscale closure in fluid-particle flows via several novel data-driven analysis approaches,i.e.,maximal information coefficient(MIC),interpretable ML,and automated ML.It is previously shown that the solidvolume fraction has the greatest effect on the drag force.The present study aims to quantitativelyinvestigate the influence of flow properties on mesoscale drag correction(H_(d)).The MIC results showstrong correlations between the features(i.e.,slip velocity(u^(*)_(sy))and particle volume fraction(εs))and thelabel H_(d).The interpretable ML analysis confirms this conclusion,and quantifies the contribution of u^(*)_(sy),εs and gas pressure gradient to the model as 71.9%,27.2%and 0.9%,respectively.Automated ML without theneed to select the model structure and hyperparameters is used for modeling,improving the predictionaccuracy over our previous model(Zhu et al.,2020;Ouyang,Zhu,Su,&Luo,2021).
基金This research was funded by the National Natural Science Foundation of China(Nos.71761147001 and 42030707)the International Partnership Program by the Chinese Academy of Sciences(No.121311KYSB20190029)+2 种基金the Fundamental Research Fund for the Central Universities(No.20720210083)the National Science Foundation(Nos.EF-1638679,EF-1638554,EF-1638539,and EF-1638550)Any use of trade,firm,or product names is for descriptive purposes only and does not imply endorsement by the US Government.
文摘The identification of factors that may be forcing ecological observations to approach the upper boundary provides insight into potential mechanisms affecting driver-response relationships,and can help inform ecosystem management,but has rarely been explored.In this study,we propose a novel framework integrating quantile regression with interpretable machine learning.In the first stage of the framework,we estimate the upper boundary of a driver-response relationship using quantile regression.Next,we calculate“potentials”of the response variable depending on the driver,which are defined as vertical distances from the estimated upper boundary of the relationship to observations in the driver-response variable scatter plot.Finally,we identify key factors impacting the potential using a machine learning model.We illustrate the necessary steps to implement the framework using the total phosphorus(TP)-Chlorophyll a(CHL)relationship in lakes across the continental US.We found that the nitrogen to phosphorus ratio(N:P),annual average precipitation,total nitrogen(TN),and summer average air temperature were key factors impacting the potential of CHL depending on TP.We further revealed important implications of our findings for lake eutrophication management.The important role of N:P and TN on the potential highlights the co-limitation of phosphorus and nitrogen and indicates the need for dual nutrient criteria.Future wetter and/or warmer climate scenarios can decrease the potential which may reduce the efficacy of lake eutrophication management.The novel framework advances the application of quantile regression to identify factors driving observations to approach the upper boundary of driver-response relationships.
基金the Self-supporting Program of Guangzhou Laboratory(SRPG22-007)R&D Program of Guangzhou National Laboratory(GZNL2024A01002)+4 种基金National Natural Science Foundation of China(12371485,11871456)II Phase External Project of Guoke Ningbo Life Science and Health Industry Research Institute(2020YJY0217)Science and Technology Project of Yunnan Province(202103AQ100002)National Key R&D Program of China(2022YFF1202100)The Strategic Priority Research Program of the Chinese Academy of Sciences(XDB38050200,XDB38040202,XDA26040304).
文摘Childhood asthma is one of the most common respiratory diseases with rising mortality and morbidity.The multi-omics data is providing a new chance to explore collaborative biomarkers and corresponding diagnostic models of childhood asthma.To capture the nonlinear association of multi-omics data and improve interpretability of diagnostic model,we proposed a novel deep association model(DAM)and corresponding efficient analysis framework.First,the Deep Subspace Reconstruction was used to fuse the omics data and diagnostic information,thereby correcting the distribution of the original omics data and reducing the influence of unnecessary data noises.Second,the Joint Deep Semi-Negative Matrix Factorization was applied to identify different latent sample patterns and extract biomarkers from different omics data levels.Third,our newly proposed Deep Orthogonal Canonical Correlation Analysis can rank features in the collaborative module,which are able to construct the diagnostic model considering nonlinear correlation between different omics data levels.Using DAM,we deeply analyzed the transcriptome and methylation data of childhood asthma.The effectiveness of DAM is verified from the perspectives of algorithm performance and biological significance on the independent test dataset,by ablation experiment and comparison with many baseline methods from clinical and biological studies.The DAM-induced diagnostic model can achieve a prediction AUC of o.912,which is higher than that of many other alternative methods.Meanwhile,relevant pathways and biomarkers of childhood asthma are also recognized to be collectively altered on the gene expression and methylation levels.As an interpretable machine learning approach,DAM simultaneously considers the non-linear associations among samples and those among biological features,which should help explore interpretative biomarker candidates and efficient diagnostic models from multi-omics data analysis for human complexdiseases.
Funding: the National Natural Science Foundation of China (Nos. 42377164, 41972280 and 42272326); National Natural Science Outstanding Youth Foundation of China (No. 52222905); Natural Science Foundation of Jiangxi Province, China (No. 20232BAB204091); Natural Science Foundation of Jiangxi Province, China (No. 20232BAB204077).
Abstract: Landslide inventory is an indispensable input variable of landslide susceptibility prediction (LSP) modelling. However, the influence of landslide inventory incompleteness on LSP and the rules by which the resulting errors transfer through the model have not been explored. Taking Xunwu County, China, as an example, the existing landslide inventory is first obtained and assumed to contain all landslide inventory samples under ideal conditions, after which different conditions of missing landslide inventory samples are simulated by random sampling. These include conditions in which landslide inventory samples across the whole study area are missing at random at proportions of 10%, 20%, 30%, 40% and 50%, as well as a condition in which the landslide inventory samples in the south of Xunwu County are missing in an aggregated manner. Then, five machine learning models, including Random Forest (RF) and Support Vector Machine (SVM), are used to perform LSP. Finally, the LSP results are evaluated to analyze the LSP uncertainties under the various conditions. In addition, this study introduces several interpretability methods for machine learning models to explore the changes in the decision basis of the RF model under the various conditions. Results show that (1) randomly missing landslide inventory samples at certain proportions (10%–50%) may affect the LSP results for local areas; (2) aggregated missing landslide inventory samples may cause significant biases in LSP, particularly in the areas where samples are missing; and (3) when 50% of landslide samples are missing (either randomly or in aggregation), the changes in the decision basis of the RF model are manifested mainly in two aspects: first, the importance ranking of environmental factors differs slightly; second, for LSP modelling in the same test grid unit, the weights of individual model factors may vary drastically.
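The random-missing experiment can be sketched as follows, assuming a hypothetical DataFrame samples of mapping units with environmental-factor columns listed in factors and a binary landslide label; this is an illustrative simplification, not the study's code.

```python
# Simulate an incomplete landslide inventory at a given missing ratio, train a
# random forest for LSP, and return test AUC plus factor importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def lsp_with_missing_inventory(samples, factors, missing_ratio, seed=0):
    rng = np.random.default_rng(seed)
    train, test = train_test_split(samples, test_size=0.3,
                                   stratify=samples["landslide"],
                                   random_state=seed)
    # Simulate incompleteness: randomly relabel a proportion of the training
    # landslide samples as non-landslide (i.e. "not recorded").
    train = train.copy()
    slide_idx = train.index[train["landslide"] == 1]
    dropped = rng.choice(slide_idx, size=int(missing_ratio * len(slide_idx)),
                         replace=False)
    train.loc[dropped, "landslide"] = 0

    rf = RandomForestClassifier(n_estimators=300, random_state=seed)
    rf.fit(train[factors], train["landslide"])
    # Evaluate against the (assumed complete) inventory of the held-out units.
    auc = roc_auc_score(test["landslide"],
                        rf.predict_proba(test[factors])[:, 1])
    importance = pd.Series(rf.feature_importances_, index=factors)
    return auc, importance.sort_values(ascending=False)

# e.g. compare complete vs. increasingly incomplete inventories:
# for ratio in (0.0, 0.1, 0.3, 0.5):
#     print(ratio, lsp_with_missing_inventory(samples, factors, ratio)[0])
```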
Funding: supported by the ONR MURI project (No. N00014-16-1-2007); the DARPA XAI Award (No. N66001-17-2-4029); and NSF IIS (No. 1423305).
Abstract: This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communication at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods for diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
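As a concrete illustration of the first topic revisited above, visualization of CNN representations, the sketch below captures the feature maps of an intermediate layer of a torchvision ResNet-18 with a forward hook; the model, layer name and input shape are assumptions chosen for illustration, not the review's own code.

```python
# Capture and return the feature maps of an intermediate CNN layer so that
# each channel can be plotted as a heat map.
import torch
from torchvision.models import resnet18, ResNet18_Weights

def feature_maps(image: torch.Tensor, layer_name: str = "layer3"):
    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    captured = {}

    def hook(module, inputs, output):
        captured["maps"] = output.detach()

    handle = dict(model.named_modules())[layer_name].register_forward_hook(hook)
    with torch.no_grad():
        model(image.unsqueeze(0))   # add a batch dimension to the 3x224x224 input
    handle.remove()
    return captured["maps"].squeeze(0)   # (channels, H, W) activation maps
```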
Funding: supported in part by the National Key Research Program of China (2016YFB0900100) and the Key Project of Shanghai Science and Technology Committee (18DZ1100303).
Abstract: To extract the strong correlations between different energy loads and improve the interpretability and accuracy of load forecasting for a regional integrated energy system (RIES), an explainable framework for RIES load forecasting is proposed. It comprises the RIES load forecasting model and its interpretation. A coupled feature extracting strategy is adopted to construct coupled features between loads as the input variables of the model. The model is designed based on multi-task learning (MTL) with a long short-term memory (LSTM) model as the sharing layer. Based on SHapley Additive exPlanations (SHAP), the explainable framework combines global and local interpretations to improve the interpretability of RIES load forecasting. In addition, an input variable selection strategy based on the global SHAP value is proposed to select the input feature variables of the model. A case study is given to verify the effectiveness of the proposed model, the constructed coupled features, and the input variable selection strategy. The results show that the explainable framework intuitively improves the interpretability of the prediction model.
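A minimal sketch of such an MTL network, with an LSTM sharing layer feeding separate heads for each energy load, is given below; the input shape and the three load names (electric, heating, cooling) are assumptions, and a model-agnostic SHAP explainer (e.g. shap.GradientExplainer) could subsequently be applied to the trained model for the global/local interpretation step.

```python
# Multi-task LSTM forecaster: one shared LSTM layer, one dense head per load.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_mtl_lstm(timesteps: int, n_features: int) -> Model:
    inputs = tf.keras.Input(shape=(timesteps, n_features))
    shared = layers.LSTM(64)(inputs)             # sharing layer across all tasks
    heads = {
        name: layers.Dense(1, name=name)(
            layers.Dense(32, activation="relu")(shared))
        for name in ("electric", "heating", "cooling")
    }
    model = Model(inputs, heads)
    model.compile(optimizer="adam", loss="mse")
    return model

# model = build_mtl_lstm(timesteps=24, n_features=10)
# model.fit(X_train, {"electric": y_e, "heating": y_h, "cooling": y_c}, epochs=50)
```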
Funding: supported by the ZALF Integrated Priority Project (IPP2022) “Co-designing smart, resilient, sustainable agricultural landscapes with cross-scale diversification”; the Bundesministerium für Bildung und Forschung (BMBF) Land-Innovation-Lausitz project “Landschaftsinnovationen in der Lausitz für eine klimaangepasste Bioökonomie und naturnahen Bioökonomie-Tourismus” (03WIR3017A); the BMBF project “Multi-modale Datenintegration, domänenspezifische Methoden und KI zur Stärkung der Datenkompetenz in der Agrarforschung” (16DKWN089); and the Brandenburgische Technische Universität Cottbus-Senftenberg GRS cluster project “Integrated analysis of Multifunctional Fruit production landscapes to promote ecosystem services and sustainable land-use under climate change” (GRS2018/19).
Abstract: Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning that we cannot explain what the models learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and the associated toolkits of interpretable machine learning. This study demonstrates the usefulness of several such methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, together with soil, climate, and management variables. The data analysis revealed that no-tillage management can increase maize crop yield where the yield under conventional tillage is <5000 kg/ha and the maximum temperature is higher than 32°C. These methods are useful for answering (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what the reasons are underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that in current practice the goodness of model fit is over-evaluated with model performance measures, while these questions remain unanswered. XAI and interpretable machine learning can enhance trust in, and the explainability of, AI.
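Questions (i)-(iv) above map directly onto standard interpretable-ML tools; the sketch below applies them to a hypothetical table df with a yield response ratio yield_ratio and assumed numeric predictors max_temp, precip, soil_clay and n_input (not the paper's actual variable names).

```python
# Permutation importance, partial dependence and SHAP on a boosted-tree model.
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

predictors = ["max_temp", "precip", "soil_clay", "n_input"]
X, y = df[predictors], df["yield_ratio"]
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# (i) Which variables are important for prediction?
perm = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# (ii)-(iii) How are important variables and their interactions associated
# with the response? One- and two-way partial dependence plots.
PartialDependenceDisplay.from_estimator(model, X,
                                        ["max_temp", ("max_temp", "precip")])

# (iv) What are the reasons underlying the prediction for one instance?
shap_values = shap.TreeExplainer(model).shap_values(X.iloc[[0]])

# (v) is addressed by repeating the same analysis with a different estimator.
```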
Funding: supported by the National Science and Technology Major Project, China (No. 2017-II-0004-0016).
Abstract: Geometric and working-condition uncertainties are inevitable in a compressor, deviating the compressor performance from the design value. It is therefore necessary to explore the influence of geometric uncertainty on performance deviation under different working conditions. In this paper, the influence of geometric uncertainty at the near-stall, peak-efficiency, and near-choke conditions under design speed and low speed is investigated. First, the manufacturing geometric uncertainties are analyzed. Next, correlation models between geometry and performance under different working conditions are constructed based on a neural network. Then the Shapley additive explanations (SHAP) method is introduced to explain the output of the neural network. Results show that, under real manufacturing uncertainty, the efficiency deviation range is small under the near-stall and peak-efficiency conditions. However, under the near-choke condition, efficiency is highly sensitive to flow-capacity changes caused by geometric uncertainty, leading to a significant increase in the amplitude of the efficiency deviation, up to a magnitude of -3.6%. Moreover, the tip leading-edge radius and tip thickness are the two main factors affecting efficiency deviation. Therefore, to reduce efficiency uncertainty, a compressor should avoid operating near the choke condition, and the tolerances of the tip leading-edge radius and tip thickness should be strictly controlled.
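A minimal sketch of such a surrogate-plus-SHAP workflow is shown below: a small neural network maps sampled geometric deviations to efficiency deviation, and a model-agnostic SHAP explainer attributes the prediction to each geometric parameter. X_geom (samples x geometric parameters, e.g. tip leading-edge radius, tip thickness, chord) and y_eff (efficiency deviation) are assumed inputs; this is illustrative, not the paper's implementation.

```python
# Neural-network surrogate of the geometry-performance correlation, explained
# with a model-agnostic KernelExplainer.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
).fit(X_geom, y_eff)

# Which geometric deviations drive the efficiency deviation at this condition?
background = shap.sample(X_geom, 100)            # background set for KernelExplainer
explainer = shap.KernelExplainer(surrogate.predict, background)
shap_values = explainer.shap_values(X_geom[:200])
mean_abs_impact = np.abs(shap_values).mean(axis=0)   # global ranking of parameters
```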