Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic information than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the conversation's progress, or comparing impacts. An ensemble of pre-trained language models is used here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used to analyze the conversation's progress and predict its pecking order. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out for better sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble with fine-tuned parameters achieved an F1-score of 0.88.
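The abstract does not specify how the five models' outputs are combined; soft voting over the four sentence categories is one common choice. The sketch below assumes that combination rule, and the probability vectors are made-up stand-ins for real model outputs:

```python
# Hedged sketch: soft-voting ensemble over per-model class probabilities.
LABELS = ["information", "question", "directive", "commission"]

def ensemble_predict(prob_lists):
    """Average class probabilities from several models and pick the argmax.

    prob_lists: list of per-model probability vectors over LABELS.
    (The actual EPLM-HT combination rule is not given in the abstract;
    soft voting is an assumption.)
    """
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n for i in range(len(LABELS))]
    return LABELS[max(range(len(LABELS)), key=avg.__getitem__)]

# Three hypothetical model outputs for one sentence:
print(ensemble_predict([
    [0.1, 0.7, 0.1, 0.1],   # e.g. BERT
    [0.2, 0.5, 0.2, 0.1],   # e.g. RoBERTa
    [0.3, 0.4, 0.2, 0.1],   # e.g. XLNet
]))  # → question
```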
The transformer is the key circuit component in the common-mode noise current path when an isolated converter is operating, and its high-frequency characteristics have an important influence on the converter's common-mode noise. Traditionally, transformer models are built from measurements, with a single lumped device representing the transformer; such models cannot provide predictions at the transformer design stage. Based on the transformer's common-mode noise transmission mechanism, this paper derives the transformer's common-mode equivalent capacitance under ideal conditions. Following the measurement principle of a network analyzer, three-dimensional (3D) finite element electromagnetic simulation is used to obtain the transformer's two-port parameters, extract its high-frequency parameters, and establish its electromagnetic compatibility equivalent circuit model. Finally, an experimental prototype is used to verify the correctness of the model by comparing the measured results with the simulation predictions.
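As a back-of-the-envelope illustration of why the common-mode equivalent capacitance matters: the displacement current injected through the interwinding capacitance during a switching transition is i = C·dv/dt. The component values below are illustrative, not taken from the paper:

```python
# Common-mode displacement current through the transformer's
# equivalent interwinding capacitance: i_cm = C_cm * dv/dt.
def common_mode_current(c_cm, dv, dt):
    """Common-mode current for a voltage swing dv (V) over time dt (s)."""
    return c_cm * dv / dt

# Illustrative numbers: 20 pF equivalent capacitance, 400 V switched in 50 ns.
i = common_mode_current(20e-12, 400.0, 50e-9)
print(f"{i * 1e3:.0f} mA")  # 160 mA
```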
Lunar Environment heliospheric X-ray Imager (LEXI) and Solar wind–Magnetosphere–Ionosphere Link Explorer (SMILE) will observe the magnetosheath and its boundary motion in soft X-rays, to study magnetopause reconnection modes under various solar wind conditions, after their respective launches in 2024 and 2025. Magnetosheath conditions, namely plasma density, velocity, and temperature, are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions. We developed a user-friendly magnetosheath model that parameterizes number density, velocity, temperature, and magnetic field by utilizing a global magnetohydrodynamics (MHD) model as well as pre-existing gas-dynamic and analytic models. Using this parameterized magnetosheath model, scientists can easily reconstruct expected soft X-ray images and use them in the analysis of observed LEXI and SMILE images without running complicated global magnetosphere models. First, we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations spanning 7 solar wind densities (1, 5, 10, 15, 20, 25, and 30 cm^(-3)) and 2 interplanetary magnetic field Bz components (±4 nT), and then parameterizing the results for new magnetosheath conditions. We compared the magnetosheath model results with THEMIS statistical data and found good agreement, with a weighted Pearson correlation coefficient greater than 0.77, especially for plasma density and plasma velocity. Second, we compiled a suite of magnetosheath models incorporating the previous gas-dynamic and analytic models, and carried out two case studies to test their performance. The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters. Third, we constructed a tool to calculate a soft X-ray image from any given vantage point, which can support the planning and data analysis of the aforementioned LEXI and SMILE missions. A release of the code has been uploaded to a GitHub repository.
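The weighted Pearson correlation coefficient used in the THEMIS comparison can be computed as follows; the weights and sample values here are assumptions for illustration, not the study's data:

```python
import math

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation coefficient between x and y with weights w."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# Hypothetical model-vs-observation pairs with equal weights:
r = weighted_pearson([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9], [1.0] * 4)
print(round(r, 3))  # 0.993
```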
Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulations have remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climate-mean drought characteristics in drylands, with occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of the precipitation (water deficit) anomalies associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are also reasonably reproduced. Still, the increases in drought characteristics and the associated precipitation and water deficit anomalies are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicating a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias-correction approaches in drought-related studies over drylands when using CMIP6 outputs.
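The overestimation percentages quoted above are relative biases of a simulated climatology against observations; a minimal sketch, with hypothetical precipitation values rather than CMIP6 output, is:

```python
def percent_bias(simulated, observed):
    """Relative bias (%) of a simulated total against the observed total."""
    return 100.0 * (sum(simulated) - sum(observed)) / sum(observed)

# Hypothetical annual precipitation (mm) at four dryland grid points:
obs = [120.0, 80.0, 60.0, 140.0]
sim = [160.0, 110.0, 80.0, 182.0]
print(round(percent_bias(sim, obs), 1))  # 33.0 → a ~33% overestimate
```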
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight; nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, this background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust and flexible, and iteratively deselects outliers such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
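The iterative outlier-deselection idea can be sketched in miniature: fit a smooth background (a straight line below, standing in for the paper's B-spline/spherical-harmonic basis), drop bright positive outliers such as auroral emissions, and refit. This is a simplified sketch under assumed data and a 2-sigma threshold, not the paper's actual basis or thresholding:

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def robust_background(xs, ys, n_iter=5, k=2.0):
    """Fit, deselect bright outliers (residual > k*sigma), and refit."""
    keep = list(range(len(xs)))
    a = b = 0.0
    for _ in range(n_iter):
        a, b = fit_line([xs[i] for i in keep], [ys[i] for i in keep])
        resid = [ys[i] - (a + b * xs[i]) for i in keep]
        sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
        # One-sided deselection: only bright (positive) outliers are dropped.
        keep = [i for i, r in zip(keep, resid) if r <= k * sigma + 1e-12]
    return a, b, keep

xs = [float(i) for i in range(10)]
ys = [float(i) for i in range(10)]
ys[5] = 30.0  # a bright "auroral" spike on top of the smooth background
a, b, keep = robust_background(xs, ys)
print(5 in keep, a, b)  # False 0.0 1.0 — spike deselected, background recovered
```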
In contrast to conventional transformers, power electronic transformers, as an integral component of new energy power systems, are often subjected to high-frequency and transient electrical stresses, leading to heightened concern about insulation failures. Meanwhile, the mechanisms underlying discharge breakdown failure and nanofiller enhancement under high-frequency electrical stress remain unclear. An electric-thermal coupled discharge breakdown phase field model was constructed to study the evolution of the breakdown path in polyimide nanocomposite insulation subjected to high-frequency stress. The investigation focused on the effect of various factors, including frequency, temperature, and nanofiller shape, on the breakdown path of polyimide (PI) composites, and elucidated the enhancement mechanism of nano-modified composite insulation at the mesoscopic scale. The results indicate that with increasing frequency and temperature, the discharge breakdown path develops faster, with Joule heating gradually becoming dominant. The enhancement is attributed to the dispersed electric field distribution and the hindering effect of the nanosheets. These findings offer a theoretical foundation and methodological framework for the optimal design and performance management of new insulating materials in high-frequency power equipment.
Model checking is an automated formal verification method for determining whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertain property verification for these systems is still lacking. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities, yielding a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we reduce the FCTLK model checking problem to FCTL model checking, which enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and illustrate the practical application of our approach with an example of a train control system.
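To make the fuzzy semantics concrete, here is one way to evaluate an EX ("next") modality on a fuzzy Kripke structure, where transitions and propositions take degrees in [0, 1]. The max-min (Gödel) semantics below is one standard choice; the paper's exact semantics and structures may differ:

```python
# Fuzzy EX on a fuzzy Kripke structure:
#   [[EX phi]](s) = max over successors t of min(T(s, t), [[phi]](t))
def fuzzy_EX(trans, phi):
    """trans[s][t] = degree of transition s -> t; phi[t] = degree of phi at t."""
    return {s: max(min(d, phi[t]) for t, d in succ.items())
            for s, succ in trans.items()}

# A small hypothetical fuzzy Kripke structure:
trans = {
    "s0": {"s1": 0.8, "s2": 0.4},
    "s1": {"s0": 1.0},
    "s2": {"s2": 0.6},
}
phi = {"s0": 0.3, "s1": 0.9, "s2": 0.7}
print(fuzzy_EX(trans, phi))  # {'s0': 0.8, 's1': 0.3, 's2': 0.6}
```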
BACKGROUND Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge, and prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is being transformed by the integration of machine learning (ML) and artificial intelligence models. AIM To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, and those exhibiting evident methodological flaws. RESULTS Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined, and only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capability for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
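The AUROC values reported above can be computed from raw risk scores via the rank-sum (Mann-Whitney) identity; the scores and labels below are hypothetical, not data from the reviewed studies:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the probability that a random positive outranks a random negative,
    counting ties as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 90-day mortality risk scores for six patients
# (label 1 = died within 90 days, 0 = survived):
print(auroc([0.9, 0.8, 0.3, 0.7, 0.2, 0.1], [1, 1, 0, 1, 0, 0]))  # 1.0
```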
Emotion Recognition in Conversations (ERC) is fundamental to creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks, but their limited ability to collect and exploit contextual information hinders their effectiveness. To address this, we propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT). The proposed model uses the Multimodal EmotionLines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model applies text augmentation techniques to produce more training data, improving its accuracy. Transformer encoders train the deep neural network (DNN) model, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information; this integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating enhanced samples from the original dataset. By balancing the dataset across all emotion categories, we lessen the adverse effects of data imbalance on model accuracy. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1 score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model addresses the GBN models' weakness in obtaining contextual data for ERC: it recognizes human emotions more accurately by employing text augmentation and transformer-based encoding, while the balanced dataset and additional training samples enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
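The weighted F1 metric reported for TA-MERT averages per-class F1 scores weighted by class support; a minimal sketch with toy emotion labels (not MELD data) is:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1 over classes."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += support[c] / total * f1
    return score

y_true = ["joy", "joy", "anger", "neutral"]
y_pred = ["joy", "neutral", "anger", "neutral"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.75
```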
Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multiple deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. The numerical scenarios incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolution of fragmentation, kinematic, runout, and depositional characteristics is quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly behaves as a dense granular flow, with the Savage number decreasing as the volume of the mass increases. Particle breakage gradually occurs in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit. We propose a disintegration index that incorporates drop height, rock mass volume, and rock strength, and we find a consistent linear relationship between this index and the degree of fragmentation in all tested scenarios.
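The apparent friction coefficient referred to above is the standard rock-avalanche mobility measure H/L (drop height over runout length). The values below are illustrative, and the constants of the reported linear relationship with the equivalent friction coefficient are not reproduced here:

```python
def apparent_friction(drop_height, runout_length):
    """Apparent friction coefficient H/L of a granular mass (dimensionless)."""
    return drop_height / runout_length

# Hypothetical avalanche geometry: 300 m drop, 1200 m runout.
print(apparent_friction(300.0, 1200.0))  # 0.25 — lower values mean higher mobility
```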
In the field of natural language processing (NLP), various pre-trained language models have appeared in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, models have grown ever larger, with ever more parameters; consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of training while reducing model size, this paper proposes PAL-BERT, a first-order pruning model based on the ALBERT model, designed around the characteristics of question-answering (QA) systems and language models. First, a first-order network pruning method based on the ALBERT model is designed, forming the PAL-BERT model. Then, a parameter optimization strategy for PAL-BERT is formulated, with the Mish function used as the activation function in place of ReLU to improve performance. Finally, comparison experiments with the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning-based model compression method that can significantly reduce training time and optimize training efficiency. Compared with traditional models, PAL-BERT significantly improves performance on the NLP task.
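The Mish activation that replaces ReLU in PAL-BERT is defined as mish(x) = x·tanh(softplus(x)) = x·tanh(ln(1 + eˣ)); unlike ReLU, it is smooth and passes small negative values instead of clamping them to zero:

```python
import math

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * math.tanh(math.log1p(math.exp(x)))

print(round(mish(1.0), 4))   # 0.8651
print(round(mish(-5.0), 4))  # a small negative value, unlike ReLU's hard zero
```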
Interval model updating (IMU) methods have been widely used in uncertain model updating because of their low requirements on sample data. However, the surrogate model in IMU methods is mostly built in a single pass, which makes its accuracy highly dependent on the user's experience and in turn affects the accuracy of the IMU method. Therefore, an improved IMU method based on adaptive Kriging models is proposed. This method transforms the objective function of the IMU problem into two deterministic global optimization problems, concerning the upper bound and the interval diameter, through universal grey numbers. These optimization problems are solved with the adaptive Kriging models and particle swarm optimization (PSO) to quantify the uncertain parameters and accomplish the IMU. During construction of the adaptive Kriging models, the sample space is gridded according to sensitivity information, and local sampling is performed in key subspaces based on the maximum mean square error (MMSE) criterion. The interval division coefficient and the random sampling coefficient are adjusted adaptively, without human intervention, until the models meet the accuracy requirements. The effectiveness of the proposed method is demonstrated on a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell. The results show that the updated interval model agrees well with the experimental results.
We used the geological map and published rock density measurements to compile a digital rock density model for the Hong Kong territories, and then estimated the average density for the whole territory. According to our results, rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3). These values are typically smaller than the average density of 2670 kg·m^(-3) often adopted for the upper continental crust in physical geodesy and gravimetric geophysics applications. This finding reflects the fact that the geology of Hong Kong is mainly formed of light volcanic formations and lava flows, with overlying sedimentary deposits at many locations, while the percentage of heavier metamorphic rocks is very low (less than 1%). This product will improve the accuracy of a detailed geoid model and of orthometric heights.
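The territory-wide average is, in essence, an area-weighted mean over the mapped rock units; the unit fractions and densities below are hypothetical placeholders, not the actual Hong Kong map figures:

```python
def average_density(fractions, densities):
    """Area-weighted mean rock density; fractions must sum to 1."""
    return sum(f * d for f, d in zip(fractions, densities))

# Hypothetical unit breakdown: 65% volcanic, 34% sedimentary, 1% metamorphic
# (kg per cubic metre; illustrative values within the reported 2101-2681 range).
rho = average_density([0.65, 0.34, 0.01], [2500.0, 2300.0, 2700.0])
print(round(rho))  # a mean near 2434, below the conventional 2670
```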
The tensile-shear interactive damage (TSID) model is a novel and powerful constitutive model for rock-like materials. This study proposes a methodology for calibrating the TSID model parameters to simulate sandstone. The basic parameters of sandstone are determined through a series of static and dynamic tests, including uniaxial compression, Brazilian disc, triaxial compression under varying confining pressures, hydrostatic compression, and dynamic compression and tensile tests with a split Hopkinson pressure bar. Based on the sandstone test results from this study and previous research, a step-by-step parameter calibration procedure is outlined, covering the strength surface, the equation of state (EOS), the strain rate effect, and damage. The calibrated parameters are verified through numerical tests corresponding to the experimental loading conditions. The consistency between numerical results and experimental data indicates the precision and reliability of the calibrated parameters. The methodology presented in this study is scientifically sound, straightforward, and essential for improving the TSID model, and it can also benefit other rock constitutive models, particularly new user-defined ones.
Time-variant reliability problems involve many uncertain variables from different sources, and it is important to account for these uncertainties in engineering. In addition, time-variant reliability problems typically involve a complex multilevel nested optimization problem, which can entail an enormous amount of computation. To this end, this paper studies the time-variant reliability evaluation of structures with both stochastic and bounded uncertainties using a mixed probability and convex set model. In this method, the stochastic process of a limit-state function with mixed uncertain parameters is first discretized and then converted into a time-independent reliability problem. Further, to solve the double nested optimization problem in the hybrid reliability calculation, an efficient iterative scheme is designed in standard uncertainty space to determine the most probable point (MPP). The limit-state function is linearized at these points, and a new random variable is defined to solve the equivalent static reliability analysis model. The effectiveness of the proposed method is verified with two benchmark numerical examples and a practical engineering problem.
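For a linear limit-state function in standard normal space, the MPP and the failure probability have closed forms, which is useful for sanity-checking an iterative MPP search. This sketch covers only that linear special case (first-order reliability), not the paper's nested time-variant scheme:

```python
import math

def phi_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(a, b):
    """MPP, reliability index, and failure probability for a linear limit
    state g(u) = b + a.u in standard normal space (failure when g < 0).
    Closed form: beta = b / ||a||, u* = -beta * a / ||a||, Pf = Phi(-beta)."""
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = b / norm_a
    mpp = [-ai / norm_a * beta for ai in a]
    return beta, mpp, phi_cdf(-beta)

# Illustrative two-variable limit state g(u) = 3 - is NOT implied; here
# g(u) = 3 + u1 + u2, so beta = 3 / sqrt(2):
beta, mpp, pf = form_linear([1.0, 1.0], 3.0)
print(round(beta, 4), [round(u, 2) for u in mpp])  # 2.1213 [-1.5, -1.5]
```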
Understanding the anisotropic creep behavior of shale under direct shear is a challenging issue. In this context, we conducted shear-creep and steady-creep tests on shale with five bedding orientations (0°, 30°, 45°, 60°, and 90°) under multiple levels of direct shear stress for the first time. The results show that the anisotropic creep of shale exhibits significant stress-dependent behavior. Under low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress, the creep compliance is minimal when the bedding orientation is 0°, and the steady-creep rate increases significantly as the bedding orientation increases through 30°, 45°, 60°, and 90°. The stress-strain values at the onset of the accelerated creep stage first increase and then decrease with bedding orientation. A semilogarithmic model is proposed that reflects the stress dependence of the steady-creep rate while accounting for hardening and damage; it minimizes the deviation of the calculated steady-creep rate from the observed value and reveals how the bedding orientation influences the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated, showing that the logarithmic model explains the experimental creep strain and creep rate well and accurately predicts long-term shear creep deformation. Based on an improved logarithmic model, the variations of the creep parameters with shear stress and bedding orientation are discussed. With these findings, a mathematical method for constructing an anisotropic shear creep model of shale is proposed, which characterizes the nonlinear dependence of the anisotropic shear creep behavior of shale on the bedding orientation.
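The logarithmic creep model singled out above has the form J(t) = a + b·log t, so compliance grows linearly with the logarithm of time, as observed at low stress. The parameter values below are illustrative, not fitted shale data:

```python
import math

def log_creep(t, a, b):
    """Logarithmic creep compliance J(t) = a + b * log10(t), t > 0.
    a and b would be fitted per shear stress and bedding orientation."""
    return a + b * math.log10(t)

# Each decade of time adds the same compliance increment b:
for t in (1.0, 10.0, 100.0, 1000.0):
    print(round(log_creep(t, 0.5, 0.2), 2))  # 0.5, 0.7, 0.9, 1.1
```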
A multiphase field model coupled with a lattice Boltzmann model (PF-LBM) is proposed to simulate the distribution of bubbles and solutes at the solid-liquid interface, the interaction between dendrites and bubbles, and the effects of different temperatures, anisotropy strengths, and tilt angles on the solidified structure of an SCN-0.24 wt.% butanedinitrile alloy during solidification. The model uses a multiphase field formulation to simulate dendrite growth, computing the growth kinetics from interfacial solute equilibrium, and a lattice Boltzmann model (LBM) based on Shan-Chen multiphase flow to simulate the growth and motion of bubbles in the liquid phase, including the interactions among the solid, liquid, and gas phases. The simulation results show that during the directional growth of columnar dendrites, bubbles first precipitate slowly at the very bottom of the dendrites and then rise due to the solid-liquid density difference and pressure differences. The bubbles interact with the dendrites during migration, through extrusion, overflow, fusion, and disappearance. Where the dendrite channels are wide, bubbles fuse into larger irregular bubbles; where the channels are dense, bubbles deform under the extrusion of the dendrites. In the simulated region, as the dendrites converge and diverge, bubbles escape from the dendrites by compression and diffusion, which also causes fusion and spillage of the bubbles. These results reveal the physical mechanisms of bubble nucleation, growth, and kinematic evolution during solidification and their interaction with dendrite growth.
Funding: supported by the NSF grant AGS-1928883; the NASA grants 80NSSC20K1670 and 80MSFC20C0019; and support from NASA GSFC IRAD/HIF/ISFM funds.
Abstract: The Lunar Environment heliospheric X-ray Imager (LEXI) and the Solar wind−Magnetosphere−Ionosphere Link Explorer (SMILE) will observe the magnetosheath and its boundary motion in soft X-rays to understand magnetopause reconnection modes under various solar wind conditions after their respective launches in 2024 and 2025. Magnetosheath conditions, namely plasma density, velocity, and temperature, are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions. We developed a user-friendly magnetosheath model that parameterizes number density, velocity, temperature, and magnetic field by utilizing a global magnetohydrodynamics (MHD) model as well as pre-existing gas-dynamic and analytic models. Using this parameterized magnetosheath model, scientists can easily reconstruct expected soft X-ray images and use them to analyze observed LEXI and SMILE images without running complicated global magnetosphere models. First, we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations under 7 solar wind densities (1, 5, 10, 15, 20, 25, and 30 cm^(-3)) and 2 interplanetary magnetic field Bz components (±4 nT), and then parameterizing the results for new magnetosheath conditions. We compared the magnetosheath model with THEMIS statistical data and found good agreement, with a weighted Pearson correlation coefficient greater than 0.77, especially for plasma density and plasma velocity. Second, we compiled a suite of magnetosheath models incorporating previous magnetosheath models (gas-dynamic, analytic) and performed two case studies to test their performance. The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters. Third, we constructed a tool to calculate a soft X-ray image from any given vantage point, which can support the planning and data analysis of the aforementioned LEXI and SMILE missions. A release of the code has been uploaded to a GitHub repository.
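The parameterization step can be illustrated with a toy version: simulated magnetosheath quantities are stored on the discrete solar wind density grid used for the MHD runs, and values for a new upstream density are obtained by linear interpolation between the two nearest runs. The grid matches the abstract; the sample values and function name are hypothetical.

```python
GRID = [1, 5, 10, 15, 20, 25, 30]  # solar wind densities of the 7 MHD runs, cm^-3

def interpolate_on_grid(n_sw, grid, values):
    """Linearly interpolate a magnetosheath quantity stored at discrete grid runs,
    clamping outside the grid range."""
    if n_sw <= grid[0]:
        return values[0]
    if n_sw >= grid[-1]:
        return values[-1]
    for x0, x1, y0, y1 in zip(grid, grid[1:], values, values[1:]):
        if x0 <= n_sw <= x1:
            t = (n_sw - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical peak magnetosheath densities from each run (cm^-3):
peak_density = [4.0, 18.0, 35.0, 52.0, 68.0, 85.0, 100.0]
print(interpolate_on_grid(12.5, GRID, peak_density))  # 43.5, halfway between runs
```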
Funding: supported by the Ministry of Science and Technology of China (Grant No. 2018YFA0606501); the National Natural Science Foundation of China (Grant No. 42075037); the Key Laboratory Open Research Program of the Xinjiang Science and Technology Department (Grant No. 2022D04009); and the National Key Scientific and Technological Infrastructure project "Earth System Numerical Simulation Facility" (EarthLab).
Abstract: Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulations have remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climatological mean drought characteristics in drylands, with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of precipitation (water deficit) associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increases in drought characteristics and the associated precipitation and water deficit are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias-correction approaches in drought-related studies over drylands when using CMIP6 outputs.
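A minimal form of the bias correction the authors recommend is linear scaling: the modeled series is multiplied by the ratio of the observed to the simulated climatological mean. The precipitation values below are invented for illustration; operational studies would typically use quantile mapping or similar distribution-aware methods.

```python
def linear_scaling(model_series, obs_mean):
    """Scale a modeled series so its mean matches the observed climatological mean."""
    model_mean = sum(model_series) / len(model_series)
    factor = obs_mean / model_mean
    return [v * factor for v in model_series]

# Toy example: the model overestimates mean precipitation (250 vs. observed 200 mm).
model = [150.0, 250.0, 350.0]
corrected = linear_scaling(model, 200.0)
print([round(v, 6) for v in corrected])  # [120.0, 200.0, 280.0]
```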
Funding: supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50, and the Trond Mohn Foundation.
Abstract: Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
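The core of the background model, a linear inverse problem solved while iteratively deselecting auroral outliers, can be sketched in one dimension: fit, discard points whose residual exceeds a threshold, and refit until the selection stabilizes. The real method fits B-splines and spherical harmonics; here a straight line and hand-picked numbers stand in.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def robust_fit(xs, ys, thresh, iters=10):
    # Iteratively deselect points (e.g. auroral emissions) far from the fit.
    keep = list(range(len(xs)))
    a = b = 0.0
    for _ in range(iters):
        a, b = fit_line([xs[i] for i in keep], [ys[i] for i in keep])
        new = [i for i in keep if abs(ys[i] - (a + b * xs[i])) <= thresh]
        if new == keep:
            break
        keep = new
    return a, b, keep

# Smooth background y = 2 + 0.5x with one bright "auroral" outlier at x = 3.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0, 2.5, 3.0, 10.0, 4.0]
a, b, keep = robust_fit(xs, ys, thresh=3.0)
print(round(a, 3), round(b, 3), keep)  # 2.0 0.5 [0, 1, 2, 4]
```

The outlier at x = 3 is dropped after the first pass, and the refit recovers the underlying background exactly.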
Funding: supported in part by the National Key R&D Program of China (No. 2021YFB2601404); the Beijing Natural Science Foundation (No. 3232053); and the National Natural Science Foundation of China (Nos. 51929701 and 52127812).
Abstract: In contrast to conventional transformers, power electronic transformers, as an integral component of new energy power systems, are often subjected to high-frequency and transient electrical stresses, leading to heightened concerns regarding insulation failures. Meanwhile, the underlying mechanisms behind discharge breakdown failure and nanofiller enhancement under high-frequency electrical stress remain unclear. An electric-thermal coupled discharge breakdown phase-field model was constructed to study the evolution of the breakdown path in polyimide nanocomposite insulation subjected to high-frequency stress. The investigation focused on analyzing the effect of various factors, including frequency, temperature, and nanofiller shape, on the breakdown path of polyimide (PI) composites. Additionally, it elucidated the enhancement mechanism of nano-modified composite insulation at the mesoscopic scale. The results indicated that with increasing frequency and temperature, the discharge breakdown path develops faster, accompanied by a gradual dominance of Joule heating. The enhancement is attributed to the dispersed electric field distribution and the hindering effect of the nanosheets. The research findings offer a theoretical foundation and methodological framework to inform the optimal design and performance management of new insulating materials used in high-frequency power equipment.
Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300); the National Natural Science Foundation of China (Grant No. 61962001); and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method to verify whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of verification of quantitative and uncertain properties for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we further illustrate the practical application of our approach through an example of a train control system.
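On a fuzzy Kripke structure, truth degrees of temporal operators are computed with min/max in place of conjunction/disjunction. A sketch of evaluating EX φ, the degree to which φ holds in some next state, under the usual min/max semantics; the concrete structure, transition degrees, and valuation are invented for illustration.

```python
# Fuzzy Kripke structure: transition degrees R[s][t] and a fuzzy valuation of p.
R = {0: {1: 0.8, 2: 0.5}, 1: {2: 1.0}, 2: {0: 0.3}}
p = {0: 0.2, 1: 0.9, 2: 0.6}

def ex(phi, R):
    """Degree of 'EX phi' at each state: max over successors t of min(R[s][t], phi[t])."""
    return {s: max((min(deg, phi[t]) for t, deg in succ.items()), default=0.0)
            for s, succ in R.items()}

print(ex(p, R))  # {0: 0.8, 1: 0.6, 2: 0.2}
```

A full FCTL checker would iterate such operators to a fixed point for EU and EG; the FCTLK-to-FCTL translation described in the abstract would run before this evaluation step.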
Abstract: BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Abstract: Emotion Recognition in Conversations (ERC) is fundamental to creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. To address this, we propose a text augmentation-based computational model for recognizing emotions using transformers (TA-MERT). The proposed model uses the Multimodal EmotionLines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving the proposed model's accuracy. Transformer encoders train the deep neural network (DNN) model, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information. This integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating enhanced samples from the original dataset. By balancing the dataset across all emotion categories, we can lessen the adverse effects of data imbalance on the accuracy of the proposed model. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1 score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model addresses the GBN models' weaknesses in obtaining contextual data for ERC. The TA-MERT model recognizes human emotions more accurately by employing text augmentation and transformer-based encoding. The balanced dataset and the additional training samples also enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
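The dataset-balancing step, creating extra samples so every emotion category reaches the majority count, can be sketched as deterministic oversampling that cycles through each minority class. The real TA-MERT pipeline generates augmented text rather than plain copies; the tiny dataset below is invented.

```python
from collections import Counter

def balance_by_oversampling(samples):
    """samples: list of (text, label). Cycle through each minority class
    until every label matches the majority count."""
    by_label = {}
    for text, label in samples:
        by_label.setdefault(label, []).append((text, label))
    target = max(len(v) for v in by_label.values())
    balanced = []
    for label, items in by_label.items():
        out = list(items)
        i = 0
        while len(out) < target:
            out.append(items[i % len(items)])  # duplicate (or augment) a sample
            i += 1
        balanced.extend(out)
    return balanced

data = [("I love it", "joy"), ("so happy", "joy"), ("great", "joy"),
        ("oh no", "sadness")]
balanced = balance_by_oversampling(data)
print(Counter(lbl for _, lbl in balanced))  # 3 'joy' and 3 'sadness' samples
```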
Funding: support from the National Key R&D Plan (Grant No. 2022YFC3004303); the National Natural Science Foundation of China (Grant No. 42107161); the State Key Laboratory of Hydroscience and Hydraulic Engineering (Grant No. 2021-KY-04); the Open Research Fund Program of the State Key Laboratory of Hydroscience and Engineering (sklhse-2023-C-01); the Open Research Fund Program of the Key Laboratory of the Hydrosphere of the Ministry of Water Resources (mklhs-2023-04); and the China Three Gorges Corporation (XLD/2117).
Abstract: Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multiple deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. The numerical scenarios incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolution of fragmentation, kinematic, runout, and depositional characteristics is quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly exhibits characteristics of a dense granular flow, with the Savage number exhibiting a decreasing trend as the volume of mass increases. The process of particle breakage gradually occurs in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit. We propose a disintegration index that incorporates factors such as drop height, rock mass volume, and rock strength. Our findings demonstrate a consistent linear relationship between this index and the fragmentation degree in all tested scenarios.
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003, 2023YFSY0026, 2023YFH0004).
Abstract: In the field of natural language processing (NLP), various pre-trained language models have emerged in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models and a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model size, this paper proposes PAL-BERT, a first-order pruning model based on the ALBERT model, according to the characteristics of question-answering (QA) systems and language models. First, a first-order network pruning method based on the ALBERT model is designed, forming the PAL-BERT model. Then, the parameter optimization strategy of the PAL-BERT model is formulated, and the Mish function is used as the activation function instead of ReLU to improve performance. Finally, comparison experiments with the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning-based model compression method that can significantly reduce training time and optimize training efficiency. Compared with traditional models, PAL-BERT significantly improves NLP task performance.
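Two pieces of this abstract are easy to make concrete: the Mish activation, x·tanh(softplus(x)), and a generic first-order pruning criterion that ranks weights by |w·∂L/∂w| and zeroes the least important fraction. The weight and gradient values below are illustrative, and this is a textbook first-order criterion, not the exact PAL-BERT procedure.

```python
import math

def mish(x):
    # Mish activation: x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x).
    return x * math.tanh(math.log1p(math.exp(x)))

def first_order_prune(weights, grads, sparsity):
    # First-order importance: |w * dL/dw|; zero the lowest-scoring fraction.
    # Assumes distinct scores (ties would prune a few extra weights).
    scores = [abs(w * g) for w, g in zip(weights, grads)]
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    cutoff = sorted(scores)[k - 1]
    return [w if s > cutoff else 0.0 for w, s in zip(weights, scores)]

print(mish(0.0))  # 0.0; for large x, mish(x) approaches x
print(first_order_prune([1.0, -2.0, 0.5, 3.0],
                        [0.1, 0.01, 0.5, 0.001], sparsity=0.5))
# [1.0, 0.0, 0.5, 0.0]: the two weights with the smallest |w*g| are removed
```

Note that first-order pruning can remove a large weight (here 3.0) if its gradient is tiny, which is precisely how it differs from plain magnitude pruning.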
Funding: Project supported by the National Natural Science Foundation of China (Nos. 12272211, 12072181, and 12121002).
Abstract: Interval model updating (IMU) methods have been widely used in uncertain model updating due to their low requirements for sample data. However, the surrogate model in IMU methods is mostly constructed in a single pass. This makes the accuracy of the surrogate model highly dependent on the experience of users and affects the accuracy of IMU methods. Therefore, an improved IMU method via adaptive Kriging models is proposed. This method transforms the objective function of the IMU problem into two deterministic global optimization problems concerning the upper bound and the interval diameter through universal grey numbers. These optimization problems are addressed through the adaptive Kriging models and the particle swarm optimization (PSO) method to quantify the uncertain parameters, and the IMU is accomplished. During the construction of these adaptive Kriging models, the sample space is gridded according to sensitivity information. Local sampling is then performed in key subspaces based on the maximum mean square error (MMSE) criterion. The interval division coefficient and random sampling coefficient are adaptively adjusted without human intervention until the model meets the accuracy requirements. The effectiveness of the proposed method is demonstrated by a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell. The results show that the updated results of the interval model are in good agreement with the experimental results.
Funding: supported by the Hong Kong GRF RGC project 15217222: "Modernization of the leveling network in the Hong Kong territories."
Abstract: We used the geological map and published rock density measurements to compile a digital rock density model for the Hong Kong territories. We then estimated the average density for the whole territory. According to our results, the rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3). These density values are typically smaller than the average density of 2670 kg·m^(-3), often adopted to represent the average density of the upper continental crust in physical geodesy and gravimetric geophysics applications. This finding reflects that the geological configuration of Hong Kong is mainly formed by light volcanic formations and lava flows with overlying sedimentary deposits at many locations, while the percentage of heavier metamorphic rocks is very low (less than 1%). This product will improve the accuracy of a detailed geoid model and orthometric heights.
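The territory-wide average implied by such a density model is an area-weighted mean over the mapped rock units. The unit fractions and densities below are invented for illustration and are not the paper's actual figures.

```python
def area_weighted_density(units):
    """units: list of (area_fraction, density_kg_m3); fractions must sum to 1."""
    total = sum(f for f, _ in units)
    assert abs(total - 1.0) < 1e-9, "area fractions must sum to 1"
    return sum(f * rho for f, rho in units)

# Hypothetical composition: volcanic, sedimentary, metamorphic (<1%, per the abstract).
units = [(0.60, 2300.0), (0.39, 2200.0), (0.01, 2681.0)]
print(round(area_weighted_density(units), 2))  # 2264.81 kg per cubic metre
```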
Funding: funded by the National Natural Science Foundation of China (Grant No. 12272247); the National Key Project (Grant No. GJXM92579); and the Major Research and Development Project of Metallurgical Corporation of China Ltd. in the Non-Steel Field (Grant No. 2021-5).
Abstract: The tensile-shear interactive damage (TSID) model is a novel and powerful constitutive model for rock-like materials. This study proposes a methodology to calibrate the TSID model parameters to simulate sandstone. The basic parameters of sandstone are determined through a series of static and dynamic tests, including uniaxial compression, Brazilian disc, triaxial compression under varying confining pressures, hydrostatic compression, and dynamic compression and tensile tests with a split Hopkinson pressure bar. Based on the sandstone test results from this study and previous research, a step-by-step procedure for parameter calibration is outlined, which accounts for the categories of the strength surface, equation of state (EOS), strain rate effect, and damage. The calibrated parameters are verified through numerical tests that correspond to the experimental loading conditions. Consistency between numerical results and experimental data indicates the precision and reliability of the calibrated parameters. The methodology presented in this study is scientifically sound, straightforward, and essential for improving the TSID model. Furthermore, it has the potential to contribute to other rock constitutive models, particularly new user-defined models.
Funding: partially supported by the National Natural Science Foundation of China (52375238); the Science and Technology Program of Guangzhou (202201020213, 202201020193, 202201010399); and the GZHU-HKUST Joint Research Fund (YH202109).
Abstract: In time-variant reliability problems, there are many uncertain variables from different sources. Therefore, it is important to consider these uncertainties in engineering. In addition, time-variant reliability problems typically involve a complex multilevel nested optimization problem, which can result in an enormous amount of computation. To this end, this paper studies the time-variant reliability evaluation of structures with stochastic and bounded uncertainties using a mixed probability and convex set model. In this method, the stochastic process of a limit-state function with mixed uncertain parameters is first discretized and then converted into a time-independent reliability problem. Further, to solve the double nested optimization problem in hybrid reliability calculation, an efficient iterative scheme is designed in standard uncertainty space to determine the most probable point (MPP). The limit-state function is linearized at these points, and an innovative random variable is defined to solve the equivalent static reliability analysis model. The effectiveness of the proposed method is verified by two benchmark numerical examples and a practical engineering problem.
Funding: funded by the National Natural Science Foundation of China (Grant Nos. U22A20166 and 12172230) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515012654).
Abstract: Understanding the anisotropic creep behavior of shale under direct shear is a challenging issue. In this context, we conducted shear-creep and steady-creep tests on shale with five bedding orientations (i.e., 0°, 30°, 45°, 60°, and 90°) under multiple levels of direct shear for the first time. The results show that the anisotropic creep of shale exhibits significant stress-dependent behavior. Under low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress, the creep compliance of shale is minimal when the bedding orientation is 0°, and the steady-creep rate of shale increases significantly with increasing bedding orientations of 30°, 45°, 60°, and 90°. The stress-strain values corresponding to the inception of the accelerated creep stage show an increasing and then decreasing trend with bedding orientation. A semilogarithmic model that reflects the stress dependence of the steady-creep rate while considering the hardening and damage process is proposed. The model minimizes the deviation of the calculated steady-state creep rate from the observed value and reveals the influence of bedding orientation on the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated. It is shown that the logarithmic model can well explain the experimental creep strain and creep rate, and it can accurately predict long-term shear creep deformation. Based on an improved logarithmic model, the variations in creep parameters with shear stress and bedding orientation are discussed. With the above findings, a mathematical method for constructing an anisotropic shear creep model of shale is proposed, which can characterize the nonlinear dependence of the anisotropic shear creep behavior of shale on bedding orientation.
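A logarithmic creep law of the kind evaluated here, strain ε(t) = a + b·ln t, can be fitted by ordinary least squares on ln t. The creep values below are synthetic, generated from a = 0.010 and b = 0.002 purely to show that the fit recovers the coefficients; they are not the paper's data.

```python
import math

def fit_semilog(times, strains):
    """Least-squares fit of strain = a + b * ln(t)."""
    xs = [math.log(t) for t in times]
    n = len(xs)
    sx, sy = sum(xs), sum(strains)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, strains))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Synthetic creep readings at t = 1, e, e^2 (so ln t = 0, 1, 2).
times = [1.0, math.e, math.e ** 2]
strains = [0.010, 0.012, 0.014]
a, b = fit_semilog(times, strains)
print(round(a, 6), round(b, 6))  # 0.01 0.002
```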
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 52161002, 51661020, and 11364024); the Postdoctoral Science Foundation of China (Grant No. 2014M560371); and the Funds for Distinguished Young Scientists of Lanzhou University of Technology of China (Grant No. J201304).
Abstract: A multiphase field model coupled with a lattice Boltzmann model (PF-LBM) is proposed to simulate the distribution of bubbles and solutes at the solid-liquid interface, the interaction between dendrites and bubbles, and the effects of different temperatures, anisotropy strengths, and tilting angles on the solidified microstructure of the SCN-0.24 wt.% butanedinitrile alloy during solidification. The model adopts a multiphase field formulation to simulate dendrite growth, calculating the growth kinetics of dendrites based on interfacial solute equilibrium, and adopts a lattice Boltzmann model (LBM) based on the Shan-Chen multiphase flow to simulate the growth and motion of bubbles in the liquid phase, including the interactions among the solid, liquid, and gas phases. The simulation results show that during the directional growth of columnar dendrites, bubbles first precipitate slowly at the very bottom of the dendrites and then rise due to solid-liquid density differences and pressure differences. The bubbles interact with the dendrites during flow migration through extrusion, overflow, fusion, and disappearance. Where the dendrite channels are wide, bubbles fuse to form larger irregular bubbles; where the channels are dense, bubbles deform due to extrusion by the dendrites. In the simulated region, as the dendrites converge and diverge, the bubbles precipitate out of the dendrites by compression and diffusion, which also causes physical phenomena such as fusion and spillage of the bubbles. These results reveal the physical mechanisms of bubble nucleation, growth, and kinematic evolution during solidification and their interaction with dendrite growth.