Lunar Environment heliospheric X-ray Imager (LEXI) and Solar wind–Magnetosphere–Ionosphere Link Explorer (SMILE) will observe the magnetosheath and its boundary motion in soft X-rays to study magnetopause reconnection modes under various solar wind conditions after their respective launches in 2024 and 2025. Magnetosheath conditions, namely plasma density, velocity, and temperature, are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions. We developed a user-friendly magnetosheath model that parameterizes number density, velocity, temperature, and magnetic field by utilizing a global magnetohydrodynamics (MHD) model as well as pre-existing gas-dynamic and analytic models. Using this parameterized magnetosheath model, scientists can easily reconstruct expected soft X-ray images and use them in the analysis of images observed by LEXI and SMILE without running complicated global magnetosphere models. First, we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations under 7 solar wind densities (1, 5, 10, 15, 20, 25, and 30 cm⁻³) and 2 interplanetary magnetic field Bz components (±4 nT), and then parameterizing the results for new magnetosheath conditions. We compared the magnetosheath model results with THEMIS statistical data and found good agreement, with a weighted Pearson correlation coefficient greater than 0.77, especially for plasma density and plasma velocity. Second, we compiled a suite of magnetosheath models incorporating previous magnetosheath models (gas-dynamic, analytic) and performed two case studies to test their performance. The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters. Third, we constructed a tool to calculate a soft X-ray image from any given vantage point, which can support the planning and data analysis of the aforementioned LEXI and SMILE missions. A release of the code has been uploaded to a GitHub repository.
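As a rough illustration of what such a vantage-point tool computes, the solar wind charge exchange soft X-ray intensity is, to first order, a line-of-sight integral of solar wind density, neutral hydrogen density, and effective collision speed. In the sketch below, the exospheric hydrogen profile, its 10 Re normalization, and the efficiency factor `alpha` are assumptions for illustration, not values from the paper:

```python
import numpy as np

def soft_xray_intensity(n_sw, v_eff, los_r, alpha=1e-15):
    """Illustrative charge-exchange integral I = alpha * ∫ n_sw * n_H * v_eff ds.

    n_sw  : solar wind proton density along the line of sight [cm^-3]
    v_eff : effective collision speed [km/s]
    los_r : geocentric distances sampled along the line of sight [Re]
    alpha : placeholder interaction efficiency (hypothetical value)
    """
    n_H = 25.0 * (10.0 / los_r) ** 3   # assumed exospheric hydrogen profile [cm^-3]
    ds = np.gradient(los_r)            # path-length elements [Re]
    return np.sum(alpha * n_sw * n_H * v_eff * ds)

# Example: a uniform magnetosheath slab with n_sw = 20 cm^-3 and v_eff = 400 km/s
r = np.linspace(8.0, 25.0, 200)
I = soft_xray_intensity(np.full_like(r, 20.0), 400.0, r)
```

Because the integrand is linear in the solar wind density, doubling the density along the line of sight doubles the predicted intensity, which is the basic reason magnetosheath density is the key input for image prediction.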
Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulations have remained a challenge, and the related performances of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remain unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, the annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than in observations. The CMIP6 models show weaknesses in capturing the climate-mean drought characteristics in drylands, particularly with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of the precipitation (water deficit) anomalies associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are also reasonably reproduced. Still, the increases in drought characteristics and the associated precipitation and water deficit anomalies are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias-correction approaches when using CMIP6 outputs in drought-related studies over drylands.
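The water deficit metric used above is simply precipitation minus PET, and the reported biases are relative differences of means. A minimal sketch with hypothetical numbers (not CMIP6 or observational data) shows how a wet precipitation bias and a low PET bias combine into a weakened water deficit:

```python
import numpy as np

def percent_bias(simulated, observed):
    """Relative bias of the simulated mean against the observed mean, in percent."""
    return 100.0 * (np.mean(simulated) - np.mean(observed)) / np.mean(observed)

# Hypothetical dryland annual means (illustrative numbers only)
p_obs, pet_obs = 300.0, 1500.0        # observed precipitation and PET [mm/yr]
p_sim, pet_sim = 400.0, 1020.0        # a "model" that is too wet with too-low PET

deficit_obs = p_obs - pet_obs         # water deficit as defined in the study: P - PET
deficit_sim = p_sim - pet_sim
bias_p = percent_bias(p_sim, p_obs)   # ~ +33%, mirroring the reported wet bias
```

With these numbers the simulated deficit (-620 mm/yr) is roughly half as strong as the observed one (-1200 mm/yr), which is the sense in which the paper reports a ~50% weaker water deficit condition.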
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
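The iterative outlier deselection described above can be sketched as a least-squares fit that repeatedly discards pixels with large positive residuals (auroral counts sit on top of the smooth background) and refits. The basis below is a toy quadratic rather than the paper's B-splines and spherical harmonics, and the threshold `k` and noise level are illustrative assumptions:

```python
import numpy as np

def robust_background_fit(G, d, n_iter=5, k=2.0):
    """Iteratively deselect bright outliers (e.g., aurora) while solving G m = d."""
    keep = np.ones(len(d), dtype=bool)
    m = np.linalg.lstsq(G, d, rcond=None)[0]
    for _ in range(n_iter):
        r = d - G @ m
        sigma = np.std(r[keep])
        # Deselect only positive outliers: aurora adds counts on top of the dayglow
        keep = r < k * sigma
        m = np.linalg.lstsq(G[keep], d[keep], rcond=None)[0]
    return m, keep

# Toy example: smooth quadratic "dayglow" plus a localized auroral blob
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
G = np.vstack([np.ones_like(x), x, x**2]).T     # toy basis (the paper uses B-splines)
d = 5.0 + 2.0 * x + 1.0 * x**2 + 0.1 * rng.standard_normal(x.size)
d[90:110] += 20.0                               # bright auroral contamination
m, keep = robust_background_fit(G, d)
```

After a few iterations the contaminated pixels are excluded and the recovered coefficients are close to the true background, even though the initial fit was pulled upward by the aurora.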
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models.
AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems.
METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, and those exhibiting evident methodological flaws.
RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI.
CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
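The headline metric in these studies, the area under the receiver operating characteristic curve (AUC), can be computed directly from its rank interpretation without any plotting. The risk scores below are invented for illustration and are not data from the reviewed studies:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via its rank interpretation (Mann-Whitney statistic): the probability
    that a random positive case is scored above a random negative case."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Count pairwise wins of positives over negatives, ties counting half
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical 90-day mortality risk scores for nine patients (invented)
y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.3, 0.2, 0.8, 0.4, 0.7, 0.35, 0.5, 0.9])
auc = auc_score(y, scores)
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect discrimination, which is why the 0.6–1 range reported above spans "satisfactory to excellent".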
Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular-flow models or the bonded-particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. The numerical scenarios incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolution of fragmentation, kinematic, runout, and depositional characteristics is quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly exhibits characteristics of a dense granular flow, with the Savage number showing a decreasing trend as the volume of the mass increases. Particle breakage occurs gradually in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source-stratigraphy preservation in the deposit. We propose a disintegration index that incorporates factors such as drop height, rock mass volume, and rock strength. Our findings demonstrate a consistent linear relationship between this index and the degree of fragmentation in all tested scenarios.
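Two quantities named above can be made concrete: the apparent friction coefficient is the ratio of drop height to runout length, and the Savage number compares grain-inertia stresses with gravitational stresses (values well below 1 indicate a dense, friction-dominated flow, consistent with the study's findings). The numbers below are hypothetical, chosen only to show the arithmetic:

```python
def apparent_friction(drop_height, runout_length):
    """Apparent friction coefficient of a rock avalanche: drop height H over runout L."""
    return drop_height / runout_length

def savage_number(shear_rate, grain_diameter, depth, g=9.81):
    """Savage number: ratio of grain-inertia stress to gravitational stress.
    N_sav << 1 indicates a dense granular flow rather than a collisional one."""
    return shear_rate ** 2 * grain_diameter / (g * depth)

H, L = 600.0, 2400.0                  # hypothetical drop height and runout [m]
mu_app = apparent_friction(H, L)      # 0.25
N_sav = savage_number(shear_rate=5.0, grain_diameter=0.5, depth=10.0)
```

Because depth appears in the denominator, a larger (deeper) granular mass yields a smaller Savage number, matching the decreasing trend with volume reported above.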
In the field of natural language processing (NLP), various pre-trained language models have emerged in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models with a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model volume, this paper proposes PAL-BERT, a first-order pruning model based on the ALBERT model and designed according to the characteristics of question-answering (QA) systems and language models. First, a first-order network pruning method based on the ALBERT model is designed, forming the PAL-BERT model. Then, the parameter optimization strategy of the PAL-BERT model is formulated, and the Mish function is used as the activation function instead of ReLU to improve performance. Finally, comparison experiments with the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning-based model compression method that can significantly reduce training time and improve training efficiency. Compared with traditional models, PAL-BERT significantly improves performance on NLP tasks.
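The Mish activation that PAL-BERT substitutes for ReLU is defined as x·tanh(softplus(x)). A minimal stand-alone implementation (written here for illustration, not taken from the paper's code):

```python
import math

def softplus(x):
    """Numerically stable softplus: ln(1 + e^x)."""
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x):
    """Mish activation: mish(x) = x * tanh(softplus(x)).
    Unlike ReLU it is smooth everywhere and lets small negative
    values pass through instead of zeroing them."""
    return x * math.tanh(softplus(x))
```

For large positive inputs mish(x) approaches x, like ReLU, while for moderately negative inputs it returns small negative values, which is often credited with smoother optimization.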
Interval model updating (IMU) methods have been widely used in uncertain model updating due to their low requirements for sample data. However, the surrogate model in IMU methods mostly adopts a one-time construction approach, which makes the accuracy of the surrogate model highly dependent on the experience of the user and affects the accuracy of IMU methods. Therefore, an improved IMU method based on adaptive Kriging models is proposed. This method transforms the objective function of the IMU problem into two deterministic global optimization problems, for the upper bound and the interval diameter, through universal grey numbers. These optimization problems are solved with the adaptive Kriging models and the particle swarm optimization (PSO) method to quantify the uncertain parameters, and the IMU is thereby accomplished. During the construction of these adaptive Kriging models, the sample space is gridded according to sensitivity information. Local sampling is then performed in key subspaces based on the maximum mean square error (MMSE) criterion. The interval division coefficient and random sampling coefficient are adaptively adjusted, without human interference, until the model meets the accuracy requirements. The effectiveness of the proposed method is demonstrated by a numerical example of a three-degree-of-freedom mass-spring system and an experimental example of a butted cylindrical shell. The results show that the updated results of the interval model are in good agreement with the experimental results.
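The adaptive step described above, sampling where the surrogate's predicted error is largest, can be sketched with a one-dimensional Kriging (Gaussian-process) model. The RBF kernel, length scale, and test function below are assumptions for illustration, and the PSO outer loop and gridding are omitted:

```python
import numpy as np

def kriging_fit_predict(X, y, Xs, length=0.5, noise=1e-8):
    """Minimal Kriging/GP sketch with an RBF kernel: returns the posterior
    mean and variance (the predicted mean square error) at query points Xs."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks.T, np.linalg.solve(K, Ks))
    return mean, var

# One adaptive step in the spirit of the MMSE criterion: place the next
# sample where the surrogate's predicted mean square error is largest.
X = np.array([0.0, 0.4, 1.0])                # current design points
y = np.sin(2.0 * np.pi * X)                  # assumed test function
Xs = np.linspace(0.0, 1.0, 101)
_, var = kriging_fit_predict(X, y, Xs)
x_next = Xs[np.argmax(var)]                  # lands in the largest design gap
```

The predicted variance drops to near zero at existing samples and peaks in the biggest gap between them, so each adaptive iteration refines the surrogate exactly where it is least trustworthy.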
Funding: Supported by NSF grant AGS-1928883; NASA grants 80NSSC20K1670 and 80MSFC20C0019; and NASA GSFC IRAD and HIF ISFM funds.
Funding: Supported by the Ministry of Science and Technology of China (Grant No. 2018YFA0606501); the National Natural Science Foundation of China (Grant No. 42075037); the Key Laboratory Open Research Program of Xinjiang Science and Technology Department (Grant No. 2022D04009); and the National Key Scientific and Technological Infrastructure project "Earth System Numerical Simulation Facility" (EarthLab).
Funding: Supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50, and by the Trond Mohn Foundation.
Funding: Supported by the National Key R&D Plan (Grant No. 2022YFC3004303); the National Natural Science Foundation of China (Grant No. 42107161); the State Key Laboratory of Hydroscience and Hydraulic Engineering (Grant No. 2021-KY-04); the Open Research Fund Program of the State Key Laboratory of Hydroscience and Engineering (sklhse-2023-C-01); the Open Research Fund Program of the Key Laboratory of the Hydrosphere of the Ministry of Water Resources (mklhs-2023-04); and the China Three Gorges Corporation (XLD/2117).
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003, 2023YFSY0026, 2023YFH0004).
Funding: Project supported by the National Natural Science Foundation of China (Nos. 12272211, 12072181, and 12121002).