This paper investigated an effective and robust mechanism for detecting simple mail transfer protocol (SMTP) traffic anomalies. The detection method accumulates the deviation of the current delivering status from historical behavior using a weighted-sum method called the leaky integrate-and-fire model. The method is simple in that it need not store a history profile and has low computational overhead, which makes the detector itself immune to attacks. Its performance is investigated in terms of detection probability, false alarm ratio, and detection delay. The results show that the leaky integrate-and-fire method is quite effective at detecting both constant-intensity and increasing-intensity attacks. Compared with the non-parametric cumulative sum method, the evaluation results show that the proposed method has shorter detection latency and higher detection probability.
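The leaky integrate-and-fire idea described above can be sketched in a few lines: keep a leaky accumulator of the deviation between current traffic and a baseline drawn from history, and raise an alarm once the accumulator crosses a threshold. The baseline, leak factor, threshold, and traffic series below are illustrative values, not the paper's tuned parameters.

```python
def lif_detect(series, baseline, leak=0.9, threshold=5.0):
    """Leaky integrate-and-fire anomaly detector.

    Accumulates the deviation of each observation from a baseline
    (history behavior); the accumulator decays by `leak` each step,
    so isolated fluctuations die out while sustained deviations fire.
    Returns the index of the first alarm, or -1 if none fires.
    """
    v = 0.0
    for i, x in enumerate(series):
        v = leak * v + (x - baseline)   # integrate deviation, leak memory
        v = max(v, 0.0)                 # one-sided: only excess traffic matters
        if v > threshold:
            return i                    # fire: anomaly declared
    return -1

# normal traffic oscillating around the baseline never fires
normal = [10 + 0.5 * ((-1) ** k) for k in range(50)]
# a constant-intensity attack pushes traffic above the baseline from step 20
attack = normal[:20] + [14.0] * 30
```

Because the accumulator leaks, brief spikes decay away, while a sustained constant-intensity attack drives it over the threshold within a couple of steps of onset.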
Lunar Environment heliospheric X-ray Imager (LEXI) and Solar wind–Magnetosphere–Ionosphere Link Explorer (SMILE) will observe the magnetosheath and its boundary motion in soft X-rays to understand magnetopause reconnection modes under various solar wind conditions after their respective launches in 2024 and 2025. Magnetosheath conditions, namely plasma density, velocity, and temperature, are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions. We developed a user-friendly magnetosheath model that parameterizes number density, velocity, temperature, and magnetic field by utilizing a global magnetohydrodynamics (MHD) model as well as pre-existing gas-dynamic and analytic models. Using this parameterized magnetosheath model, scientists can easily reconstruct expected soft X-ray images and use them to analyze observed LEXI and SMILE images without running complicated global magnetosphere models. First, we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations under 7 solar wind densities (1, 5, 10, 15, 20, 25, and 30 cm^(-3)) and 2 interplanetary magnetic field Bz components (±4 nT), and then parameterizing the results for new magnetosheath conditions. We compared the magnetosheath model with THEMIS statistical data and found good agreement, with a weighted Pearson correlation coefficient greater than 0.77, especially for plasma density and plasma velocity. Second, we compiled a suite of magnetosheath models incorporating previous models (gas-dynamic, analytic) and performed two case studies to test their performance. The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters. Third, we constructed a tool to calculate a soft X-ray image from any given vantage point, which can support the planning and data analysis of the aforementioned LEXI and SMILE missions. A release of the code has been uploaded to a GitHub repository.
Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulation has remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climatological mean drought characteristics in drylands, with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas in particular. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of the precipitation (water deficit) anomalies associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are also reasonably reproduced. Still, the increases in drought characteristics and the associated precipitation and water deficit anomalies are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias-correction approaches in drought-related studies over drylands when using CMIP6 outputs.
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust and flexible, and iteratively deselects outliers such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
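As a toy 1-D analogue of the background-fitting idea (a smooth model fitted by linear least squares, with auroral outliers iteratively deselected), the sketch below fits a polynomial to data contaminated by positive spikes and drops points that sit far above the fit. The polynomial basis, the 2-sigma cut, and the synthetic data are illustrative stand-ins for the paper's B-spline/spherical-harmonic model.

```python
import math
import statistics

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination (fine for small systems)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        s = b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = s / A[r][r]
    return coef

def fit_background(xs, ys, degree=1, n_iter=5, k=2.0):
    """Fit a smooth background, iteratively deselecting positive outliers
    (emissions only ever add intensity, so only high points are cut)."""
    keep = list(range(len(xs)))
    coef = polyfit(xs, ys, degree)
    for _ in range(n_iter):
        coef = polyfit([xs[i] for i in keep], [ys[i] for i in keep], degree)
        resid = {i: ys[i] - sum(c * xs[i] ** p for p, c in enumerate(coef))
                 for i in keep}
        sigma = statistics.stdev(resid.values())
        out = [i for i in keep if resid[i] > k * sigma]
        if not out:
            break                              # converged: no outliers left
        keep = [i for i in keep if i not in out]
    return coef

# background 1 + 0.5 x with mild structure, plus two "auroral" spikes
xs = list(range(20))
ys = [1 + 0.5 * x + 0.1 * math.sin(x) for x in xs]
ys[5] += 50.0
ys[6] += 50.0
coef = fit_background(xs, ys)
```

After the spikes are deselected, the recovered coefficients land close to the true background, which is the essential behavior of the outlier-robust dayglow fit described above.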
Sensors for fire alarms require highly predictive variables to ensure accurate detection, injury prevention, and loss prevention. Bayesian networks can aid in enhancing early fire detection capabilities and reducing the frequency of erroneous fire alerts, thereby improving the effectiveness of numerous safety monitoring systems. This research explores the development of optimized probabilistic graphical models for the discretization thresholds of alarm system predictor variables. The study presents a statistical model framework that increases the efficacy of fire detection by predicting the discretization thresholds of the fluctuations in alarm system predictor variables used to detect the onset of fire. The work applies Bayesian networks and probabilistic visual models to reveal the specific characteristics required to cope with fire detection strategies and patterns. The adopted methodology utilizes a combination of prior knowledge and statistical data to draw conclusions from observations. Utilizing domain knowledge to compute conditional dependencies between network variables enabled predictions to be made through the application of specialized analytical and simulation techniques.
Earth resource and environmental monitoring are essential areas for investigating the environmental conditions and natural resources that support sustainable policy development, regulatory measures, and their implementation. Large-scale forest fires are a major hazard affecting climate change and life across the globe, so the early identification of forest fires using automated tools is essential to prevent fires from spreading widely. This paper therefore focuses on the design of automated forest fire detection using a fusion-based deep learning (AFFD-FDL) model for environmental monitoring. The AFFD-FDL technique involves an entropy-based fusion model for feature extraction, combining handcrafted histogram of oriented gradients (HOG) features with deep features from the SqueezeNet and Inception v3 models. Besides, an optimal extreme learning machine (ELM) classifier is used to identify the presence of fire. To properly tune the parameters of the ELM model, the oppositional glowworm swarm optimization (OGSO) algorithm is employed, thereby improving the forest fire detection performance. A wide range of simulation analyses was carried out on a benchmark dataset and the results were inspected from several aspects. The experimental results highlighted the superiority of the AFFD-FDL technique over recent state-of-the-art techniques.
Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap through a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrated the equivalence of the numerical solution of CMs under the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we proposed an efficient model calibration method that can replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. This advancement in calibration techniques enhances the accuracy of stochastic modeling in capturing epidemic dynamics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we proposed a new mixed stochastic compartment-agent model as an alternative to agent-based models for large-scale population simulations with a limited number of agents. This model offers a balance between computational efficiency and accuracy. The results of this research contribute to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling, and have implications for the development of hybrid models that integrate the strengths of both frameworks. Overall, the present study provides valuable epidemic modeling techniques and practical applications for understanding and controlling the spread of infectious diseases.
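The claimed Euler/stochastic equivalence can be illustrated with a minimal SEIR sketch: the stochastic model draws per-step transitions as Bernoulli trials whose probabilities match the Euler rates, and replacing the draws by their expectations recovers the Euler solution. Parameter values are illustrative, not taken from the study.

```python
import random

def seir_euler(N, beta, sigma, gamma, I0, days, dt=1.0):
    """Deterministic SEIR integrated with the forward Euler scheme."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    path = [I]
    for _ in range(int(days / dt)):
        new_e = dt * beta * S * I / N      # S -> E
        new_i = dt * sigma * E             # E -> I
        new_r = dt * gamma * I             # I -> R
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        path.append(I)
    return path

def seir_stochastic(N, beta, sigma, gamma, I0, days, dt=1.0, expect=False):
    """Stochastic counterpart: transitions are Bernoulli draws whose success
    probabilities match the Euler per-step rates. With expect=True the draws
    are replaced by their expectations, reproducing the Euler solution."""
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    path = [I]
    for _ in range(int(days / dt)):
        p_e, p_i, p_r = dt * beta * I / N, dt * sigma, dt * gamma
        if expect:
            new_e, new_i, new_r = S * p_e, E * p_i, I * p_r
        else:
            new_e = sum(random.random() < p_e for _ in range(int(S)))
            new_i = sum(random.random() < p_i for _ in range(int(E)))
            new_r = sum(random.random() < p_r for _ in range(int(I)))
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
        path.append(I)
    return path

det = seir_euler(10000, 0.3, 0.2, 0.1, 10, 60)
exp = seir_stochastic(10000, 0.3, 0.2, 0.1, 10, 60, expect=True)
random.seed(1)
sto = seir_stochastic(10000, 0.3, 0.2, 0.1, 10, 60)
```

With a large population, the random path also stays close to the deterministic epidemic curve, which is the sense in which the two model families can be calibrated against each other.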
We used the geological map and published rock density measurements to compile a digital rock density model for the Hong Kong territories, and then estimated the average density for the whole territory. According to our results, rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3). These values are typically smaller than the density of 2670 kg·m^(-3) often adopted to represent the average density of the upper continental crust in physical geodesy and gravimetric geophysics applications. This finding reflects the fact that the geology of Hong Kong is mainly formed by light volcanic formations and lava flows, with overlying sedimentary deposits at many locations, while the percentage of heavier metamorphic rocks is very low (less than 1%). This product will improve the accuracy of detailed geoid models and orthometric heights.
The tensile-shear interactive damage (TSID) model is a novel and powerful constitutive model for rock-like materials. This study proposes a methodology for calibrating the TSID model parameters to simulate sandstone. The basic parameters of sandstone are determined through a series of static and dynamic tests, including uniaxial compression, Brazilian disc, triaxial compression under varying confining pressures, hydrostatic compression, and dynamic compression and tensile tests with a split Hopkinson pressure bar. Based on the sandstone test results from this study and previous research, a step-by-step parameter calibration procedure is outlined, covering the strength surface, equation of state (EOS), strain rate effect, and damage. The calibrated parameters are verified through numerical tests corresponding to the experimental loading conditions. Consistency between the numerical results and experimental data indicates the precision and reliability of the calibrated parameters. The methodology presented in this study is scientifically sound, straightforward, and essential for improving the TSID model. Furthermore, it has the potential to be applied to other rock constitutive models, particularly new user-defined models.
Time-variant reliability problems involve many uncertain variables from different sources, and it is important to account for these uncertainties in engineering. In addition, time-variant reliability problems typically involve a complex multilevel nested optimization problem, which can result in an enormous amount of computation. To this end, this paper studies the time-variant reliability evaluation of structures with stochastic and bounded uncertainties using a mixed probability and convex set model. In this method, the stochastic process of a limit-state function with mixed uncertain parameters is first discretized and then converted into a time-independent reliability problem. Further, to solve the double nested optimization problem in hybrid reliability calculation, an efficient iterative scheme is designed in the standard uncertainty space to determine the most probable point (MPP). The limit-state function is linearized at these points, and an innovative random variable is defined to solve the equivalent static reliability analysis model. The effectiveness of the proposed method is verified by two benchmark numerical examples and a practical engineering problem.
Simulating the total ionizing dose (TID) response of an electrical system using transistor-level models can be difficult and expensive, particularly for digital integrated circuits (ICs). In this study, a method for modeling TID effects in complementary metal-oxide semiconductor (CMOS) digital ICs based on the input/output buffer information specification (IBIS) is proposed. The digital IC is first divided into three parts according to its internal structure: the input buffer, the output buffer, and the functional area, each of which is modeled separately. Using the IBIS model, the transistor V-I characteristic curves of the buffers are processed, and the physical parameters are extracted and modeled in VHDL-AMS. In the functional area, logic functions are modeled in VHDL according to the data sheet. A golden digital IC model is developed by combining the input buffer, output buffer, and functional area models. The golden model is then reconstructed based on TID experimental data, enabling assessment of TID effects on the threshold voltage, carrier mobility, and timing of the digital IC. TID experiments were conducted using a CMOS non-inverting multiplexer, the NC7SZ157, and the results were compared with simulations, showing relative errors of less than 2% at each dose point. This confirms the practicality and accuracy of the proposed modeling method. The TID effect model for digital ICs developed with this technique captures both the logical function of the IC and the changes in electrical properties and functional degradation caused by TID, and has potential applications in the design of radiation-hardened digital ICs.
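To give a flavor of how TID-induced parameter shifts might enter a behavioral model of this kind, the sketch below assumes a saturating-exponential threshold-voltage shift with dose and a delay that scales with the reduced gate overdrive. Both functional forms and all numbers are assumptions for illustration, not the paper's extracted IBIS parameters.

```python
import math

def vth_shift(dose_krad, dvth_max=0.5, d0=50.0):
    """Assumed saturating threshold-voltage shift with total dose:
    dVth(D) = dVth_max * (1 - exp(-D / d0)). Purely illustrative."""
    return dvth_max * (1.0 - math.exp(-dose_krad / d0))

def delay_factor(dose_krad, vdd=3.3, vth0=0.7):
    """Illustrative timing degradation: gate delay scales roughly with
    1 / (Vdd - Vth), so the factor is (Vdd - Vth0) / (Vdd - Vth(D))."""
    vth = vth0 + vth_shift(dose_krad)
    return (vdd - vth0) / (vdd - vth)
```

In a behavioral simulation, such dose-dependent factors would modulate the buffer timing parameters extracted from the IBIS data, which is the kind of degradation the golden-model reconstruction captures.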
Understanding the anisotropic creep behavior of shale under direct shear is a challenging issue. In this context, we conducted shear-creep and steady-creep tests on shale with five bedding orientations (i.e. 0°, 30°, 45°, 60°, and 90°) under multiple levels of direct shear for the first time. The results show that the anisotropic creep of shale exhibits significant stress-dependent behavior. Under low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress, the creep compliance of shale is minimal when the bedding orientation is 0°, and the steady-creep rate of shale increases significantly with increasing bedding orientation (30°, 45°, 60°, and 90°). The stress-strain values corresponding to the inception of the accelerated creep stage first increase and then decrease with the bedding orientation. A semilogarithmic model that reflects the stress dependence of the steady-creep rate while accounting for hardening and damage processes is proposed. The model minimizes the deviation of the calculated steady-state creep rate from the observed value and captures the influence of bedding orientation on the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated, showing that the logarithmic model explains the experimental creep strain and creep rate well and can accurately predict long-term shear creep deformation. Based on an improved logarithmic model, the variations in creep parameters with shear stress and bedding orientation are discussed. With these findings, a mathematical method for constructing an anisotropic shear creep model of shale is proposed, which can characterize the nonlinear dependence of the anisotropic shear creep behavior of shale on the bedding orientation.
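The logarithmic creep form the study found to fit well can be written as J(t) = a + b·ln(1 + t/t0), whose rate dJ/dt = b/(t0 + t) decays toward a quasi-steady value. A minimal sketch with illustrative parameter values (the actual a and b depend on shear stress and bedding orientation):

```python
import math

def creep_compliance(t, a, b, t0=1.0):
    """Logarithmic creep law: J(t) = a + b * ln(1 + t / t0)."""
    return a + b * math.log(1.0 + t / t0)

def creep_rate(t, a, b, t0=1.0):
    """dJ/dt = b / (t0 + t): the rate decays toward a quasi-steady value."""
    return b / (t0 + t)
```

The compliance grows without bound but ever more slowly, consistent with the observed linear-in-log-time behavior at low shear stress.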
Forest fires are natural disasters that can occur suddenly and be very damaging, burning thousands of square kilometers. Since prevention is better than suppression, prediction models of forest fire occurrence were developed, including a logistic regression model, a geographically weighted logistic regression model, a Lasso regression model, a random forest model, and a support vector machine model, based on historical forest fire data from 2000 to 2019 in Jilin Province. The models, along with a distribution map, are presented in this paper to provide a theoretical basis for forest fire management in this area. The results show that the prediction accuracies of the two machine learning models are higher than those of the three generalized linear regression models: the accuracies of the random forest model, the support vector machine model, the geographically weighted logistic regression model, the Lasso regression model, and the logistic model were 88.7%, 87.7%, 86.0%, 85.0%, and 84.6%, respectively. Weather is the main factor affecting forest fires, while the impacts of topographic, human, and socio-economic factors on fire occurrence were similar.
Eddy current dampers (ECDs) have emerged as highly desirable solutions for vibration control due to their exceptional damping performance and durability. However, the existing constitutive models present challenges to the widespread implementation of ECD technology, and there is limited availability of finite element analysis (FEA) software capable of accurately modeling the behavior of ECDs. This study addresses these issues by developing a new constitutive model that is both easily understandable and user-friendly for FEA software. By utilizing numerical results obtained from electromagnetic FEA, a novel power-law constitutive model is proposed to capture the nonlinear behavior of ECDs. The effectiveness of the power-law constitutive model is validated through mechanical property tests and numerical seismic analysis. Furthermore, a detailed description of the application of the power-law constitutive model in the ANSYS FEA software is provided. To facilitate the preliminary design of ECDs, an analytical derivation of energy dissipation and parameter optimization for ECDs under harmonic motion is performed. The results demonstrate that the power-law constitutive model serves as a viable alternative for conducting dynamic analysis using FEA and optimizing parameters for ECDs.
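A power-law damper model of the kind proposed here is often written F = c·sgn(v)·|v|^α; under harmonic motion u(t) = u0·sin(ωt), the energy dissipated per cycle follows by integrating F·v over one period. The sketch below does this numerically and, for α = 1 (a linear viscous damper), recovers the closed form E = π·c·ω·u0². The specific form and coefficients are assumptions for illustration, not the paper's fitted ECD parameters.

```python
import math

def ecd_force(v, c, alpha):
    """Assumed power-law damper force: F = c * sign(v) * |v|**alpha."""
    return c * math.copysign(abs(v) ** alpha, v)

def energy_per_cycle(c, alpha, u0, omega, n=20000):
    """Energy dissipated over one harmonic cycle u(t) = u0 sin(omega t):
    E = integral of F(v) * v dt over one period, evaluated with the
    rectangle rule (highly accurate for smooth periodic integrands)."""
    T = 2.0 * math.pi / omega
    dt = T / n
    return sum(ecd_force(u0 * omega * math.cos(omega * k * dt), c, alpha)
               * u0 * omega * math.cos(omega * k * dt) * dt
               for k in range(n))
```

Comparing the numerical result against the α = 1 closed form is a quick sanity check before using the same quadrature for fractional exponents.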
Short-term (up to 30 days) predictions of Earth Rotation Parameters (ERPs), such as polar motion (PM: PMX and PMY), play an essential role in real-time applications related to high-precision reference frame conversion. Currently, the least squares (LS) + auto-regressive (AR) hybrid method is one of the main techniques for PM prediction, and the weighted LS + AR hybrid method performs well for short-term PM prediction. However, the covariance information of the LS fitting residuals deserves further exploration in the AR model. In this study, we derive a modified stochastic model for the LS + AR hybrid method, namely the weighted LS + weighted AR hybrid method. Using the PM data products of IERS EOP 14 C04, the numerical results indicate that for short-term PM forecasting, the proposed weighted LS + weighted AR hybrid method outperforms both the LS + AR and the weighted LS + AR hybrid methods. Compared to the mean absolute errors (MAEs) of the PMX/PMY short-term predictions of the LS + AR and weighted LS + AR hybrid methods, the weighted LS + weighted AR hybrid method shows average improvements of 6.61%/12.08% and 0.24%/11.65%, respectively. Moreover, judging by the slopes of the linear regression lines fitted to the errors of each method, the prediction error of the proposed method grows more slowly than that of the other two methods.
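The LS + AR family of hybrids can be sketched at a toy scale: fit a deterministic part by (weighted) least squares, model the fitting residuals with an AR process, and add the AR forecast of the residuals to the LS extrapolation. The sketch below simplifies to a linear LS model and an AR(1) residual, rather than the trend + periodic terms and AR(p) model used operationally for polar motion.

```python
def wls_ar1_forecast(t, y, w, horizon):
    """Toy version of the (weighted) LS + AR hybrid: weighted least squares
    for a linear deterministic part, AR(1) for the fitting residuals."""
    # weighted least squares for y ~ a + b * t
    W = sum(w)
    Wt = sum(wi * ti for wi, ti in zip(w, t))
    Wy = sum(wi * yi for wi, yi in zip(w, y))
    Wtt = sum(wi * ti * ti for wi, ti in zip(w, t))
    Wty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, y))
    b = (W * Wty - Wt * Wy) / (W * Wtt - Wt * Wt)
    a = (Wy - b * Wt) / W
    resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    # AR(1) coefficient from the lag-1 autocovariance of the residuals
    num = sum(resid[i] * resid[i - 1] for i in range(1, len(resid)))
    den = sum(r * r for r in resid[:-1])
    phi = num / den if den > 1e-12 else 0.0
    # forecast = LS extrapolation + AR(1) decay of the last residual
    r = resid[-1]
    out = []
    for h in range(1, horizon + 1):
        r *= phi
        out.append(a + b * (t[-1] + h) + r)
    return out
```

The weighting enters both stages in the paper's method; here only the LS stage is weighted, which is the main simplification of this sketch.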
BACKGROUND: Colorectal cancer (CRC) is a serious threat worldwide. Although early screening is suggested to be the most effective method to prevent and control CRC, the current situation of early CRC screening remains unsatisfactory. In China, the incidence of CRC in the Yangtze River Delta region is increasing dramatically, but few studies have been conducted there, so a simple and efficient early screening model for CRC is needed. AIM: To develop and validate an early-screening nomogram model to identify individuals at high risk of CRC. METHODS: Data of 64448 participants obtained from Ningbo Hospital, China between 2014 and 2017 were retrospectively analyzed. Of the cohort of 64448 individuals, 530 were excluded due to missing or incorrect data. Of the remaining 63918, 7607 (11.9%) individuals were considered to be at high risk for CRC and 56311 (88.1%) were not. The participants were randomly allocated to a training set (44743) or a validation set (19175). The discriminatory ability, predictive accuracy, and clinical utility of the model were evaluated by constructing and analyzing receiver operating characteristic (ROC) curves and calibration curves and by decision curve analysis. Finally, the model was validated internally using a bootstrap resampling technique. RESULTS: Seven variables, including demographic, lifestyle, and family history information, were examined. Multifactorial logistic regression analysis revealed that age [odds ratio (OR): 1.03, 95% confidence interval (CI): 1.02-1.03, P<0.001], body mass index (BMI) (OR: 1.07, 95%CI: 1.06-1.08, P<0.001), waist circumference (WC) (OR: 1.03, 95%CI: 1.02-1.03, P<0.001), lifestyle (OR: 0.45, 95%CI: 0.42-0.48, P<0.001), and family history (OR: 4.28, 95%CI: 4.04-4.54, P<0.001) were the most significant predictors of high CRC risk. A healthy lifestyle was a protective factor, whereas family history was the most significant risk factor. The area under the curve was 0.734 (95%CI: 0.723-0.745) for the validation set ROC curve and 0.735 (95%CI: 0.728-0.742) for the training set ROC curve. The calibration curve demonstrated a high correlation between the CRC high-risk population predicted by the nomogram model and the actual CRC high-risk population. CONCLUSION: The early-screening nomogram model for predicting high-risk CRC populations developed in this study, based on age, BMI, WC, lifestyle, and family history, exhibited high accuracy.
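A nomogram of this kind is a graphical form of a logistic regression, so the reported odds ratios translate directly into coefficients via ln(OR). The sketch below turns them into a risk probability; the intercept is hypothetical (it is not reported in the abstract), so the absolute probabilities are illustrative and only the direction and relative strength of each effect are grounded in the reported ORs.

```python
import math

# ln(odds ratio) per unit of each predictor, from the reported ORs
COEF = {
    "age": math.log(1.03),                # per year
    "bmi": math.log(1.07),                # per kg/m^2
    "wc": math.log(1.03),                 # per cm of waist circumference
    "healthy_lifestyle": math.log(0.45),  # protective (OR < 1)
    "family_history": math.log(4.28),     # strongest risk factor
}
INTERCEPT = -6.0  # hypothetical value; the abstract does not report it

def crc_risk(age, bmi, wc, healthy_lifestyle, family_history):
    """Probability of being high-risk from logit(p) = b0 + sum(b_i * x_i)."""
    z = (INTERCEPT
         + COEF["age"] * age
         + COEF["bmi"] * bmi
         + COEF["wc"] * wc
         + COEF["healthy_lifestyle"] * healthy_lifestyle
         + COEF["family_history"] * family_history)
    return 1.0 / (1.0 + math.exp(-z))
```

For example, holding age, BMI, and WC fixed, adding a family history multiplies the odds by 4.28, while a healthy lifestyle multiplies them by 0.45, matching the abstract's reading of these factors.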
Amid urbanization and the continuous expansion of transportation networks, the need for tunnel construction and maintenance has become paramount. Addressing this need requires the investigation of efficient, economical, and robust tunnel reinforcement techniques. This paper explores fiber-reinforced polymer (FRP) and steel fiber-reinforced concrete (SFRC) technologies, which have emerged as viable solutions for enhancing tunnel structures. FRP is valued for its light weight and high strength, effectively augmenting load-bearing capacity and seismic resistance, while SFRC's notable crack resistance and longevity can enhance the performance of tunnel segments. Nonetheless, current research predominantly focuses on experimental analysis and lacks comprehensive theoretical models. To bridge this gap, a cohesive zone model (CZM), which uses cohesive elements to characterize the potential fracture surfaces of concrete/SFRC, the rebar-concrete interface, and the FRP-concrete interface, was employed. A modeling approach was then proposed to construct a tunnel segment model reinforced with either SFRC or FRP, and the corresponding mixed-mode constitutive models, considering interfacial friction, were integrated into the proposed model. Experimental validation and numerical simulations corroborated the accuracy of the proposed model. Additionally, this study examined the reinforcement design of tunnel segments: through numerical evaluation, the effectiveness of innovative reinforcement schemes, such as substituting concrete with SFRC and externally bonding FRP sheets, was assessed using a case study from the Fuzhou Metro Shield Tunnel Construction Project.
Funding (LEXI/SMILE magnetosheath study): supported by NSF grant AGS-1928883; NASA grants 80NSSC20K1670 and 80MSFC20C0019; and NASA GSFC IRAD, HIF, and ISFM funds.
Abstract: The Lunar Environment heliospheric X-ray Imager (LEXI) and the Solar wind–Magnetosphere–Ionosphere Link Explorer (SMILE) will observe the magnetosheath and its boundary motion in soft X-rays to study magnetopause reconnection modes under various solar wind conditions, after their respective launches in 2024 and 2025. Magnetosheath conditions, namely plasma density, velocity, and temperature, are key parameters for predicting and analyzing soft X-ray images from the LEXI and SMILE missions. We developed a user-friendly magnetosheath model that parameterizes number density, velocity, temperature, and magnetic field by utilizing a global magnetohydrodynamics (MHD) model as well as pre-existing gas-dynamic and analytic models. Using this parameterized magnetosheath model, scientists can easily reconstruct expected soft X-ray images and use them to analyze observed LEXI and SMILE images without running complicated global magnetosphere models. First, we created an MHD-based magnetosheath model by running a total of 14 OpenGGCM global MHD simulations spanning 7 solar wind densities (1, 5, 10, 15, 20, 25, and 30 cm^(-3)) and 2 interplanetary magnetic field Bz components (±4 nT), and then parameterizing the results for new magnetosheath conditions. We compared the magnetosheath model with THEMIS statistical data and found good agreement, with a weighted Pearson correlation coefficient greater than 0.77, especially for plasma density and plasma velocity. Second, we compiled a suite of magnetosheath models incorporating the previous gas-dynamic and analytic models, and performed two case studies to test performance. The MHD-based model was comparable to or better than the previous models while providing self-consistency among the magnetosheath parameters. Third, we constructed a tool to calculate a soft X-ray image from any given vantage point, which can support the planning and data analysis of the aforementioned LEXI and SMILE missions. A release of the code has been uploaded to a GitHub repository.
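The imaging tool described above integrates soft X-ray emission along each viewing direction. As a minimal illustration of that final step (not the released code), the sketch below integrates a solar-wind-charge-exchange-type emissivity Q ∝ n_sw · n_H · v along one line of sight; the neutral-hydrogen profile, the efficiency factor `alpha`, and all numbers are assumptions for illustration.

```python
def neutral_density(r_re, n0=25.0, r0=10.0):
    """Exospheric hydrogen density [cm^-3] with an assumed ~1/r^3 falloff,
    normalized to n0 at r0 Earth radii (the profile is illustrative)."""
    return n0 * (r0 / r_re) ** 3

def sxr_intensity(n_sw, v_sw, los_r_re, alpha=1e-15):
    """Trapezoidal line-of-sight integral of Q = alpha * n_sw * n_H * v_sw,
    sampled at geocentric distances los_r_re [Earth radii].  alpha lumps
    the charge-exchange cross section and branching ratio (assumed value)."""
    re_cm = 6.371e8  # Earth radius in cm
    q = [alpha * n_sw * neutral_density(r) * v_sw for r in los_r_re]
    total = 0.0
    for k in range(len(los_r_re) - 1):
        dr = (los_r_re[k + 1] - los_r_re[k]) * re_cm
        total += 0.5 * (q[k] + q[k + 1]) * dr
    return total

# Denser, faster solar wind -> brighter magnetosheath in soft X-rays
los = [8.0 + 0.1 * k for k in range(41)]     # sample points from 8 to 12 Re
i_quiet = sxr_intensity(5.0, 4.0e7, los)     # 5 cm^-3, 400 km/s
i_storm = sxr_intensity(30.0, 6.0e7, los)    # 30 cm^-3, 600 km/s
```

Because the emissivity is linear in both density and speed, the "storm" intensity here is exactly (30·600)/(5·400) = 9 times the quiet value; a full image repeats this integral for every pixel's viewing direction.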
Funding: Supported by the Ministry of Science and Technology of China (Grant No. 2018YFA0606501), the National Natural Science Foundation of China (Grant No. 42075037), the Key Laboratory Open Research Program of the Xinjiang Science and Technology Department (Grant No. 2022D04009), and the National Key Scientific and Technological Infrastructure project "Earth System Numerical Simulation Facility" (EarthLab).
Abstract: Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulation has remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climatological drought characteristics in drylands, with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and an increased water deficit, are reasonably reproduced. The simulated magnitude of precipitation (water deficit) associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increases in drought characteristics and the associated precipitation and water deficit are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias-correction approaches when using CMIP6 outputs in drought-related studies over drylands.
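To see how the reported precipitation and PET biases compound in the water deficit (P − PET), consider a toy calculation; the absolute climatology values below are assumptions, and only the percentage biases come from the assessment above.

```python
def pct_bias(model, obs):
    """Relative bias of a modeled quantity, as a percentage of observation."""
    return 100.0 * (model - obs) / obs

# Illustrative dryland climatology (mm/yr); values are assumptions
p_obs, pet_obs = 200.0, 1400.0
p_mod = p_obs * 1.33            # precipitation overestimated by ~33%
pet_mod = pet_obs * (1 - 0.32)  # PET underestimated by ~32%

deficit_obs = p_obs - pet_obs   # observed water deficit (negative)
deficit_mod = p_mod - pet_mod   # simulated deficit is weaker (less negative)
weakening = 100.0 * (deficit_mod - deficit_obs) / abs(deficit_obs)
```

Both biases push the simulated deficit in the same direction (toward a wetter dryland), which is why the deficit bias can reach tens of percent even though each input bias is ~30%.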
Funding: Supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50, and the Trond Mohn Foundation.
文摘Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment.However,the cameras are sensitive not only to auroral emissions produced by precipitating particles,but also to dayglow emissions produced by photoelectrons induced by sunlight.Nightglow emissions and scattered sunlight can contribute to the background signal.To fully utilize such images in space science,background contamination must be removed to isolate the auroral signal.Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics.The approach is robust,flexible,and iteratively deselects outliers,such as auroral emissions.The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow.We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration(IMAGE)mission.The method can be applied to historical missions and is relevant for upcoming missions,such as the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)mission.
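The core of the approach above is a linear least-squares fit of a smooth background with iterative deselection of bright outliers (the auroral signal). The sketch below reproduces that loop in one dimension with a quadratic basis standing in for the paper's B-spline/spherical-harmonic basis; the data, basis, and rejection threshold are all assumptions for illustration.

```python
import math

def solve3(a, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def fit_background(t, y, n_iter=5, cut=2.5):
    """Fit a smooth background, iteratively deselecting samples far ABOVE
    the fit (bright outliers such as auroral emissions)."""
    keep = list(range(len(t)))
    coef = [0.0, 0.0, 0.0]
    for _ in range(n_iter):
        basis = [[1.0, t[i], t[i] ** 2] for i in keep]
        ata = [[sum(b[r] * b[c] for b in basis) for c in range(3)] for r in range(3)]
        atb = [sum(basis[j][r] * y[keep[j]] for j in range(len(keep))) for r in range(3)]
        coef = solve3(ata, atb)
        resid = [y[i] - (coef[0] + coef[1] * t[i] + coef[2] * t[i] ** 2) for i in keep]
        sigma = (sum(r * r for r in resid) / len(resid)) ** 0.5
        keep = [i for i, r in zip(keep, resid) if r < cut * sigma]  # drop bright points
    return coef

# Synthetic scan: linear dayglow background + small ripple + 3 auroral spikes
t = list(range(50))
y = [10.0 + 0.5 * ti + 0.1 * math.sin(1.3 * ti) for ti in t]
for i in (5, 20, 40):
    y[i] += 100.0
coef = fit_background(t, y)
```

After the first pass the three spikes sit far above the fitted curve and are deselected, so subsequent passes recover the underlying background almost exactly. Note the rejection is one-sided: only points above the fit are dropped, since aurora only adds intensity.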
Abstract: Fire alarm sensors require highly predictive variables to ensure accurate detection, injury prevention, and loss prevention. Bayesian networks can enhance early fire detection capabilities and reduce the frequency of erroneous fire alerts, thereby improving the effectiveness of numerous safety monitoring systems. This research explores the development of optimized probabilistic graphical models for the discretization thresholds of alarm system predictor variables. The study presents a statistical modeling framework that increases the efficacy of fire detection by predicting the discretization thresholds of the fluctuations in alarm system predictor variables used to detect the onset of fire. The work applies Bayesian networks and probabilistic graphical models to reveal the specific characteristics required to cope with fire detection strategies and patterns. The adopted methodology utilizes a combination of prior knowledge and statistical data to draw conclusions from observations. Utilizing domain knowledge to compute conditional dependencies between network variables enabled predictions to be made through the application of specialized analytical and simulation techniques.
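A minimal sketch of the two ideas in the abstract above, under assumed numbers: raw sensor readings are discretized against thresholds (the quantity the study optimizes), and a tiny Bayesian network then combines the binary evidence into a fire posterior by exact enumeration. All conditional probabilities and thresholds here are illustrative, not from the paper.

```python
def p_fire_given(smoke, heat,
                 p_fire=0.01,
                 p_smoke=(0.90, 0.05),  # P(smoke node on | fire), (... | no fire)
                 p_heat=(0.85, 0.10)):
    """Exact inference by enumeration in a two-sensor naive-Bayes network:
    Fire -> Smoke, Fire -> Heat.  All probabilities are assumptions."""
    def likelihood(fire):
        ps = p_smoke[0] if fire else p_smoke[1]
        ph = p_heat[0] if fire else p_heat[1]
        return (ps if smoke else 1 - ps) * (ph if heat else 1 - ph)
    num = p_fire * likelihood(True)
    return num / (num + (1 - p_fire) * likelihood(False))

def discretize(reading, threshold):
    """Map a raw sensor reading to a binary node via a discretization
    threshold -- the tuning knob discussed in the study above."""
    return reading > threshold

smoke = discretize(0.7, threshold=0.4)   # smoke-density reading above cut
heat = discretize(62.0, threshold=55.0)  # temperature reading above cut
post_both = p_fire_given(smoke, heat)
post_none = p_fire_given(False, False)
```

With both sensors firing, the posterior rises from the 1% prior to roughly 60%, while two quiet sensors drive it far below the prior; moving the thresholds shifts the trade-off between missed fires and false alerts.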
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number RGP.1/172/42, and to the Princess Nourah bint Abdulrahman University Researchers Supporting Project (Number PNURSP2023R191), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This study is also supported via funding from Prince Sattam bin Abdulaziz University (Project Number PSAU/2023/R/1444).
Abstract: Earth resource and environmental monitoring are essential for investigating environmental conditions and natural resources, supporting sustainable policy development, regulatory measures, and their implementation. Large-scale forest fires are a major hazard that affects climate change and life across the globe. The early identification of forest fires using automated tools is therefore essential to keep fires from spreading over a large extent. This paper focuses on the design of automated forest fire detection using a fusion-based deep learning (AFFD-FDL) model for environmental monitoring. The AFFD-FDL technique involves an entropy-based fusion model for feature extraction, combining handcrafted features from the histogram of oriented gradients (HOG) with deep features from the SqueezeNet and Inception v3 models. Besides, an optimized extreme learning machine (ELM) classifier is used to identify the presence or absence of fire. To properly tune the parameters of the ELM model, the oppositional glowworm swarm optimization (OGSO) algorithm is employed, thereby improving forest fire detection performance. A wide range of simulation analyses was carried out on a benchmark dataset and the results were inspected under several aspects. The experimental results highlighted the superiority of the AFFD-FDL technique over recent state-of-the-art techniques.
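The "oppositional" ingredient of OGSO is opposition-based learning: for every random candidate, its mirror point within the search bounds is also evaluated and the better of the pair is kept, which can only improve the initial population. The sketch below shows just that initialization step; the shifted-sphere objective is a stand-in assumption for the ELM validation error, and the OGSO swarm dynamics themselves are not reproduced here.

```python
import random

def val_error(w):
    """Stand-in for ELM validation error as a function of its tunable
    parameters (a shifted sphere; purely an assumption)."""
    return sum((v - 2.0) ** 2 for v in w)

def random_pop(dim, n, lo, hi, seed):
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]

def opposition_pop(obj, pop, lo, hi):
    """Opposition-based learning: pair each candidate x with its opposite
    lo + hi - x (per coordinate) and keep whichever scores better."""
    out = []
    for x in pop:
        x_op = [lo + hi - v for v in x]
        out.append(min((x, x_op), key=obj))
    return out

lo, hi = 0.0, 5.0
plain = random_pop(dim=4, n=20, lo=lo, hi=hi, seed=7)
opposed = opposition_pop(val_error, plain, lo, hi)
mean_plain = sum(map(val_error, plain)) / len(plain)
mean_opposed = sum(map(val_error, opposed)) / len(opposed)
```

Since each kept candidate is the better of a pair, the opposed population's mean objective is guaranteed to be no worse than the plain random one with the same draws.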
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 82173620 to Yang Zhao and 82041024 to Feng Chen), partially supported by the Bill & Melinda Gates Foundation (Grant No. INV-006371 to Feng Chen), and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap by conducting a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrated the equivalence of the numerical solution of CMs under the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we proposed an efficient model calibration method that can replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. This advancement in calibration techniques enhances the accuracy of stochastic modeling in capturing the dynamics of epidemics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we proposed a new stochastic compartment-and-agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents. This model offers a balance between computational efficiency and accuracy. The results of this research contribute to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling, and have implications for the development of hybrid models that integrate the strengths of both frameworks. Overall, the present study provides valuable epidemic modeling techniques and practical applications for understanding and controlling the spread of infectious diseases.
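The Euler/stochastic correspondence described above can be sketched directly: each per-step Euler flow term in the deterministic SEIR update becomes a binomial draw with the matching per-step probability, so the stochastic model's expectation tracks the Euler solution for small time steps. Parameter values below are illustrative assumptions, not from the paper.

```python
import random

def seir_euler(beta, sigma, gamma, n, i0, days, dt=1.0):
    """Deterministic SEIR integrated with the explicit Euler scheme."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    for _ in range(int(days / dt)):
        new_e = dt * beta * s * i / n   # S -> E flow per step
        new_i = dt * sigma * e          # E -> I flow per step
        new_r = dt * gamma * i          # I -> R flow per step
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
    return s, e, i, r

def seir_binomial(beta, sigma, gamma, n, i0, days, dt=1.0, rng=None):
    """Chain-binomial counterpart: each Euler flow becomes a binomial draw
    whose success probability equals the Euler per-step transfer fraction."""
    rng = rng or random.Random(0)
    binom = lambda m, p: sum(rng.random() < p for _ in range(int(m)))
    s, e, i, r = n - i0, 0, i0, 0
    for _ in range(int(days / dt)):
        new_e = binom(s, dt * beta * i / n)
        new_i = binom(e, dt * sigma)
        new_r = binom(i, dt * gamma)
        s, e, i, r = s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r
    return s, e, i, r

det = seir_euler(0.4, 0.2, 0.1, n=1000, i0=10, days=60)
sto = seir_binomial(0.4, 0.2, 0.1, n=1000, i0=10, days=60,
                    rng=random.Random(42))
```

Both versions conserve the population exactly; averaging many stochastic replications approaches the Euler trajectory, which is the equivalence the calibration method exploits.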
Funding: Supported by the Hong Kong GRF RGC project 15217222, "Modernization of the leveling network in the Hong Kong territories".
Abstract: We used the geological map and published rock density measurements to compile a digital rock density model for the Hong Kong territories, and then estimated the average density for the whole territory. According to our results, rock density values in Hong Kong vary from 2101 to 2681 kg·m^(-3). These values are typically smaller than the average density of 2670 kg·m^(-3) often adopted to represent the upper continental crust in physical geodesy and gravimetric geophysics applications. This finding reflects that the geology of Hong Kong is mainly formed by light volcanic formations and lava flows, with overlying sedimentary deposits at many locations, while the percentage of heavier metamorphic rocks is very low (less than 1%). This product will improve the accuracy of a detailed geoid model and of orthometric heights.
Funding: Funded by the National Natural Science Foundation of China (Grant No. 12272247), the National Key Project (Grant No. GJXM92579), and the Major Research and Development Project of Metallurgical Corporation of China Ltd. in the Non-Steel Field (Grant No. 2021-5).
Abstract: The tensile-shear interactive damage (TSID) model is a novel and powerful constitutive model for rock-like materials. This study proposes a methodology for calibrating the TSID model parameters to simulate sandstone. The basic parameters of sandstone are determined through a series of static and dynamic tests, including uniaxial compression, Brazilian disc, triaxial compression under varying confining pressures, hydrostatic compression, and dynamic compression and tensile tests with a split Hopkinson pressure bar. Based on the sandstone test results from this study and previous research, a step-by-step procedure for parameter calibration is outlined, which accounts for the categories of the strength surface, equation of state (EOS), strain rate effect, and damage. The calibrated parameters are verified through numerical tests that correspond to the experimental loading conditions. Consistency between the numerical results and the experimental data indicates the precision and reliability of the calibrated parameters. The methodology presented in this study is scientifically sound, straightforward, and essential for improving the TSID model. Furthermore, it has the potential to contribute to other rock constitutive models, particularly new user-defined models.
Funding: Partially supported by the National Natural Science Foundation of China (52375238), the Science and Technology Program of Guangzhou (202201020213, 202201020193, 202201010399), and the GZHU-HKUST Joint Research Fund (YH202109).
Abstract: Time-variant reliability problems involve many uncertain variables from different sources, so it is important to account for these uncertainties in engineering. In addition, time-variant reliability problems typically involve a complex multilevel nested optimization problem, which can result in an enormous amount of computation. To this end, this paper studies the time-variant reliability evaluation of structures with both stochastic and bounded uncertainties using a mixed probability and convex set model. In this method, the stochastic process of a limit-state function with mixed uncertain parameters is first discretized and then converted into a time-independent reliability problem. Further, to solve the doubly nested optimization problem in hybrid reliability calculation, an efficient iterative scheme is designed in standard uncertainty space to determine the most probable point (MPP). The limit-state function is linearized at these points, and an innovative random variable is defined to solve the equivalent static reliability analysis model. The effectiveness of the proposed method is verified by two benchmark numerical examples and a practical engineering problem.
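The MPP search in standard normal space is classically done with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration, which the scheme above builds on. A minimal sketch with a linear limit state (so the answer is known in closed form); the limit state and its gradient are illustrative, not the paper's examples.

```python
import math

def hl_rf(g, grad, dim, tol=1e-10, max_iter=100):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the most probable
    point (MPP) of the limit state g(u) = 0 in standard normal space."""
    u = [0.0] * dim
    for _ in range(max_iter):
        gv, gr = g(u), grad(u)
        norm2 = sum(c * c for c in gr)
        # Project onto the limit state linearized at the current point
        lam = (sum(c * ui for c, ui in zip(gr, u)) - gv) / norm2
        u_new = [lam * c for c in gr]
        if sum((a - b) ** 2 for a, b in zip(u_new, u)) < tol:
            return u_new, math.sqrt(sum(v * v for v in u_new))
        u = u_new
    return u, math.sqrt(sum(v * v for v in u))

# Linear limit state g(u) = 3 - u1 - u2; exact reliability index 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad = lambda u: [-1.0, -1.0]
u_star, beta = hl_rf(g, grad, dim=2)
pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # Pf = Phi(-beta)
```

For this linear case the iteration converges in one step to the MPP on g(u) = 0; for nonlinear limit states the same loop iterates until the projected point stops moving, and the failure probability follows from the first-order approximation Pf ≈ Φ(−β).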
Funding: This work was supported by the special fund of the State Key Laboratory of Intense Pulsed Radiation Simulation and Effect (No. SKLIPR2011).
Abstract: Simulating the total ionizing dose (TID) response of an electrical system using transistor-level models can be difficult and expensive, particularly for digital integrated circuits (ICs). In this study, a method for modeling TID effects in complementary metal-oxide-semiconductor (CMOS) digital ICs based on the input/output buffer information specification (IBIS) was proposed. The digital IC was first divided into three parts based on its internal structure: the input buffer, the output buffer, and the functional area. Each of these three parts was modeled separately. Using the IBIS model, the transistor V-I characteristic curves of the buffers were processed, and the physical parameters were extracted and modeled in VHDL-AMS. In the functional area, logic functions were modeled in VHDL according to the data sheet. A golden digital IC model was developed by combining the input buffer, output buffer, and functional area models. Furthermore, the golden model was reconstructed based on TID experimental data, enabling the assessment of TID effects on the threshold voltage, carrier mobility, and timing of the digital IC. TID experiments were conducted using a CMOS non-inverting multiplexer, the NC7SZ157, and the results were compared with the simulation results, which showed relative errors of less than 2% at each dose point. This confirms the practicality and accuracy of the proposed modeling method. The TID effect model for digital ICs developed using this modeling technique includes both the logical function of the IC and the changes in electrical properties and functional degradation caused by TID, which has potential applications in the design of radiation-hardening tolerance in digital ICs.
Funding: Funded by the National Natural Science Foundation of China (Grant Nos. U22A20166 and 12172230) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515012654).
Abstract: Understanding the anisotropic creep behavior of shale under direct shear is a challenging issue. In this context, we conducted shear-creep and steady-creep tests on shale with five bedding orientations (i.e., 0°, 30°, 45°, 60°, and 90°) under multiple levels of direct shear for the first time. The results show that the anisotropic creep of shale exhibits significant stress-dependent behavior. Under low shear stress, the creep compliance of shale increases linearly with the logarithm of time at all bedding orientations, and the increase depends on the bedding orientation and creep time. Under high shear stress, the creep compliance of shale is minimal when the bedding orientation is 0°, and the steady-creep rate increases significantly with increasing bedding orientation at 30°, 45°, 60°, and 90°. The stress-strain values corresponding to the inception of the accelerated creep stage first increase and then decrease with the bedding orientation. A semilogarithmic model that reflects the stress dependence of the steady-creep rate while accounting for hardening and damage processes is proposed. The model minimizes the deviation of the calculated steady-creep rate from the observed value and reveals how the bedding orientation influences the steady-creep rate. The applicability of five classical empirical creep models is quantitatively evaluated, showing that the logarithmic model can well explain the experimental creep strain and creep rate and can accurately predict long-term shear creep deformation. Based on an improved logarithmic model, the variations in creep parameters with shear stress and bedding orientation are discussed. With the above findings, a mathematical method for constructing an anisotropic shear creep model of shale is proposed, which characterizes the nonlinear dependence of the anisotropic shear creep behavior of shale on the bedding orientation.
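The "compliance increases linearly with the logarithm of time" observation corresponds to a creep law of the form J(t) = a + b·ln(t), whose parameters a and b (stress- and orientation-dependent in the paper) can be recovered by linear least squares in ln(t). The data below are synthetic and illustrative, not the paper's measurements.

```python
import math

def fit_semilog(t, j):
    """Least-squares fit of the semilogarithmic creep law J(t) = a + b*ln(t)."""
    x = [math.log(ti) for ti in t]
    n = len(t)
    mx, my = sum(x) / n, sum(j) / n
    b = sum((xi - mx) * (ji - my) for xi, ji in zip(x, j)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Synthetic creep-compliance record (illustrative units: hours, 1/GPa)
t = [1, 2, 5, 10, 20, 50, 100, 200]
j = [0.30 + 0.05 * math.log(ti) for ti in t]
a, b = fit_semilog(t, j)
steady_rate = lambda ti: b / ti  # dJ/dt = b/t: the creep rate decays in time
```

The fitted slope b directly gives the decaying steady-creep rate b/t, which is the quantity whose stress and orientation dependence the proposed model describes.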
Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 32271881).
Abstract: Forest fires are natural disasters that can occur suddenly and can be very damaging, burning thousands of square kilometers. Because prevention is better than suppression, prediction models of forest fire occurrence were developed, namely the logistic regression model, the geographically weighted logistic regression model, the Lasso regression model, the random forest model, and the support vector machine model, based on historical forest fire data from 2000 to 2019 in Jilin Province. The models, along with a distribution map, are presented in this paper to provide a theoretical basis for forest fire management in this area. The results show that the prediction accuracies of the two machine learning models are higher than those of the three generalized linear regression models: the accuracies of the random forest model, the support vector machine model, the geographically weighted logistic regression model, the Lasso regression model, and the logistic model were 88.7%, 87.7%, 86.0%, 85.0%, and 84.6%, respectively. Weather is the main factor affecting forest fires, while the impacts of topographic, human, and socio-economic factors on fire occurrence were similar.
Abstract: Eddy current dampers (ECDs) have emerged as highly desirable solutions for vibration control due to their exceptional damping performance and durability. However, the existing constitutive models present challenges to the widespread implementation of ECD technology, and there is limited availability of finite element analysis (FEA) software capable of accurately modeling the behavior of ECDs. This study addresses these issues by developing a new constitutive model that is both easily understandable and user-friendly for FEA software. By utilizing numerical results obtained from electromagnetic FEA, a novel power law constitutive model is proposed to capture the nonlinear behavior of ECDs. The effectiveness of the power law constitutive model is validated through mechanical property tests and numerical seismic analysis. Furthermore, a detailed description of the application process of the power law constitutive model in the ANSYS FEA software is provided. To facilitate the preliminary design of ECDs, an analytical derivation of energy dissipation and parameter optimization for ECDs under harmonic motion is performed. The results demonstrate that the power law constitutive model serves as a viable alternative for conducting dynamic analysis using FEA and optimizing parameters for ECDs.
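A power law damper model has the generic form F = c·sign(v)·|v|^α, and its two constants can be identified from force-velocity test data by a linear fit on log-log axes. The sketch below shows that identification step; the test speeds, forces, and the assumed c = 80, α = 0.6 are illustrative numbers, not the paper's calibration.

```python
import math

def fit_power_law(v, f):
    """Fit F = c * v**alpha on log-log axes by linear least squares
    (v > 0 test speeds; slope -> alpha, intercept -> ln c)."""
    x = [math.log(vi) for vi in v]
    y = [math.log(fi) for fi in f]
    n = len(v)
    mx, my = sum(x) / n, sum(y) / n
    alpha = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    c = math.exp(my - alpha * mx)
    return c, alpha

def damper_force(c, alpha, v):
    """Signed damper force for arbitrary velocity."""
    return math.copysign(c * abs(v) ** alpha, v)

# Illustrative force-velocity data for a sub-linear (velocity-softening) ECD
v = [0.05, 0.1, 0.2, 0.4, 0.8]       # m/s
f = [80.0 * vi ** 0.6 for vi in v]   # kN, assumed c = 80, alpha = 0.6
c, alpha = fit_power_law(v, f)
```

An exponent α < 1 gives the velocity-softening behavior typical of ECDs near their critical speed, while α = 1 recovers a plain linear viscous damper.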
Funding: Supported by the National Natural Science Foundation of China (No. 42004016), the Hubei Natural Science Fund (No. 2020CFB329), the Hunan Natural Science Fund (Nos. 2023JJ60559 and 2023JJ60560), and the State Key Laboratory of Geodesy and Earth's Dynamics self-deployment project (No. S21L6101).
Abstract: Short-term (up to 30 days) predictions of Earth Rotation Parameters (ERPs), such as polar motion (PM: PMX and PMY), play an essential role in real-time applications related to high-precision reference frame conversion. Currently, the least squares (LS) + autoregressive (AR) hybrid method is one of the main techniques for PM prediction, and the weighted LS + AR hybrid method performs well for short-term PM prediction. However, the corresponding covariance information of the LS fitting residuals deserves further exploration in the AR model. In this study, we derive a modified stochastic model for the LS + AR hybrid method, namely the weighted LS + weighted AR hybrid method. Using the PM data products of IERS EOP 14 C04, the numerical results indicate that for short-term PM forecasting, the proposed weighted LS + weighted AR hybrid method shows an advantage over both the LS + AR and the weighted LS + AR hybrid methods. Compared to the mean absolute errors (MAEs) of PMX/PMY short-term prediction by the LS + AR and weighted LS + AR hybrid methods, the weighted LS + weighted AR hybrid method shows average improvements of 6.61%/12.08% and 0.24%/11.65%, respectively. Moreover, for the slopes of the linear regression lines fitted to the errors of each method, the prediction error of the proposed method grows more slowly than that of the other two methods.
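The LS + AR structure itself is simple: a deterministic LS fit captures trend (and, in the real method, periodic terms), and an AR model fitted to the LS residuals extrapolates what the trend misses. A minimal unweighted sketch with a linear trend and an AR(1) residual model via Yule-Walker; the paper's contribution, weighting both stages using the residual covariance, is not reproduced here.

```python
def ls_ar_forecast(y, horizon):
    """Minimal LS + AR(1) hybrid: linear least-squares trend plus an AR(1)
    extrapolation of the LS residuals (Yule-Walker lag-1 estimate)."""
    n = len(y)
    mt, my = (n - 1) / 2.0, sum(y) / n
    b = sum((t - mt) * (v - my) for t, v in enumerate(y)) / \
        sum((t - mt) ** 2 for t in range(n))
    a = my - b * mt
    resid = [v - (a + b * t) for t, v in enumerate(y)]
    var = sum(r * r for r in resid) / n
    cov1 = sum(resid[t] * resid[t - 1] for t in range(1, n)) / n
    phi = cov1 / var if var > 0 else 0.0  # AR(1) coefficient
    return [a + b * (n - 1 + h) + resid[-1] * phi ** h
            for h in range(1, horizon + 1)]

# A purely linear series is extrapolated exactly (the residual part vanishes)
y = [2.0 + 0.1 * t for t in range(50)]
preds = ls_ar_forecast(y, horizon=3)
```

For real PM series the AR term carries the short-term memory that the smooth LS fit cannot, which is why the hybrid outperforms either component alone.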
Funding: Supported by the Project of Ningbo Leading Medical Health Discipline (No. 2022-B11), the Ningbo Natural Science Foundation (No. 202003N4206), and the Public Welfare Foundation of Ningbo (No. 2021S108).
Abstract: BACKGROUND: Colorectal cancer (CRC) is a serious threat worldwide. Although early screening is suggested to be the most effective method to prevent and control CRC, the current situation of early CRC screening is still not optimistic. In China, the incidence of CRC in the Yangtze River Delta region is increasing dramatically, but few studies have been conducted. Therefore, it is necessary to develop a simple and efficient early screening model for CRC. AIM: To develop and validate an early-screening nomogram model to identify individuals at high risk of CRC. METHODS: Data of 64448 participants obtained from Ningbo Hospital, China between 2014 and 2017 were retrospectively analyzed. Of the cohort, 530 individuals were excluded due to missing or incorrect data. Of the remaining 63918, 7607 (11.9%) individuals were considered to be at high risk for CRC and 56311 (88.1%) were not. The participants were randomly allocated to a training set (44743) or a validation set (19175). The discriminatory ability, predictive accuracy, and clinical utility of the model were evaluated by constructing and analyzing receiver operating characteristic (ROC) curves and calibration curves and by decision curve analysis. Finally, the model was validated internally using a bootstrap resampling technique. RESULTS: Seven variables, including demographic, lifestyle, and family history information, were examined. Multifactorial logistic regression analysis revealed that age [odds ratio (OR): 1.03, 95% confidence interval (CI): 1.02-1.03, P < 0.001], body mass index (BMI) (OR: 1.07, 95% CI: 1.06-1.08, P < 0.001), waist circumference (WC) (OR: 1.03, 95% CI: 1.02-1.03, P < 0.001), lifestyle (OR: 0.45, 95% CI: 0.42-0.48, P < 0.001), and family history (OR: 4.28, 95% CI: 4.04-4.54, P < 0.001) were the most significant predictors of high CRC risk. A healthy lifestyle was a protective factor, whereas family history was the most significant risk factor. The area under the curve was 0.734 (95% CI: 0.723-0.745) for the validation set ROC curve and 0.735 (95% CI: 0.728-0.742) for the training set ROC curve. The calibration curve demonstrated a high correlation between the high-risk CRC population predicted by the nomogram model and the actual high-risk CRC population. CONCLUSION: The early-screening nomogram model for predicting high-risk CRC populations developed in this study, based on age, BMI, WC, lifestyle, and family history, exhibited high accuracy.
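A nomogram built from a logistic regression is just a graphical form of the linear predictor: the log-odds are the intercept plus ln(OR) per unit of each variable, mapped through the sigmoid. The sketch below uses the odds ratios reported in the abstract, but the intercept is NOT reported there, so BETA0 is an assumed placeholder and the resulting probabilities are illustrative only.

```python
import math

# Odds ratios from the abstract; BETA0 is an assumption for illustration
ODDS_RATIOS = {"age_per_year": 1.03, "bmi_per_unit": 1.07,
               "wc_per_cm": 1.03, "healthy_lifestyle": 0.45,
               "family_history": 4.28}
BETA0 = -8.0  # placeholder intercept (not published in the abstract)

def crc_risk(age, bmi, wc, healthy_lifestyle, family_history):
    """Logistic score: log-odds = intercept + sum of ln(OR) * predictor,
    then sigmoid to a probability."""
    z = BETA0
    z += math.log(ODDS_RATIOS["age_per_year"]) * age
    z += math.log(ODDS_RATIOS["bmi_per_unit"]) * bmi
    z += math.log(ODDS_RATIOS["wc_per_cm"]) * wc
    z += math.log(ODDS_RATIOS["healthy_lifestyle"]) * (1 if healthy_lifestyle else 0)
    z += math.log(ODDS_RATIOS["family_history"]) * (1 if family_history else 0)
    return 1.0 / (1.0 + math.exp(-z))

low = crc_risk(age=40, bmi=22, wc=75, healthy_lifestyle=True, family_history=False)
high = crc_risk(age=70, bmi=30, wc=95, healthy_lifestyle=False, family_history=True)
```

Note how the lifestyle OR below 1 subtracts from the log-odds (protective) while the family-history OR of 4.28 adds the single largest increment, matching the abstract's conclusion.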
Funding: Funded by the Scientific Research Startup Foundation of Fujian University of Technology (GY-Z21067 and GY-Z21026).
Abstract: Amid urbanization and the continuous expansion of transportation networks, the need for tunnel construction and maintenance has become paramount. Addressing this need requires the investigation of efficient, economical, and robust tunnel reinforcement techniques. This paper explores fiber-reinforced polymer (FRP) and steel fiber reinforced concrete (SFRC) technologies, which have emerged as viable solutions for enhancing tunnel structures. FRP is celebrated for its lightweight and high-strength attributes, effectively augmenting load-bearing capacity and seismic resistance, while SFRC's notable crack resistance and longevity can enhance the performance of tunnel segments. Nonetheless, current research predominantly focuses on experimental analysis and lacks comprehensive theoretical models. To bridge this gap, the cohesive zone model (CZM), which uses cohesive elements to characterize the potential fracture surfaces of concrete/SFRC, the rebar-concrete interface, and the FRP-concrete interface, was employed. A modeling approach was then proposed to construct a tunnel segment model reinforced with either SFRC or FRP. Moreover, the corresponding mixed-mode constitutive models, considering interfacial friction, were integrated into the proposed model. Experimental validation and numerical simulations corroborated the accuracy of the proposed model. Additionally, this study examined the reinforcement design of tunnel segments. Through numerical evaluation, the effectiveness of innovative reinforcement schemes, such as substituting concrete with SFRC and externally bonding FRP sheets, was assessed using a case study from the Fuzhou Metro Shield Tunnel Construction Project.