In terms of the global mean square error for a given number of random variables in the representation, the Karhunen–Loève (KL) expansion is the optimal series expansion method for random field discretization. The computational efficiency and accuracy of the KL expansion are contingent upon the accurate resolution of the Fredholm integral eigenvalue problem (IEVP). This paper proposes an interpolation method based on different interpolation basis functions, such as moving least squares (MLS), least squares (LS), and the finite element method (FEM), to solve the IEVP. Compared with the Galerkin method based on finite elements or Legendre polynomials, the main advantage of the interpolation method is that, in the calculation of eigenvalues and eigenfunctions of one-dimensional random fields, the integral matrix containing the covariance function requires only a single integral rather than the double integral required by the Galerkin method. The effectiveness and computational efficiency of the proposed interpolation method are verified through various one-dimensional examples. Furthermore, based on the KL expansion and polynomial chaos expansion, stochastic analysis of two-dimensional regular and irregular domains is conducted, and the basis functions of the extended finite element method (XFEM) are introduced as interpolation basis functions in two-dimensional irregular domains to solve the IEVP.
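The Fredholm IEVP behind the KL expansion can also be solved numerically by a simple quadrature (Nyström-type) discretization. The sketch below is not the paper's interpolation method; it illustrates the eigenpair computation for an assumed exponential covariance kernel with unit variance and correlation length 0.5.

```python
import numpy as np

def kl_eigenpairs(cov, n=200, length=1.0, n_terms=5):
    """Quadrature (Nystrom-type) discretization of the Fredholm IEVP:
    integral_0^L cov(s, t) phi(t) dt = lam * phi(s)."""
    x = (np.arange(n) + 0.5) * length / n      # midpoint quadrature nodes
    w = length / n                             # uniform quadrature weight
    C = cov(x[:, None], x[None, :])            # covariance matrix at the nodes
    vals, vecs = np.linalg.eigh(C * w)         # symmetric eigenvalue problem
    idx = np.argsort(vals)[::-1][:n_terms]     # keep the largest eigenvalues
    lam = vals[idx]
    phi = vecs[:, idx] / np.sqrt(w)            # L2-normalized eigenfunctions
    return x, lam, phi

# assumed exponential covariance kernel, correlation length 0.5
cov = lambda s, t: np.exp(-np.abs(s - t) / 0.5)
x, lam, phi = kl_eigenpairs(cov)
```

The eigenvalues decay quickly, which is why a truncated KL series with few random variables captures most of the field's variance.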
We introduce CURDIS, a template for algorithms that discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc. In this template, algorithms proceed by finding the tangent quadrant at each point of the arc and determining on which side the curve exits the pixel according to a tailored criterion. These two elements can be adapted to any type of curve, leading to algorithms dedicated to specific curve shapes. While calculating the tangent quadrant is simple for various curves, such as lines, conics, or cubics, analyzing how pixels are traversed by the curve is more complex. In the case of conic arcs, we found a criterion for determining the pixel exit side. This leads us to present a new algorithm, called CURDIS-C, specific to the discretization of conics, for which we provide all the details. Remarkably, the criterion for conics requires only between one and three sign tests and four additions per pixel, making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations. Our algorithm also handles the pathological cases in which the conic intersects a pixel twice or changes quadrant multiple times within a pixel, achieving this generality at the cost of computing up to two square roots per arc. We illustrate the use of CURDIS for the discretization of different curves, such as ellipses, hyperbolas, and parabolas, even when they degenerate into lines or corners.
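CURDIS generalizes the family of incremental, sign-test-driven discretizers. As a point of comparison, the classic midpoint circle algorithm below (a standard textbook technique, not CURDIS-C itself) covers a circle with pixels using only integer additions and one sign test per pixel:

```python
def circle_pixels(r):
    """Midpoint circle algorithm: integer-only discretization of a circle
    of integer radius r centred at the origin (all eight octants)."""
    pts = set()
    x, y, d = r, 0, 1 - r                 # d: sign of the midpoint decision value
    while x >= y:
        for px, py in ((x, y), (y, x)):   # mirror the first octant eight ways
            pts.update({(px, py), (-px, py), (px, -py), (-px, -py)})
        y += 1
        if d < 0:                         # midpoint inside the circle: keep x
            d += 2 * y + 1
        else:                             # midpoint outside: step x inwards
            x -= 1
            d += 2 * (y - x) + 1
    return pts
```

CURDIS-C plays the same incremental game for general conics, where the exit-side criterion replaces the single circle decision variable.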
Sensors for fire alarms require highly predictive variables to ensure accurate detection, injury prevention, and loss prevention. Bayesian networks can enhance early fire detection capabilities and reduce the frequency of erroneous fire alerts, thereby improving the effectiveness of numerous safety monitoring systems. This research explores the development of optimized probabilistic graphical models for the discretization thresholds of alarm system predictor variables. The study presents a statistical model framework that increases the efficacy of fire detection by predicting the discretization thresholds of the fluctuations in alarm system predictor variables used to detect the onset of fire. The work applies Bayesian networks and probabilistic visual models to reveal the specific characteristics required to cope with fire detection strategies and patterns. The adopted methodology utilizes a combination of prior knowledge and statistical data to draw conclusions from observations. Utilizing domain knowledge to compute conditional dependencies between network variables enabled predictions to be made through the application of specialized analytical and simulation techniques.
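As a toy illustration of the inference such a network performs (the structure, thresholds, and probabilities below are invented for the example, not taken from the study), a two-sensor fire network can be evaluated by direct enumeration of Bayes' rule after thresholding the raw readings:

```python
def fire_posterior(smoke, heat,
                   p_fire=0.01,
                   p_smoke=(0.05, 0.90),   # P(smoke | no fire), P(smoke | fire)
                   p_heat=(0.10, 0.95)):   # P(heat | no fire),  P(heat | fire)
    """P(fire | smoke, heat) for a naive two-sensor Bayesian network.
    smoke/heat are booleans obtained by thresholding the sensor readings."""
    def joint(fire):
        ps = p_smoke[fire] if smoke else 1 - p_smoke[fire]
        ph = p_heat[fire] if heat else 1 - p_heat[fire]
        prior = p_fire if fire else 1 - p_fire
        return prior * ps * ph
    jf, jc = joint(1), joint(0)
    return jf / (jf + jc)

# discretize raw readings with assumed alarm thresholds
smoke_reading, heat_reading = 0.8, 61.0
posterior = fire_posterior(smoke_reading > 0.5, heat_reading > 55.0)
```

The choice of the 0.5 and 55.0 thresholds is exactly the discretization decision the study optimizes: moving them changes which evidence the network sees and hence the false-alarm rate.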
A discrete Boltzmann model (DBM) with symmetric velocity discretization is constructed for compressible systems with an adjustable specific heat ratio in an external force field. The proposed two-dimensional (2D) nine-velocity scheme has better spatial symmetry and numerical accuracy than the discrete velocity model in the literature [Acta Aerodyn. Sin. 40 98–108 (2022)] and higher computational efficiency than the one in [Phys. Rev. E 99 012142 (2019)]. In addition, the matrix inversion method is adopted to calculate the discrete equilibrium distribution function and force term, both of which satisfy nine independent kinetic moment relations. Moreover, the DBM can be used to study thermodynamic nonequilibrium effects beyond the Euler equations, which are recovered from the kinetic model in the hydrodynamic limit via the Chapman–Enskog expansion. Finally, the present method is verified through typical numerical simulations, including the free-falling process, Sod's shock tube, a sound wave, compressible Rayleigh–Taylor instability, and translational motion of a 2D fluid system.
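The requirement that a discrete velocity set reproduce independent kinetic moment relations can be checked directly. The snippet below verifies the standard D2Q9 lattice, shown only as a familiar illustration; the paper's nine-velocity scheme and its moment relations differ.

```python
import numpy as np

# standard D2Q9 discrete velocities and weights (illustrative, not the paper's scheme)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1/3  # squared lattice sound speed

m0 = w.sum()                                  # zeroth moment: density
m1 = (w[:, None] * c).sum(axis=0)             # first moment: momentum
m2 = np.einsum('i,ia,ib->ab', w, c, c)        # second moment: pressure tensor
m4 = np.einsum('i,ia,ib,ic,id->abcd', w, c, c, c, c)  # fourth-order isotropy
```

Passing these algebraic identities is what allows the Chapman–Enskog expansion to recover the hydrodynamic equations from the kinetic model.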
This paper presents a procedure for assessing the geosynthetic reinforcement force required to maintain the dynamic stability of a steep soil slope. The procedure combines the discretization technique with the kinematic analysis of plasticity theory, i.e. discretization-based kinematic analysis. The discretization technique allows the analyzed slope to be discretized into various components and a kinematically admissible failure mechanism to be generated based on an associated flow rule. Accordingly, variations in soil properties, including soil cohesion, internal friction angle, and unit weight, are accounted for with ease, whereas the conventional kinematic analysis fails to consider changes in soil properties. The spatial–temporal effects of dynamic accelerations represented by primary and shear seismic waves are considered using the pseudo-dynamic approach. In the presence of geosynthetic reinforcement, tensile failure is discussed provided that the geosynthetics are installed with sufficient length. Equating the total rate of work done by external forces to the internal rate of work yields the upper bound solution of the required reinforcement force, below which slopes fail. The reinforcement force is sought by optimizing the objective function with respect to the independent variables and is presented in normalized form. Pseudo-static analysis is a special case and hence readily obtained from the pseudo-dynamic analysis. Comparisons of the pseudo-static and pseudo-dynamic solutions calculated in this study are highlighted. Although the pseudo-static approach yields a conservative solution, its ability to give a reasonable result is substantiated for steep slopes. To provide a more meaningful stability analysis, the pseudo-dynamic approach is recommended because it accounts for the spatial–temporal effect of the earthquake input.
Rough set theory plays an important role in knowledge discovery but cannot deal with continuous attributes, so discretization is a problem that cannot be neglected. Discretization of decision systems in rough set theory has particular characteristics: consistency must be satisfied, and the number of cuts is expected to be as small as possible. The consistent and minimal discretization problem is NP-complete. In this paper, an immune algorithm for the problem is proposed, and its correctness and effectiveness are shown in experiments. The discretization method presented here can also be used as a data pre-processing step for symbolic knowledge discovery or machine learning methods other than rough set theory.
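The consistency constraint can be stated concretely: after each continuous value is replaced by the index of its cut interval, no two objects with identical discretized condition attributes may carry different decisions. A minimal sketch, with an invented three-object table:

```python
def interval_index(value, cuts):
    """Map a continuous value to the index of its discretization interval."""
    return sum(value >= c for c in sorted(cuts))

def is_consistent(rows, decisions, cuts_per_attr):
    """True iff the discretized decision table assigns a unique decision
    to every equivalence class of condition attributes."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = tuple(interval_index(v, cuts) for v, cuts in zip(row, cuts_per_attr))
        if seen.setdefault(key, d) != d:
            return False
    return True

rows = [(0.2, 1.1), (0.8, 1.9), (0.3, 1.2)]   # invented continuous data
decisions = [0, 1, 0]
```

An immune (or any other heuristic) search then looks for the smallest cut set for which this predicate stays true.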
The commonly used discretization approaches for distributed hydrological models can be broadly categorized into four types, based on the nature of the discrete components: regular meshes, Triangular Irregular Networks (TINs), Representative Elementary Watersheds (REWs), and Hydrologic Response Units (HRUs). In this paper, a new discretization approach for landforms that have similar hydrologic properties is developed and discussed for the Integrated Hydrologic Model (IHM), which combines the simulation of surface and groundwater processes, accounting for the interaction between the two systems. The approach used in the IHM is to disaggregate basin parameters into discrete landforms with similar hydrologic properties. These landforms may be impervious areas, related areas, areas with high or low clay or organic fractions, areas with significantly different depths to the water table, and areas with different types of land cover or land use. Incorporating discrete landforms within basins allows significant distributed-parameter analysis but requires an efficient computational structure. The IHM integration represents a new approach to interpreting fluxes across the model interface and storages near the interface for transfer to the appropriate model component, accounting for the disparate discretizations while rigidly maintaining mass conservation. The discretization approaches employed in the IHM provide ideas and insights helpful to researchers working on integrated models of surface–groundwater interaction.
How to extract knowledge from a decision table based on rough set theory is being widely studied. A novel problem is how to discretize a decision table with continuous attributes. To obtain more reasonable discretization results, a discretization algorithm is proposed that arranges half-global discretization based on the correlation coefficient of each continuous attribute while considering the uniqueness of rough set theory. When choosing heuristic information, stability is combined with rough entropy. In terms of stability, the possibility of classifying objects belonging to a certain sub-interval of a given attribute into neighboring sub-intervals is minimized, so that rational discrete intervals can be determined. Rough entropy is employed to decide the optimal cut points while guaranteeing the consistency of the decision table after discretization. The idea of the algorithm is illustrated on the Iris data, and experiments comparing the outcomes of four discretized datasets, computed by the proposed algorithm and four other typical discretization algorithms, are also given. Classification rules are then deduced and summarized through rough-set-based classifiers. Results show that the proposed discretization algorithm generates optimal classification accuracy while minimizing the number of discrete intervals, and it displays superiority especially when dealing with decision tables having a large number of attributes.
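Entropy-guided cut selection of the kind the abstract describes can be sketched generically: among the candidate boundaries of one attribute, pick the cut minimizing the weighted class entropy of the two resulting sub-intervals. This is a plain information-gain step, not the paper's combined stability/rough-entropy heuristic.

```python
import math
from collections import Counter

def class_entropy(labels):
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def best_entropy_cut(values, labels):
    """Return the midpoint cut minimizing the weighted entropy of the split."""
    pairs = sorted(zip(values, labels))
    best_cut, best_h = None, float('inf')
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue                       # no boundary between equal values
        cut = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [l for _, l in pairs[:i]]
        right = [l for _, l in pairs[i:]]
        h = (len(left) * class_entropy(left)
             + len(right) * class_entropy(right)) / len(pairs)
        if h < best_h:
            best_cut, best_h = cut, h
    return best_cut
```

On Iris-like data this recovers the familiar petal-length boundary that separates setosa from the other species.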
A method combining the pseudo-dynamic approach and the discretization technique is presented for computing the active earth pressure. Instead of using a presupposed failure mechanism, the discretization technique is introduced to generate the potential failure surface, which is applicable to cases in which the soil strength parameters vary spatially. To analyze the effect of an earthquake, the pseudo-dynamic approach is adopted to introduce the seismic forces, which can take into account the dynamic properties of the seismic acceleration. A new type of micro-element is used to calculate the rate of work of the external forces and the rate of internal energy dissipation. The analytical expression of the seismic active earth pressure coefficient is deduced in light of the upper bound theorem, and the corresponding upper bound solutions are obtained through numerical optimization. The method is validated by comparing the results of this paper with those reported in the literature. A parametric analysis is finally presented to further expound the effect of various parameters on the active earth pressure in non-uniform soil.
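For context, the classical pseudo-static benchmark that such upper bound solutions are usually compared against is the Mononobe–Okabe coefficient. The sketch below implements the textbook formula (not this paper's discretization-based solution), with all angles in radians:

```python
import math

def mononobe_okabe_kae(phi, delta=0.0, alpha=0.0, beta=0.0, kh=0.0, kv=0.0):
    """Classical pseudo-static active earth pressure coefficient.
    phi: soil friction angle, delta: wall friction angle, alpha: wall back
    inclination from vertical, beta: backfill slope; kh, kv: horizontal and
    vertical seismic coefficients."""
    psi = math.atan2(kh, 1.0 - kv)            # seismic inertia angle
    num = math.cos(phi - psi - alpha) ** 2
    root = math.sqrt(math.sin(phi + delta) * math.sin(phi - psi - beta)
                     / (math.cos(delta + alpha + psi) * math.cos(beta - alpha)))
    den = (math.cos(psi) * math.cos(alpha) ** 2
           * math.cos(delta + alpha + psi) * (1.0 + root) ** 2)
    return num / den
```

In the static limit with a smooth vertical wall and horizontal backfill this reduces to the Rankine coefficient (1 - sin phi)/(1 + sin phi), which is a convenient sanity check.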
The selection of a suitable discretization method (DM) to discretize spatially continuous variables (SCVs) is critical in machine-learning-based natural hazard susceptibility assessment. However, few studies consider the influence of the selected DMs or how to efficiently select a suitable DM for each SCV. These issues are addressed in this study. The information loss rate (ILR), an index based on information entropy, appears suitable for selecting the optimal DM for each SCV. However, the ILR fails to show the actual influence of discretization because it only considers the total amount of information by which the discretized variables depart from the original SCV. To address this issue, we propose an index, the information change rate (ICR), that focuses on the amount of information changed by the discretization in each cell, enabling identification of the optimal DM. We develop a case study with Random Forest (training/testing ratio of 7:3) to assess flood susceptibility in Wanan County, China. Area-under-the-curve-based and susceptibility-map-based approaches are presented to compare the ILR and ICR. The results show that the ICR-based optimal DMs are more rational than the ILR-based ones in both cases. Moreover, the ILR values are unnaturally small (<1%), whereas the ICR values are more in line with general recognition (usually 10%–30%). These results demonstrate the superiority of the ICR. This study fills existing research gaps and improves ML-based natural hazard susceptibility assessments.
A new method for the discretization of continuous attributes is put forward to overcome a limitation of traditional rough sets, which cannot deal with continuous attributes. The method is based on an improved algorithm for producing candidate cut points and a reduction algorithm based on variable precision rough information entropy. While guaranteeing the consistency of the decision system, the method reduces the number of cut points and improves the efficiency of reduction. Adopting variable precision rough information entropy as the measure criterion, it has good tolerance to noise. Experiments show that the algorithm yields satisfactory reduction results.
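A common candidate-generation step, given here as a generic baseline that the paper's improved algorithm refines, places a candidate cut at the midpoint between adjacent attribute values whose objects carry different decisions:

```python
def candidate_cuts(values, decisions):
    """Boundary midpoints between adjacent sorted values with differing decisions."""
    pairs = sorted(zip(values, decisions))
    cuts = []
    for (v1, d1), (v2, d2) in zip(pairs, pairs[1:]):
        if v1 != v2 and d1 != d2:
            cuts.append((v1 + v2) / 2)
    return cuts
```

Only decision boundaries generate candidates, so the subsequent reduction step starts from a much smaller pool than all pairwise midpoints.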
Discretization based on rough set theory aims to seek the minimum possible number of cuts without weakening the indiscernibility of the original decision system. Optimization of discretization is an NP-complete problem, and the genetic algorithm is an appropriate method to solve it. To achieve optimal discretization, the choice of the initial cut set is first discussed, because a good initial cut set can enhance the efficiency and quality of the subsequent algorithm. Second, an effective heuristic genetic algorithm for the discretization of continuous attributes of the decision table is proposed, which takes the significance of cut points as heuristic information and introduces a novel operator to maintain the indiscernibility of the original decision system and enhance the local search ability of the algorithm. The algorithm thus converges quickly and has global optimizing ability. Finally, the effectiveness of the algorithm is validated through experiments.
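The objective the genetic algorithm optimizes can be written down exactly for tiny tables: find the smallest subset of candidate cuts under which the discretized table stays consistent. A brute-force sketch of that objective follows; it is exponential in the number of candidates, which is precisely why heuristics such as GAs are needed for real tables.

```python
from itertools import combinations

def discretize_row(row, cuts_per_attr):
    return tuple(sum(v >= c for c in cuts) for v, cuts in zip(row, cuts_per_attr))

def consistent(rows, decisions, cuts_per_attr):
    seen = {}
    return all(seen.setdefault(discretize_row(r, cuts_per_attr), d) == d
               for r, d in zip(rows, decisions))

def minimal_consistent_cuts(rows, decisions, candidates):
    """candidates: list of (attribute_index, cut_value) pairs. Exhaustively
    search for the smallest consistent subset (illustration only)."""
    n_attr = len(rows[0])
    for k in range(len(candidates) + 1):
        for subset in combinations(candidates, k):
            cuts = [sorted(c for a, c in subset if a == j) for j in range(n_attr)]
            if consistent(rows, decisions, cuts):
                return subset
    return None
```

A GA chromosome is simply a bitmask over `candidates`, with fitness rewarding consistency and penalizing the number of selected cuts.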
The insulated gate bipolar transistor (IGBT) module is one of the most age-affected components in switching power supplies, and its reliability prediction is conducive to timely troubleshooting and to reducing safety risks and unnecessary costs. The pulsed current pattern of an accelerator power supply differs from other converter applications; therefore, this study proposes a lifetime estimation method for IGBT modules in pulsed power supplies for accelerator magnets. The proposed methodology is based on junction temperature calculations using square-wave loss discretization and thermal modeling. Comparison results show that the junction temperature error between the simulation and IR measurements is less than 3%. An AC power cycling test under real pulsed power supply conditions was performed via offline wear-out monitoring of the tested IGBT module. After combining the IGBT4 power cycling curve and fitting the test results, a simple corrected lifetime model was developed to quantitatively evaluate the lifetime of the IGBT module, which can be employed for accelerator pulsed power supplies in engineering. The method can be applied to other IGBT modules and pulsed power supplies.
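Junction temperature estimation from discretized losses typically runs the power waveform through a Foster RC thermal network. The sketch below uses invented R and tau values, not the module data from the study, and shows the exact per-step recursion for square-wave losses:

```python
import numpy as np

def junction_temperature(power, dt, R, tau, t_case=25.0):
    """Foster-network thermal model, one exact update per time step:
    T_i[k+1] = a_i*T_i[k] + R_i*(1 - a_i)*P[k], with a_i = exp(-dt/tau_i)."""
    a = np.exp(-dt / np.asarray(tau))
    b = np.asarray(R) * (1.0 - a)
    T = np.zeros(len(R))                 # temperature rise of each RC stage
    tj = np.empty(len(power))
    for k, p in enumerate(power):
        T = a * T + b * p
        tj[k] = t_case + T.sum()
    return tj

# square-wave loss discretization: 100 W pulses at 50 % duty (assumed values)
dt = 1e-3
power = np.tile([100.0] * 50 + [0.0] * 50, 100)
tj = junction_temperature(power, dt, R=[0.05, 0.10], tau=[0.01, 0.10])
```

The resulting junction temperature swing per pulse is what feeds the power-cycling lifetime curve in the abstract.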
A computational technique is proposed for the Galerkin discretization of axially moving strings with geometric nonlinearity. The Galerkin discretization is based on the eigenfunctions of stationary strings. The discretized equations are simplified by regrouping nonlinear terms to reduce the computational work, and the scheme can easily be implemented in practical programming. Numerical results show the effectiveness of the technique. The results also highlight a feature of the Galerkin discretization of gyroscopic continua: the number of terms in the Galerkin discretization should be even. The technique is generalized from elastic strings to viscoelastic strings.
In this paper, we present a nonrecursive residual Monte Carlo method for estimating the discretization errors associated with the S_N transport solution of radiation transport problems. Although the technique is general, we apply it to the mono-energetic 1-D S_N equation with linear-discontinuous finite element spatial discretization as a demonstration of the theory. Two angular flux representations, a conforming representation and a simplified one, are considered in this analysis, and the results are compared. It is shown that the simplified representation dramatically reduces the memory footprint and computational complexity of residual source generation and sampling while accurately capturing the error associated with certain types of responses.
AIM: To evaluate the use of short-duration transient visual evoked potentials (VEP) and color reflectivity discretization analysis (CORDA) in glaucomatous eyes, eyes suspected of having glaucoma, and healthy eyes. METHODS: The study included 136 eyes from 136 subjects: 49 eyes with glaucoma, 45 glaucoma suspect eyes, and 42 healthy eyes. Subjects underwent Humphrey visual field (VF) testing, VEP testing, and peripapillary retinal nerve fiber layer optical coherence tomography imaging with post-acquisition CORDA applied. Statistical analysis was performed using means and ranges, ANOVA, post-hoc comparisons with Tukey's adjustment, Fisher's exact test, area under the curve (AUC), and Spearman correlation coefficients. RESULTS: Parameters from VEP and CORDA correlated significantly with VF mean deviation (MD) (P<0.05). In distinguishing glaucomatous eyes from controls, VEP demonstrated AUC values of 0.64-0.75 for amplitude and 0.67-0.81 for latency. The CORDA HR1 parameter was highly discriminative for glaucomatous eyes vs controls (AUC=0.94). CONCLUSION: Significant correlations are found between MD and parameters of short-duration transient VEP and CORDA, diagnostic modalities that warrant further consideration in identifying glaucoma characteristics.
To reduce the partial derivative errors in the Preisach hysteresis model caused by inaccurate experimental data, the concept of, and a corresponding method for, the discretization of the Preisach hysteresis model are proposed. The essence is to concentrate the distribution density of the Preisach model in a local region into an integral, which is defined as the weight of a certain point in that region. For an input composed of an ascending segment and a descending segment, a method to determine the initial weights, together with an additional method to determine the present weights, is given according to the number of input ascending segments. If the number of input ascending segments increases, the weights of the corresponding points in the updating rectangle are updated by adding the initial weights of the corresponding points. A prominent advantage of the discrete Preisach model is its memory efficiency. Another advantage is that the model contains no functions and can therefore be conveniently operated on a computer. By generalizing the updating rectangle method to the continuous Preisach model, an identification method for the distribution density can be given as well.
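A discrete Preisach operator built from such weights reduces to a grid of relays: each (alpha, beta) cell with alpha >= beta holds a weight and a +/-1 state, and the output is the weighted sum of the states. A minimal sketch with uniform weights (an assumption; the identification method in the abstract would replace them with measured ones):

```python
import numpy as np

class DiscretePreisach:
    def __init__(self, n=40, lo=-1.0, hi=1.0):
        g = np.linspace(lo, hi, n)
        self.A, self.B = np.meshgrid(g, g, indexing='ij')  # up/down thresholds
        self.mask = self.A >= self.B                       # Preisach half-plane
        self.w = self.mask / self.mask.sum()               # uniform weights (assumed)
        self.state = np.where(self.mask, -1.0, 0.0)        # all relays start 'down'

    def step(self, u):
        """Apply one input sample and return the hysteretic output."""
        self.state[self.mask & (u >= self.A)] = 1.0        # relays switched up
        self.state[self.mask & (u <= self.B)] = -1.0       # relays switched down
        return float((self.w * self.state).sum())
```

Because the state grid fully encodes the input history, the model needs no stored functions, which is the memory-efficiency advantage the abstract highlights.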
Input time delays are always present in practical systems. Analysis of the delay phenomenon in the continuous-time domain is sophisticated, so it is appropriate to obtain a corresponding discrete-time model for implementation on a digital computer. This paper proposes a new discretization method for calculating a sampled-data representation of nonlinear time-delayed non-affine systems. The proposed scheme provides a finite-dimensional representation for nonlinear systems with non-affine time-delayed input, enabling existing nonlinear controller design techniques to be applied to them. The performance of the proposed discretization procedure is evaluated using a nonlinear system with non-affine time-delayed input, for which various time delay values are considered.
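The simplest sampled-data approximation of a non-affine delayed-input system dx/dt = f(x, u(t - dT)) shifts the input sequence by d samples and applies a one-step integrator. The minimal Euler sketch below only illustrates the structure; the paper derives a more accurate finite-dimensional representation.

```python
import math

def simulate_delayed(f, x0, u, T, d):
    """x[k+1] = x[k] + T*f(x[k], u[k-d]); the input is assumed zero before t=0."""
    xs = [x0]
    for k in range(len(u)):
        u_delayed = u[k - d] if k >= d else 0.0
        xs.append(xs[-1] + T * f(xs[-1], u_delayed))
    return xs

# non-affine in the input: the control enters through sin(u)
f = lambda x, u: -x + math.sin(u)
T, d = 0.01, 5          # sampling period and delay in samples (illustrative)
u = [1.0] * 3000        # step input
xs = simulate_delayed(f, 0.0, u, T, d)
```

The delayed samples u[k-d] act as extra states, which is exactly how a finite-dimensional discrete-time representation absorbs the input delay.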
A discretization precision control method based on the second-order osculating surface is proposed. The discretization precision of a 3D solid is controlled according to the error between the discrete solid surface and its second-order osculating surface. The global maximal error is obtained by analyzing all the extrema of the error function. The method can be used to control and optimize the discretization precision of 3D solids in computer 3D modeling and NC milling path generation.
The rough sets and Boolean reasoning based discretization approach (RSBRA) is not suitable for feature selection for machine learning algorithms such as neural networks or SVMs because the information loss due to discretization is large. A modified RSBRA for feature selection is proposed and evaluated with SVM classifiers. In the presented algorithm, the level of consistency, coined from rough set theory, is introduced to replace the stop criterion of the RSBRA iteration, which maintains the fidelity of the training set after discretization. The experimental results show that the modified algorithm has better predictive accuracy and less training time than the original RSBRA.
基金financial support for the first author’s PhD program by the President’s Graduate Fellowship in Singapore
文摘This paper presents a procedure for assessing the reinforcement force of geosynthetics required to maintain the dynamic stability of a steep soil slope. The procedure combines the discretization technique with the kinematic analysis of plasticity theory, i.e. discretization-based kinematic analysis. The discretization technique allows discretization of the analyzed slope into various components and generation of a kinematically admissible failure mechanism based on an associated flow rule. Accordingly, variations in soil properties, including soil cohesion, internal friction angle, and unit weight, are accounted for with ease, while conventional kinematic analysis fails to consider such changes. The spatial-temporal effects of dynamic accelerations, represented by primary and shear seismic waves, are considered using the pseudo-dynamic approach. In the presence of geosynthetic reinforcement, tensile failure is discussed, provided that the geosynthetics are installed with sufficient length. Equating the total rate of work done by external forces to the internal rate of work yields the upper bound solution for the required reinforcement force, below which slopes fail. The reinforcement force is sought by optimizing the objective function with regard to the independent variables and is presented in a normalized form. Pseudo-static analysis is a special case and hence readily obtained from the pseudo-dynamic analysis. Comparisons of the pseudo-static and pseudo-dynamic solutions calculated in this study are highlighted. Although the pseudo-static approach yields a conservative solution, its ability to give a reasonable result is substantiated for steep slopes. In order to provide a more meaningful stability analysis, the pseudo-dynamic approach is recommended due to its consideration of the spatial-temporal effects of the earthquake input.
基金Project supported by the National Basic Research Program (973) of China (No. 2002CB312106), China Postdoctoral Science Foundation (No. 2004035715), the Science & Technology Program of Zhejiang Province (No. 2004C31098), and the Postdoctoral Foundation of Zhejiang Province (No. 2004-bsh-023), China
文摘Rough set theory plays an important role in knowledge discovery, but it cannot deal with continuous attributes, so discretization is a problem that cannot be neglected. Discretization of decision systems in rough set theory has particular characteristics: consistency must be preserved, and the set of cuts is expected to be as small as possible. The consistent and minimal discretization problem is NP-complete. In this paper, an immune algorithm for the problem is proposed; its correctness and effectiveness are shown in experiments. The discretization method presented in this paper can also be used as a data pre-treatment step for symbolic knowledge discovery or machine learning methods other than rough set theory.
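As a concrete illustration, consistency of a candidate cut set can be checked by verifying that no two objects share the same discretized attribute vector while carrying different decisions. The following minimal sketch (the function names `discretize` and `is_consistent` are ours, not the paper's) shows the test that a search algorithm such as the proposed immune algorithm could use as a feasibility constraint:

```python
def discretize(value, cuts):
    """Map a continuous value to the index of its interval under sorted cuts."""
    return sum(value >= c for c in cuts)

def is_consistent(rows, decisions, cuts_per_attr):
    """True iff no two objects share the same discretized attribute
    vector while carrying different decisions."""
    seen = {}
    for row, d in zip(rows, decisions):
        key = tuple(discretize(v, cuts) for v, cuts in zip(row, cuts_per_attr))
        if key in seen and seen[key] != d:
            return False
        seen[key] = d
    return True

# Toy decision table: one continuous attribute, two classes.
rows = [(0.1,), (0.4,), (0.6,), (0.9,)]
decisions = [0, 0, 1, 1]
print(is_consistent(rows, decisions, [(0.5,)]))  # one cut at 0.5 keeps consistency
print(is_consistent(rows, decisions, [()]))      # no cuts: the classes collapse
```

A minimal cut set is then the smallest cut set for which this predicate still holds.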
基金Under the auspices of the National Natural Science Foundation of China (No. 40901026), Beijing Municipal Science & Technology New Star Project Funds (No. 2010B046), Beijing Municipal Natural Science Foundation (No. 8123041), and the Southwest Florida Water Management District (SFWMD) Project
文摘The commonly used discretization approaches for distributed hydrological models can be broadly categorized into four types, based on the nature of the discrete components: regular meshes, Triangular Irregular Networks (TINs), Representative Elementary Watersheds (REWs), and Hydrologic Response Units (HRUs). In this paper, a new discretization approach for landforms that have similar hydrologic properties is developed and discussed for the Integrated Hydrologic Model (IHM), which combines simulation of surface and groundwater processes and accounts for the interaction between the two systems. The approach used in the IHM is to disaggregate basin parameters into discrete landforms with similar hydrologic properties. These landforms may be impervious areas, related areas, areas with high or low clay or organic fractions, areas with significantly different depths to the water table, and areas with different types of land cover or land use. Incorporating discrete landforms within basins allows significant distributed-parameter analysis but requires an efficient computational structure. The IHM integration represents a new approach to interpreting fluxes across the model interface and storages near the interface for transfer to the appropriate model component, accounting for the disparate discretizations while rigidly maintaining mass conservation. The discretization approach employed in the IHM provides ideas and insights helpful to researchers working on integrated models of surface-groundwater interaction.
文摘It is being widely studied how to extract knowledge from a decision table based on rough set theory. A key problem is how to discretize a decision table having continuous attributes. In order to obtain more reasonable discretization results, a discretization algorithm is proposed that arranges half-global discretization based on the correlation coefficient of each continuous attribute while considering the uniqueness of rough set theory. When choosing heuristic information, stability is combined with rough entropy. In terms of stability, the possibility of classifying objects belonging to a certain sub-interval of a given attribute into neighboring sub-intervals is minimized. By doing this, rational discrete intervals can be determined. Rough entropy is employed to decide the optimal cut points while guaranteeing the consistency of the decision table after discretization. The idea of the algorithm is elaborated through the Iris data, and experiments comparing the outcomes of four discretized datasets are given, calculated by the proposed algorithm and four other typical discretization algorithms respectively. After that, classification rules are deduced and summarized through rough set based classifiers. Results show that the proposed discretization algorithm generates optimal classification accuracy while minimizing the number of discrete intervals. It displays superiority especially when dealing with decision tables having a large number of attributes.
基金Projects (51908557, 51378510) supported by the National Natural Science Foundation of China
文摘A method combining the pseudo-dynamic approach and the discretization technique is developed for computing the active earth pressure. Instead of using a presupposed failure mechanism, the discretization technique is introduced to generate the potential failure surface, which is applicable to cases where the soil strength parameters have spatial variability. To analyze the effect of earthquakes, the pseudo-dynamic approach is adopted to introduce the seismic forces, taking into account the dynamic properties of seismic acceleration. A new type of micro-element is used to calculate the rate of work done by external forces and the rate of internal energy dissipation. The analytical expression of the seismic active earth pressure coefficient is deduced in light of the upper bound theorem, and the corresponding upper bound solutions are obtained through numerical optimization. The method is validated by comparing the results of this paper with those reported in the literature. A parametric analysis is finally presented to further expound the effect of various parameters on the active earth pressure under non-uniform soil.
文摘The selection of a suitable discretization method (DM) to discretize spatially continuous variables (SCVs) is critical in ML-based natural hazard susceptibility assessment. However, few studies have considered the influence of the selected DM or how to efficiently select a suitable DM for each SCV. These issues are addressed in this study. The information loss rate (ILR), an index based on information entropy, might seem suitable for selecting the optimal DM for each SCV. However, the ILR fails to show the actual influence of discretization, because it considers only the total amount of information by which the discretized variable departs from the original SCV. Facing this issue, we propose a new index, the information change rate (ICR), which focuses on the amount of information changed by discretization in each cell, enabling identification of the optimal DM. We develop a case study with Random Forest (training/testing ratio of 7:3) to assess flood susceptibility in Wanan County, China. Area-under-the-curve-based and susceptibility-map-based approaches are presented to compare the ILR and ICR. The results show that the ICR-based optimal DMs are more rational than the ILR-based ones in both cases. Moreover, we observed that the ILR values are unnaturally small (<1%), whereas the ICR values are more in line with general recognition (usually 10%-30%). These results demonstrate the superiority of the ICR. We consider that this study fills existing research gaps, improving ML-based natural hazard susceptibility assessments.
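The abstract does not give the exact formulas, so the sketch below only illustrates the entropy mechanics behind such indices: a hypothetical global loss rate compares total Shannon entropy before and after discretization, while a hypothetical per-cell change rate averages the change in information content -log2 p over cells. Both definitions here are our illustrative readings, not the paper's:

```python
import numpy as np

def shannon(p):
    """Shannon entropy (bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ilr(values, cuts, fine_bins=32):
    """Hypothetical 'information loss rate': relative drop in total entropy
    when a finely binned variable is replaced by the discretized one."""
    p_fine, _ = np.histogram(values, bins=fine_bins)
    p_fine = p_fine / p_fine.sum()
    p_coarse = np.bincount(np.digitize(values, cuts)) / len(values)
    return 1.0 - shannon(p_coarse) / shannon(p_fine)

def icr(values, cuts, fine_bins=32):
    """Hypothetical 'information change rate': mean per-cell relative change
    of the information content -log2 p between fine and coarse probabilities."""
    p_fine, edges = np.histogram(values, bins=fine_bins)
    p_fine = p_fine / p_fine.sum()
    fine_codes = np.digitize(values, edges[1:-1])   # fine bin of each cell
    p_coarse = np.bincount(np.digitize(values, cuts)) / len(values)
    i_fine = -np.log2(p_fine[fine_codes])
    i_coarse = -np.log2(p_coarse[np.digitize(values, cuts)])
    return float(np.mean(np.abs(i_coarse - i_fine) / i_fine))

rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, 1000)
print(ilr(values, cuts=[0.5]), icr(values, cuts=[0.5]))
```

The key difference carried over from the abstract is structural: `ilr` aggregates one global entropy ratio, whereas `icr` is averaged over the per-cell changes.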
文摘A new method for the discretization of continuous attributes is put forward to overcome the limitation of traditional rough sets, which cannot deal with continuous attributes. The method is based on an improved algorithm for producing candidate cut points and a reduction algorithm based on variable-precision rough information entropy. While guaranteeing the consistency of the decision system, the method can reduce the number of cut points and improve the efficiency of reduction. Adopting variable-precision rough information entropy as the measure criterion, it has good tolerance to noise. Experiments show that the algorithm yields satisfying reduction results.
文摘Discretization based on rough set theory aims to seek the minimum possible number of cuts without weakening the indiscernibility of the original decision system. Optimization of discretization is an NP-complete problem, and the genetic algorithm is an appropriate method to solve it. In order to achieve optimal discretization, the choice of the initial cut set is discussed first, because a good initial cut set can enhance the efficiency and quality of the follow-up algorithm. Second, an effective heuristic genetic algorithm for the discretization of continuous attributes of a decision table is proposed, which takes the significance of cut points as heuristic information and introduces a novel operator to maintain the indiscernibility of the original decision system and enhance the local search ability of the algorithm. Thus the algorithm converges quickly and has global optimizing ability. Finally, the effectiveness of the algorithm is validated through experiments.
基金supported by the National Key Research and Development Program of China (No. 2019YFA0405402)。
文摘The insulated gate bipolar transistor (IGBT) module is one of the most age-affected components in switch power supplies, and its reliability prediction is conducive to timely troubleshooting and to reducing safety risks and unnecessary costs. The pulsed current pattern of an accelerator power supply differs from other converter applications; therefore, this study proposes a lifetime estimation method for IGBT modules in pulsed power supplies for accelerator magnets. The proposed methodology is based on junction temperature calculations using square-wave loss discretization and thermal modeling. Comparison results showed that the junction temperature error between the simulation and IR measurements was less than 3%. An AC power cycling test under real pulsed power supply conditions was performed via offline wear-out monitoring of the tested power IGBT module. After combining the IGBT4 power cycling (PC) curve and fitting the test results, a simple corrected lifetime model was developed to quantitatively evaluate the lifetime of the IGBT module, which can be employed for accelerator pulsed power supplies in engineering. This method can be applied to other IGBT modules and pulsed power supplies.
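The junction-temperature step can be sketched with a standard discrete-time Foster thermal network driven by a discretized square-wave loss profile. The RC pairs and loss values below are illustrative placeholders, not the module parameters or the loss model from the study:

```python
import math

def junction_temp_rise(power, dt, foster):
    """Discrete-time Foster thermal network: each RC pair keeps a state
    x[k+1] = a*x[k] + (1 - a)*R*P[k] with a = exp(-dt/tau); the junction
    temperature rise is the sum of the states."""
    coeffs = [(r, math.exp(-dt / tau)) for r, tau in foster]
    states = [0.0] * len(foster)
    history = []
    for p in power:
        states = [a * x + (1.0 - a) * r * p for x, (r, a) in zip(states, coeffs)]
        history.append(sum(states))
    return history

# Illustrative (not datasheet) Foster pairs and a square-wave loss profile:
foster = [(0.05, 0.01), (0.10, 0.10)]  # (R in K/W, tau in s)
dt = 0.001                             # 1 ms loss-discretization step
power = [200.0 if (k // 100) % 2 == 0 else 0.0 for k in range(1000)]  # 0.1 s on/off pulses
temps = junction_temp_rise(power, dt, foster)
print(max(temps))  # peak junction temperature rise over the pulse train
```

The resulting temperature swing per pulse is what a rainflow-style counting step would feed into a PC lifetime curve.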
基金supported by the National Outstanding Young Scientists Fund of China (No. 10725209), the National Natural Science Foundation of China (No. 10672092), Shanghai Subject Chief Scientist Project (No. 09XD1401700), Shanghai Municipal Education Commission Scientific Research Project (No. 07ZZ07), Shanghai Leading Academic Discipline Project (No. S30106), and the Changjiang Scholars and Innovative Research Team in University Program (No. IRT0844).
文摘A computational technique is proposed for the Galerkin discretization of axially moving strings with geometric nonlinearity. The Galerkin discretization is based on the eigenfunctions of stationary strings. The discretized equations are simplified by regrouping nonlinear terms to reduce the computational work. The scheme can be easily implemented in practice. Numerical results show the effectiveness of the technique. The results also highlight a feature of the Galerkin discretization of gyroscopic continua: the number of retained terms should be even. The technique is generalized from elastic strings to viscoelastic strings.
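One way to see why the number of retained terms matters is to inspect the gyroscopic coupling that a sine-basis Galerkin projection produces for an axially moving string: the projection integrals vanish unless k + j is odd, so modes couple in odd-even pairs. A small sketch (the transport speed `gamma` and the overall scaling are illustrative assumptions):

```python
import numpy as np

def gyroscopic_matrix(n, gamma=0.5):
    """G[k-1, j-1] = 2*gamma*(j*pi) * I(k, j), where
    I(k, j) = integral_0^1 sin(k*pi*x) * cos(j*pi*x) dx
            = k*(1 - (-1)**(k + j)) / (pi*(k**2 - j**2))  for k != j, else 0."""
    G = np.zeros((n, n))
    for k in range(1, n + 1):
        for j in range(1, n + 1):
            if k != j:
                integral = k * (1 - (-1) ** (k + j)) / (np.pi * (k * k - j * j))
                G[k - 1, j - 1] = 2.0 * gamma * j * np.pi * integral
    return G

G = gyroscopic_matrix(4)
# Nonzero entries appear only where k + j is odd, so modes pair up (1-2, 3-4, ...);
# truncating after an odd number of terms strands the last mode's partner.
print(np.nonzero(G))
```

This parity structure is a plausible reading of the even-term-number observation; the paper itself establishes it numerically.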
文摘In this paper, we present a nonrecursive residual Monte Carlo method for estimating discretization errors associated with the S_(N) transport solution to radiation transport problems. Although the technique is general, we applied it to the mono-energetic 1-D S_(N) equation with linear-discontinuous finite element spatial discretization as a demonstration of the theory. Two angular flux representations, conforming and simplified, were considered in this analysis, and the results were compared. It is shown that the simplified representation dramatically reduces the memory footprint and computational complexity of residual source generation and sampling while accurately capturing the error associated with certain types of responses.
文摘AIM: To evaluate the use of short-duration transient visual evoked potentials (VEP) and color reflectivity discretization analysis (CORDA) in glaucomatous eyes, eyes suspected of having glaucoma, and healthy eyes. METHODS: The study included 136 eyes from 136 subjects: 49 eyes with glaucoma, 45 glaucoma suspect eyes, and 42 healthy eyes. Subjects underwent Humphrey visual field (VF) testing and VEP testing, as well as peripapillary retinal nerve fiber layer optical coherence tomography imaging with post-acquisition CORDA applied. Statistical analysis was performed using means and ranges, ANOVA, post-hoc comparisons with Tukey's adjustment, Fisher's exact test, area under the curve (AUC), and Spearman correlation coefficients. RESULTS: Parameters from VEP and CORDA correlated significantly with VF mean deviation (MD) (P<0.05). In distinguishing glaucomatous eyes from controls, VEP demonstrated AUC values of 0.64-0.75 for amplitude and 0.67-0.81 for latency. The CORDA HR1 parameter was highly discriminative for glaucomatous eyes vs controls (AUC=0.94). CONCLUSION: Significant correlations are found between MD and parameters of short-duration transient VEP and CORDA, diagnostic modalities which warrant further consideration in identifying glaucoma characteristics.
基金Project(2013CB733000)supported by the National Basic Research Program of China
文摘In order to reduce the partial derivative errors in the Preisach hysteresis model caused by inaccurate experimental data, the concept and corresponding method of discretization of the Preisach hysteresis model are proposed. The essence is to lump the distribution density of the Preisach model over a local region into an integral, which is defined as the weight of a certain point in that region. For an input composed of an ascending segment and a descending segment, a method to determine the initial weights, together with an additional method to determine the present weights, is given according to the number of input ascending segments. If the number of input ascending segments increases, the weights of the corresponding points in the updating rectangle are updated by adding the initial weights of the corresponding points. A prominent advantage of the discrete Preisach hysteresis model is its memory efficiency. Another advantage is that the model contains no continuous function, and thus it can be conveniently evaluated on a computer. By generalizing the above updating-rectangle method to the continuous Preisach hysteresis model, an identification method for the distribution density can be given as well.
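The idea of lumping the distribution density into pointwise weights can be sketched with a minimal discrete Preisach model: a triangular grid of relay hysterons (alpha >= beta) carrying lumped weights, whose weighted relay states sum to the output. The weights here are uniform for simplicity, unlike the identified and updated weights described in the abstract:

```python
import numpy as np

class DiscretePreisach:
    """Triangular grid of relay hysterons (alpha >= beta) with lumped weights;
    the output is the weighted sum of relay states (+1 up / -1 down)."""
    def __init__(self, n, lo=-1.0, hi=1.0):
        levels = np.linspace(lo, hi, n)
        self.alpha, self.beta = np.meshgrid(levels, levels, indexing="ij")
        self.mask = self.alpha >= self.beta
        self.weight = np.where(self.mask, 1.0 / self.mask.sum(), 0.0)  # uniform weights
        self.state = np.where(self.mask, -1.0, 0.0)                    # relays start "down"

    def step(self, u):
        self.state[self.mask & (u >= self.alpha)] = +1.0  # input above up-threshold
        self.state[self.mask & (u <= self.beta)] = -1.0   # input below down-threshold
        return float((self.weight * self.state).sum())

model = DiscretePreisach(21)
up = [model.step(u) for u in np.linspace(-1, 1, 50)]    # ascending branch
down = [model.step(u) for u in np.linspace(1, -1, 50)]  # descending branch
```

Because relays between their two thresholds retain state, the ascending and descending sweeps trace different branches, which is the hysteresis loop the weight identification targets.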
基金supported by University Natural Science Research Project of Jiangsu Province (No. 10KJB510001)
文摘Input time delays are always present in practical systems. Analysis of the delay phenomenon in the continuous-time domain is complicated, so it is convenient to obtain a corresponding discrete-time model for implementation on a digital computer. This paper proposes a new discretization method for calculating a sampled-data representation of nonlinear time-delayed non-affine systems. The proposed scheme provides a finite-dimensional representation for nonlinear systems with non-affine time-delayed input, enabling existing nonlinear controller design techniques to be applied to them. The performance of the proposed discretization procedure is evaluated using a nonlinear system with non-affine time-delayed input, for which various time delay values are considered.
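A common way to build such a sampled-data model, sketched below under zero-order-hold and integer-multiple-delay assumptions (the paper's construction may differ), is to integrate the continuous dynamics over one sampling period with the delayed input held constant, augmenting the state with a buffer of past inputs:

```python
import math

def step(f, x, u_held, T, substeps=10):
    """Advance x' = f(x, u) over one sampling period T with the delayed
    input held constant (zero-order hold), using classical RK4."""
    h = T / substeps
    for _ in range(substeps):
        k1 = f(x, u_held)
        k2 = f(x + 0.5 * h * k1, u_held)
        k3 = f(x + 0.5 * h * k2, u_held)
        k4 = f(x + h * k3, u_held)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def simulate(f, x0, inputs, T, delay_steps):
    """Sampled-data model with input delay d = delay_steps * T: the state is
    augmented with a FIFO buffer of past inputs (zero before t = 0)."""
    buffer = [0.0] * delay_steps
    x, traj = x0, [x0]
    for u in inputs:
        buffer.append(u)
        x = step(f, x, buffer.pop(0), T)
        traj.append(x)
    return traj

# Toy non-affine delayed dynamics: x' = -x + sin(u(t - d)), illustrative only.
f = lambda x, u: -x + math.sin(u)
traj = simulate(f, 0.0, [1.0] * 20, T=0.1, delay_steps=3)
```

The buffer is what makes the representation finite-dimensional: the delay is absorbed into `delay_steps` extra states rather than an infinite-dimensional history.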
文摘A discretization precision control method based on the second-order osculating surface is proposed. The discretization precision of a 3D solid is controlled according to the error between the discrete solid surface and its second-order osculating surface. The global maximal error is obtained by analyzing all the extrema of the error function. The method can be used to control and optimize the discretization precision of 3D solids in computer 3D modeling and NC milling path generation.
基金National Key Fundamental Research Project of China (No. 2002cb312200-01-3), National Natural Science Foundation of China (No. 60174038), and Specialized Research Fund for the Doctoral Program of Higher Education (No. 20030248040)
文摘The rough sets and Boolean reasoning based discretization approach (RSBRA) is not suitable for feature selection for machine learning algorithms such as neural networks or SVMs, because the information loss due to discretization is large. A modified RSBRA for feature selection is proposed and evaluated with SVM classifiers. In the presented algorithm, the level of consistency, coined from rough set theory, is introduced to replace the stopping criterion of the RSBRA loop, which maintains the fidelity of the training set after discretization. The experimental results show that the modified algorithm has better predictive accuracy and less training time than the original RSBRA.
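In rough set terms, the level of consistency can be computed as the fraction of objects in the positive region, i.e. objects whose discretized attribute vector occurs with only one decision value. A minimal sketch (the function name is ours):

```python
from collections import defaultdict

def level_of_consistency(rows, decisions):
    """Fraction of objects in the positive region: objects whose attribute
    vector occurs with a single decision value only."""
    decisions_by_key = defaultdict(set)
    count_by_key = defaultdict(int)
    for row, d in zip(rows, decisions):
        decisions_by_key[tuple(row)].add(d)
        count_by_key[tuple(row)] += 1
    pure = sum(c for k, c in count_by_key.items()
               if len(decisions_by_key[k]) == 1)
    return pure / len(rows)

# After a coarse discretization, the first two objects collide
# with different decisions:
rows = [(0, 1), (0, 1), (1, 0), (1, 1)]
decisions = [0, 1, 1, 1]
print(level_of_consistency(rows, decisions))  # 0.5
```

A stopping rule of the kind the abstract describes would keep merging cut points only while this value stays above a chosen threshold, preserving the fidelity of the training set.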