Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and side-stream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to more efficiently quantitate these VNAs. Automation is done using Hamilton STAR™ and Caliper Staccato™ workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85%–111%). More importantly, this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMS output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in bio-monitoring of VNAs in the US non-institutionalized population in the NHANES 2013–2014 cycle.
As sandstone layers in a thin interbedded section are difficult to identify, conventional model-driven seismic inversion and data-driven seismic prediction methods have low precision in predicting them. To solve this problem, a model-data-driven seismic AVO (amplitude variation with offset) inversion method based on a space-variant objective function has been worked out. In this method, the zero-delay cross-correlation function and the F norm are used to establish the objective function. Based on inverse distance weighting theory, the objective function is varied according to the location of the target CDP (common depth point), so as to change the constraint weights of training samples, initial low-frequency models, and seismic data on the inversion. Hence, the proposed method can obtain high-resolution and high-accuracy velocity and density from inversion of small sample data, and is suitable for identifying thin interbedded sand bodies. Tests with thin interbedded geological models show that the proposed method has high inversion accuracy and resolution for small sample data, and can identify sandstone and mudstone layers about one-thirtieth of the dominant wavelength thick. Tests on field data from the Lishui sag show that the inversion results of the proposed method have small relative error with respect to well-log data, and can identify thin interbedded sandstone layers about one-fifteenth of the dominant wavelength thick with small sample data.
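A minimal sketch of how such a space-variant objective could be assembled (an assumed form for illustration only; the function names, the top-level term weighting, and the use of a normalized zero-lag cross-correlation are illustrative choices, not the authors' exact formulation):

```python
# Space-variant objective sketch: inverse-distance weights control how strongly
# nearby training-sample CDPs, the low-frequency prior model, and the seismic
# data constrain the inversion at a target CDP.
import numpy as np

def idw_weights(target_xy, sample_xy, power=2.0, eps=1e-6):
    """Inverse-distance weights of training-sample CDPs relative to the target CDP."""
    d = np.linalg.norm(sample_xy - target_xy, axis=1)
    w = 1.0 / (d ** power + eps)
    return w / w.sum()

def zero_lag_xcorr(a, b):
    """Normalized zero-delay cross-correlation between two model/trace vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def space_variant_objective(model, synth, observed, prior, sample_models, weights,
                            lam_data=1.0, lam_prior=0.1):
    """F-norm data misfit + low-frequency prior misfit + IDW-weighted
    dissimilarity (1 - zero-lag cross-correlation) to nearby training models."""
    data_term = lam_data * np.linalg.norm(synth - observed) ** 2
    prior_term = lam_prior * np.linalg.norm(model - prior) ** 2
    sample_term = sum(w * (1.0 - zero_lag_xcorr(model, m))
                      for w, m in zip(weights, sample_models))
    return data_term + prior_term + sample_term

# example: weights for a target CDP at (10, 0) given three training-sample CDPs
print(idw_weights(np.array([10.0, 0.0]),
                  np.array([[0.0, 0.0], [8.0, 0.0], [30.0, 0.0]])).round(3))
```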
Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, material identification is a challenging task, especially when the characteristic of the material is highly nonlinear in nature, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies to nonuniformly sample observational data. We also investigate different approaches to enforce Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples that span linear elastic and hyperelastic material space. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
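A minimal PINN-style parameter-identification sketch in PyTorch (an assumed toy setup, not the paper's code): a 1-D linear elastic bar governed by E u''(x) + f(x) = 0 with f = 1 and u(0) = 0, where the Dirichlet BC is enforced as a hard constraint by construction and the unknown modulus E is trained jointly with the displacement network from sparse observations.

```python
import torch

torch.manual_seed(0)
E_true = 2.0
x_obs = torch.linspace(0.05, 1.0, 20).unsqueeze(1)
u_obs = (x_obs - 0.5 * x_obs**2) / E_true            # analytic solution for f = 1, u'(1) = 0

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
log_E = torch.nn.Parameter(torch.tensor(0.0))        # unknown material parameter (log scale)
opt = torch.optim.Adam(list(net.parameters()) + [log_E], lr=1e-3)

x_col = torch.rand(100, 1, requires_grad=True)       # collocation points for the PDE residual

def u(x):
    return x * net(x)                                 # hard Dirichlet BC: u(0) = 0

for it in range(5000):
    opt.zero_grad()
    u_c = u(x_col)
    du = torch.autograd.grad(u_c, x_col, torch.ones_like(u_c), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_col, torch.ones_like(du), create_graph=True)[0]
    pde_res = torch.exp(log_E) * d2u + 1.0            # residual of E*u'' + f = 0
    loss = ((u(x_obs) - u_obs) ** 2).mean() + (pde_res ** 2).mean()
    loss.backward()
    opt.step()

print("identified E:", torch.exp(log_E).item())       # should move toward the true value 2.0
```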
In this paper, the consensus problem with position sampled data for second-order multi-agent systems is investigated. The interaction topology among the agents is depicted by a directed graph. Full-order and reduced-order observers with position sampled data are proposed, from which two kinds of sampled-data-based consensus protocols are constructed. With the proposed sampled-data protocols, the consensus convergence analysis of a continuous-time multi-agent system is equivalently transformed into that of a discrete-time system. Then, by using matrix theory and a sampled control analysis method, some sufficient and necessary consensus conditions based on the coupling parameters, the spectrum of the Laplacian matrix, and the sampling period are obtained. As the sampling period tends to zero, the established necessary and sufficient conditions degenerate to those of the continuous-time protocol case, which are consistent with the existing results for the continuous-time case. Finally, the effectiveness of the established results is illustrated by a simple simulation example.
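A simulation sketch of the sampled-data setting (illustrative only: a generic position-only protocol with a finite-difference velocity surrogate stands in for the paper's observer-based protocols, and the graph, gains, and sampling period are example values):

```python
import numpy as np

# directed chain: agent i+1 measures agent i's sampled position (agent 0 is the root)
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

h = 0.1              # sampling period
k1, k2 = 1.0, 1.5    # coupling gains (example values)
steps = 400

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 4)     # positions
v = rng.uniform(-1.0, 1.0, 4)     # velocities (not measured by the protocol)
x_prev = x.copy()                 # previous sampled positions

for k in range(steps):
    # velocity surrogate built from two consecutive position samples
    v_est = (x - x_prev) / h if k > 0 else np.zeros(4)
    u = -k1 * (L @ x) - k2 * (L @ v_est)   # control held constant over [kh, (k+1)h)
    x_prev = x.copy()
    # exact zero-order-hold integration of the double integrators over one period
    x = x + h * v + 0.5 * h**2 * u
    v = v + h * u

# spreads should shrink toward zero when the gains and sampling period satisfy
# the consensus conditions
print("position spread:", x.max() - x.min(), " velocity spread:", v.max() - v.min())
```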
Data from the 2013 Canadian Tobacco, Alcohol and Drugs Survey and two other surveys are used to determine the effects of cannabis use on self-reported physical and mental health. Daily or almost daily marijuana use is shown to be detrimental to both measures of health for some age groups but not all. The age-group-specific effects depend on gender: males and females respond differently to cannabis use. The health costs of regularly using cannabis are significant, but they are much smaller than those associated with tobacco use. These costs are attributed to both the presence of delta-9-tetrahydrocannabinol and the fact that smoking cannabis is itself a health hazard because of the toxic properties of the smoke ingested. Cannabis use is costlier for regular smokers, and first use below the age of 15 or 20, as well as being a former user, leads to reduced physical and mental capacities which are permanent. These results strongly suggest that the legalization of marijuana be accompanied by educational programs, counseling services, and a delivery system that minimizes juvenile and young adult usage.
In this paper, the authors consider the distributed adaptive identification problem over sensor networks using sampled data, where the dynamics of each sensor is described by a stochastic differential equation. By minimizing a local objective function at sampling time instants, the authors propose an online distributed least squares algorithm based on sampled data. A cooperative non-persistent excitation condition is introduced, under which the convergence results of the proposed algorithm are established by properly choosing the sampling time interval. An upper bound on the accumulated regret of the adaptive predictor is also provided. Finally, the authors demonstrate the cooperative effect of multiple sensors in the estimation of unknown parameters by computer simulations.
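A sketch of a diffusion-style distributed least-squares estimator over a ring network (an illustrative stand-in, not the authors' algorithm, which is derived from the sampled SDE dynamics and whose analysis only requires a cooperative, non-persistent excitation condition rather than locally exciting regressors):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.0, -2.0, 0.5])
n_sensors, dim = 5, 3

# ring network: every sensor averages with itself and its two neighbours
W = np.zeros((n_sensors, n_sensors))
for i in range(n_sensors):
    W[i, i] = W[i, (i - 1) % n_sensors] = W[i, (i + 1) % n_sensors] = 1.0 / 3.0

theta_hat = np.zeros((n_sensors, dim))
P = np.stack([100.0 * np.eye(dim) for _ in range(n_sensors)])   # per-sensor RLS covariance

for k in range(300):                                # sampling time instants
    for i in range(n_sensors):                      # local recursive LS step
        phi = rng.normal(size=dim)                  # sampled regressor
        y = phi @ theta_true + 0.1 * rng.normal()   # sampled noisy observation
        Pphi = P[i] @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        theta_hat[i] += gain * (y - phi @ theta_hat[i])
        P[i] -= np.outer(gain, Pphi)
    theta_hat = W @ theta_hat                       # diffusion: combine neighbours' estimates

print("max estimation error over sensors:", np.abs(theta_hat - theta_true).max())
```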
The basis of accurate mineral resource estimates is a geological model which replicates the nature and style of the orebody. Key inputs into the generation of a good geological model are the sample data and mapping information. The Obuasi Mine sample data, with a lot of legacy issues, were subjected to a robust validation process and integrated with mapping information to generate an accurate geological orebody model for mineral resource estimation in Block 8 Lower. Validation of the sample data focused on replacing missing collar coordinates, replacing missing assays, correcting the magnetic declination that was used to convert the downhole surveys from true to magnetic, fixing missing lithology, and finally assigning confidence numbers to all the sample data. The missing coordinates which were replaced ensured that the sample data plotted at their correct location in space as intended from the planning stage. Magnetic declination data, which had been kept constant throughout all the years even though it changes every year, was also corrected in the validation project. The corrected magnetic declination ensured that the drillholes were plotted on their accurate trajectory as per the planned azimuth and also reflected the true position of the intercepted mineralized fissure(s), which was previously not the case and had marked a major blot in the modelling of the Obuasi orebody. The incorporation of mapped data with the validated sample data in the wireframes resulted in a better interpretation of the orebody. The updated mineral resource, generated by domaining quartz from the sulphides and compared with the old resource, showed that the sulphide tonnes in the old resource estimates were overestimated by 1% and the grade overestimated by 8.5%.
The identification of nonlinear systems with multiple sampling rates is a difficult task. The motivation of this paper is to study the parameter estimation problem of Hammerstein systems with dead-zone characteristics by using dual-rate sampled data. Firstly, the auxiliary model identification principle is used to estimate the unmeasurable variables, and a recursive estimation algorithm is proposed to identify the parameters of the static nonlinear model with the dead-zone function and the parameters of the dynamic linear system model. Then, the convergence of the proposed identification algorithm is analyzed by using the martingale convergence theorem. It is proved theoretically that the estimated parameters converge to the real values under the condition of continuous excitation. Finally, the validity of the proposed algorithm is demonstrated by the identification of dual-rate sampled nonlinear systems.
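A sketch of the problem setup under assumed notation (the dead-zone parameters, linear-subsystem orders, and rate ratio are illustrative): a dead-zone nonlinearity feeds a linear ARX subsystem, the input is available at every step, and the output is only sampled every q-th step; the paper's recursive algorithm estimates both parameter sets from such dual-rate data via an auxiliary model.

```python
import numpy as np

def dead_zone(u, m1=1.2, m2=0.8, d1=0.4, d2=0.3):
    """Asymmetric dead-zone: zero inside [-d2, d1], linear with slopes m1/m2 outside."""
    if u > d1:
        return m1 * (u - d1)
    if u < -d2:
        return m2 * (u + d2)
    return 0.0

rng = np.random.default_rng(0)
a, b = [0.6, -0.2], [1.0, 0.5]     # linear subsystem: y(t) = a1*y(t-1) + a2*y(t-2) + b1*x(t-1) + b2*x(t-2)
q = 3                              # output sampled every q-th input step (dual rate)
N = 30

u = rng.uniform(-2.0, 2.0, N)
x = np.array([dead_zone(ui) for ui in u])     # unmeasurable intermediate signal
y = np.zeros(N)
for t in range(2, N):
    y[t] = a[0]*y[t-1] + a[1]*y[t-2] + b[0]*x[t-1] + b[1]*x[t-2] + 0.01 * rng.normal()

fast_inputs = u                    # available at every step
slow_outputs = y[::q]              # only every q-th output is measured
print(len(fast_inputs), "inputs,", len(slow_outputs), "sampled outputs")
```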
Based on the multi-model principle, fuzzy identification for nonlinear systems with multirate sampled data is studied. Firstly, the nonlinear system with multirate sampled data can be represented as a nonlinear weighted combination of some linear models at multiple local working points. On this basis, the fuzzy model of the multirate sampled nonlinear system is built. The premise structure of the fuzzy model is determined by using fuzzy competitive learning, and the consequent parameters of the fuzzy model are estimated by a stochastic gradient descent algorithm. The convergence of the proposed identification algorithm is established by using the martingale theorem and related lemmas. The fuzzy model of the pH neutralization process of acid-base titration for hair quality detection is constructed to demonstrate the effectiveness of the proposed method.
The capability of accurately predicting mineralogical brittleness index (BI) from basic suites of well logs is desirable, as it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well log curves are not well correlated with BI, so correlation-based machine-learning methods are not able to derive highly accurate BI predictions from such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to individual variable errors to minimize the root mean square error (RMSE) between calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ≈ 0.056 and R² ≈ 0.790. At a sampling density of about one sample per 0.1 ft, BI is predicted with RMSE ≈ 0.008 and R² ≈ 0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ≈ 0.003 and R² ≈ 0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error of more than ±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well log data in surrounding wells lacking mineralogical measurements but with similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data, to improve the precision of brittleness index and fracability mapping spatially.
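A sketch of a TOB-style, correlation-free data-matching predictor (the top-k matching with inverse-error averaging and the Nelder-Mead weight optimizer are assumed details used for illustration, not necessarily the authors' exact choices):

```python
import numpy as np
from scipy.optimize import minimize

def tob_predict(weights, X_train, y_train, X_query, k=3, eps=1e-9):
    """Predict y for each query record from its k best-matching training records,
    where match quality is the weighted sum of squared variable errors."""
    preds = []
    for xq in X_query:
        sq_err = ((X_train - xq) ** 2 * weights).sum(axis=1)
        idx = np.argsort(sq_err)[:k]
        w = 1.0 / (sq_err[idx] + eps)
        preds.append(np.dot(w, y_train[idx]) / w.sum())
    return np.array(preds)

def rmse_of_weights(weights, X, y):
    """Leave-one-out RMSE used as the objective for the weight optimizer."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = tob_predict(np.abs(weights), X[mask], y[mask], X[i:i+1])
        errs.append(pred[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

# toy stand-in for normalized well-log variables (Gr, rho_b, Ns, Rs, Dt) and BI
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (60, 5))
y = 0.4 * X[:, 0] + 0.3 * X[:, 2] ** 2 + 0.3 * X[:, 4]

res = minimize(rmse_of_weights, np.ones(5), args=(X, y), method="Nelder-Mead")
print("optimized weights:", np.abs(res.x).round(3), " LOO RMSE:", round(res.fun, 4))
```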
Seismic data interpolation, especially irregularly sampled data interpolation, is a critical task for seismic processing and subsequent interpretation. Recently, with the development of machine learning and deep learning, convolutional neural networks (CNNs) have been applied to interpolate irregularly sampled seismic data. CNN-based approaches can address the apparent defects of traditional interpolation methods, such as low computational efficiency and difficulty in parameter selection. However, current CNN-based methods only consider the temporal and spatial features of irregularly sampled seismic data and fail to consider the frequency features of seismic data, i.e., the multi-scale features. To overcome these drawbacks, we propose a wavelet-based convolutional block attention deep learning (W-CBADL) network for irregularly sampled seismic data reconstruction. We first introduce the discrete wavelet transform (DWT) and the inverse wavelet transform (IWT) into the commonly used U-Net to account for the multi-scale features of irregularly sampled seismic data. Moreover, we adopt the convolutional block attention module (CBAM) to precisely restore sampled seismic traces, applying attention in both the channel and spatial dimensions. Finally, we apply the proposed W-CBADL model to synthetic and pre-stack field data to evaluate its validity and effectiveness. The results demonstrate that the proposed W-CBADL model reconstructs irregularly sampled seismic data more effectively and more efficiently than state-of-the-art CNN-based models.
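A minimal CBAM building block in PyTorch, one ingredient of the W-CBADL network described above (a standard implementation sketch; the DWT/IWT layers and the U-Net backbone are omitted):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, as in Woo et al. (2018)."""
    def __init__(self, channels, reduction=8, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # spatial attention: 7x7 conv over channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# example: attend over a batch of seismic patches embedded to 16 feature channels
feats = torch.randn(2, 16, 64, 64)
print(CBAM(16)(feats).shape)   # torch.Size([2, 16, 64, 64])
```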
The Fourier transform is the basis of the analysis. This paper presents a method that determines the profile of the inverted object in inverse scattering from a minimum of sampled data.
A new and useful method of technology economics, a parameter estimation method, is presented in this paper in light of the stability of the gravity center of an object. This method can deal with the fitting and forecasting of economic volumes and can greatly decrease the errors of the fitting and forecasting results. Moreover, the strict hypothetical conditions of the least squares method are not necessary in the presented method, which overcomes the shortcomings of the least squares method and expands the application of the data barycentre method. An application to steel consumption volume forecasting is presented, and the result of fitting and forecasting is shown to be satisfactory. From the comparison between the data barycentre forecasting method and the least squares method, we conclude that the fitting and forecasting results of the data barycentre method are more stable than those of the least squares regression forecasting method, and that the computation of the data barycentre forecasting method is simpler than that of the least squares method. As a result, the data barycentre method is convenient to use in technical economy.
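A sketch of one common form of the barycentre (centre-of-gravity) idea, given for illustration and not necessarily the paper's exact formulation: split the series into two halves, compute each half's barycentre, pass the trend line through the two barycentres, and compare with an ordinary least-squares fit.

```python
import numpy as np

t = np.arange(1, 13, dtype=float)                                    # periods (e.g. years)
y = 50.0 + 4.0 * t + np.random.default_rng(0).normal(0, 3, t.size)   # e.g. steel consumption

half = t.size // 2
g1 = (t[:half].mean(), y[:half].mean())       # barycentre of the first half of the data
g2 = (t[half:].mean(), y[half:].mean())       # barycentre of the second half
slope_bc = (g2[1] - g1[1]) / (g2[0] - g1[0])
intercept_bc = g1[1] - slope_bc * g1[0]

slope_ls, intercept_ls = np.polyfit(t, y, 1)  # ordinary least-squares reference

t_next = 13.0
print("barycentre forecast   :", intercept_bc + slope_bc * t_next)
print("least-squares forecast:", intercept_ls + slope_ls * t_next)
```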
The main aim of this work is to design a non-fragile sampled-data control (NFSDC) scheme for the asymptotic synchronization of interconnected coupled circuit systems (multi-agent systems, MASs). NFSDC is used to conduct synchronization analysis of the considered MASs in the presence of time-varying delays. By constructing suitable Lyapunov functions, sufficient conditions are derived in terms of linear matrix inequalities (LMIs) to ensure synchronization between the MAS leader and follower systems. Finally, two numerical examples are given to show the effectiveness of the proposed control scheme and the reduced conservatism of the proposed Lyapunov functions.
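A generic LMI feasibility sketch with CVXPY, illustrating the kind of condition such results reduce to (a plain Lyapunov inequality for a linear system; the paper's LMIs additionally involve the sampling-induced time-varying delays and the non-fragile gain perturbations):

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # example stable system matrix

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(2),                      # P positive definite
               A.T @ P + P @ A << -eps * np.eye(2)]       # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, "\nP =\n", P.value)
```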
In this paper, the authors consider a sparse parameter estimation problem in continuous-time linear stochastic regression models using sampled data. Based on the compressed sensing (CS) method, the authors propose a compressed least squares (LS) algorithm to deal with the challenge of parameter sparsity. At each sampling time instant, the proposed compressed LS algorithm first compresses the original high-dimensional regressor using a sensing matrix and obtains a low-dimensional LS estimate of the compressed unknown parameter. Then, the original high-dimensional sparse unknown parameter is recovered by a reconstruction method. By introducing a compressed excitation assumption and employing stochastic Lyapunov function and martingale estimation methods, the authors establish the performance analysis of the compressed LS algorithm under a condition on the sampling time interval, without using independence or stationarity conditions on the system signals. Finally, a simulation example is provided to verify the theoretical results by comparing the standard and the compressed LS algorithms for estimating a high-dimensional sparse unknown parameter.
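A simplified, batch-form sketch of the compressed LS idea (assumed construction with an orthonormal-row sensing matrix, i.i.d. regressors, and OMP as the reconstruction method; the paper treats the recursive sampled-data case under a compressed excitation condition):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, s, N = 100, 30, 5, 500             # ambient dim, compressed dim, sparsity, samples

theta = np.zeros(n)
theta[rng.choice(n, s, replace=False)] = rng.normal(0, 2, s)   # sparse unknown parameter

Q, _ = np.linalg.qr(rng.normal(size=(n, m)))
A = Q.T                                   # m x n sensing matrix with orthonormal rows

Phi = rng.normal(size=(N, n))             # high-dimensional regressors
y = Phi @ theta + 0.1 * rng.normal(size=N)
Psi = Phi @ A.T                           # compressed regressors (N x m)

beta_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # low-dimensional LS estimate

# reconstruction: find a sparse theta with A @ theta close to beta_hat
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s, fit_intercept=False)
omp.fit(A, beta_hat)
print("relative recovery error:", np.linalg.norm(omp.coef_ - theta) / np.linalg.norm(theta))
```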
To study the capability of artificial neural networks (ANNs) for battlefield target classification and the resulting classification performance, an on-the-spot experiment was carried out, in line with the characteristics of battlefield target acoustic and seismic signals, to acquire acoustic and seismic signals of a tank and a jeep using a special experiment system. Experimental data processed by the fast Fourier transform (FFT) were used to train the ANN to distinguish the two battlefield targets. The ANN classifier was implemented by a special program based on a modified back propagation (BP) algorithm. The ANN classifier has high correct identification rates for acoustic and seismic signals of battlefield targets and is suitable for the classification of battlefield targets. The modified BP algorithm eliminates the oscillations and local minima of the standard BP algorithm and enhances the convergence rate of the ANN.
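A sketch of the classification pipeline with synthetic stand-in signals (the original work used field-recorded tank and jeep signatures and a custom modified-BP network; here a standard scikit-learn MLP is used for illustration): FFT magnitude spectra serve as the features of a small feed-forward classifier.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs, n = 1024, 256      # sampling frequency (Hz) and window length

def synth_signal(f0):
    """Toy stand-in for a target signature: a dominant tone, one harmonic, noise."""
    t = np.arange(n) / fs
    return np.sin(2*np.pi*f0*t) + 0.5*np.sin(2*np.pi*2*f0*t) + 0.3*rng.normal(size=n)

# class 0 ("tank-like", lower dominant frequency) vs class 1 ("jeep-like", higher)
freqs = np.concatenate([rng.uniform(40, 60, 100), rng.uniform(90, 110, 100)])
X = np.array([np.abs(np.fft.rfft(synth_signal(f0)))[:64] for f0 in freqs])
X /= X.max()                                   # scale the FFT magnitude features
y = np.array([0] * 100 + [1] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```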
Aiming at the reliability analysis of small sample data or implicit structural functions, a novel structural reliability analysis model based on a support vector machine (SVM) and a neural network direct integration method (DNN) is proposed. Firstly, the SVM, with its good small-sample learning ability, is used to train the small sample data, fit the structural performance functions, and establish regular integration regions. Secondly, the DNN approximates the integrand to perform the multiple integration over the integration region. Finally, the structural reliability is obtained by the DNN. Numerical examples are investigated to demonstrate the effectiveness of the present method, which provides a feasible way for structural reliability analysis.
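An illustrative sketch of the surrogate-based reliability idea (toy performance function and input distributions; for simplicity the integration step is done by Monte Carlo on the surrogate, whereas the paper replaces it with a DNN-based direct numerical integration over a regular region):

```python
import numpy as np
from sklearn.svm import SVR

def g(x):
    """Toy performance function: failure when g(x) < 0."""
    return x[:, 0] * x[:, 1] - 6.5

rng = np.random.default_rng(0)
# small training sample (30 points) of the two random inputs
X_train = np.column_stack([rng.normal(3.0, 0.4, 30), rng.normal(3.0, 0.4, 30)])
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X_train, g(X_train))

# failure probability evaluated on the surrogate; the two estimates should be
# comparable when the surrogate fits the performance function well
X_mc = np.column_stack([rng.normal(3.0, 0.4, 100_000), rng.normal(3.0, 0.4, 100_000)])
pf_surrogate = np.mean(surrogate.predict(X_mc) < 0.0)
pf_direct = np.mean(g(X_mc) < 0.0)            # reference value on the true function
print("Pf (surrogate):", pf_surrogate, "  Pf (direct):", pf_direct)
```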
This paper introduces the basic viewpoints and characteristics of Bayesian statistics, which provide a theoretical basis for solving the small-sample problem of flight simulators using the Bayesian method. A series of formulas is derived to establish a Bayesian reliability modeling and evaluation model for flight simulation equipment. Two key problems of the Bayesian method are pointed out: obtaining the prior distribution of the Weibull parameters, and calculating the parameters' posterior distribution and estimates when no analytic solution exists; the corresponding solution schemes are proposed.
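A grid-approximation sketch of Bayesian reliability estimation for a Weibull lifetime model with a small sample (illustrative data and a flat prior over the grid are assumed; the paper addresses prior elicitation and posterior computation for the flight-simulator setting): the posterior over (shape, scale) has no closed form, so it is evaluated numerically and used to average the reliability R(t) = exp(-(t/scale)^shape).

```python
import numpy as np

t_fail = np.array([120.0, 150.0, 180.0, 210.0, 260.0])   # small sample of failure times (h)

shapes = np.linspace(0.5, 6.0, 200)                       # grid over the Weibull shape
scales = np.linspace(50.0, 600.0, 300)                    # grid over the Weibull scale
K, S = np.meshgrid(shapes, scales, indexing="ij")

loglik = np.zeros_like(K)                                 # Weibull log-likelihood on the grid
for t in t_fail:
    loglik += np.log(K / S) + (K - 1.0) * np.log(t / S) - (t / S) ** K

post = np.exp(loglik - loglik.max())                      # flat prior over the grid (assumed)
post /= post.sum()

t_mission = 100.0
R_grid = np.exp(-(t_mission / S) ** K)                    # reliability at the mission time
print("posterior mean reliability R(100 h):", float((post * R_grid).sum()))
```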
This paper investigates the global asymptotic stability and L₂-gain of robust H∞ control for switched nonlinear systems under sampled data. By considering the relationship between the sampling period and the dwell time, the cases of no switching and of one switching within a sampling interval are discussed, respectively. Firstly, a state feedback sampled-data controller is constructed by the backstepping method, and the switching becomes asynchronous switching if it happens within the sampling interval. Then, under the restrictions on the sampling period obtained by the average dwell time method, the closed-loop system is globally asymptotically stable and has L₂-gain. Finally, two numerical examples are provided to demonstrate the effectiveness of the proposed method.
This paper studies the sampled-data-based containment control problem of second-order multi-agent systems with intermittent communications, where velocity measurements for each agent are unavailable. A novel controller for second-order containment is put forward via intermittent sampled position data measurement. Several necessary and sufficient conditions are derived to achieve intermittent sampled containment control by analyzing the relationships among the control gains, the eigenvalues of the Laplacian matrix, the sampling period, and the communication width. Finally, several simulation examples are used to verify the correctness and effectiveness of the theoretical results.