Raman spectroscopy has found extensive use in monitoring and controlling cell culture processes. In this context, the prediction accuracy of Raman-based models is of paramount importance. However, models established with data from manually fed-batch cultures often exhibit poor performance in Raman-controlled cultures. Thus, there is a need for effective methods to rectify these models. The objective of this paper is to investigate the efficacy of the Kalman filter (KF) algorithm in correcting Raman-based models during cell culture. Initially, partial least squares (PLS) models for different components were constructed using data from manually fed-batch cultures, and the predictive performance of these models was compared. Subsequently, various correction methods, including the PLS-KF-KF method proposed in this study, were employed to refine the PLS models. Finally, a case study involving the auto-control of glucose concentration demonstrated the application of the optimal model correction method. The results indicated that the original PLS models exhibited differential performance between manually fed-batch cultures and Raman-controlled cultures. For glucose, the root mean square errors of prediction (RMSEP) for the manually fed-batch culture and the Raman-controlled culture were 0.23 and 0.40 g·L^(-1), respectively. With the implementation of model correction methods, there was a significant improvement in model performance within Raman-controlled cultures. The RMSEP for glucose from updating-PLS, KF-PLS, and PLS-KF-KF was 0.38, 0.36 and 0.17 g·L^(-1), respectively. Notably, the proposed PLS-KF-KF model correction method was found to be more effective and stable, playing a vital role in the automated nutrient feeding of cell cultures.
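The abstract does not spell out the PLS-KF-KF algorithm, so the sketch below is only a minimal illustration of the general idea: scikit-learn's PLSRegression supplies concentration estimates from synthetic spectra, and a scalar Kalman filter with an assumed random-walk process model smooths those estimates online. All data, noise levels and filter settings are assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's exact PLS-KF-KF scheme): a PLS model is
# trained on synthetic "fed-batch" spectra and its glucose predictions are
# then smoothed online with a scalar Kalman filter whose state is the glucose
# concentration. All data, noise levels and filter settings are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic calibration set: 100 spectra (200 channels) that depend linearly
# on glucose concentration plus noise.
glucose = rng.uniform(0.5, 6.0, size=100)
loadings = rng.normal(size=200)
spectra = np.outer(glucose, loadings) + rng.normal(scale=0.5, size=(100, 200))
pls = PLSRegression(n_components=5).fit(spectra, glucose)

# Online phase: the culture consumes glucose slowly; each new spectrum gives a
# noisy PLS "measurement" that a random-walk Kalman filter fuses over time.
x_hat, P = 5.0, 1.0            # state estimate and variance (assumed priors)
Q, R = 0.01, 0.15              # process / measurement noise variances (assumed)
true_c = 5.0
for t in range(20):
    true_c = max(true_c - 0.15 + rng.normal(scale=0.05), 0.1)
    spectrum = true_c * loadings + rng.normal(scale=0.5, size=200)
    z = float(pls.predict(spectrum[None, :])[0, 0])   # PLS prediction (g/L)

    P = P + Q                        # predict (random-walk process model)
    K = P / (P + R)                  # Kalman gain
    x_hat = x_hat + K * (z - x_hat)  # update with the PLS measurement
    P = (1.0 - K) * P
    print(f"t={t:2d}  true={true_c:4.2f}  PLS={z:4.2f}  KF={x_hat:4.2f}")
```

The filtered estimate, rather than the raw PLS output, would then drive the feeding controller; that is the design intent the abstract describes, even though the exact filter structure used in the paper may differ.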
This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
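As a rough illustration of why the modulation format matters for a photon-counting receiver, the following Monte Carlo sketch compares OOK and 2-PPM bit error rates on an idealized Poisson channel; the mean signal and background counts are assumed values, not measurements from the reported system.

```python
# Toy Monte Carlo comparison (not the paper's experiment): bit error rates of
# OOK and 2-PPM under an idealized Poisson photon-counting channel, as seen by
# a single-photon detector. Mean signal/background counts are assumed values.
import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
ns, nb = 4.0, 0.2          # mean signal and background photons per slot (assumed)

bits = rng.integers(0, 2, n_bits)

# OOK: signal photons arrive only when bit==1; decide by a count threshold.
ook_counts = rng.poisson(nb + ns * bits)
ook_hat = (ook_counts > 1).astype(int)          # simple fixed threshold
ber_ook = np.mean(ook_hat != bits)

# 2-PPM: the pulse occupies slot 0 or slot 1; decide by the larger count
# (ties broken randomly).
slot0 = rng.poisson(nb + ns * (bits == 0))
slot1 = rng.poisson(nb + ns * (bits == 1))
ppm_hat = np.where(slot0 == slot1, rng.integers(0, 2, n_bits),
                   (slot1 > slot0).astype(int))
ber_ppm = np.mean(ppm_hat != bits)

print(f"BER OOK   ~ {ber_ook:.4f}")
print(f"BER 2-PPM ~ {ber_ppm:.4f}")
```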
At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. It is possible to determine the orthometric or normal height by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for the local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also conducted for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°. The EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief and provides an accuracy of 1.5 cm on the 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is somewhat poorer and shows noticeable numerical fluctuation.
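The abstract does not detail the correction procedure, so the sketch below shows one common approach under stated assumptions: fit a low-order corrector surface to the differences between GNSS/leveling height anomalies and the global-model values at control points, then apply that surface to the model heights elsewhere. The coordinates and misfits are synthetic.

```python
# Illustrative sketch only: a local geoid-model correction can be built by
# fitting a low-order corrector surface to the misfits at control points and
# applying it at new points. All coordinates and values here are synthetic.
import numpy as np

rng = np.random.default_rng(2)

# Control points: longitude, latitude (deg) and the misfit
# d = zeta_GNSS/leveling - zeta_model (metres), simulated here.
lon = rng.uniform(24.0, 25.0, 40)
lat = rng.uniform(49.0, 50.0, 40)
d_true = 0.12 + 0.05 * (lon - 24.5) - 0.03 * (lat - 49.5)
d_obs = d_true + rng.normal(scale=0.01, size=40)      # ~1 cm noise (assumed)

# First-degree polynomial corrector surface: d ~ a0 + a1*dlon + a2*dlat
A = np.column_stack([np.ones_like(lon), lon - 24.5, lat - 49.5])
coef, *_ = np.linalg.lstsq(A, d_obs, rcond=None)

def corrected_height(zeta_model, lon_p, lat_p):
    """Model height anomaly plus the fitted local correction."""
    return zeta_model + coef[0] + coef[1] * (lon_p - 24.5) + coef[2] * (lat_p - 49.5)

print("fitted corrector coefficients:", np.round(coef, 4))
print("corrected value at (24.7, 49.6):", round(corrected_height(30.000, 24.7, 49.6), 3))
```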
An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied the plasma responses using Fitzpatrick's improved two-fluid model and the LAYER program, and calculated the error field penetration threshold for J-TEXT. In addition, we find that the island width increases slightly as the error field amplitude increases while the amplitude remains below the critical penetration value. However, the island width suddenly jumps to a large value once the shielding effect of the plasma against the error field disappears after penetration. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the m/n=2/1 penetration threshold scaling with density and temperature.
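Threshold scalings of this kind are usually extracted from a parameter scan by a log-log least-squares fit; the sketch below illustrates only that generic fitting step on synthetic scan data with made-up exponents, not J-TEXT results.

```python
# Generic sketch of extracting a threshold scaling law from a parameter scan:
# fit log(threshold) linearly in log(density) and log(temperature).
# The scan values and exponents below are synthetic, not J-TEXT results.
import numpy as np

rng = np.random.default_rng(3)
n_e = rng.uniform(0.5, 3.0, 30)        # density (assumed units, 1e19 m^-3)
T_e = rng.uniform(0.3, 2.0, 30)        # temperature (assumed units, keV)
b_th = 1.2 * n_e**0.6 * T_e**0.4 * np.exp(rng.normal(scale=0.05, size=30))

A = np.column_stack([np.ones_like(n_e), np.log(n_e), np.log(T_e)])
coef, *_ = np.linalg.lstsq(A, np.log(b_th), rcond=None)
print(f"b_th ~ n^{coef[1]:.2f} * T^{coef[2]:.2f} (prefactor {np.exp(coef[0]):.2f})")
```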
Numerical weather prediction (NWP) models have always presented large forecasting errors of surface wind speeds over regions with complex terrain. In this study, surface wind forecasts from an operational NWP model, the SMS-WARR (Shanghai Meteorological Service-WRF ADAS Rapid Refresh System), are analyzed to quantitatively reveal the relationships between the forecast surface wind speed errors and terrain features, with the intent of providing clues to better apply the NWP model to complex terrain regions. The terrain features are described by three parameters: the standard deviation of the model grid-scale orography, the terrain height error of the model, and the slope angle. The results show that the forecast bias has a unimodal distribution with changes in the standard deviation of orography. The minimum ME (the mean value of bias) is 1.2 m s^(-1) when the standard deviation is between 60 and 70 m. A positive correlation exists between bias and terrain height error, with the ME increasing by 10%−30% for every 200 m increase in terrain height error. The ME decreases by 65.6% when the slope angle increases from (0.5°−1.5°) to larger than 3.5° for uphill winds, but increases by 35.4% when the absolute value of the slope angle increases from (0.5°−1.5°) to (2.5°−3.5°) for downhill winds. Several sensitivity experiments are carried out with a model output statistics (MOS) calibration model for surface wind speeds, and the ME (RMSE) is reduced by 90% (30%) by introducing terrain parameters, demonstrating the value of this study.
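A MOS calibration of the kind mentioned above can be as simple as a multiple linear regression of observed wind speed on the raw forecast plus terrain predictors; the sketch below assumes that form and uses synthetic data rather than SMS-WARR output.

```python
# Sketch of a MOS-style calibration (assumed form, not the SMS-WARR setup):
# regress observed 10 m wind speed on the raw forecast plus terrain
# predictors, then use the regression to correct new forecasts. Data synthetic.
import numpy as np

rng = np.random.default_rng(4)
n = 500
fcst = rng.uniform(1.0, 12.0, n)              # raw forecast wind speed (m/s)
sd_orog = rng.uniform(0.0, 150.0, n)          # std. dev. of grid-scale orography (m)
dh_terr = rng.uniform(-300.0, 300.0, n)       # model terrain height error (m)
slope = rng.uniform(0.0, 4.0, n)              # slope angle (deg)

# Synthetic "observations": the forecast carries a terrain-dependent bias.
obs = 0.8 * fcst - 0.004 * sd_orog + 0.002 * dh_terr + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), fcst, sd_orog, dh_terr, slope])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)

corrected = X @ beta
rmse_raw = np.sqrt(np.mean((fcst - obs) ** 2))
rmse_mos = np.sqrt(np.mean((corrected - obs) ** 2))
print(f"RMSE raw forecast: {rmse_raw:.2f} m/s, after MOS with terrain: {rmse_mos:.2f} m/s")
```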
Standard automatic dependent surveillance broadcast (ADS-B) reception algorithms offer considerable performance at high signal-to-noise ratios (SNRs). However, the performance of ADS-B algorithms in applications can be problematic at low SNRs and in high-interference situations, as detecting and decoding techniques may not perform correctly in such circumstances. In addition, conventional error correction algorithms have limitations in their ability to correct errors in ADS-B messages, as the bit and confidence values may be declared inaccurately in the event of low SNRs and high interference. The principal goal of this paper is to deploy a Long Short-Term Memory (LSTM) recurrent neural network model for error correction in conjunction with a conventional algorithm. The data of various flights are collected and cleaned in an initial stage. The clean data are divided randomly into training and test sets. Next, the LSTM model is trained on the training dataset and then evaluated on the test dataset. The proposed model not only improves the ADS-B In packet error correction rate (PECR), but also enhances ADS-B In sensitivity. The performance evaluation results reveal that the proposed scheme is achievable and efficient for the avionics industry. It is worth noting that the proposed algorithm does not depend on the prerequisites of conventional algorithms.
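The paper's network architecture is not given in the abstract, so the following PyTorch sketch only illustrates the general setup under assumptions: an LSTM reads a 112-bit ADS-B message as a sequence of (bit, confidence) pairs and predicts per-position bit flips, trained here on random placeholder data.

```python
# Minimal PyTorch sketch (architecture and data are assumptions, not the
# paper's exact model): an LSTM reads a 112-bit ADS-B message as a sequence of
# (bit, confidence) pairs and predicts, per position, whether the bit should
# be flipped. The training data here is random placeholder data.
import torch
import torch.nn as nn

class BitFixLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-timestep flip/no-flip logit

    def forward(self, x):                  # x: (batch, 112, 2)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # (batch, 112)

model = BitFixLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 32 messages, each 112 (bit, confidence) pairs, with a
# random "flip needed" target per bit position.
x = torch.rand(32, 112, 2)
y = (torch.rand(32, 112) < 0.05).float()

for step in range(5):                      # tiny illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.4f}")
```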
Because estimating the parameters of the mixed additive and multiplicative (MAM) random error model with the weighted least squares iterative algorithm requires derivation of a complex weight array, we introduce a derivative-free cat swarm optimization for parameter estimation. We embed the Powell method, which uses conjugate-direction acceleration and does not need derivatives of the objective function, into the original cat swarm optimization to accelerate its convergence and improve its search accuracy. We use ordinary least squares, weighted least squares, the original cat swarm optimization, the particle swarm algorithm, and the improved cat swarm optimization to estimate the parameters of a straight-line fitting MAM model with lower nonlinearity and a DEM MAM model with higher nonlinearity, respectively. The experimental results show that the improved cat swarm optimization has faster convergence, higher search accuracy, and better stability than the original cat swarm optimization and the particle swarm algorithm. At the same time, the improved cat swarm optimization can obtain results consistent with the weighted least squares method using only the objective function, while avoiding repeated derivations of the complex weight array. The method in this paper provides a new idea for theoretical research on parameter estimation of MAM error models.
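The exact improved cat swarm optimization is not specified in the abstract; the sketch below only illustrates the hybridization idea under assumptions, embedding scipy's derivative-free Powell method as a local refiner inside a simple population-based search on a toy straight-line fitting objective.

```python
# Rough sketch of hybridizing a population-based search with the
# derivative-free Powell method (a generic hybrid, not the paper's exact
# improved cat swarm optimization). The objective is a toy straight-line fit;
# scipy.optimize.minimize(method="Powell") is the local accelerator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=50)   # synthetic observations

def objective(p):                      # sum of squared residuals of y = a*x + b
    a, b = p
    return np.sum((y - (a * x + b)) ** 2)

# Population-based global search step (stand-in for the "cat" population).
population = rng.uniform(-5, 5, size=(20, 2))
for _ in range(10):
    scores = np.array([objective(p) for p in population])
    best = population[np.argmin(scores)]
    # Powell local refinement of the current best individual.
    best = minimize(objective, best, method="Powell").x
    # Re-seed the population around the refined best (simple "seeking" move).
    population = best + rng.normal(scale=0.5, size=(20, 2))

print("estimated slope/intercept:", np.round(best, 3))
```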
Spatial linear features are often represented as a series of line segments joined by measured endpoints in surveying and geographic information science. There are not only measuring errors at the endpoints but also modeling errors between the line segments and the actual geographical features. This paper presents a Brownian bridge error model for line segments that combines both the modeling and measuring errors. First, the Brownian bridge is used to establish the position distribution of the actual geographic feature represented by the line segment. Second, an error propagation model constrained by the measuring error distribution of the endpoints is proposed. Third, a comprehensive error band of the line segment is constructed, in which both the modeling and measuring errors are contained. The proposed error model can be used to evaluate the overall accuracy and reliability of line segments as influenced by modeling and measuring errors, and it provides a comprehensive quality indicator for geospatial data.
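A minimal simulation of the modeling idea, with assumed noise parameters: the feature between two measured endpoints is treated as a straight segment plus a Brownian bridge, the endpoints carry Gaussian measurement noise, and the error band is read from pointwise quantiles of the simulated positions.

```python
# Illustrative sketch of the modeling idea (all parameters are assumptions):
# the true feature between two measured endpoints is a Brownian bridge added
# to the straight segment, endpoint measurement noise is drawn on top, and a
# pointwise error band is read off from simulation quantiles.
import numpy as np

rng = np.random.default_rng(6)
p0, p1 = np.array([0.0, 0.0]), np.array([100.0, 40.0])   # measured endpoints (m)
sigma_end = 0.5        # endpoint measurement std (assumed)
sigma_model = 2.0      # Brownian-bridge modeling scale (assumed)
n_steps, n_sim = 51, 2000
t = np.linspace(0.0, 1.0, n_steps)

def brownian_bridge(t, scale, rng):
    """Brownian bridge B(t) with B(0)=B(1)=0, scaled by `scale`."""
    dw = rng.normal(scale=1.0 / np.sqrt(t.size - 1), size=t.size - 1)
    w = np.concatenate([[0.0], np.cumsum(dw)])
    return scale * (w - t * w[-1])

samples = np.empty((n_sim, n_steps, 2))
for k in range(n_sim):
    a = p0 + rng.normal(scale=sigma_end, size=2)          # noisy endpoints
    b = p1 + rng.normal(scale=sigma_end, size=2)
    line = np.outer(1 - t, a) + np.outer(t, b)            # segment between them
    wiggle = np.column_stack([brownian_bridge(t, sigma_model, rng)
                              for _ in range(2)])         # modeling error (x, y)
    samples[k] = line + wiggle

# 95% pointwise error-band half-width around the nominal (error-free) segment.
nominal = np.outer(1 - t, p0) + np.outer(t, p1)
halfwidth = np.quantile(np.linalg.norm(samples - nominal, axis=2), 0.95, axis=0)
print("band half-width at start, midpoint, end (m):",
      np.round(halfwidth[[0, n_steps // 2, -1]], 2))
```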
Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
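The iteration can be illustrated on a toy scalar model, assuming a missing constant forcing plays the role of the ME: the unknown tendency term is updated from the mismatch between the 6-h forecast and the verifying analysis until the forecast error nearly vanishes. This is only a conceptual sketch, not the GRAPES-GFS implementation.

```python
# Toy illustration of the iteration idea (not the GRAPES-GFS implementation):
# a "model" with a missing constant tendency is integrated over 6 "hours";
# the unknown tendency (the ME) is updated iteratively from the mismatch
# between the forecast and the "analysis" at the end of the interval.
import numpy as np

def integrate(x0, tendency, me, n_steps=6, dt=1.0):
    """Forward integration of dx/dt = tendency(x) + me over the interval."""
    x = x0
    for _ in range(n_steps):
        x = x + dt * (tendency(x) + me)
    return x

true_tend = lambda x: -0.1 * x + 1.0      # "truth" dynamics
model_tend = lambda x: -0.1 * x           # imperfect model: missing forcing

x0 = 2.0
analysis = integrate(x0, true_tend, 0.0)  # verifying analysis at +6 h

me = 0.0                                  # first guess of the model error term
for k in range(20):
    forecast = integrate(x0, model_tend, me)
    me += (analysis - forecast) / 6.0     # spread the mismatch over the interval
    print(f"iter {k+1:2d}: forecast error = {analysis - forecast:+.4f}, ME = {me:.4f}")
```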
Machine learning models were used to improve the accuracy of the China Meteorological Administration Multisource Precipitation Analysis System (CMPAS) in complex terrain areas by combining rain gauge precipitation with topographic factors such as altitude, slope, slope direction, slope variability and surface roughness, and meteorological factors such as temperature and wind speed. The correction results demonstrated that the ensemble learning methods have a considerable corrective effect, and the three methods adopted in the study (Random Forest, AdaBoost, and Bagging) produced similar results. The mean bias between CMPAS and 85% of the automatic weather stations dropped by more than 30%. The plateau region displays the largest accuracy increase, the winter season shows the greatest error reduction, and the correction outcome improves as precipitation decreases. Additionally, the precision of heavy precipitation processes improved to some degree. For individual stations, the error fluctuation range of the revised CMPAS is significantly reduced.
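A compact scikit-learn sketch of the correction setup described above; the feature list and all data are placeholders, with a Random Forest regressor learning gauge precipitation from the CMPAS first guess plus topographic and meteorological predictors.

```python
# Sketch of the correction setup with scikit-learn (features and data are
# placeholders): a Random Forest learns gauge-observed precipitation from the
# CMPAS first guess plus topographic and meteorological predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
cmpas = rng.gamma(shape=1.2, scale=3.0, size=n)        # CMPAS precipitation (mm)
altitude = rng.uniform(200, 4500, n)
slope = rng.uniform(0, 40, n)
temperature = rng.uniform(-10, 30, n)
wind = rng.uniform(0, 15, n)
# Synthetic gauge truth: CMPAS with an altitude-dependent bias plus noise.
gauge = cmpas * (1.0 - 0.00005 * altitude) + rng.normal(scale=0.5, size=n)

X = np.column_stack([cmpas, altitude, slope, temperature, wind])
X_tr, X_te, y_tr, y_te = train_test_split(X, gauge, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
corrected = rf.predict(X_te)

bias_before = np.mean(X_te[:, 0] - y_te)
bias_after = np.mean(corrected - y_te)
print(f"mean bias before: {bias_before:+.2f} mm, after correction: {bias_after:+.2f} mm")
```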
With rapid economic development, the size of urban land in China is expanding dramatically. The Urban Growth Boundary (UGB) is an expandable spatial boundary for urban construction within a certain period, intended to control urban sprawl. Reasonable delineation of the UGB can inhibit the disorderly spread of urban space and guide the normal development of the city, and it is of practical significance for the construction of green urban space. This study uses GIS technology to establish a land construction suitability evaluation system for Nankang city, which is experiencing rapid urban expansion, and outlines a preliminary UGB under the future land use simulation (FLUS) model. At the same time, considering the coupled coordination of "Production-Living-Ecological Space" and building on the suitability evaluation, we revised the preliminary UGB by combining the advantages of the patch-generating land use simulation (PLUS) model and the convex hull model to delineate the final UGB. The results show that: 1) the comprehensive construction-suitability score decreases outward from the city center in a concentric pattern, with the city center having the highest suitability score; the convex hull model shows that the urban expansion of Nankang is of the outward-extension type, and future expansion will mainly occur in the northern part of the city; the PLUS model predicts an increase of 3359.97 hm^(2) of construction land in Nankang by 2035, of which 2022.97 hm^(2) is urban construction land. 2) The FLUS model has a prediction accuracy of 86.3% and delineates a preliminary UGB area of 9215.07 hm^(2). 3) We used the construction suitability evaluation, the PLUS simulation results, and the convex hull model predictions to revise the originally delineated UGB; the final UGB area is 8895.67 hm^(2), which can accommodate the future development of the study area. The delineation results can promote sustainable urban development, and the delineation methodology can provide a reference for the preparation of territorial spatial planning.
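The convex-hull step can be reproduced in a few lines with scipy; the sketch below uses synthetic coordinates of cells simulated as urban land and is only meant to show how a compact outer boundary is obtained when revising a UGB.

```python
# Small sketch of the convex-hull step only (inputs are synthetic): given the
# coordinates of cells predicted as urban construction land, the convex hull
# gives a compact outer boundary that can be intersected with suitability and
# simulation results when revising a UGB.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(8)
# Placeholder coordinates (km) of cells simulated as urban land by 2035,
# denser towards the "north" of a fictitious city.
urban_cells = np.column_stack([rng.normal(10, 3, 800),
                               rng.normal(12, 4, 800) + rng.uniform(0, 3, 800)])

hull = ConvexHull(urban_cells)
boundary = urban_cells[hull.vertices]          # ordered boundary vertices
# For a 2-D hull, scipy reports the enclosed area as `volume`.
print(f"hull area: {hull.volume:.1f} km^2, boundary vertices: {len(boundary)}")
```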
Using an error correction model, this paper conducts a co-integration analysis of the relationship between the per capita real consumption and per capita real disposable income of urban residents in Hunan Province from 1978 to 2009. The results show that there is a co-integration relationship between the per capita real consumption and per capita real disposable income of urban residents, and on this basis the corresponding error correction model is established. Finally, corresponding countermeasures and suggestions are put forward: broaden the income channels of urban residents, create a good consumption environment, and improve the social security system.
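Such an ECM is typically built with the Engle-Granger two-step procedure; the sketch below assumes that procedure and uses simulated series (not the Hunan data): a levels regression, a stationarity check of its residual, and a short-run regression containing the lagged residual as the error-correction term.

```python
# Sketch of the Engle-Granger two-step procedure behind such an ECM (the
# series below are simulated, not the Hunan data): regress consumption on
# income in levels, test the residual for stationarity, then regress the
# differenced consumption on differenced income and the lagged residual
# (the error-correction term).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(9)
n = 60
income = np.cumsum(rng.normal(0.5, 1.0, n)) + 50             # I(1) income series
consumption = 0.8 * income + rng.normal(scale=0.8, size=n)   # cointegrated with income

# Step 1: long-run (levels) regression and residual stationarity check.
long_run = sm.OLS(consumption, sm.add_constant(income)).fit()
resid = long_run.resid
print("ADF p-value of residual:", round(adfuller(resid)[1], 4))

# Step 2: short-run ECM on first differences with the lagged residual.
d_c = np.diff(consumption)
d_i = np.diff(income)
ect = resid[:-1]                                             # error-correction term
ecm = sm.OLS(d_c, sm.add_constant(np.column_stack([d_i, ect]))).fit()
print(ecm.params)   # [const, short-run income effect, adjustment coefficient]
```

A significantly negative adjustment coefficient on the error-correction term is what indicates that consumption returns towards its long-run relationship with income.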
Quantum error correction technology is an important method for eliminating errors during the operation of quantum computers. To mitigate the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension-mapping operations on surface codes. The scheme uses the topological properties of error correction codes to map the surface code to three dimensions. Compared with previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capabilities. By reducing the number of ancilla qubits required for error correction, the approach saves measurement space and reduces resource consumption. To improve decoding efficiency and handle the correlation between the surface code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which enables faster identification of the optimal syndrome and achieves better thresholds through conditional optimization. Compared with minimum-weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, which is 56% higher, enabling large-scale fault-tolerant quantum computation.
Quantum metrology provides a fundamental limit on the precision of multi-parameter estimation, called the Heisenberg limit, which has been achieved in noiseless quantum systems. However, for systems subject to noise, it is hard to achieve this limit since noise tends to destroy quantum coherence and entanglement. In this paper, a combined control scheme with feedback and quantum error correction (QEC) is proposed to achieve the Heisenberg limit in the presence of spontaneous emission, where the feedback control is used to protect a stabilizer code space containing an optimal probe state and an additional control is applied to eliminate the measurement incompatibility among three parameters. Although an ancilla system is necessary for the preparation of the optimal probe state, our scheme does not require the ancilla system to be noiseless. In addition, the control scheme in this paper has a low-dimensional code space. For the three components of a magnetic field, it can achieve the highest estimation precision with only a 2-dimensional code space, whereas at least a 4-dimensional code space is required in common optimal error correction protocols.
Assembly geometric error, as a part of the machine tool system errors, has a significant influence on the machining accuracy of multi-axis machine tools. It cannot be eliminated, owing to the error propagation of components in the assembly process, and it is generally non-uniformly distributed over the whole working space. A comprehensive expression model for assembly geometric error is therefore greatly helpful for the machining quality control of machine tools to meet the demand for machining accuracy in practice. However, the expression ranges of the standard quasi-static expression model for assembly geometric errors are far smaller than those needed in the whole working space of a multi-axis machine tool. To address this issue, a modeling methodology based on the Jacobian-Torsor model is proposed to describe the spatially distributed geometric errors. Firstly, an improved kinematic Jacobian-Torsor model is developed to describe the relative movements, such as translation and rotation, between assembly bodies. Furthermore, based on the proposed kinematic Jacobian-Torsor model, a spatial expression of geometric errors for the multi-axis machine tool is given. Simulation and experimental verification are carried out by investigating the spatial distribution of geometric errors on five four-axis machine tools. The results validate the effectiveness of the proposed kinematic Jacobian-Torsor model in dealing with the spatial expression of assembly geometric errors.
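As a much-simplified illustration of the underlying idea (not the paper's full Jacobian-Torsor formulation), the sketch below composes small-displacement 4x4 transforms, one per assembly joint with made-up deviation values, along a short chain to obtain the resulting tool-point error.

```python
# Minimal sketch of the underlying idea (not the paper's full model): small
# rotational/translational deviations of each assembly body are written as
# 4x4 small-displacement transforms and composed along the chain to get the
# resulting tool-point error. The deviation values are made up.
import numpy as np

def small_disp(dx, dy, dz, rx, ry, rz):
    """Homogeneous transform for small translations (m) and rotations (rad)."""
    T = np.eye(4)
    T[:3, :3] += np.array([[0.0, -rz,  ry],
                           [rz,  0.0, -rx],
                           [-ry, rx,  0.0]])       # first-order rotation
    T[:3, 3] = [dx, dy, dz]
    return T

# Example chain: bed -> column -> spindle, each with its own assembly deviation.
errors = [small_disp(20e-6, 0, 5e-6, 0, 10e-6, 0),     # bed-to-column joint
          small_disp(0, 15e-6, 0, 5e-6, 0, 8e-6)]      # column-to-spindle joint

nominal_tool = np.array([0.3, 0.1, 0.5, 1.0])          # nominal tool point (m)
T_total = np.eye(4)
for T in errors:
    T_total = T_total @ T                              # compose along the chain

deviation = (T_total @ nominal_tool - nominal_tool)[:3]
print("tool-point deviation (micrometres):", np.round(deviation * 1e6, 2))
```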
In existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to LSP results. This study aims to reveal how different proportions of random errors in conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original-factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass-filter-based LSP models are constructed by eliminating the random errors using a low-pass filter. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) the low-pass filter can effectively reduce the random errors in conditioning factors and thus decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and essentially the same; (5) the Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
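A one-dimensional illustration of the error-reduction step, with an assumed Butterworth filter order and cutoff and a synthetic factor profile: proportional random error is added to a conditioning factor and then removed with zero-phase low-pass filtering.

```python
# One-dimensional illustration (assumed filter settings, synthetic data): a
# conditioning factor is perturbed with 20% proportional random error and then
# smoothed with a Butterworth low-pass filter, mimicking the error-reduction
# step described in the abstract.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(10)
x = np.linspace(0, 10, 500)
factor = 200 + 50 * np.sin(0.8 * x) + 10 * np.sin(3.0 * x)   # "clean" factor profile

noisy = factor * (1 + 0.20 * rng.uniform(-1, 1, x.size))     # 20% random error

b, a = butter(N=4, Wn=0.08)          # 4th-order low-pass, normalized cutoff 0.08
filtered = filtfilt(b, a, noisy)     # zero-phase filtering

rms = lambda e: np.sqrt(np.mean(e ** 2))
print(f"RMS error vs clean factor: noisy {rms(noisy - factor):.2f}, "
      f"filtered {rms(filtered - factor):.2f}")
```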
Kinematic calibration is a reliable way to improve the accuracy of parallel manipulators, while the error model dramatically affects the accuracy, reliability, and stability of the identification results. In this paper, a comparison study on kinematic calibration for a 3-DOF parallel manipulator with three error models is presented to investigate the relative merits of different error modeling methods. The study considers the inverse-kinematic error model, which ignores all passive joint errors; the geometric-constraint error model, which is derived from special geometric constraints of the studied RPR-equivalent parallel manipulator; and the complete-minimal error model, which meets the complete, minimal, and continuous criteria. The comparison focuses on aspects such as modeling complexity, identification accuracy, the impact of noise uncertainty, and parameter identifiability. To facilitate a more intuitive comparison, simulations are conducted to draw conclusions on certain aspects, including accuracy, the influence of the S joint, identification with noise, and sensitivity indices. The simulations indicate that the complete-minimal error model exhibits the lowest residual values, and all error models demonstrate stability in the presence of noise. Thereafter, an experiment is conducted on a prototype using a laser tracker, providing further insights into the differences among the three error models. The results show that the residual errors of this machine tool are significantly improved with the identified parameters, and the complete-minimal error model can approach the measurements nearly 90% more closely than the inverse-kinematic error model. The findings pertaining to the modeling process, complexity, and limitations are also instructive for other parallel manipulators.
Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the suitability of fixing techniques may differ across error types. How to choose appropriate methods to fix different types of errors is still an unsolved problem. To this end, this paper first classifies the code errors made by Java novice programmers based on a Delphi analysis, and then compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT and GGNN) in fixing the different types of errors. The results indicate that the three models differ significantly in their accuracy on different error types, while the BERT-based error correction models show better correction potential for beginners' code.
In the version of this Article originally published online, there was an error in the schematics of Figures 2b and 2c. These errors have now been corrected in the original article.
An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) in past intervals. Given the analyses, the ME in each 6-h interval between two analyses can be obtained iteratively by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and their evolution, a systematic model error correction is constructed with a least-squares approach that uses the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial conditions and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated Northern Hemispheric equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. The correction therefore results in a more skillful forecast with lower mean bias, lower root-mean-square error, and a higher anomaly correlation coefficient.
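A highly simplified sketch of the online correction idea, under assumptions and with a placeholder ME archive: the systematic component of the archived 6-h MEs is estimated by least squares at every grid point and then added as an extra tendency during the forecast.

```python
# Highly simplified sketch of an online systematic correction (not the
# GRAPES-GFS implementation): the archived 6-h MEs at each grid point are
# reduced to a systematic component by least squares (here just a mean plus a
# linear trend), and that component is added as an extra tendency during the
# forecast. The "archive" is random placeholder data.
import numpy as np

rng = np.random.default_rng(11)
n_days, n_points = 150, 1000
day = np.arange(n_days)

# Placeholder ME archive: systematic part + random part, per grid point.
systematic = 0.2 + 0.001 * day[:, None] * rng.uniform(0.5, 1.5, n_points)
me_archive = systematic + rng.normal(scale=0.3, size=(n_days, n_points))

# Least-squares fit of ME(t) = a + b*t at every grid point simultaneously.
G = np.column_stack([np.ones(n_days), day])
coef, *_ = np.linalg.lstsq(G, me_archive, rcond=None)   # shape (2, n_points)

def me_correction(forecast_day):
    """Systematic ME estimate to be added as a tendency on a given day."""
    return coef[0] + coef[1] * forecast_day

state = rng.normal(size=n_points)                 # toy model state
tendency = -0.05 * state                          # toy (imperfect) model tendency
state_corrected = state + tendency + me_correction(forecast_day=160)
print("mean applied correction:", round(float(me_correction(160).mean()), 3))
print("corrected state mean:", round(float(state_corrected.mean()), 3))
```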