With the continued development of multiple Global Navigation Satellite Systems (GNSS) and the emergence of various frequencies, UnDifferenced and UnCombined (UDUC) data processing has become an increasingly attractive option. In this contribution, we provide an overview of the current status of UDUC GNSS data processing activities in China. These activities encompass the formulation of Precise Point Positioning (PPP) models and PPP-Real-Time Kinematic (PPP-RTK) models for processing single-station and multi-station GNSS data, respectively. Regarding single-station data processing, we discuss the advancements in PPP models, particularly the extension from a single system to multiple systems, and from dual frequencies to single and multiple frequencies. Additionally, we introduce the modified PPP model, which accounts for the time variation of receiver code biases, a departure from the conventional PPP model that typically assumes these biases to be time-constant. In the realm of multi-station PPP-RTK data processing, we introduce the ionosphere-weighted PPP-RTK model, which enhances the model strength by considering the spatial correlation of ionospheric delays. We also review the phase-only PPP-RTK model, designed to mitigate the impact of unmodelled code-related errors. Furthermore, we explore GLONASS PPP-RTK, achieved through the application of the integer-estimable model. For large-scale network data processing, we introduce the all-in-view PPP-RTK model, which alleviates the strict common-view requirement at all receivers. Moreover, we present the decentralized PPP-RTK data processing strategy, designed to improve computational efficiency. Overall, this work highlights the various advancements in UDUC GNSS data processing, providing insights into the state-of-the-art techniques employed in China to achieve precise GNSS applications.
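The UDUC models surveyed above all start from the raw code and phase observations themselves. As background (this is the standard textbook notation of the UDUC literature, not equations quoted from this overview), the undifferenced, uncombined observation equations for receiver r, satellite s and frequency j can be written as:

```latex
% UDUC code (p) and phase (phi) observation equations, textbook form:
% rho  = geometric range plus tropospheric delay
% iota = first-order slant ionospheric delay on frequency 1
% d, delta = code and phase hardware biases; a = integer ambiguity
p^{s}_{r,j}      = \rho^{s}_{r} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s})
                 + \mu_{j}\,\iota^{s}_{r} + d_{r,j} - d^{s}_{j} + e^{s}_{r,j}
\varphi^{s}_{r,j} = \rho^{s}_{r} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s})
                 - \mu_{j}\,\iota^{s}_{r} + \lambda_{j}\,a^{s}_{r,j}
                 + \delta_{r,j} - \delta^{s}_{j} + \epsilon^{s}_{r,j}
\mu_{j} = f_{1}^{2} / f_{j}^{2}
```

The conventional PPP model treats the code biases d as time-constant; the modified model mentioned above instead lets them vary in time.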
The current velocity observation of LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and is widely used in the field of ocean observation. Shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles. The two methods have their respective advantages and shortcomings: the shear method calculates the current shear more accurately but is less accurate in the absolute value of the current, while the inverse method calculates the absolute value of the current velocity more accurately but resolves the shear less accurately. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging", and proposes corresponding current calculation methods for the different types of problems found in several field observation data sets from the western Pacific, forming an independent LADCP data processing system. Comparison results show that the layering shear method achieves the same accuracy as the inverse method in the absolute value of current velocity, while retaining the advantage of the shear method in the calculation of the current shear.
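The "layering averaging" idea can be sketched as follows. This is a hypothetical minimal implementation for illustration only (the bin size, the anchoring to a reference velocity, and the handling of overlapping casts are assumptions, not the paper's exact algorithm): average the vertical shear within depth layers, then integrate the layer-averaged shear downward and anchor it.

```python
import numpy as np

def layered_shear_profile(z, u, bin_size=10.0, u_ref=0.0):
    """Illustrative 'layering averaging' sketch: average du/dz inside
    depth bins, then integrate downward and anchor the result to a
    reference velocity u_ref (e.g. from bottom tracking or GPS).
    z, u: 1-D arrays of depth (m, increasing) and velocity (m/s)."""
    shear = np.gradient(u, z)                  # raw du/dz at sample depths
    edges = np.arange(z.min(), z.max() + bin_size, bin_size)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # layer-averaged shear (bins are assumed to be populated)
    mean_shear = np.array([shear[(z >= lo) & (z < hi)].mean()
                           for lo, hi in zip(edges[:-1], edges[1:])])
    # integrate the layer-averaged shear back into a velocity profile
    profile = u_ref + np.cumsum(mean_shear * bin_size)
    return centers, profile
```

A uniformly sheared column (constant du/dz) is recovered exactly, which is the property the shear method is valued for; the absolute offset comes entirely from `u_ref`.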
The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system mainly consists of a plasma spark source (source level: 216 dB, main frequency: 750 Hz, frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer are constantly moving according to the towing configuration, accurately positioning the towed hydrophone array and performing the moveout correction of deep-towed multichannel seismic data before imaging are challenging. Initially, according to the characteristics of the system and the shape of the towed streamer in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected by using a polynomial curve fitting method. Then, a new data-processing workflow for Kuiyang-ST2000 system data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea; the results show that the profile has very high vertical and lateral resolutions (0.5 m and 8 m, respectively), which can provide full and accurate details of gas hydrate-related and geohazard sedimentary and structural features in the South China Sea.
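The polynomial-curve-fitting step can be illustrated in miniature. The sketch below is an assumption about the shape-smoothing idea only (the polynomial order and the along-streamer coordinate are invented for illustration; the travel-time positioning itself is not reproduced): fit a low-order polynomial to the travel-time-positioned hydrophone depths along the streamer to suppress positioning scatter.

```python
import numpy as np

def fit_streamer_shape(x, z, order=3):
    """Smooth travel-time-positioned hydrophone depths z(x) along the
    streamer with a low-order polynomial (order 3 is an assumption).
    x: along-streamer offsets (m); z: estimated depths (m)."""
    coef = np.polyfit(x, z, order)       # least-squares polynomial fit
    return np.polyval(coef, x), coef     # smoothed depths + coefficients
```

A smooth catenary-like streamer shape is well captured by such a fit, while channel-to-channel positioning noise is averaged out.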
With the observation of a series of ground-based laser interferometer gravitational wave (GW) detectors such as LIGO and Virgo, nearly 100 GW events have been detected successively. At present, all detected GW events were generated by the mergers of compact binary systems and are identified through the data processing of matched filtering. Based on matched filtering, we use the GW waveform of the Newtonian approximation (NA) model constructed by linearized theory to match the events detected by LIGO and injections to determine the coalescence time, and we fit the frequency curve to estimate the chirp masses of binary black holes (BBHs). The average chirp mass of our results is 22.05^{+6.31}_{-6.31} M_⊙, which is very close to the 23.80^{+4.83}_{-3.52} M_⊙ provided by GWOSC. In this way, we can analyze LIGO GW events and estimate the chirp masses of the BBHs. This work demonstrates the feasibility and accuracy of a low-order approximate model and data fitting in GW data processing. It is beneficial for further data processing and has certain research value for the preliminary application of GW data.
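The chirp-mass estimate from a fitted frequency curve rests on the Newtonian-order relation between the GW frequency f and its drift df/dt. A minimal sketch of the inversion (constants rounded; this is the standard leading-order relation, not the paper's code):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def chirp_mass(f, fdot):
    """Chirp mass (in solar masses) from the Newtonian-order frequency
    evolution df/dt = (96/5) pi^(8/3) (G*Mc/c^3)^(5/3) f^(11/3),
    solved for Mc. f: GW frequency (Hz); fdot: df/dt (Hz/s)."""
    k = (5.0 / 96.0) * np.pi ** (-8.0 / 3.0) * f ** (-11.0 / 3.0) * fdot
    return (c ** 3 / G) * k ** 0.6 / M_SUN
```

In practice f(t) is read off the fitted frequency curve near coalescence and fdot from its slope; the relation then yields Mc directly.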
Geodetic functional models, stochastic models, and model parameter estimation theory are fundamental for geodetic data processing. In the past five years, through unremitting efforts in the field of geodetic data processing, and driven by the application and practice of geodesy, Chinese scholars have made significant contributions in hypothesis testing theory, un-modeled errors, outlier detection and robust estimation, variance component estimation, complex least squares, and the treatment of ill-posed problems. Many functional models, such as the nonlinear adjustment model, the EIV model, and the mixed additive and multiplicative random error model, have also been constructed and improved. Geodetic data inversion is an important part of geodetic data processing, and Chinese scholars have done a great deal of work on it in the past five years, covering seismic slip distribution inversion, intelligent inversion algorithms, multi-source data joint inversion, water storage change, and satellite gravity inversion. This paper introduces the achievements of Chinese scholars in geodetic data processing over the past five years, analyzes the methods used and the problems solved, and looks forward to the unsolved problems in geodetic data processing and the directions that need further research in the future.
The High Precision Magnetometer (HPM) on board the China Seismo-Electromagnetic Satellite (CSES) allows highly accurate measurement of the geomagnetic field; it includes FGM (Fluxgate Magnetometer) and CDSM (Coupled Dark State Magnetometer) probes. This article introduces the main processing methods, algorithms, and procedures for the HPM data. First, the FGM and CDSM probes are calibrated according to ground sensor data. Then the FGM linear parameters are corrected in orbit by applying the absolute vector magnetic field correction algorithm based on CDSM data. At the same time, the magnetic interference of the satellite is eliminated according to ground-satellite magnetic test results. Finally, according to the characteristics of the magnetic field direction in the low-latitude region, the transformation matrix between the FGM probe and the star sensor is calibrated in orbit to determine the correct direction of the magnetic field. Comparing the magnetic field data of the CSES and Swarm satellites over five consecutive geomagnetically quiet days, the difference in measurements of the vector magnetic field is about 10 nT, which is within the uncertainty interval of geomagnetic disturbance.
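The "linear parameters" of a fluxgate correction can be illustrated with the usual nine-parameter model. The parameter names below are illustrative placeholders, not the HPM flight values, and the sketch stands in for the generic textbook correction rather than the CSES algorithm:

```python
import numpy as np

def calibrate_fgm(b_raw, offset, scale, ortho):
    """Generic linear fluxgate correction: remove per-axis offsets,
    apply per-axis scale factors, then undo the sensor-axis
    non-orthogonality with a coupling matrix (near-identity).
    b_raw: (3,) raw vector reading in nT."""
    b = (b_raw - offset) / scale      # offset and scale-factor correction
    return np.linalg.solve(ortho, b)  # non-orthogonality correction
```

In-orbit absolute calibration against a scalar reference (here, the CDSM total field) adjusts exactly these offset, scale and coupling terms.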
Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, for example in well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low due to the low magnetic field strength of NMR tools and the complex petrophysical properties of the detected samples. Suppressing the noise and highlighting the available NMR signal is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors to suppress noise characteristics, and they may not perform well because they do not adapt to different noisy signals. In this paper, we propose a data processing framework to improve the quality of low-field NMR echo data based on dictionary learning. Dictionary learning is a machine learning method based on redundancy and sparse representation theory. The available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with a number of numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of inversion results.
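The dictionary-learning idea can be sketched with a minimal MOD-style (Method of Optimal Directions) loop over 1-D echo patches. All parameters here (patch length, dictionary size, sparsity level, iteration count) are illustrative assumptions, and this simple alternation is a stand-in for whatever solver the paper actually uses:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick up to k atoms of D."""
    resid, support, coef = x.copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))
        if j in support:
            break
        support.append(j)
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
        resid = x - sub @ coef
    out = np.zeros(D.shape[1])
    out[support] = coef
    return out

def dictionary_denoise(signal, patch=16, n_atoms=12, k=2, iters=5):
    """Minimal dictionary-learning denoiser for a 1-D echo train:
    alternate OMP sparse coding with a least-squares (MOD) dictionary
    update, then overlap-average the sparsely reconstructed patches."""
    idx = np.arange(len(signal) - patch + 1)[:, None] + np.arange(patch)
    X = signal[idx].T                                  # columns are patches
    rng = np.random.default_rng(0)
    D = X[:, rng.choice(X.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    for _ in range(iters):
        A = np.column_stack([omp(D, x, k) for x in X.T])  # sparse codes
        D = X @ np.linalg.pinv(A)                         # MOD update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    A = np.column_stack([omp(D, x, k) for x in X.T])
    R = (D @ A).T                                         # denoised patches
    out = np.zeros_like(signal, dtype=float)
    hits = np.zeros_like(signal, dtype=float)
    for i, row in zip(idx[:, 0], R):                      # overlap-average
        out[i:i + patch] += row
        hits[i:i + patch] += 1.0
    return out / hits
```

Because each noisy patch is re-expressed with only a few learned atoms, uncorrelated noise that no atom can represent is suppressed, which is the adaptivity argument made above.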
The estimation of the type and parameters of a flow field is important for robotic fish. Existing estimation methods cannot meet the requirements of robotic fish due to the lack of prior knowledge or the under-fitting of the model. A processing method, including data preprocessing, feature extraction, feature selection, flow type classification, and flow field parameter estimation, is proposed based on the data from the pressure sensors of an artificial lateral line. A Probabilistic Neural Network (PNN) is used to classify the flow field type, and the Generalized Regression Neural Network (GRNN) proves the best choice for estimating the flow field parameters. In addition, several filtering methods for data preprocessing, three methods for feature selection, and nine parameter estimation methods are analyzed to choose the better ones. The proposed method is verified by experiments with both simulated and real data.
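The GRNN used for parameter estimation has a closed form, which is why it trains instantly on lateral-line data. Below is the standard Nadaraya-Watson form of a GRNN (the sensor features and the smoothing factor sigma used in the paper are not reproduced; sigma here is an arbitrary example value):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """GRNN prediction: each query output is the Gaussian-kernel
    weighted average of the training targets, with kernel width sigma.
    X_*: (n, d) feature arrays; y_train: (n,) targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)        # weighted average of targets
```

With a small sigma the network interpolates the training set; larger sigma trades that for smoothness, which is the only hyperparameter to tune.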
In this paper, a dynamic linear detecting method is presented, in which a non-linear coefficient NL% is introduced, the non-linearity of the data is estimated continuously and dynamically, and the measurement endpoint is determined when NL% exceeds a reference value (5%). The method is used for data processing and can solve the problem caused by substrate depletion following the redox reaction in a portable blood sugar analyzer. In contrast to the conventional end-point method, the dynamic linear detecting method is based on multipoint data collection. Experiments measuring calibration glucose solutions at 8 concentrations from 50 mg/dl to 400 mg/dl were carried out with the analyzer developed by our group. A linear regression curve was obtained, with a correlation of 0.9995 and a residual of 2.8080. The obtained correlation, residual, and computational workload are all suitable for a portable blood sugar analyzer.
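The stopping rule can be sketched as follows. The precise formula for NL% is not spelled out in the abstract, so the definition below (percentage deviation of the newest point from a line fitted to the points collected so far) is an assumption made for illustration:

```python
import numpy as np

def dynamic_linear_endpoint(t, y, nl_limit=5.0, min_pts=4):
    """Extend a least-squares line over the points collected so far
    and stop at the first point whose deviation from that line exceeds
    nl_limit percent, i.e. where substrate depletion bends the curve.
    Returns the index of the first point outside the linear regime,
    or len(t) if the whole record stays linear."""
    for n in range(min_pts, len(t)):
        slope, icept = np.polyfit(t[:n], y[:n], 1)   # fit collected points
        pred = slope * t[n] + icept                  # extrapolate one step
        nl = 100.0 * abs(y[n] - pred) / max(abs(pred), 1e-12)
        if nl > nl_limit:
            return n
    return len(t)
```

Unlike the end-point method, which reads a single late sample, this uses every collected point, so the regression slope is available before depletion corrupts the signal.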
A method of fast data processing has been developed to rapidly obtain the evolution of the electron density profile for a multichannel polarimeter-interferometer system (POLARIS) on J-TEXT. Compared with the Abel inversion method, the evolution of the density profile analyzed by this method can quickly offer important information. The method has the advantage of fast calculation, on the order of ten milliseconds per normal shot, and it is capable of processing data sampled at up to 1 MHz, which is helpful for studying the density sawtooth instability and disruptions between shots. During the flat-top phase of the plasma current in usual ohmic discharges on J-TEXT, the shape factor u ranges from 4 to 5. When a disruption happens, the density profile becomes peaked and the shape factor u typically decreases to 1.
Experimental Design and Data Processing is an important core professional basic course for food science majors. The course is both theoretical and practical, with many formulas and abstract, hard-to-understand content, and there are several problems in the teaching process, such as students' poor interest in learning, insufficient mastery of what they have learned, and an inability to combine theory with practice organically. By analyzing these existing problems, this paper puts forward reform measures for the teaching mode of Experimental Design and Data Processing using the intelligent teaching tools of the Superstar platform.
Based on previous site testing and satellite cloud data, Ali, Daocheng and Muztagh-ata have been selected as candidate sites for the Large Optical/Infrared Telescope (LOT) in China. We present the data collection, processing, management, and quality analysis for our site testing, which is based on similar hardware at each site. We analyze meteorological data, seeing, background light, cloud, and precipitable water vapor data from 2017 March 10 to 2019 March 10. We also investigated the relative usefulness of our all-sky camera data in comparison with the meteorological TERRA satellite data, based on a night-by-night comparison of the correlation and consistency between them. We find a 6% discrepancy arising from a wide range of factors.
In this paper, the latest progress, major achievements, and future plans of Chinese meteorological satellites and their core data processing techniques are discussed. First, the latest three FengYun (FY) meteorological satellites (FY-2H, FY-3D, and FY-4A) and their primary objectives are introduced. Second, the core image navigation techniques and accuracies of the FY meteorological satellites are elaborated, covering the latest geostationary (FY-2/4) and polar-orbit (FY-3) satellites. Third, the radiometric calibration techniques and accuracies of the reflective solar bands, thermal infrared bands, and passive microwave bands are discussed, together with the latest progress in real-time calibration with the onboard calibration system and validation with different methods, including vicarious China radiance calibration site calibration, pseudo-invariant calibration site calibration, deep convective cloud calibration, and lunar calibration. Fourth, recent progress in meteorological satellite data assimilation applications and quantitative science products is summarized at length, the main progress being the assimilation of microwave and hyperspectral infrared sensors in global and regional numerical weather prediction models. Lastly, the latest progress in radiative transfer, absorption, and scattering calculations for satellite remote sensing is summarized, and some important research using a new radiative transfer model is illustrated.
In this paper, I describe the methods that I used for the creation of Xlets, which are Java applets developed for the IDTV environment, and the methods for online data retrieval and processing that I utilized in these Xlets. The themes that I chose for the Xlets of the IDTV applications are Earthquake and Tsunami Early Warning; Recent Seismic Activity Report; and Emergency Services. The online data for the Recent Seismic Activity Report application are provided by the Kandilli Observatory and Earthquake Research Institute (KOERI) of Bogazici University in Istanbul, while the online data for the Earthquake and Tsunami Early Warning and the Emergency Services applications are provided by the Godoro website, which I used for storing (and retrieving via the Xlets) the earthquake and tsunami early warning simulation data and the DVB network subscriber data (such as name and address information) used in the Emergency Services (Police, Ambulance and Fire Department) application. I have focused on methodologies that use digital television as an efficient medium to convey timely and useful seismic warning information to the public, which forms the main research topic of this paper.
With the increasing variety of application software in meteorological satellite ground systems, how to provide reasonable hardware resources and improve software efficiency has received more and more attention. In this paper, a software classification method based on software operating characteristics is proposed. The method uses run-time resource consumption to describe the running characteristics of the software. First, principal component analysis (PCA) is used to reduce the dimension of the software running feature data and to interpret the software characteristic information. Then a modified K-means algorithm is used to classify the meteorological data processing software. Finally, combined with the results of the principal component analysis, the operating characteristics of each software class are interpreted, which serves as the basis for optimizing the allocation of hardware resources and improving the efficiency of software operation.
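The PCA-then-K-means pipeline can be sketched with plain NumPy. The farthest-point seeding below is a simple deterministic stand-in for the authors' unspecified K-means modification, and the feature/cluster counts are illustrative:

```python
import numpy as np

def pca_kmeans(X, n_components=2, k=3, iters=50):
    """Reduce run-time resource-usage features with PCA (via SVD of the
    centered data), then cluster the component scores with K-means
    using farthest-point seeding. X rows: one software package's
    run-time feature vector. Returns (labels, scores)."""
    Xc = X - X.mean(axis=0)                       # center features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T             # principal-component scores
    centers = scores[[0]]                         # farthest-point seeding
    for _ in range(k - 1):
        d = ((scores[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, scores[d.argmax()]])
    for _ in range(iters):                        # Lloyd iterations
        d = ((scores[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = scores[labels == j].mean(axis=0)
    return labels, scores
```

Interpreting the cluster centers back through the PCA loadings (rows of `Vt`) is what lets each class be described in terms of the original resource-consumption features.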
Data quality exerts an important influence over the application of grain big data, so data cleaning is a necessary and important task. In the MapReduce framework, parallel techniques are often used to execute data cleaning with high scalability, but due to the lack of effective design, there is a large amount of redundant computation in the data cleaning process, which results in lower performance. In this research, we found that some tasks are often carried out multiple times on the same input files, or require the same operation results, during data cleaning. For this problem, we propose a new optimization technique based on task merging. By merging simple or redundant computations on the same input files, the number of loop computations in MapReduce can be greatly reduced. Experiments show that, by this means, the overall system runtime is significantly reduced, which proves that the data cleaning process is optimized. In this paper, we optimized several modules of data cleaning, such as entity identification, inconsistent data restoration, and missing value filling. Experimental results show that the proposed method can increase the efficiency of grain big data cleaning.
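The task-merge idea reduces to computing each (input file, operation) pair once and reusing the result. The toy below illustrates that deduplication outside of any MapReduce runtime; the job tuples and operation names are invented for the example:

```python
def run_cleaning_jobs(jobs, operations):
    """Run cleaning jobs while merging redundant work: jobs that share
    the same (input file, operation) pair reuse one cached result,
    mimicking merged MapReduce tasks that avoid re-reading the same
    input. jobs: iterable of (job_id, path, op_name);
    operations: op_name -> function over the file path.
    Returns (results by job_id, number of computations performed)."""
    cache, results, computed = {}, {}, 0
    for job_id, path, op in jobs:
        key = (path, op)
        if key not in cache:          # first time this work is needed
            cache[key] = operations[op](path)
            computed += 1
        results[job_id] = cache[key]  # later jobs reuse the result
    return results, computed
```

In the paper's setting the saving multiplies: every merged task also saves a scan of the input file, which dominates MapReduce job cost.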
The slow speed of Next-Generation Sequencing (NGS) data analysis with the current industry-standard genome analysis pipeline, compared to the latest high-throughput sequencers such as the HiSeq X system, has been the major factor in the data backlog that limits the real-time use of genomic data for precision medicine. This study demonstrates the DRAGEN Bio-IT Processor as a potential candidate to remove this "Big Data bottleneck". DRAGEN™ accomplished variant calling for ~40× coverage WGS data in as little as ~30 minutes using a single command, achieving an over 50-fold data analysis speed-up while maintaining similar or better variant calling accuracy than the standard GATK Best Practices workflow. This systematic comparison provides a faster and more efficient NGS data analysis alternative for NGS-based healthcare industries and research institutes, helping them meet the requirements of precision-medicine-based healthcare.
This study introduces the site selection and data processing of GNSS receiver calibration networks. According to the design requirements and relevant specifications, the authors investigate the observation conditions of the potential sites and collect experimental GNSS observation data. TEQC is used to evaluate the data availability rate and multipath effects of the observation data to determine the appropriate site. After the construction and measurement of the calibration network, the baseline processing of the medium- and long-baseline network is conducted with GAMIT. The accuracy indexes, including NRMS, the difference between repeated baselines, and the closure of independent observation loops, all meet the specified criteria.
A novel technique for automatic seismic data processing using both integral and local features of seismograms is presented in this paper. Here, the term "integral feature" refers to a feature that depicts the shape of the whole seismogram. Unlike some previous efforts, which completely abandon the DIAL approach (signal detection, phase identification, association, and event localization) and seek to use envelope cross-correlation to detect seismic events directly, our technique keeps following the DIAL approach; however, in addition to detecting signals corresponding to individual seismic phases, it also detects continuous wave-trains and exploits their features for phase-type identification and signal association. More concrete ideas about how to define wave-trains and combine them with various detections, as well as how to measure and utilize their features in seismic data processing, are elaborated in the paper. This approach has been applied in our routine data processing for years, and test results for a 16-day period using data from the Xinjiang seismic station network are presented. The automatic processing results have simultaneously low false and missed event rates, showing that the new technique has good application prospects for improving automatic seismic data processing.
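The signal-detection stage of the DIAL chain is classically an STA/LTA trigger; the sketch below shows that standard detector (window lengths and threshold are illustrative defaults, not the network's settings, and the wave-train logic built on top of it in the paper is not reproduced):

```python
import numpy as np

def sta_lta_triggers(x, fs, sta=0.5, lta=10.0, on=3.0):
    """Classical STA/LTA detector: flag samples where the short-term
    average energy exceeds 'on' times the long-term average energy.
    x: waveform samples; fs: sampling rate (Hz); sta/lta: window
    lengths (s). Returns triggered sample indices."""
    ns, nl = int(sta * fs), int(lta * fs)
    e = x.astype(float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta_v = (csum[ns:] - csum[:-ns]) / ns            # short-term averages
    lta_v = (csum[nl:] - csum[:-nl]) / nl            # long-term averages
    n = min(len(sta_v), len(lta_v))                  # align ending samples
    ratio = sta_v[-n:] / np.maximum(lta_v[-n:], 1e-20)
    return np.nonzero(ratio > on)[0] + (len(x) - n)  # triggered indices
```

Individual triggers from this stage are what the paper then groups into wave-trains, whose integral shape feeds phase identification and association.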
In this paper, the use of a signal-to-noise ratio (SNR) is proposed for quantifying the goodness of selected processing techniques for thermographic images, such as differentiated absolute contrast, skewness- and kurtosis-based algorithms, pulsed phase transform, principal component analysis, and thermographic signal reconstruction. A new hybrid technique, PhAC (phase absolute contrast), is also applied; it combines three different processing techniques: phase absolute contrast, pulsed phase thermography, and thermographic signal reconstruction. The quality of the results is established on the basis of the SNR values assessed for the defects present in the analyzed specimen, which makes it possible to quantify and compare defect identification and the quality of the results of each employed technique.
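One common way to compute such a per-defect SNR is shown below. The abstract does not fix the formula, so this variant (defect-to-sound contrast normalized by the sound area's standard deviation, in dB) is an assumption chosen for illustration:

```python
import numpy as np

def snr_db(img, defect_mask, sound_mask):
    """SNR of one defect in a processed thermogram: contrast between
    the defect area and a sound (defect-free) reference area,
    normalized by the sound area's noise, expressed in dB.
    img: 2-D processed image; masks: boolean arrays of same shape."""
    d = img[defect_mask].mean()            # mean over the defect area
    s = img[sound_mask]                    # sound reference pixels
    return 20.0 * np.log10(abs(d - s.mean()) / s.std())
```

Computed per defect and per technique, such values give exactly the kind of comparison table the paper uses to rank the processing techniques.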
Funding: National Natural Science Foundation of China (No. 42022025).
Funding: National Natural Science Foundation of China under contract No. 42206033; Marine Geological Survey Program of China Geological Survey under contract No. DD20221706; Research Foundation of the National Engineering Research Center for Gas Hydrate Exploration and Development, Innovation Team Project, under contract No. 2022GMGSCXYF41003; Scientific Research Fund of the Second Institute of Oceanography, Ministry of Natural Resources, under contract No. JG2006.
Funding: Supported by the National Key R&D Program of China (No. 2016YFC0303900); the Laoshan Laboratory (Nos. MGQNLM-KF201807, LSKJ202203604); the National Natural Science Foundation of China (No. 42106072).
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2021YFC2203004), the National Natural Science Foundation of China (Grant No. 12147102), and the Sichuan Youth Science and Technology Innovation Research Team (Grant No. 21CXTD0038).
Abstract: With the observations of a series of ground-based laser-interferometer gravitational wave (GW) detectors such as LIGO and Virgo, nearly 100 GW events have been detected. At present, all detected GW events are generated by mergers of compact binary systems and are identified through matched-filtering data processing. Based on matched filtering, we use the GW waveform of the Newtonian approximation (NA) model constructed from linearized theory to match the events detected by LIGO and injections to determine the coalescence time, and we fit the frequency curve to estimate the chirp masses of the binary black holes (BBHs). The average chirp mass of our results is 22.05 ± 6.31 M_⊙, very close to the 23.80 (+4.83/−3.52) M_⊙ provided by GWOSC. This procedure allows us to analyze LIGO GW events and estimate BBH chirp masses. This work demonstrates the feasibility and accuracy of a low-order approximate model and data fitting in GW data processing; it is beneficial for further data processing and has research value for preliminary applications of GW data.
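The frequency-curve fit rests on the standard Newtonian-order chirp relation, which makes f^(-8/3) a linear function of time; fitting its slope yields the chirp mass. The sketch below shows that relation (the function name and sampling are illustrative; the physics is textbook):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
MSUN = 1.989e30      # solar mass, kg

def chirp_mass_from_frequency(t, f):
    """Estimate the chirp mass Mc from a measured frequency track f(t).

    Newtonian-order inspiral: f^(-8/3)(t) = (256/5) pi^(8/3)
    (G*Mc/c^3)^(5/3) (tc - t), so a straight-line fit of f^(-8/3)
    against t gives Mc from the (negative) slope.
    """
    y = f ** (-8.0 / 3.0)
    slope, _ = np.polyfit(t, y, 1)
    k = (-slope * 5.0 / 256.0) / np.pi ** (8.0 / 3.0)  # = (G*Mc/c^3)^(5/3)
    mc = k ** (3.0 / 5.0) * C ** 3 / G                  # kg
    return mc / MSUN                                    # solar masses
```

On a synthetic noise-free chirp the fit recovers the injected chirp mass essentially exactly.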
Funding: National Natural Science Foundation of China (No. 42174011).
Abstract: Geodetic functional models, stochastic models, and model parameter estimation theory are fundamental to geodetic data processing. In the past five years, through unremitting effort and guided by the application and practice of geodesy, Chinese scholars have made significant contributions to hypothesis testing theory, un-modeled errors, outlier detection and robust estimation, variance component estimation, complex least squares, and the treatment of ill-posed problems. Many functional models, such as the nonlinear adjustment model, the EIV model, and the mixed additive and multiplicative random error model, have also been constructed and improved. Geodetic data inversion is an important part of geodetic data processing, and Chinese scholars have done much work in this area over the past five years, for example on seismic slip distribution inversion, intelligent inversion algorithms, multi-source joint inversion, and water storage change and satellite gravity inversion. This paper introduces the achievements of Chinese scholars in geodetic data processing over the past five years, analyzes the methods used and the problems solved, and looks ahead to unsolved problems and the directions needing further research.
基金supported by National Key Research and Development Program of China from MOST (2016YFB0501503)
Abstract: The High Precision Magnetometer (HPM) on board the China Seismo-Electromagnetic Satellite (CSES) allows highly accurate measurement of the geomagnetic field; it includes FGM (Fluxgate Magnetometer) and CDSM (Coupled Dark State Magnetometer) probes. This article introduces the main processing methods, algorithms, and procedures for HPM data. First, the FGM and CDSM probes are calibrated against ground sensor data. Then the FGM linear parameters are corrected in orbit by applying an absolute vector magnetic field correction algorithm based on CDSM data, while the magnetic interference of the satellite is eliminated according to ground-satellite magnetic test results. Finally, according to the characteristics of the magnetic field direction at low latitudes, the transformation matrix between the FGM probe and the star sensor is calibrated in orbit to determine the correct direction of the magnetic field. Comparing the magnetic field data of the CSES and SWARM satellites over five consecutive geomagnetically quiet days, the difference in vector magnetic field measurements is about 10 nT, within the uncertainty interval of geomagnetic disturbance.
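The in-orbit correction of FGM linear parameters against the scalar CDSM reference can be illustrated with a deliberately simplified case: solving only for the three axis offsets (the real HPM correction also adjusts scale factors and non-orthogonality angles, which this sketch omits). With true field B and offset o, |b − o|² = F² linearizes to a system in (o, |o|²):

```python
import numpy as np

def estimate_fgm_offsets(b_fgm, f_scalar):
    """Estimate per-axis fluxgate offsets from a scalar reference.

    b_fgm:    (N, 3) vector readings (nT), assumed b = B_true + o.
    f_scalar: (N,)  scalar magnitudes |B_true| from the CDSM (nT).
    Expanding |b - o|^2 = F^2 gives  2 b.o - |o|^2 = |b|^2 - F^2,
    which is linear in the unknowns (o, c) with c = |o|^2.
    """
    rhs = np.sum(b_fgm ** 2, axis=1) - f_scalar ** 2
    A = np.hstack([2.0 * b_fgm, -np.ones((len(b_fgm), 1))])
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[:3]                       # estimated offsets (nT)
```

The linearization trick avoids iterative fitting; attitude variation along the orbit supplies the directional diversity that makes the system well-conditioned.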
基金supported by Science Foundation of China University of Petroleum,Beijing(Grant Number ZX20210024)Chinese Postdoctoral Science Foundation(Grant Number 2021M700172)+1 种基金The Strategic Cooperation Technology Projects of CNPC and CUP(Grant Number ZLZX2020-03)National Natural Science Foundation of China(Grant Number 42004105)
Abstract: Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, in applications such as well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low due to the low magnetic field strength of NMR tools and the complex petrophysical properties of the samples, so suppressing the noise and highlighting the usable NMR signal is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors, which may not perform well because they do not adapt to different noisy signals. In this paper, we propose a data-processing framework to improve the quality of low-field NMR echo data based on dictionary learning, a machine learning method built on redundancy and sparse representation theory. The available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of inversion results.
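The sparse-reconstruction step at the heart of such denoising can be sketched with orthogonal matching pursuit over a fixed dictionary of exponential decays (natural atoms for NMR echo trains, which are sums of exponential decays). Note the simplification: the paper *learns* the dictionary from the data, adapting the atoms to the measured signals, whereas this sketch uses a fixed T2 grid; the reconstruction step is the same. All names and grid choices here are illustrative.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse coding of y over D."""
    residual, idx = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[idx] = -1.0                     # never reselect an atom
        idx.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

# Fixed dictionary of normalized exponential decays on a T2 grid.
t = np.linspace(0.0, 1.0, 256)
T2 = np.logspace(-2, 0, 64)
D = np.exp(-t[:, None] / T2[None, :])
D /= np.linalg.norm(D, axis=0)

def denoise_echo(y, n_atoms=4):
    """Denoise an echo train by reconstructing it from its sparse code."""
    return D @ omp(D, y, n_atoms)
```

Because noise is nearly orthogonal to any small set of smooth atoms, the sparse reconstruction keeps the decaying signal and discards most of the noise.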
基金National Natural Science Foundation of China(NSFC)under Grant 62073017.
Abstract: Estimating the type and parameters of a flow field is important for robotic fish, and recent estimation methods cannot meet the robots' requirements due to a lack of prior knowledge or under-fitting of the model. A processing pipeline, including data preprocessing, feature extraction, feature selection, flow type classification, and flow field parameter estimation, is proposed based on data from the pressure sensors of an artificial lateral line. A Probabilistic Neural Network (PNN) is used to classify the flow field type, and the Generalized Regression Neural Network (GRNN) proves the best choice for estimating the flow field parameters. Several filtering methods for data preprocessing, three methods for feature selection, and nine parameter estimation methods are also analyzed to select the best ones. The proposed method is verified by experiments with both simulated and real data.
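A GRNN has a particularly compact form: it is Nadaraya-Watson kernel regression, answering each query with a Gaussian-weighted average of the training targets. A minimal sketch (the single smoothing parameter sigma is the only thing to tune; the function name is illustrative):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized Regression Neural Network prediction.

    Each query is answered by a Gaussian-kernel-weighted average of the
    training targets; sigma controls the kernel width (smoothing).
    """
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)
```

Because there is no iterative training, a GRNN is attractive when the pressure-sensor dataset is small and parameters must be estimated quickly.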
Abstract: In this paper, a dynamic linear detection method is used for data processing: a non-linearity coefficient NL% is introduced, the non-linearity of the data is estimated continuously and dynamically, and the linear region is determined by when NL% exceeds a reference value (5%). This solves the substrate-depletion problem that follows the redox reaction in a portable blood sugar analyzer. In contrast to the conventional end-point method, the dynamic linear detection method is based on multi-point data collection. Experiments measuring calibration glucose solutions at 8 concentrations from 50 mg/dl to 400 mg/dl were carried out with the analyzer developed by our group. The resulting linear regression curve had a correlation of 0.9995 and a residual of 2.8080. The correlation, residual, and computational workload obtained are all suitable for a portable blood sugar analyzer.
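The windowing logic can be sketched as follows. The specific definition of NL% used here (maximum linear-fit residual as a percentage of the signal span in the window) is an assumption for illustration only; the paper does not give its formula, only the 5% reference value:

```python
import numpy as np

def dynamic_linear_window(t, y, nl_limit=5.0, min_points=4):
    """Find the largest initial window of (t, y) that is still 'linear'.

    Points are added one at a time; the window ends when the assumed
    non-linearity coefficient NL% exceeds nl_limit (default 5%).
    Returns the number of points in the accepted window.
    """
    for n in range(min_points, len(t) + 1):
        slope, icept = np.polyfit(t[:n], y[:n], 1)
        resid = y[:n] - (slope * t[:n] + icept)
        span = np.ptp(y[:n]) or 1.0
        nl = 100.0 * np.max(np.abs(resid)) / span
        if nl > nl_limit:
            return n - 1                 # last window still within limit
    return len(t)
```

Growing the window until NL% trips captures the usable linear segment before substrate depletion bends the response curve.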
基金supported by the National Magnetic Confinement Fusion Science Program of China(Nos.2014GB106000,2014GB106002,and2014GB106003)National Natural Science Foundation of China(Nos.11275234,11375237 and 11505238)Scientific Research Grant of Hefei Science Center of CAS(No.2015SRG-HSC010)
Abstract: A fast data processing method has been developed to rapidly obtain the evolution of the electron density profile for the multichannel polarimeter-interferometer system (POLARIS) on J-TEXT. Compared with the Abel inversion method, the evolution of the density profile analyzed by this method can quickly provide important information. The method has the advantage of fast calculation, on the order of ten milliseconds per normal shot, and is capable of processing data sampled at up to 1 MHz, which is helpful for studying density sawtooth instability and disruptions between shots. During the flat-top plasma current of usual ohmic discharges on J-TEXT, the shape factor u ranges from 4 to 5. When a disruption occurs, the density profile becomes peaked and the shape factor u typically decreases to 1.
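One common way a single shape factor parameterizes a density profile is the form n(r) = n0 (1 − (r/a)²)^u, with larger u meaning a more peaked profile; chord-integrated interferometer data can then be matched by a fast grid search over u. The profile form, chord geometry, and grid below are all assumptions for illustration; the abstract reports only the shape factor itself:

```python
import numpy as np

def chord_integrals(n0, u, a, xs, nz=2001):
    """Line-integrated density for vertical chords at impact parameters xs,
    assuming the parametric profile n(r) = n0 * (1 - (r/a)^2)^u."""
    out = []
    for x in xs:
        zmax = np.sqrt(max(a ** 2 - x ** 2, 0.0))
        z = np.linspace(-zmax, zmax, nz)
        f = n0 * np.clip(1.0 - (x ** 2 + z ** 2) / a ** 2, 0.0, None) ** u
        out.append(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))  # trapezoid
    return np.array(out)

def fit_shape_factor(xs, measured, a, n0, u_grid=np.linspace(0.5, 8.0, 76)):
    """Grid-search the shape factor u that best matches measured chords."""
    errs = [np.sum((chord_integrals(n0, u, a, xs) - measured) ** 2)
            for u in u_grid]
    return u_grid[int(np.argmin(errs))]
```

A coarse grid over a single parameter is what makes millisecond-scale per-shot processing plausible, compared with a full Abel inversion.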
基金The foundation for Teaching Research Project of Hubei University of Technology in Hubei Province in 2020(grant number 2020017).
Abstract: Experimental Design and Data Processing is an important core professional basic course for food science majors. The course is both theoretical and practical; it contains many formulas and abstract content that is difficult to understand, and the teaching process suffers from problems such as students' poor interest in learning, insufficient mastery of the material, and an inability to combine theory with practice. By analyzing these existing problems, this paper puts forward reform measures for the teaching mode of Experimental Design and Data Processing using the intelligent teaching tools of the Superstar platform.
基金partly supported by the Operation,Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments,budgeted from the Ministry of Finance of China (MOF) and administered by the Chinese Academy of Sciences (CAS)supported by the National NaturalScience Foundation of China (Grant Nos.11573054,11703065,11603044 and 11873081)HRAJ acknowledges support from a CAS PIFI and UK STFC grant ST/R006598/1。
Abstract: Based on previous site testing and satellite cloud data, Ali, Daocheng, and Muztagh-ata have been selected as candidate sites for the Large Optical/Infrared Telescope (LOT) in China. We present the data collection, processing, management, and quality analysis of our site testing, which used similar hardware at each site. We analyze meteorological data, seeing, background light, cloud, and precipitable water vapor data from 2017 March 10 to 2019 March 10. We also investigated the usefulness of our all-sky camera data relative to the meteorological TERRA satellite data through a night-by-night comparison of the correlation and consistency between them, finding a 6% discrepancy arising from a wide range of factors.
基金funded by the National Key R&D Program of China(Grant Nos.2018YFB0504900 and 2015AA123700)
Abstract: In this paper, the latest progress, major achievements, and future plans of Chinese meteorological satellites and their core data processing techniques are discussed. First, the latest three FengYun (FY) meteorological satellites (FY-2H, FY-3D, and FY-4A) and their primary objectives are introduced. Second, the core image navigation techniques and accuracies of the FY meteorological satellites are elaborated, covering the latest geostationary (FY-2/4) and polar-orbit (FY-3) satellites. Third, the radiometric calibration techniques and accuracies of the reflective solar bands, thermal infrared bands, and passive microwave bands are discussed, illustrating the latest progress in real-time calibration with the onboard calibration system and in validation with different methods, including vicarious China radiance calibration site calibration, pseudo-invariant calibration site calibration, deep convective cloud calibration, and lunar calibration. Fourth, recent progress in meteorological satellite data assimilation applications and quantitative science products is summarized at length, the main advances being the assimilation of microwave and hyper-spectral infrared sensor data in global and regional numerical weather prediction models. Lastly, the latest progress in radiative transfer, absorption, and scattering calculations for satellite remote sensing is summarized, and important research using a new radiative transfer model is illustrated.
Abstract: In this paper, I describe the methods I used to create Xlets, which are Java applets developed for the IDTV environment, and the methods for online data retrieval and processing that I utilized in these Xlets. The themes I chose for the Xlets of the IDTV applications are Earthquake and Tsunami Early Warning; Recent Seismic Activity Report; and Emergency Services. The online data for the Recent Seismic Activity Report application are provided by the Kandilli Observatory and Earthquake Research Institute (KOERI) of Bogazici University in Istanbul, while the online data for the Earthquake and Tsunami Early Warning and the Emergency Services applications are provided by the Godoro website, which I used for storing (and retrieving via the Xlets) the earthquake and tsunami early-warning simulation data and the DVB network subscriber data (such as name and address information) used in the Emergency Services (Police, Ambulance, and Fire Department) application. I have focused on methodologies for using digital television as an efficient medium to convey timely and useful seismic warning information to the public, which forms the main research topic of this paper.
Abstract: With the increasing variety of application software in meteorological satellite ground systems, how to provide reasonable hardware resources and improve software efficiency is receiving more and more attention. In this paper, a software classification method based on operating characteristics is proposed. The method uses run-time resource consumption to describe the running characteristics of the software. First, principal component analysis (PCA) is used to reduce the dimensionality of the run-time feature data and to interpret the software characteristic information. Then a modified K-means algorithm is used to classify the meteorological data processing software. Finally, the PCA results are combined to explain the operating characteristics of each software class, which serves as the basis for optimizing the allocation of hardware resources and improving operational efficiency.
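The PCA-then-cluster pipeline can be sketched with standard building blocks. Note the substitution: the paper uses a *modified* K-means, for which plain Lloyd's algorithm stands in below; the feature matrix and cluster count are illustrative.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means (stand-in for the paper's modified variant)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)   # squared distances
        labels = d.argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Reducing dimensionality first both denoises the resource-consumption features and makes the resulting clusters interpretable in terms of the leading components.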
Abstract: Data quality strongly influences the application of grain big data, so data cleaning is necessary and important work. In the MapReduce framework, parallel techniques are often used to execute data cleaning with high scalability, but due to the lack of effective design there is considerable computational redundancy in the cleaning process, which lowers performance. In this research, we found that some tasks are often carried out multiple times on the same input files, or require the same intermediate results. For this problem, we propose a new optimization technique based on task merging. By merging simple or redundant computations on the same input files, the number of loop computations in MapReduce can be greatly reduced. Experiments show that the overall system runtime is significantly reduced, demonstrating that the data cleaning process is optimized. We optimized several data cleaning modules, such as entity identification, inconsistent-data restoration, and missing-value filling. Experimental results show that the proposed method increases the efficiency of grain big data cleaning.
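The essence of task merging is that a task keyed by (input file, operation) runs once and its result is shared, instead of being recomputed by every cleaning module that needs it. A minimal single-process sketch (class and method names are hypothetical; a real MapReduce implementation would merge jobs at the plan level):

```python
class TaskRunner:
    """Illustrative task-merge cache: identical (input, operation) pairs
    are executed once and reused by later cleaning modules."""

    def __init__(self):
        self.cache = {}
        self.executions = 0       # counts actual computations performed

    def run(self, input_id, op_name, func, data):
        key = (input_id, op_name)
        if key not in self.cache:
            self.executions += 1
            self.cache[key] = func(data)
        return self.cache[key]
```

In a cleaning pipeline where entity identification and missing-value filling both scan the same file, the second scan becomes a cache hit.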
Abstract: The slow speed of next-generation sequencing data analysis with the current industry-standard genome analysis pipeline, compared to the latest high-throughput sequencers such as the HiSeq X system, has been the major cause of the data backlog that limits the real-time use of genomic data for precision medicine. This study demonstrates the DRAGEN Bio-IT Processor as a potential candidate to remove this "big data bottleneck". DRAGEN™ accomplished variant calling for ~40× coverage WGS data in as little as ~30 minutes using a single command, achieving over 50-fold analysis speedup while maintaining similar or better variant calling accuracy than the standard GATK Best Practices workflow. This systematic comparison provides a faster and more efficient NGS data analysis alternative for NGS-based healthcare industries and research institutes to meet the requirements of precision-medicine-based healthcare.
基金Supported by Project of National Natural Science Foundation of China(No.41772346)
Abstract: This study introduces the site selection and data processing of GNSS receiver calibration networks. According to the design requirements and relevant specifications, the authors investigated the observation conditions of the potential sites and collected experimental GNSS observation data. TEQC was used to evaluate the data availability rate and multipath effects of the observations to determine the appropriate sites. After construction and measurement of the calibration network, baseline processing of the medium- and long-baseline network was conducted with GAMIT. The accuracy indices, including NRMS, differences between repeated baselines, and closures of independent observation loops, all meet the specified criteria.
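The data availability rate used in TEQC-style quality checks is essentially observed epochs as a fraction of the epochs expected for the session length and sampling interval. The exact formula below (including the +1 for the fencepost epoch) is an illustrative assumption:

```python
def availability_rate(observed_epochs, session_seconds, interval=30):
    """Observed epochs as a percentage of those expected for a session.

    interval: receiver sampling interval in seconds (30 s is a common
    RINEX default). Formula is an illustrative assumption.
    """
    expected = session_seconds // interval + 1   # fencepost: epoch at t=0
    return 100.0 * observed_epochs / expected
```

Sites whose availability rate or multipath statistics fall below the project thresholds would be rejected at the site-selection stage.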
Abstract: A novel technique for automatic seismic data processing using both integral and local features of seismograms is presented in this paper. Here, the term "integral feature" refers to a feature that depicts the shape of the whole seismogram. Unlike some previous efforts that completely abandon the DIAL approach (signal detection, phase identification, association, and event localization) and seek to detect seismic events directly by envelope cross-correlation, our technique keeps following the DIAL approach; in addition to detecting signals corresponding to individual seismic phases, it also detects continuous wave-trains and explores their features for phase-type identification and signal association. Concrete ideas on how to define wave-trains and combine them with the various detections, and on how to measure and utilize their features in seismic data processing, are elaborated in the paper. This approach has been applied in our routine data processing for years, and test results for a 16-day period using data from the Xinjiang seismic station network are presented. The automatic processing results simultaneously have fairly low false-event and missed-event rates, showing that the new technique has good application prospects for improving automatic seismic data processing.
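The "signal detection" step of the DIAL chain is classically done with an STA/LTA ratio, which the wave-train logic described above builds on. A compact sketch (window lengths and threshold are illustrative; the paper's detector is more elaborate):

```python
import numpy as np

def sta_lta(x, ns, nl):
    """Classic STA/LTA detector ratio for a seismogram x.

    ns, nl: short- and long-term window lengths in samples (ns < nl).
    Returns the ratio for each sample where both windows fit, aligned so
    that element j corresponds to windows ending at sample j + nl - 1.
    """
    e = x.astype(float) ** 2                      # instantaneous energy
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta = (csum[ns:] - csum[:-ns]) / ns           # short-term average
    lta = (csum[nl:] - csum[:-nl]) / nl           # long-term average
    return sta[nl - ns:] / np.maximum(lta, 1e-12)
```

A phase arrival drives the short-term average up long before the long-term average responds, so the ratio spikes at onset; a threshold on it yields the candidate detections that association then links into events.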
Abstract: In this paper, the use of a signal-to-noise ratio (SNR) is proposed to quantify the performance of selected processing techniques for thermographic images, such as differentiated absolute contrast, skewness- and kurtosis-based algorithms, pulsed phase transform, principal component analysis, and thermographic signal reconstruction. A new hybrid technique, PhAC (phase absolute contrast), is also applied; it combines three processing techniques: phase absolute contrast, pulsed phase thermography, and thermographic signal reconstruction. The quality of the results is established from the SNR values assessed for the defects present in the analyzed specimen, which enables their identification and the quality of each technique's results to be quantified and compared.
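A per-defect SNR of the kind used to compare such techniques can be computed as the defect-to-background contrast over the background noise, in dB. This common definition is an assumption; the paper ranks techniques by such per-defect values but the exact formula may differ:

```python
import numpy as np

def snr_db(image, defect_mask, background_mask):
    """SNR figure of merit for one defect in a processed thermogram.

    Contrast between the defect region and a sound (background) region,
    divided by the background standard deviation, expressed in dB.
    """
    s = image[defect_mask].mean()          # mean level over the defect
    b = image[background_mask].mean()      # mean level over sound area
    sigma = image[background_mask].std()   # background noise
    return 20.0 * np.log10(abs(s - b) / sigma)
```

Computing this per defect and per technique gives the comparison table the abstract describes: higher SNR means the technique separates that defect more cleanly from the sound material.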