This study presents the design of a modified attributed control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product, based on a time-truncated life test employing the Weibull distribution. The control chart developed supports the examination of mean-lifespan variation for a particular product in the manufacturing process. Three control limit levels are used: the warning control limit, inner control limit, and outer control limit. Together, they enhance the capability for variation detection. A genetic algorithm is used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters upon the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in detecting process shifts while using smaller samples on average than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real data set.
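As a hedged, stdlib-only illustration of the underlying mechanics (a basic np-type chart, not the authors' DS-GMDS design; all parameter values are invented): under a time-truncated Weibull life test, the probability that an item fails before truncation time t0 is p = 1 − exp(−(t0/λ)^m) for shape m and scale λ, and with sample size n and upper limit c the average run length is 1/P(D > c).

```python
import math

def weibull_fail_prob(t0, shape, scale):
    # P(T <= t0) for a Weibull lifetime truncated at t0
    return 1.0 - math.exp(-((t0 / scale) ** shape))

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def arl_np_chart(n, c, p):
    # Chart signals when the failure count D exceeds c; ARL = 1 / P(signal)
    p_signal = 1.0 - sum(binom_pmf(k, n, p) for k in range(c + 1))
    return 1.0 / p_signal

p_in = weibull_fail_prob(t0=500, shape=2.0, scale=1000)   # in-control scale
p_out = weibull_fail_prob(t0=500, shape=2.0, scale=700)   # mean life shifted down
arl_in = arl_np_chart(n=50, c=20, p=p_in)
arl_out = arl_np_chart(n=50, c=20, p=p_out)
```

A downward shift in mean life raises the failure probability, so the out-of-control ARL drops well below the in-control ARL, which is exactly the sensitivity the comparative study measures.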
The aim of this study is to investigate the impacts of the sampling strategies for landslide and non-landslide data on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very-low-susceptibility zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, providing a reference for subsequent researchers aiming to obtain a more reasonable LSM.
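As a toy sketch of one kind of non-landslide sampling strategy compared in such studies (an illustrative variant of buffer-based sampling, not the paper's exact implementation): draw negative samples only from candidate cells lying beyond a buffer distance from every known landslide point.

```python
import math
import random

def sample_non_landslides(landslides, candidates, buffer_dist, n, seed=0):
    """Keep candidate cells farther than buffer_dist from every known
    landslide, then draw n negative samples uniformly at random."""
    rng = random.Random(seed)
    eligible = [c for c in candidates
                if all(math.dist(c, s) > buffer_dist for s in landslides)]
    return rng.sample(eligible, n)

# hypothetical landslide points and a 10x10 candidate grid
slides = [(2.0, 2.0), (7.0, 7.0)]
grid = [(float(x), float(y)) for x in range(10) for y in range(10)]
negatives = sample_non_landslides(slides, grid, buffer_dist=2.0, n=20)
```

Varying `buffer_dist` (or sampling from low-susceptibility zones instead) is what produces the different negative-sample scenarios whose ROC curves are then compared.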
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques to explore the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. We then review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems, so improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point cloud image processing. In this work, we treat blockchain nodes as points in a point cloud and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Moreover, we compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks. The experimental findings show that the FPSblo model reduces transmission redundancy by 34.8% and the overload rate by 37.6%, demonstrating that FPSblo enhances the transmission capability of the P2P network in blockchain systems.
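For readers unfamiliar with the sampling primitive behind FPSblo, a minimal farthest point sampling sketch over 2-D points may help (blockchain nodes would instead be embedded by some network-distance coordinate; the points and cluster layout here are illustrative):

```python
import math
import random

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: pick a random start point, then repeatedly pick the
    point farthest from the already-selected set."""
    rng = random.Random(seed)
    selected = [rng.randrange(len(points))]
    # minimum distance from every point to the selected set
    dist = [math.dist(p, points[selected[0]]) for p in points]
    while len(selected) < k:
        nxt = max(range(len(points)), key=lambda i: dist[i])
        selected.append(nxt)
        for i, p in enumerate(points):
            d = math.dist(p, points[nxt])
            if d < dist[i]:
                dist[i] = d
    return selected

# three well-separated clusters; FPS picks one representative from each
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (-9, 5)]
idx = farthest_point_sampling(pts, 3)
```

The selected representatives are maximally spread out, which is the property FPSblo exploits to push messages toward the network edge quickly.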
Wideband spectrum sensing with a high-speed analog-to-digital converter (ADC) presents a challenge for practical systems. The Nyquist folding receiver (NYFR) is a promising scheme for achieving cost-effective real-time spectrum sensing, but it is subject to the complexity of processing the modulated outputs. Accordingly, a multipath NYFR architecture with a stepped sampling rate across the different paths is proposed. The number of digital channels for each path is designed based on the Chinese remainder theorem (CRT). The detectable frequency range is then divided into multiple frequency grids, and the Nyquist zone (NZ) of the input can be obtained by sensing these grids. Thus, high-precision parameter estimation is performed by utilizing the NYFR characteristics. Compared with existing methods, the proposed scheme overcomes the challenges of NZ estimation, information damage, heavy computation, low accuracy, and high false-alarm probability. Comparative simulation experiments verify the effectiveness of the proposed architecture.
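The CRT step can be illustrated with a generic stdlib-only solver (the channel counts and the tone index below are made-up values, not the paper's design): with pairwise-coprime numbers of digital channels per path, the aliased residues observed on each path determine the frequency-grid index uniquely within the product of the channel counts.

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli; returns
    the unique solution in [0, prod(moduli))."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m) = modular inverse (Py 3.8+)
    return x % M

channels = [5, 7, 8]                    # channels per path (pairwise coprime)
true_grid = 123                         # unknown frequency-grid index (< 5*7*8)
observed = [true_grid % m for m in channels]
recovered = crt(observed, channels)
```

Each path alone is ambiguous (it only sees the index modulo its channel count), but the combination pins the grid down exactly, which is the essence of the NZ-resolution step.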
BACKGROUND Pancreatic ductal leaks occurring as a complication of endoscopic ultrasonography-guided tissue sampling (EUS-TS) can manifest as acute pancreatitis. CASE SUMMARY A 63-year-old man presented with persistent abdominal pain and weight loss. Diagnosis: Laboratory findings revealed elevated carbohydrate antigen 19-9 (5920 U/mL) and carcinoembryonic antigen (23.7 ng/mL) levels. Magnetic resonance imaging of the pancreas revealed an approximately 3 cm ill-defined space-occupying lesion in the inferior aspect of the head, with severe encasement of the superior mesenteric artery. Pancreatic ductal adenocarcinoma was confirmed after pathological examination of specimens obtained by EUS-TS using the fanning method. Interventions and outcomes: The following day, the patient experienced severe abdominal pain with high amylase (265 U/L) and lipase (1173 U/L) levels. Computed tomography of the abdomen revealed edematous wall thickening of the second portion of the duodenum with adjacent fluid collections and a suspected leak from either the distal common bile duct or the main pancreatic duct in the head. Endoscopic retrograde cholangiopancreatography revealed dye leakage in the head of the main pancreatic duct. Therefore, a 5F, 7 cm linear plastic stent was deployed into the pancreatic duct to divert the pancreatic juice. The patient's abdominal pain improved immediately after pancreatic stent insertion, and amylase and lipase levels normalized within a week. Neoadjuvant chemotherapy was then initiated. CONCLUSION Using the fanning method in EUS-TS can inadvertently damage the pancreatic duct and may lead to clinically significant pancreatitis. Placing a pancreatic stent may immediately resolve acute pancreatitis and shorten the waiting time for curative therapy. When using the fanning method during EUS-TS, ductal structures should be avoided to prevent pancreatic ductal leakage.
Background Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario, in that poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods A novel method is presented for dense shape correspondence, whereby the spatial information transformed by neural networks is combined with the projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov–Hausdorff distance metric was used to select the points with the maximal alignment score, i.e., those displaying the most confidence. Results The effectiveness of the proposed approach was demonstrated on several benchmark datasets, where the results were superior to those of spectral- and spatial-based methods. Conclusions The proposed method provides a promising new approach to dense shape correspondence, addressing the key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
To enhance grain sampling efficiency, in this work a truss-type multi-rod grain sampling machine is designed and tested. The sampling machine primarily consists of a truss support mechanism, a main carriage mechanism, an auxiliary carriage mechanism, sampling rods, and a PLC controller. The movement of the main carriage on the truss, of the auxiliary carriage on the main carriage, and the vertical movement of the sampling rods on the auxiliary carriage are controlled through PLC programming. The machine accurately controls the position of the sampling rods, enabling random sampling with six rods to ensure comprehensive and random coverage. Sampling experiments showed that the machine, sampling simultaneously with six rods, achieves a sampling frequency of 38 times per hour; the round trip of the sampling rods takes 33 seconds per cycle, and the sampling length reaches 18 m. This study provides valuable insights for the design of multi-rod grain sampling machines.
Uniform linear array (ULA) radars are widely used in the collision-avoidance radar systems of small unmanned aerial vehicles (UAVs). In practice, a ULA's multi-target direction-of-arrival (DOA) estimation performance suffers significant degradation owing to the limited number of physical elements. To improve the underdetermined DOA estimation performance of a ULA radar mounted on a small UAV platform, we propose a nonuniform linear motion sampling underdetermined DOA estimation method. Using the motion of the UAV platform, the echo signal is sampled at different positions. Then, following the concept of the difference co-array, a virtual ULA with more array elements and a larger aperture is synthesized to increase the degrees of freedom (DOFs). Through position analysis of the original and motion arrays, we propose a nonuniform linear motion sampling method based on the ULA for determining the optimal DOFs. Without increasing the aperture of the physical array, the proposed method obtains a high DOF with fewer sampling runs and greatly improves the underdetermined DOA estimation performance of the ULA. Numerical simulations verify the superior performance of the proposed method.
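The co-array idea can be sketched in a few lines (element positions in half-wavelength units; the motion offsets below are illustrative, not the paper's optimized nonuniform pattern): sampling the same physical ULA at shifted platform positions fills in consecutive lags of the difference set, greatly enlarging the virtual array and hence the DOFs.

```python
def difference_coarray(positions):
    """Set of pairwise position differences; the longest run of
    consecutive integer lags around 0 gives the virtual-ULA size,
    which bounds the available DOFs."""
    diffs = {a - b for a in positions for b in positions}
    L = 0
    while L + 1 in diffs:
        L += 1
    return diffs, 2 * L + 1

# a 4-element physical ULA, and the same array after two platform moves
ula = [0, 1, 2, 3]
moved = sorted(set(ula) | {p + 6 for p in ula} | {p + 12 for p in ula})
size_static = difference_coarray(ula)[1]     # virtual ULA from the static array
size_motion = difference_coarray(moved)[1]   # virtual ULA after motion sampling
```

With this toy offset pattern the static 4-element array yields a 7-element virtual ULA, while the motion-sampled positions yield a 31-element one, without any change to the physical aperture per snapshot.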
A novel adaptive multiple dependent state sampling plan (AMDSSP) was designed to inspect products from a continuous manufacturing process under the accelerated life test (ALT), combining the concepts of the double sampling plan (DSP) and the multiple dependent state sampling plan (MDSSP). Under accelerated conditions, the lifetime of a product follows the Weibull distribution with a known shape parameter, while the scale parameter can be determined using the acceleration factor (AF). The Arrhenius model is used to estimate the AF when the degradation process is temperature-sensitive. An economic design of the proposed sampling plan was also considered for the ALT. A genetic algorithm with nonlinear optimization was used to estimate the optimal plan parameters minimizing the average sample number (ASN) and the total cost of inspection (TC) under both the producer's and consumer's risks. Numerical results are presented to support the AMDSSP for the ALT, and performance comparisons among the AMDSSP, the MDSSP, and a single sampling plan (SSP) for the ALT are discussed. The results indicated that the AMDSSP was more flexible and efficient in terms of ASN and TC than the MDSSP and SSP under accelerated conditions, and it also had a higher operating characteristic (OC) curve than both existing sampling plans. Two real datasets of electronic devices under the ALT at high temperatures demonstrate the practicality and usefulness of the proposed sampling plan.
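As a hedged numerical sketch of the building blocks (a plain single sampling plan rather than the AMDSSP; the activation energy, temperatures, and lifetimes are invented): the Arrhenius acceleration factor rescales the Weibull scale parameter at the stress temperature, and the acceptance probability then follows from a binomial tail.

```python
import math

def arrhenius_af(Ea_eV, T_use_K, T_stress_K, k_B=8.617e-5):
    # Acceleration factor between use and stress temperatures
    return math.exp((Ea_eV / k_B) * (1.0 / T_use_K - 1.0 / T_stress_K))

def oc_single_plan(n, c, t_test, shape, scale_use, af):
    """P(accept) = P(at most c failures among n units tested for
    t_test under accelerated stress)."""
    scale_stress = scale_use / af          # Weibull scale shrinks under stress
    p = 1.0 - math.exp(-((t_test / scale_stress) ** shape))
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

af = arrhenius_af(Ea_eV=0.7, T_use_K=298.0, T_stress_K=358.0)
pa_good = oc_single_plan(n=30, c=2, t_test=200, shape=1.5, scale_use=200000, af=af)
pa_bad = oc_single_plan(n=30, c=2, t_test=200, shape=1.5, scale_use=50000, af=af)
```

A well-chosen plan gives a high acceptance probability for the good lot and a low one for the bad lot; sweeping the lot quality traces out the OC curve that the paper uses to compare plans.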
In this work, we consider solving forward and inverse partial differential equations (PDEs) with sharp solutions using physics-informed neural networks (PINNs). In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only-based ASM, denoted ASM I. In this approach, we first train the neural network using a small number of residual points and divide the computational domain into a certain number of sub-domains; we then add new residual points in the sub-domain with the largest mean absolute residual, choosing the points with the largest absolute residual values within that sub-domain. We further develop a second type of ASM (denoted ASM II) based on both the residual and the gradient of the solution, since the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that the newly added residual points must have not only large residuals but also large gradients. To demonstrate the effectiveness of the present methods, we use both ASM I and ASM II to solve a number of PDEs, including the Burgers equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and the high-dimensional Poisson equation. The numerical results show that sharp solutions can be well approximated by either ASM I or ASM II, and both methods deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, the ASM II algorithm performs better in terms of accuracy, efficiency, and stability than ASM I, which means that the gradient of the solution improves the stability and efficiency of the adaptive sampling procedure as well as the accuracy of the solution. Furthermore, we employ a similar adaptive sampling technique for the data points of the boundary conditions (BCs) when the sharp region of the solution lies near the boundary. The results for the L-shaped Poisson problem indicate that the present method can significantly improve efficiency, stability, and accuracy.
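A library-free caricature of the ASM I selection step may help (the PDE residual is replaced by a stand-in function sharply peaked at x = 0.7; all values are illustrative): split the domain into sub-domains, find the one with the largest mean absolute residual, and add new points there.

```python
import random

def residual(x):
    # stand-in for a PDE residual, sharply peaked near x = 0.7
    return 1.0 / (1e-3 + (x - 0.7) ** 2)

def adaptive_add_points(points, n_sub, n_new, lo=0.0, hi=1.0, seed=0):
    """Pick the sub-domain with the largest mean |residual| over the
    current points and add n_new uniform points inside it."""
    rng = random.Random(seed)
    width = (hi - lo) / n_sub
    def sub_of(x):
        return min(int((x - lo) / width), n_sub - 1)
    sums = [0.0] * n_sub
    counts = [0] * n_sub
    for x in points:
        s = sub_of(x)
        sums[s] += abs(residual(x))
        counts[s] += 1
    means = [sums[i] / counts[i] if counts[i] else 0.0 for i in range(n_sub)]
    worst = max(range(n_sub), key=lambda i: means[i])
    new = [lo + (worst + rng.random()) * width for _ in range(n_new)]
    return worst, points + new

coarse = [i / 20 for i in range(21)]
worst, enriched = adaptive_add_points(coarse, n_sub=5, n_new=10)
```

The new points land in the sub-domain containing the sharp feature; ASM II would additionally filter candidates by the magnitude of the solution gradient.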
Birds maintain complex and intimate associations with a diverse community of microbes in their intestine. Multiple invasive and non-invasive sampling methods are used to characterize these communities and answer a multitude of eco-evolutionary questions related to host-gut microbiome symbioses. However, evidence on the comparability of these invasive and non-invasive sampling methods is sparse, with contradictory findings. By performing a network meta-analysis of 13 published bird gut microbiome studies, we here investigate the comparability of these invasive and non-invasive sampling methods. The two most used non-invasive sampling methods (cloacal swabs and fecal samples) showed significantly different results in alpha diversity and taxonomic relative abundances compared to invasive samples. Overall, non-invasive samples showed decreased alpha diversity compared to intestinal samples, but the alpha diversities of fecal samples were more comparable to those of intestinal samples. By contrast, cloacal swabs yielded significantly lower alpha diversities than intestinal samples, although the taxonomic relative abundances acquired from cloacal swabs were similar to those from intestinal samples. The phylogenetic status, diet, and degree of domestication of host birds also influenced the differences in microbiota characterization between invasive and non-invasive samples. Our results indicate a general pattern of microbiota differences between intestinal mucosal and non-invasive samples across multiple bird taxa, while highlighting the importance of evaluating the appropriateness of the microbiome sampling methods used to answer specific research questions. The overall results also suggest the potential importance of using fecal and cloacal swab sampling together to properly characterize bird microbiomes.
The laboratories in the bauxite processing industry are always under a heavy workload of sample collection, analysis, and compilation of results. After size reduction in grinding mills, samples of bauxite are collected at intervals of 3 to 4 hours. A large bauxite processing plant producing 1 million tons of pure aluminium can have three grinding mills, so the total number of samples to be tested in one day reaches 18 to 24. The sample of bauxite ore coming from the grinding mill is tested for its particle size and composition. For composition testing, the bauxite ore sample is first prepared by fusing it with X-ray flux and then sent for X-ray fluorescence analysis. Afterwards, the crucibles are washed in ultrasonic baths for the next test. The whole procedure takes about 2 - 3 hours. With a large number of samples reaching the laboratory, the chance of error in composition analysis increases. In this study, we used a composite sampling methodology to reduce the number of samples reaching the laboratory without compromising their validity. The average composition of fifteen individual samples was measured against the corresponding composite samples, and the mean of the differences was calculated. The standard deviation and paired t-test values were evaluated against predetermined critical values obtained using a two-tailed test. The paired t-test values were much lower than the critical values, validating the composition obtained through composite sampling. The composite sampling approach reduced not only the number of samples but also the chemicals used in the laboratory. The objective of an improved analytical protocol reducing the number of samples reaching the laboratory was thus achieved without compromising the quality of the analytical results.
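The validation step can be reproduced in miniature with the standard library (the alumina percentages below are invented for illustration): compute the paired t statistic on the differences between individual-sample averages and composite-sample results, and compare its magnitude with the two-tailed critical value.

```python
import math
import statistics as stats

def paired_t(a, b):
    """Paired t statistic for equal-length samples a and b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    sd = stats.stdev(d)                  # sample std dev of the differences
    return stats.mean(d) / (sd / math.sqrt(n))

# hypothetical %Al2O3: individual-sample averages vs composite samples
individual = [48.2, 47.9, 48.5, 48.1, 47.8, 48.4]
composite = [48.1, 48.0, 48.4, 48.2, 47.7, 48.5]
t = paired_t(individual, composite)
```

If |t| falls below the two-tailed critical value (about 2.571 for n = 6, df = 5 at the 5% level), the composite result is statistically indistinguishable from the routine protocol, which is the criterion the study applies.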
In this paper, by combining sampling methods for food statistics with years of practical sampling experience, various sampling points and the corresponding sampling methods are summarized. The aim is to help discover food safety risks and improve the level of food safety.
This work presents multi-fidelity, multi-objective, infill-sampling, surrogate-assisted optimization for airfoil shape optimization. The optimization problem is posed to maximize the lift-to-drag coefficient ratio subject to airfoil geometry constraints. Computational Fluid Dynamics (CFD) and XFoil tools are used for the high- and low-fidelity simulations of the airfoil, respectively, to evaluate the true objective function. A special multi-objective sub-optimization problem is proposed for multiple-point infill-sampling exploration to improve the constructed surrogate model. To validate and further assess the proposed methods, a conventional surrogate-assisted optimization method and an infill-sampling surrogate-assisted optimization criterion are applied with multi-fidelity simulation, and their numerical performance is investigated. The results show that the proposed technique is the best performer for the demonstrated airfoil shape optimization. According to this study, applying multi-fidelity simulation with multi-objective infill-sampling criteria to surrogate-assisted optimization is a powerful design tool.
The world of information technology is more than ever being flooded with huge amounts of data, nearly 2.5 quintillion bytes every day. This large stream of data is called big data, and the amount is increasing each day. This research uses a technique called sampling, which selects a representative subset of the data points, manipulates and analyzes this subset to identify patterns and trends in the larger dataset being examined, and finally creates models. Sampling uses a small proportion of the original data for analysis and model training, so that it is relatively faster while maintaining data integrity and achieving accurate results. Two deep neural networks, AlexNet and DenseNet, were used in this research to test two sampling techniques: sampling with replacement and reservoir sampling. The dataset used for this research was divided into three classes: acceptable, flagged as easy, and flagged as hard. The base models were trained on the whole dataset, whereas the other models were trained on 50% of the original dataset, giving four combinations of model and sampling technique. The F-measure for the base AlexNet model was 0.807, while that for the base DenseNet model was 0.808. Combination 1, the AlexNet model with sampling with replacement, achieved an average F-measure of 0.8852. Combination 3, the AlexNet model with reservoir sampling, had an average F-measure of 0.8545. Combination 2, the DenseNet model with sampling with replacement, achieved an average F-measure of 0.8017. Finally, combination 4, the DenseNet model with reservoir sampling, had an average F-measure of 0.8111. Overall, we conclude that both models trained on a sampled dataset gave equal or better results than the base models trained on the whole dataset.
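Of the two techniques compared, reservoir sampling is the one worth spelling out, since it draws a uniform sample in a single pass without knowing the stream length in advance (a generic Algorithm R sketch, independent of the paper's image dataset):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Algorithm R: uniform random sample of k items from a stream of
    unknown length, in one pass and O(k) memory."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # uniform index in 0..i
            if j < k:
                reservoir[j] = item      # replace with probability k/(i+1)
    return reservoir

sample = reservoir_sample(range(10_000), 50)
```

Each item ends up in the reservoir with probability k/N, which is what makes the 50% subsets used for the sampled-model training representative of the full dataset.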
Nonparametric (distribution-free) control charts have been introduced in recent years for quality characteristics that do not follow a specific distribution. When sample selection is prohibitively expensive, ranked-set sampling is preferred over simple random sampling, because ranked-set-sampling-based control charts outperform their simple-random-sampling-based counterparts. In this study, we propose a nonparametric homogeneously weighted moving average control chart based on the Wilcoxon signed-rank test with ranked-set sampling (NPHWMARSS) for detecting shifts in the process location of a continuous and symmetric distribution. Monte Carlo simulations are used to obtain the run-length characteristics and evaluate the performance of the proposed NPHWMARSS control chart. Its performance is compared to that of several parametric and nonparametric control charts: the exponentially weighted moving average (EWMA) control chart, the nonparametric EWMA control chart based on the Wilcoxon signed-rank test with simple random sampling, the nonparametric EWMA sign control chart, the nonparametric EWMA control chart based on the Wilcoxon signed-rank test with ranked-set sampling, and the homogeneously weighted moving average control chart. The findings show that the proposed NPHWMARSS control chart performs better than its competitors, particularly for small shifts. Finally, an example demonstrates how the proposed scheme can be implemented in practice.
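A compact sketch of the two ingredients of such a monitoring statistic (our own minimal implementation with illustrative data, not the authors' exact chart): the Wilcoxon signed-rank statistic of a subgroup against the target location, and an HWMA-style recursion H_t = w·X_t + (1−w)·mean(X_1..X_{t−1}), with the in-control value mu0 standing in for the mean of an empty history.

```python
import statistics as stats

def signed_rank(sample, target):
    """Wilcoxon signed-rank statistic vs. a target location;
    tied |differences| receive average ranks."""
    diffs = [x - target for x in sample if x != target]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                   # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return sum(r if diffs[i] > 0 else -r for i, r in enumerate(ranks))

def hwma(stat_seq, w, mu0=0.0):
    """H_t = w*X_t + (1 - w)*mean(X_1..X_{t-1}); mu0 is used at t = 1."""
    out, history = [], []
    for x in stat_seq:
        prev_mean = stats.mean(history) if history else mu0
        out.append(w * x + (1 - w) * prev_mean)
        history.append(x)
    return out
```

A sequence of subgroup signed-rank statistics fed through `hwma` gives the smoothed plotting statistic; a chart would signal when it crosses control limits derived by Monte Carlo simulation, as in the paper.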
The firn aquifer beneath the Greenland Ice Sheet may play a significant role in sea-level rise. Both traditional mechanical drilling and electric thermal drilling are poorly adapted for effective, low-disturbance sampling in firn aquifers. We propose using a vibrocoring technique for the undisturbed sampling of dry firn and firn aquifer layers. A remote-controlled vibrocorer is designed to obtain 1-m-long cores with a diameter of 100 mm. The depth capacity of the system is approximately 50 m. The total weight of the vibrocoring system with the surface auxiliary equipment is approximately 110 kg, and the corer assembly itself weighs approximately 60 kg.
With the rapid growth of network bandwidth, traffic identification is an important challenge for network management and security. In recent years, packet sampling has been widely used in most network management systems. In this paper, to improve the accuracy of network traffic identification, sampled NetFlow data are applied to traffic identification, and the impact of packet sampling on the accuracy of the identification method is studied. This study includes feature selection, a metric correlation analysis of application behavior, and a traffic identification algorithm. Theoretical analysis and experimental results show that the significance of behavior characteristics becomes lower in a packet-sampling environment, and that the correlation analysis exhibits different trends for different features. However, as long as the number of flows meets the statistical requirement, the feature selection and the correlation degree are independent of the sampling ratio. At high sampling ratios, where less effective information is available, the identification accuracy is much lower than with unsampled packets. Finally, to improve identification accuracy, we propose a Deep Belief Networks Application Identification (DBNAI) method, which achieves better classification performance than other state-of-the-art methods.
Funding: the Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168), managed under Rajamangala University of Technology Thanyaburi (FRB66E0646O.4).
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2023YFF1204402), the National Natural Science Foundation of China (Grant Nos. 12074079 and 12374208), the Natural Science Foundation of Shanghai (Grant No. 22ZR1406800), and the China Postdoctoral Science Foundation (Grant No. 2022M720815).
Abstract: The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques to explore the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks that are designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning to revolutionize our insight into the complex conformational ensembles of proteins.
Funding: This research was supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
Abstract: Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems. Improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point cloud image processing. In this work, we analogize blockchain nodes to points in a point cloud image and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Moreover, we compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks. The experimental findings show that the FPSblo model reduces transmission redundancy by 34.8% and the overload rate by 37.6%, demonstrating that FPSblo enhances the transmission capability of the P2P network in blockchain.
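The FPS selection rule the method borrows can be sketched as follows. This is a generic greedy farthest-point-sampling reimplementation over 2-D points (function names and the fixed starting index are our own), not the FPSblo code, where "distance" would be a network metric between nodes:

```python
import math

def farthest_point_sampling(points, k, start=0):
    # Greedy FPS: repeatedly add the point whose distance to the nearest
    # already-selected point is largest, so the k chosen points (nodes)
    # spread evenly across the space.
    selected = [start]
    dist = [math.dist(p, points[start]) for p in points]
    while len(selected) < k:
        nxt = max(range(len(points)), key=lambda i: dist[i])
        selected.append(nxt)
        # Each remaining point keeps its distance to the closest selected point.
        dist = [min(d, math.dist(p, points[nxt])) for d, p in zip(dist, points)]
    return selected
```

Forwarding first to such a spread-out node set is what lets messages reach the network edge quickly with little redundancy.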
Funding: Supported by the Key Projects of the 2022 National Defense Science and Technology Foundation Strengthening Plan 173 (Grant No. 2022-173ZD-010) and the Equipment Pre-Research Foundation of the State Key Laboratory (Grant No. 6142101200204).
Abstract: Wideband spectrum sensing with a high-speed analog-to-digital converter (ADC) presents a challenge for practical systems. The Nyquist folding receiver (NYFR) is a promising scheme for achieving cost-effective real-time spectrum sensing, but it is subject to the complexity of processing the modulated outputs. To address this, a multipath NYFR architecture with a stepped sampling rate across the different paths is proposed. The numbers of digital channels for each path are designed based on the Chinese remainder theorem (CRT). The detectable frequency range is then divided into multiple frequency grids, and the Nyquist zone (NZ) of the input can be obtained by sensing these grids. Thus, high-precision parameter estimation is performed by utilizing the NYFR characteristics. Compared with existing methods, the proposed scheme overcomes the challenges of NZ estimation, information damage, heavy computation, low accuracy, and high false-alarm probability. Comparative simulation experiments verify the effectiveness of the proposed architecture.
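The CRT machinery behind such a channel design can be illustrated in isolation: residues of one unknown integer (here standing in for a zone/grid index) modulo pairwise-coprime channel counts determine it uniquely. This is a textbook CRT solver for illustration, not the paper's estimator:

```python
def crt(residues, moduli):
    # Chinese remainder theorem by successive substitution: returns the
    # unique x modulo prod(moduli) with x ≡ r_i (mod m_i), assuming
    # pairwise-coprime moduli.
    x, m = 0, 1
    for r, mod in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, mod)) % mod  # solve x + m*t ≡ r (mod mod)
        x += m * t
        m *= mod
    return x % m
```

With channel counts 3, 5, and 7, any index below 105 is pinned down by its three residues, which is why co-prime per-path designs extend the unambiguous sensing range.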
Abstract: BACKGROUND: Pancreatic ductal leaks caused by endoscopic ultrasonography-guided tissue sampling (EUS-TS) can manifest as acute pancreatitis. CASE SUMMARY: A 63-year-old man presented with persistent abdominal pain and weight loss. Diagnosis: Laboratory findings revealed elevated carbohydrate antigen 19-9 (5920 U/mL) and carcinoembryonic antigen (23.7 ng/mL) levels. Magnetic resonance imaging of the pancreas revealed an approximately 3 cm ill-defined space-occupying lesion in the inferior aspect of the head, with severe encasement of the superior mesenteric artery. Pancreatic ductal adenocarcinoma was confirmed after pathological examination of specimens obtained by EUS-TS using the fanning method. Interventions and outcomes: The following day, the patient experienced severe abdominal pain with high amylase (265 U/L) and lipase (1173 U/L) levels. Computed tomography of the abdomen revealed edematous wall thickening of the second portion of the duodenum with adjacent fluid collections and a suspected leak from either the distal common bile duct or the main pancreatic duct in the head. Endoscopic retrograde cholangiopancreatography revealed dye leakage in the head of the main pancreatic duct. Therefore, a 5F, 7 cm linear plastic stent was deployed into the pancreatic duct to divert the pancreatic juice. The patient's abdominal pain improved immediately after pancreatic stent insertion, and amylase and lipase levels normalized within a week. Neoadjuvant chemotherapy was then initiated. CONCLUSION: Using the fanning method in EUS-TS can inadvertently damage the pancreatic duct and may lead to clinically significant pancreatitis. Placing a pancreatic stent may immediately resolve acute pancreatitis and shorten the waiting time for curative therapy. When using the fanning method during EUS-TS, ductal structures should be excluded from the needle path to prevent pancreatic ductal leakage.
Funding: Supported by the Zimin Institute for Engineering Solutions Advancing Better Lives.
Abstract: Background: Functional mapping, despite its proven efficiency, suffers from a "chicken or egg" scenario: poor spatial features lead to inadequate spectral alignment and vice versa during training, often resulting in slow convergence, high computational costs, and learning failures, particularly when small datasets are used. Methods: A novel method is presented for dense shape correspondence, whereby spatial information transformed by neural networks is combined with projections onto spectral maps to overcome the "chicken or egg" challenge by selectively sampling only points with high confidence in their alignment. These points then contribute to the alignment and spectral loss terms, boosting training and accelerating convergence by a factor of five. To ensure fully unsupervised learning, the Gromov-Hausdorff distance metric was used to select the points with the maximal alignment score, i.e., those displaying the most confidence. Results: The effectiveness of the proposed approach was demonstrated on several benchmark datasets, with results superior to those of spectral- and spatial-based methods. Conclusions: The proposed method provides a promising new approach to dense shape correspondence, addressing key challenges in the field and offering significant advantages over current methods, including faster convergence, improved accuracy, and reduced computational costs.
Abstract: To enhance grain sampling efficiency, a truss-type multi-rod grain sampling machine is designed and tested in this work. The sampling machine primarily consists of a truss support mechanism, a main carriage mechanism, an auxiliary carriage mechanism, sampling rods, and a PLC controller. The movement of the main carriage on the truss, the movement of the auxiliary carriage on the main carriage, and the vertical movement of the sampling rods on the auxiliary carriage are controlled through PLC programming. The machine accurately controls the position of the sampling rods, enabling random sampling with six rods to ensure comprehensive and random coverage. Additionally, sampling experiments were conducted, and the results showed that the multi-rod grain sampling machine samples with six rods simultaneously, achieving a sampling frequency of 38 times per hour. The round-trip time for the sampling rods is 33 seconds per cycle, and the sampling length reaches 18 m. This study provides valuable insights for the design of multi-rod grain sampling machines.
Funding: National Natural Science Foundation of China (61973037) and National 173 Program Project (2019-JCJQ-ZD-324).
Abstract: Uniform linear array (ULA) radars are widely used in the collision-avoidance radar systems of small unmanned aerial vehicles (UAVs). In practice, a ULA's multi-target direction-of-arrival (DOA) estimation performance suffers significant degradation owing to the limited number of physical elements. To improve the underdetermined DOA estimation performance of a ULA radar mounted on a small UAV platform, we propose a nonuniform linear motion sampling underdetermined DOA estimation method. Using the motion of the UAV platform, the echo signal is sampled at different positions. Then, according to the concept of the difference co-array, a virtual ULA with multiple array elements and a large aperture is synthesized to increase the degrees of freedom (DOFs). Through position analysis of the original and motion arrays, we propose a nonuniform linear motion sampling method based on the ULA for determining the optimal DOFs. Without increasing the aperture of the physical array, the proposed method obtains high DOFs with fewer sampling runs and greatly improves the underdetermined DOA estimation performance of the ULA. The results of numerical simulations verify the superior performance of the proposed method.
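The difference co-array idea can be sketched numerically: the lags observable by an array are all pairwise differences of its (integer, half-wavelength-unit) sensor positions, and the longest run of consecutive lags through zero gives the virtual ULA whose size sets the usable DOFs. This is a generic illustration; the function names and example positions are our own, not the paper's motion-sampling design:

```python
def difference_coarray(positions):
    # All pairwise differences of integer sensor positions:
    # the set of spatial lags the (virtual) array can observe.
    return sorted({a - b for a in positions for b in positions})

def consecutive_lags(lags):
    # Longest run of consecutive integer lags through 0 -- the span of
    # the contiguous virtual ULA usable for underdetermined DOA estimation.
    s = set(lags)
    lo = 0
    while lo - 1 in s:
        lo -= 1
    hi = 0
    while hi + 1 in s:
        hi += 1
    return list(range(lo, hi + 1))
```

Four sensors at positions {0, 1, 4, 6} already yield 13 consecutive lags, illustrating how nonuniform placement (or motion-synthesized positions) multiplies DOFs without enlarging the physical aperture.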
Funding: This research was supported by the Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB650070/0168). The research block grant was managed under Rajamangala University of Technology Thanyaburi (FRB65E0634M.3).
Abstract: A novel adaptive multiple dependent state sampling plan (AMDSSP) was designed to inspect products from a continuous manufacturing process under the accelerated life test (ALT), using both double sampling plan (DSP) and multiple dependent state sampling plan (MDSSP) concepts. Under accelerated conditions, the lifetime of a product follows the Weibull distribution with a known shape parameter, while the scale parameter can be determined using the acceleration factor (AF). The Arrhenius model is used to estimate the AF when the damaging process is temperature-sensitive. An economic design of the proposed sampling plan was also considered for the ALT. A genetic algorithm with nonlinear optimization was used to estimate the optimal plan parameters that minimize the average sample number (ASN) and the total cost of inspection (TC) under both the producer's and the consumer's risks. Numerical results are presented to support the AMDSSP for the ALT, and performance comparisons among the AMDSSP, the MDSSP, and a single sampling plan (SSP) for the ALT are discussed. Results indicated that the AMDSSP was more flexible and efficient in terms of ASN and TC than the MDSSP and SSP under accelerated conditions. The AMDSSP also had a higher operating characteristic (OC) curve than both existing sampling plans. Two real datasets of electronic devices under the ALT at high temperatures demonstrate the practicality and usefulness of the proposed sampling plan.
Funding: Project supported by the National Key R&D Program of China (No. 2022YFA1004504), the National Natural Science Foundation of China (Nos. 12171404 and 12201229), and the Fundamental Research Funds for the Central Universities of China (No. 20720210037).
Abstract: In this work, we consider solving forward and inverse partial differential equations (PDEs) that have sharp solutions with physics-informed neural networks (PINNs). In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only-based ASM, denoted ASM I. In this approach, we first train the neural network using a small number of residual points and divide the computational domain into a certain number of sub-domains; we then add new residual points in the sub-domain with the largest mean absolute value of the residual, choosing the points with the largest absolute residual values in that sub-domain. We further develop a second type of ASM (denoted ASM II) based on both the residual and the gradient of the solution, since the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that the new residual points must have not only large residuals but also large gradients. To demonstrate the effectiveness of the present methods, we use both ASM I and ASM II to solve a number of PDEs, including the Burgers equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and the high-dimensional Poisson equation. The numerical results show that sharp solutions can be well approximated using either ASM I or ASM II, and both methods deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, the ASM II algorithm performs better in terms of accuracy, efficiency, and stability than ASM I. This means that the gradient of the solution improves the stability and efficiency of the adaptive sampling procedure as well as the accuracy of the solution. Furthermore, we also employ a similar adaptive sampling technique for the data points of boundary conditions (BCs) when the sharpness of the solution is near the boundary. The results for the L-shaped Poisson problem indicate that the present method can significantly improve efficiency, stability, and accuracy.
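The residual-driven refinement step described for ASM I can be sketched as follows, for a 1-D domain split into intervals. This is a simplified illustration under our own naming and a random candidate pool; the actual method evaluates the PDE residual of a trained PINN and iterates:

```python
import random

def asm_refine(residual_fn, subdomains, n_candidates=200, n_new=10, seed=0):
    # ASM I-style sketch: score random candidates in each sub-domain by
    # |residual|, pick the sub-domain with the largest mean |residual|,
    # and return its largest-|residual| candidates as new training points.
    rng = random.Random(seed)
    best = None
    for lo, hi in subdomains:
        pts = [lo + (hi - lo) * rng.random() for _ in range(n_candidates)]
        mean_abs = sum(abs(residual_fn(x)) for x in pts) / n_candidates
        if best is None or mean_abs > best[0]:
            top = sorted(pts, key=lambda x: -abs(residual_fn(x)))[:n_new]
            best = (mean_abs, (lo, hi), top)
    _, worst_subdomain, new_points = best
    return worst_subdomain, new_points
```

An ASM II variant would rank candidates by a combined score of |residual| and |gradient| rather than by the residual alone.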
Funding: Funded by the National Natural Science Foundation of China (31870370) and the Key Grant of the Guangxi Natural Science Foundation (2018GXNSFDA281016).
Abstract: Birds maintain complex and intimate associations with a diverse community of microbes in their intestines. Multiple invasive and non-invasive sampling methods are used to characterize these communities to answer a multitude of eco-evolutionary questions related to host-gut microbiome symbioses. However, evidence on the comparability of these invasive and non-invasive sampling methods is sparse, with contradictory findings. By performing a network meta-analysis of 13 published bird gut microbiome studies, we here investigate the comparability of these invasive and non-invasive sampling methods. The two most used non-invasive sampling methods (cloacal swabs and fecal samples) showed significantly different results in alpha diversity and taxonomic relative abundances compared with invasive samples. Overall, non-invasive samples showed decreased alpha diversity compared with intestinal samples, but the alpha diversities of fecal samples were more comparable to those of intestinal samples. In contrast, cloacal swabs yielded significantly lower alpha diversities than intestinal samples, but the taxonomic relative abundances acquired from cloacal swabs were similar to those of intestinal samples. The phylogenetic status, diet, and domestication degree of host birds also influenced the differences in microbiota characterization between invasive and non-invasive samples. Our results indicate a general pattern of microbiota differences between intestinal mucosal and non-invasive samples across multiple bird taxa, while highlighting the importance of evaluating the appropriateness of the microbiome sampling methods used to answer specific research questions. The overall results also suggest the potential importance of using both fecal and cloacal swab sampling together to properly characterize bird microbiomes.
Abstract: The laboratories in the bauxite processing industry are always under a heavy workload of sample collection, analysis, and compilation of results. After size reduction in grinding mills, samples of bauxite are collected at intervals of 3 to 4 hours. A large bauxite processing plant producing 1 million tons of pure aluminium can have three grinding mills, so the total number of samples to be tested in one day reaches 18 to 24. The sample of bauxite ore coming from the grinding mill is tested for its particle size and composition. For composition testing, the bauxite ore sample is first prepared by fusing it with X-ray flux and then sent for X-ray fluorescence analysis. Afterwards, the crucibles are washed in ultrasonic baths to be used for the next test. The whole procedure takes about 2 - 3 hours. With a large number of samples reaching the laboratory, the chance of error in composition analysis increases. In this study, we used a composite sampling methodology to reduce the number of samples reaching the laboratory without compromising their validity. The average composition of fifteen individual samples was measured against the composite samples, and the mean of the differences was calculated. The standard deviation and paired t-test values were evaluated against predetermined critical values obtained using a two-tailed test. The paired t-test values were much lower than the critical values, validating the composition obtained through composite sampling. The composite sampling approach reduced not only the number of samples but also the chemicals used in the laboratory. The objective of an improved analytical protocol to reduce the number of samples reaching the laboratory was achieved without compromising the quality of the analytical results.
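The paired t-test used to validate the composite samples can be computed as follows. This is a standard-textbook paired t statistic on illustrative numbers of our own, not the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(individual_means, composite_values):
    # Paired t statistic on per-batch differences between the mean of
    # the individual assays and the matching composite-sample assay:
    # t = mean(d) / (stdev(d) / sqrt(n)). Compare |t| against the
    # two-tailed critical value with n-1 degrees of freedom.
    d = [a - b for a, b in zip(individual_means, composite_values)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))
```

A |t| well below the critical value (as the study reports) means the composite assay is statistically indistinguishable from averaging the individual assays.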
Abstract: In this paper, by combining sampling methods for food statistics with years of practical sampling experience, various sampling points and the corresponding sampling methods are summarized, with the aim of uncovering food safety risks and improving the level of food safety.
Funding: The authors are grateful for the support from the Khon Kaen University Scholarship for ASEAN and GMS Countries' Personnel of Academic Year and the National Research Council of Thailand (N42A650549).
Abstract: This work presents multi-fidelity, multi-objective, infill-sampling, surrogate-assisted optimization for airfoil shape optimization. The optimization problem is posed to maximize the lift-to-drag coefficient ratio subject to airfoil geometry constraints. Computational fluid dynamics (CFD) and XFoil tools are used for high- and low-fidelity simulations of the airfoil to obtain the true objective function value. A special multi-objective sub-optimization problem is proposed for multiple-point infill-sampling exploration to improve the constructed surrogate model. To validate and further assess the proposed methods, a conventional surrogate-assisted optimization method and an infill-sampling surrogate-assisted optimization criterion are applied with multi-fidelity simulation, and their numerical performance is investigated. The results show that the proposed technique is the best performer for the demonstrated airfoil shape optimization. According to this study, applying multi-fidelity simulation with multi-objective infill-sampling criteria for surrogate-assisted optimization is a powerful design tool.
Abstract: The world of information technology is more than ever being flooded with huge amounts of data, nearly 2.5 quintillion bytes every day. This large stream of data is called big data, and the amount is increasing each day. This research uses a technique called sampling, which selects a representative subset of the data points, manipulates and analyzes this subset to identify patterns and trends in the larger dataset being examined, and finally creates models. Sampling uses a small proportion of the original data for analysis and model training, so that it is relatively fast while maintaining data integrity and achieving accurate results. Two deep neural networks, AlexNet and DenseNet, were used in this research to test two sampling techniques, namely sampling with replacement and reservoir sampling. The dataset used for this research was divided into three classes: acceptable, flagged as easy, and flagged as hard. The base models were trained on the whole dataset, whereas the other models were trained on 50% of the original dataset, giving four combinations of model and sampling technique. The F-measure for the AlexNet base model was 0.807, while that for the DenseNet base model was 0.808. Combination 1, the AlexNet model with sampling with replacement, achieved an average F-measure of 0.8852. Combination 3, the AlexNet model with reservoir sampling, had an average F-measure of 0.8545. Combination 2, the DenseNet model with sampling with replacement, achieved an average F-measure of 0.8017. Finally, combination 4, the DenseNet model with reservoir sampling, had an average F-measure of 0.8111. Overall, we conclude that both models trained on a sampled dataset gave equal or better results than the base models, which used the whole dataset.
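The two sampling techniques compared above are standard and can be sketched directly (classic Algorithm R for reservoir sampling, and independent uniform draws for sampling with replacement; function names are our own):

```python
import random

def reservoir_sample(stream, k, seed=0):
    # Algorithm R: a uniform random sample of k items from a stream of
    # unknown length, in one pass and O(k) memory -- well suited to
    # big-data streams that do not fit in memory.
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)   # replace a slot with prob. k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

def sample_with_replacement(data, k, seed=0):
    # Bootstrap-style draw: each of the k picks is independent and
    # uniform over the full dataset, so items may repeat.
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(k)]
```

Reservoir sampling never repeats an item and needs only one pass; sampling with replacement requires random access to the dataset but allows repeats, which is the key practical difference between the two 50% training subsets.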
Funding: Funds are available under Grant No. RGP.2/132/43 at King Khalid University, Kingdom of Saudi Arabia.
Abstract: Nonparametric (distribution-free) control charts have been introduced in recent years for cases in which quality characteristics do not follow a specific distribution. When sample selection is prohibitively expensive, ranked-set sampling is preferred over simple random sampling because ranked-set-sampling-based control charts outperform those based on simple random sampling. In this study, we propose a nonparametric homogeneously weighted moving average control chart based on the Wilcoxon signed-rank test with ranked-set sampling (NPHWMARSS) for detecting shifts in the process location of a continuous and symmetric distribution. Monte Carlo simulations are used to obtain the run-length characteristics that evaluate the performance of the proposed NPHWMARSS control chart. The proposed chart's performance is compared with that of parametric and nonparametric control charts: the exponentially weighted moving average (EWMA) control chart, the nonparametric EWMA control chart based on the Wilcoxon signed-rank test with simple random sampling, the nonparametric EWMA sign control chart, the nonparametric EWMA control chart based on the Wilcoxon signed-rank test with ranked-set sampling, and the homogeneously weighted moving average control chart. The findings show that the proposed NPHWMARSS control chart performs better than its competitors, particularly for small shifts. Finally, an example is presented to demonstrate how the proposed scheme can be implemented in practice.
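The Wilcoxon signed-rank statistic that underlies these charts can be computed as follows. This is the textbook statistic (the plotting statistic the EWMA/HWMA schemes then smooth); the function name and tie handling by average ranks are standard, not taken from the paper:

```python
def signed_rank_statistic(sample, target):
    # Wilcoxon signed-rank statistic against a target median: drop zero
    # differences, rank the |differences| (average ranks on ties), and
    # sum the ranks carrying the signs of the differences.
    d = [x - target for x in sample if x != target]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank over the tie group
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return sum(r if v > 0 else -r for r, v in zip(ranks, d))
```

For a symmetric in-control distribution the statistic has mean zero, so a sustained positive or negative drift in its smoothed value signals a location shift without any distributional assumption.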
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFC2801400).
Abstract: The firn aquifer beneath the Greenland Ice Sheet may play a significant role in sea-level rise. Both traditional mechanical drilling and electric thermal drilling are poorly adapted for effective, low-disturbance sampling in firn aquifers. We propose using a vibrocoring technique for the undisturbed sampling of dry firn and firn aquifer layers. A remote-controlled vibrocorer is designed to obtain 1-m-long cores with a diameter of 100 mm. The depth capacity of the system is approximately 50 m. The total weight of the vibrocoring system with the surface auxiliary equipment is approximately 110 kg, and the corer assembly itself weighs ~60 kg.
Funding: Supported by the Key Scientific and Technological Research Projects in Henan Province (Grant No. 192102210125), the Key Scientific Research Projects of Colleges and Universities in Henan Province (23A520054), and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) (SKLNST-2020-2-01).
Abstract: With the rapid growth of network bandwidth, traffic identification is currently an important challenge for network management and security. In recent years, packet sampling has been widely used in most network management systems. In this paper, to improve the accuracy of network traffic identification, sampled NetFlow data are applied to traffic identification, and the impact of packet sampling on the accuracy of the identification method is studied. This study includes feature selection, a metric correlation analysis of application behavior, and a traffic identification algorithm. Theoretical analysis and experimental results show that the significance of behavior characteristics becomes lower in a packet-sampling environment, and the correlation analysis shows different trends for different features. However, as long as the number of flows meets the statistical requirement, the feature selection and the correlation degree are independent of the sampling ratio. At a high sampling ratio, where less effective information remains, identification accuracy is much lower than with unsampled packets. Finally, to improve identification accuracy, we propose a Deep Belief Networks Application Identification (DBNAI) method, which achieves better classification performance than other state-of-the-art methods.