Decision-making and motion planning are extremely important in autonomous driving to ensure safe driving in a real-world environment. This study proposes an online evolutionary decision-making and motion planning framework for autonomous driving based on a hybrid data- and model-driven method. First, a data-driven decision-making module based on deep reinforcement learning (DRL) is developed to pursue rational driving performance as much as possible. Then, model predictive control (MPC) is employed to execute both longitudinal and lateral motion planning tasks. Multiple constraints are defined according to the vehicle’s physical limits to meet the driving task requirements. Finally, two principles of safety and rationality for the self-evolution of autonomous driving are proposed. A motion envelope is established and embedded into a rational exploration and exploitation scheme, which filters out unreasonable experiences by masking unsafe actions so as to collect high-quality training data for the DRL agent. Experiments with a high-fidelity vehicle model and a MATLAB/Simulink co-simulation environment are conducted, and the results show that the proposed online-evolution framework is able to generate safer, more rational, and more efficient driving actions in a real-world environment.
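To make the action-masking idea concrete, the following is a minimal, hedged sketch of masked exploration in a discrete-action DRL agent: unsafe actions (as judged by some motion-envelope or other safety check) are removed from both the exploration and exploitation steps. The function name, Q-values, and the fallback rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def masked_epsilon_greedy(q_values, safe_mask, epsilon, rng):
    """Pick an action epsilon-greedily, but only from actions flagged as safe.

    q_values : (n_actions,) array of Q-value estimates
    safe_mask: (n_actions,) boolean array, True where the safety check
               (e.g., a motion envelope) admits the action
    """
    safe_idx = np.flatnonzero(safe_mask)
    if safe_idx.size == 0:                        # no safe action: fall back to a
        return int(np.argmin(np.abs(q_values)))   # placeholder conservative choice
    if rng.random() < epsilon:                    # explore only among safe actions
        return int(rng.choice(safe_idx))
    masked_q = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))               # exploit the best safe action

rng = np.random.default_rng(0)
q = np.array([0.2, 1.3, -0.4, 0.9])
mask = np.array([True, False, True, True])        # action 1 violates the envelope
action = masked_epsilon_greedy(q, mask, epsilon=0.1, rng=rng)
print(action)
```

Because unsafe actions never enter the replay buffer, the agent trains only on experiences that respect the envelope, which is the "high-quality training data" effect the abstract describes.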
The aim of this study is to investigate the impacts of the sampling strategy for landslide and non-landslide samples on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very-low-susceptibility zone or a buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
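A hedged sketch of how one such sampling scenario can be evaluated follows: positive cells drawn from landslide cores, negative cells drawn from a very-low-susceptibility or buffer zone, a random forest trained on the resulting set, and the scenario scored by ROC AUC. The conditioning factors here are synthetic stand-ins, not the Feiyun catchment data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical conditioning factors (slope, elevation, NDVI, ...) for sampled cells.
X_pos = rng.normal(loc=1.0, size=(300, 6))    # cells sampled from landslide cores
X_neg = rng.normal(loc=-1.0, size=(300, 6))   # cells from a very-low / buffer zone

X = np.vstack([X_pos, X_neg])
y = np.hstack([np.ones(len(X_pos)), np.zeros(len(X_neg))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Susceptibility score = predicted landslide probability; AUC summarizes the scenario.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC for this sampling scenario: {auc:.3f}")
```

Repeating this with each of the 14 positive/negative sampling combinations and comparing the AUCs is the comparison the study reports.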
Global variance reduction is a bottleneck in Monte Carlo shielding calculations. The global variance reduction problem requires that the statistical error over the entire space be uniform. This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method, implemented in the Monte Carlo program MCShield. The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example. The results from the VENUS-Ⅲ benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the mesh grids, decreasing from 1.08×10^(-2) to 3.84×10^(-3), representing a 64.00% reduction. This demonstrates that the grid-AIS method is effective in addressing the global variance reduction problem. The results of the self-shielding calculation demonstrate that the grid-AIS method produced accurate computational results. Moreover, the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
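The uniformity metric quoted above (the variance of the per-mesh statistical errors) can be illustrated with a short, hedged sketch; the tallies below are made-up values, not MCShield output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-cell tallies for a 3-D mesh flattened to one axis:
# mean flux estimate and its standard deviation over the simulated histories.
n_cells = 1000
flux_mean = rng.lognormal(mean=0.0, sigma=1.5, size=n_cells)
flux_std = flux_mean * rng.uniform(0.005, 0.05, size=n_cells)

# Relative statistical error per cell, as Monte Carlo codes usually report it.
rel_err = flux_std / flux_mean

# The global variance reduction goal is a uniform rel_err across the mesh,
# so the variance of rel_err over all cells is a natural figure of merit.
print("mean relative error over the mesh    :", rel_err.mean())
print("variance of relative errors over mesh:", rel_err.var())
```

The reported drop from 1.08×10^(-2) to 3.84×10^(-3) is a reduction of exactly this kind of spread after applying the grid-AIS importance sampling.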
In this paper, we establish a new multivariate Hermite sampling series involving samples of the function itself and of its mixed and non-mixed partial derivatives of arbitrary order. This multivariate form of Hermite sampling is valid for some classes of multivariate entire functions satisfying certain growth conditions. We show that many known results, including those in Commun Korean Math Soc, 2002, 17: 731-740; Turk J Math, 2017, 41: 387-403; and Filomat, 2020, 34: 3339-3347, are special cases of our results. Moreover, we estimate the truncation error of this sampling based on localized sampling without a decay assumption. Illustrative examples are also presented.
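As a reference point for readers, the classical one-dimensional two-point (first-derivative) Hermite sampling expansion, of which the series above is a multivariate, higher-order generalization, is commonly stated as follows for functions in the Paley–Wiener space; this is quoted from the standard literature rather than from the paper itself.

```latex
% Classical two-point Hermite (derivative) sampling expansion for
% f in the Paley-Wiener space $B_\sigma^2$, with nodes $t_n = 2n\pi/\sigma$:
\[
  f(t) \;=\; \sum_{n\in\mathbb{Z}}
  \Bigl[\, f(t_n) + (t - t_n)\, f'(t_n) \,\Bigr]
  \left( \frac{\sin\!\bigl(\tfrac{\sigma}{2}(t - t_n)\bigr)}{\tfrac{\sigma}{2}(t - t_n)} \right)^{\!2},
  \qquad t_n = \frac{2n\pi}{\sigma}.
\]
```

Using derivative samples doubles the admissible node spacing relative to the Shannon series; the paper extends this idea to several variables and to derivatives of arbitrary order under growth conditions on entire functions.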
While autonomous vehicles are vital components of intelligent transportation systems, ensuring the trustworthiness of decision-making remains a substantial challenge in realizing autonomous driving. Therefore, we present a novel robust reinforcement learning approach with safety guarantees to attain trustworthy decision-making for autonomous vehicles. The proposed technique ensures decision trustworthiness in terms of policy robustness and collision safety. Specifically, an adversary model is learned online to simulate the worst-case uncertainty by approximating the optimal adversarial perturbations on the observed states and environmental dynamics. In addition, an adversarial robust actor-critic algorithm is developed to enable the agent to learn robust policies against perturbations in observations and dynamics. Moreover, we devise a safety mask to guarantee the collision safety of the autonomous driving agent during both the training and testing processes, using an interpretable knowledge model known as the Responsibility-Sensitive Safety Model. Finally, the proposed approach is evaluated through both simulations and experiments. These results indicate that the autonomous driving agent can make trustworthy decisions and drastically reduce the number of collisions through robust safety policies.
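The safety mask is built on the Responsibility-Sensitive Safety (RSS) rules. Below is a hedged sketch of the RSS minimum safe longitudinal gap and a toy mask check; the parameter values are assumptions for illustration, and the paper's full mask covers more cases than this single rule.

```python
def rss_min_longitudinal_gap(v_rear, v_front, rho=0.5,
                             a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe longitudinal gap (m) between a rear and a front vehicle,
    following the RSS longitudinal rule.

    v_rear, v_front : current speeds in m/s
    rho             : response time of the rear vehicle (s)
    a_max_accel     : max acceleration of the rear vehicle during rho (m/s^2)
    b_min_brake     : minimum braking the rear vehicle is guaranteed to apply (m/s^2)
    b_max_brake     : maximum braking the front vehicle might apply (m/s^2)
    """
    v_rear_after = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(d, 0.0)

def longitudinal_action_is_safe(gap, v_rear, v_front):
    """Toy safety-mask check: keep the action only if the current gap
    is at least the RSS minimum safe distance."""
    return gap >= rss_min_longitudinal_gap(v_rear, v_front)

print(longitudinal_action_is_safe(gap=30.0, v_rear=20.0, v_front=15.0))
```

During both training and testing, any candidate action that would violate such a check is replaced or vetoed, which is how collisions are suppressed without waiting for the policy to learn safety from scratch.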
Due to ever-growing soccer data collection approaches and progressing artificial intelligence (AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest not only from the professional sports analytics realm but also from the academic AI research community. AI brings game-changing approaches to soccer analytics, while soccer has become a typical benchmark for AI research; the combination has been an emerging topic. In this paper, soccer match analytics is taken as a complete observation-orientation-decision-action (OODA) loop. In addition, as in AI frameworks such as reinforcement learning, interacting with a virtual environment enables an evolving model. Therefore, soccer analytics in both the real world and virtual domains is discussed. With the intersection of the OODA loop and the real-virtual domains, available soccer data, including event and tracking data, and diverse orientation and decision-making models for both real-world and virtual soccer matches are comprehensively reviewed. Finally, some promising directions in this interdisciplinary area are pointed out. It is argued that paradigms for professional sports analytics and AI research could be combined, and that it is quite promising to bridge the gap between the real and virtual domains for soccer match analysis and decision-making.
Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. As current incremental ILP has difficulty resolving conflicts, we propose a novel ethical decision-making framework that accounts for conflicts and adopts our proposed incremental ILP system. The framework consists of two processes: the learning process and the deduction process. The first process records bottom clauses with their score functions and learns rules guided by entailment and the score function. The second process obtains an ethical decision based on the rules. In an ethical scenario concerning chatbots for teenagers’ mental health, we verify that our framework can learn ethical rules and make ethical decisions. Besides, we extract the incremental ILP component from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparison results show that our proposed system can generate better-quality rules than most other systems.
Stroke is a chronic cerebrovascular disease that carries a high risk. Stroke risk assessment is of great significance in preventing, reversing, and reducing the spread and health hazards caused by stroke. Aiming to objectively predict and identify strokes, this paper proposes a new stroke risk assessment decision-making model named Logistic-AdaBoost (Logistic-AB) based on machine learning. First, the categorical boosting (CatBoost) method is used to perform feature selection over all stroke features, and 8 main features are selected to form a new index evaluation system to predict the risk of stroke. Second, the borderline synthetic minority oversampling technique (SMOTE) algorithm is applied to transform the unbalanced stroke dataset into a balanced dataset. Finally, the stroke risk assessment decision-making model Logistic-AB is constructed, and the overall prediction performance of this new model is evaluated by comparing it with ten other similar models. The comparison results show that the new model proposed in this paper performs better than the two single algorithms (logistic regression and AdaBoost) on the four indicators of recall, precision, F1 score, and accuracy, and the overall performance of the proposed model is better than that of common machine learning algorithms. The Logistic-AB model presented in this paper can more accurately predict patients’ stroke risk.
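A hedged sketch of a Logistic-AdaBoost-style pipeline is given below: borderline-SMOTE rebalancing of the training split followed by AdaBoost with logistic regression as the boosted learner. It assumes the third-party imbalanced-learn package, uses synthetic data in place of the stroke dataset, and omits the CatBoost feature-selection step, so it is an interpretation of the model name rather than the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import BorderlineSMOTE  # third-party: imbalanced-learn

# Synthetic stand-in for an imbalanced stroke dataset with 8 selected features.
X, y = make_classification(n_samples=3000, n_features=8, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance only the training split so the test distribution stays realistic.
X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)

# "Logistic-AdaBoost": AdaBoost boosting shallow logistic-regression learners
# (the keyword is `estimator` in scikit-learn >= 1.2, `base_estimator` before).
logistic_ab = AdaBoostClassifier(estimator=LogisticRegression(max_iter=1000),
                                 n_estimators=50, random_state=0)
logistic_ab.fit(X_bal, y_bal)

print(classification_report(y_te, logistic_ab.predict(X_te), digits=3))
```

Evaluating recall, precision, F1, and accuracy on the untouched test split mirrors the four indicators the abstract reports.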
This study presents the design of a modified attribute control chart based on a double sampling (DS) np chart applied in combination with generalized multiple dependent state (GMDS) sampling to monitor the mean life of a product, based on a time-truncated life test employing the Weibull distribution. The developed control chart supports the examination of mean-lifespan variation for a particular product during manufacturing. Three control limit levels are used: the warning control limit, the inner control limit, and the outer control limit. Together, they enhance the capability for variation detection. A genetic algorithm can be used for optimization during the in-control process, whereby the optimal parameters can be established for the proposed control chart. The control chart performance is assessed using the average run length, while the influence of the model parameters on the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design with multiple linear regression. A comparative study was conducted based on the out-of-control average run length, in which the developed control chart offered greater sensitivity in detecting process shifts while using smaller samples, on average, than existing control charts. Finally, to exhibit the utility of the developed control chart, this paper presents its application using simulated data with parameters drawn from a real data set.
The strategy evolution process of game players is highly uncertain due to random emergent situations and other external disturbances. This paper investigates strategy interaction and behavioral decision-making among game players in simulated confrontation scenarios within a random interference environment. It considers the possible risks that random disturbances may pose to the autonomous decision-making of game players, as well as the impact of participants’ manipulative behaviors on the state changes of the players. A nonlinear mathematical model is established to describe the strategy decision-making process of the participants in this scenario. Subsequently, the strategy selection interaction relationship, strategy evolution stability, and dynamic decision-making process of the game players are investigated and verified by simulation experiments. The results show that maneuver-related parameters and random environmental interference factors have different effects on the selection and evolutionary speed of the agents’ strategies. Especially in a highly uncertain environment, even small information asymmetries or miscalculations may have a significant impact on decision-making. This confirms the feasibility and effectiveness of the proposed method, which can better explain the behavioral decision-making process of the agents during interaction. This study provides feasibility analysis ideas and theoretical references for improving multi-agent interactive decision-making and the interpretability of the game system model.
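As a generic illustration of strategy evolution under random interference (not the paper's specific nonlinear model), the sketch below integrates a two-strategy replicator equation with an Euler–Maruyama noise term; the payoff matrix and noise intensity are assumptions.

```python
import numpy as np

def noisy_replicator(x0, payoff, sigma, dt=0.01, steps=5000, seed=0):
    """Share x of players using strategy A in a 2-strategy population game,
    integrated with Euler-Maruyama under multiplicative random interference.

    payoff : 2x2 payoff matrix; sigma : noise intensity.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        p = np.array([x[k], 1.0 - x[k]])
        fitness = payoff @ p                       # expected payoffs of A and B
        drift = x[k] * (1 - x[k]) * (fitness[0] - fitness[1])
        noise = sigma * x[k] * (1 - x[k]) * rng.normal() * np.sqrt(dt)
        x[k + 1] = np.clip(x[k] + drift * dt + noise, 0.0, 1.0)
    return x

# Hawk-Dove-like payoffs give an interior deterministic equilibrium; the noise
# term perturbs both the convergence path and the evolutionary speed.
trajectory = noisy_replicator(x0=0.2, payoff=np.array([[1.0, 3.0], [2.0, 1.0]]), sigma=0.4)
print("final share of strategy A:", trajectory[-1])
```

Sweeping `sigma` and the payoff entries reproduces, in miniature, the kind of sensitivity to interference intensity and maneuver parameters that the simulation experiments examine.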
Purpose – Material selection, driven by wide and often conflicting objectives, is an important, sometimes difficult problem in material engineering. In this context, multi-criteria decision-making (MCDM) methodologies are effective. An MCDM approach is needed to cater to the criteria of material assortment simultaneously. More firms are now concerned about increasing their productivity using mathematical tools. To fill a gap in the previous literature, this research recommends an integrated MCDM and mathematical bi-objective model for the selection of material. In addition, by using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), the inherent ambiguities of decision-makers in paired evaluations are considered in this research. It goes on to construct a mathematical bi-objective model for determining the best item to purchase. Design/methodology/approach – The entropy perspective is implemented in this paper to evaluate the weight parameters, while the TOPSIS technique is used to determine the best and worst intermediate pipe materials for an automotive exhaust system. The intermediate pipes are used to join the components of the exhaust system. The materials usually used to manufacture the intermediate pipe are SUS 436LM, SUS 430, SUS 304, SUS 436L, SUH 409L, SUS 441L, and SUS 439L. These seven materials are evaluated based on tensile strength (TS), hardness (H), elongation (E), yield strength (YS), and cost (C). A hybrid methodology combining entropy-based criteria weighting with TOPSIS for alternative ranking is pursued to identify the optimal design material for an engineered application in this paper. This study aims to help fill the information gap in selecting the most suitable material for use in exhaust intermediate pipes. After that, the authors searched for and considered eight materials and evaluated them on the following five criteria: (1) TS, (2) YS, (3) H, (4) E, and (5) C. The first two criteria were chosen because they can strongly influence the behavior of the exhaust intermediate pipes, their performance, and the cost. In this structure, the weights of the criteria are calculated objectively through the entropy method in order to have an unbiased assessment; this essentially measures the quantity of information each criterion contributes, indicating the relative importance of these criteria. Subsequently, the materials were ranked using the TOPSIS method in terms of their relative performance by measuring each material’s distance from an ideal solution to determine the best alternative. The results show that SUS 309, SUS 432L, and SUS 436LM are the first three materials that the optimal design of the exhaust intermediate pipe should consider. Findings – The material decision matrix presented in Table 3 was normalized through Equation 5, as shown in Table 5, and the matrix was multiplied with the weighting criteria β_j. The obtained weighted normalized matrix V_ij is presented in Table 6. The ideal, worst, and best values were ascertained by employing Equation 7. This study is based on the selection of material for the development of the intermediate pipe using MCDM, and it involves four basic stages, i.e., method of translation criteria, screening process, method of ranking, and search for methods. The selection was done through the TOPSIS method, and the criteria weights were obtained by the entropy method. The results showed that the top three materials are SUS 309, SUS 432L, and SUS 436LM, respectively. For future work, it is suggested to select more alternatives and criteria. The comparison can also be done by using different MCDM techniques such as Elimination and Choice Expressing Reality (ELECTRE), Decision-Making Trial and Evaluation Laboratory (DEMATEL), and the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). Originality/value – The results provide important conclusions for material selection in this targeted application, verifying the employment of the combined entropy-TOPSIS methodology for a series of difficult engineering decisions in material engineering concepts that combine superior capacity with better performance as well as cost-efficiency in various engineering designs.
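The entropy-weighting and TOPSIS steps described above are short enough to show end to end. The sketch below is self-contained and hedged: the property values are made-up numbers, not the paper's Table 3 data, and cost is treated as the only non-benefit criterion.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate materials, columns = criteria
# (TS, YS, H, E, C).  Values are illustrative only.
X = np.array([
    [480., 310., 160., 30., 4.2],
    [520., 330., 170., 28., 5.0],
    [450., 290., 150., 33., 3.8],
    [500., 320., 165., 29., 4.6],
])
benefit = np.array([True, True, True, True, False])   # cost (C) is to be minimized

# --- Entropy weights --------------------------------------------------------
P = X / X.sum(axis=0)                                  # column-wise proportions
m = X.shape[0]
entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
divergence = 1.0 - entropy                             # more divergence = more informative
w = divergence / divergence.sum()

# --- TOPSIS ranking ---------------------------------------------------------
R = X / np.linalg.norm(X, axis=0)                      # vector normalization
V = R * w                                              # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)               # 1 = ideal, 0 = anti-ideal

print("entropy weights      :", np.round(w, 3))
print("ranking (best first) :", np.argsort(-closeness))
```

Replacing `X` with the real seven-material matrix and marking which criteria are benefits reproduces the ranking procedure the paper applies to the exhaust intermediate pipe.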
The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data to introduce the computation of text representations, correcting semantic representation biases in the data and increasing the accuracy of semantic representation. Ultimately, the model computes the corresponding classification results by synthesizing these rich semantic representations. Experiments on seven benchmark datasets show that our proposed model achieves competitive results compared to state-of-the-art methods.
The rapid advancement and broad application of machine learning (ML) have driven a groundbreaking revolution in computational biology. One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules. This review focuses on recent studies that utilize ML-based techniques in the exploration of the protein conformational landscape. We first highlight the recent development of ML-aided enhanced sampling methods, including heuristic algorithms and neural networks designed to refine the selection of reaction coordinates for the construction of bias potentials, or to facilitate the exploration of unsampled regions of the energy landscape. Further, we review the development of autoencoder-based methods that combine molecular simulations and deep learning to expand the search for protein conformations. Lastly, we discuss cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights. Collectively, this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems. Improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems. However, traditional blockchain P2P networks face a common challenge: there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these pressing issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration for FPSblo stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely utilized in point cloud image processing. In this work, we analogize blockchain nodes to points in a point cloud image and select a representative set of nodes to prioritize message forwarding, so that messages reach the network edge quickly and are evenly distributed. Moreover, we compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks; the experimental findings show that the FPSblo model reduces transmission redundancy by 34.8% and reduces the overload rate by 37.6%. These experimental analyses show that FPSblo enhances the transmission capability of the P2P network in blockchain systems.
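The farthest point sampling step that FPSblo borrows from point cloud processing is straightforward to sketch. The version below is a hedged illustration in which blockchain nodes are represented by 2-D coordinates (for example, latency or topology embeddings, which is an assumption for illustration).

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy farthest point sampling: pick k points that are mutually spread out.

    points : (n, d) array, e.g. node coordinates or latency embeddings
    returns the indices of the k selected points
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    selected = [int(rng.integers(n))]                  # arbitrary first point
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                     # farthest from the chosen set
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

nodes = np.random.default_rng(1).random((200, 2))      # 200 hypothetical node positions
relay_idx = farthest_point_sampling(nodes, k=10)
print("indices of prioritized relay nodes:", relay_idx)
```

Because each newly selected node is as far as possible from all previously selected ones, the chosen relays are evenly spread, which is the intuition behind forwarding messages toward the network edge quickly and uniformly.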
Spherical q-linear Diophantine fuzzy sets (Sq-LDFSs) have proved more effective for handling uncertainty and vagueness in multi-attribute decision-making (MADM). They not only cover data with two variable parameters but are also beneficial for three-parameter data. With Pythagorean fuzzy sets, the difference is calculated only between two parameters (membership and non-membership). According to human thought, fuzzy data can involve three parameters (membership, uncertainty, and non-membership). So, to make a compromise decision, comparing Sq-LDFSs is essential. Existing measures for different fuzzy sets, however, can have several flaws that lead to counterintuitive results. For instance, they treat any increase or decrease in the membership degree the same as one in the non-membership degree, because the uncertainty does not change, even though each parameter has a different implication. For the comparison of Sq-LDFSs, this research develops a differential measure (DFM). The main goal of the DFM is to address the unfair arguments that come from treating the opposing criteria of different types of fuzzy sets equally. Due to their relative positions in the attribute space and the similarity of their membership and non-membership degrees, two Sq-LDFSs form this preference connection when the uncertainty remains the same in both sets. According to the degree of superiority or inferiority, two Sq-LDFSs are shown to be identical, equivalent, superior, or inferior to one another. The fundamental characteristics of the suggested DFM are provided. Based on the newly developed DFM, a unique approach to multiple-criteria group decision-making is offered. Our suggested method verifies the novel way of calculating expert weights for Sq-LDFSs, as in PFSs. Our proposed three-parameter technique is applied to evaluate solid-state drives and to choose the optimum photovoltaic cell in two applications by taking the uncertainty parameter to be zero. The applicability and validity of the method shown by the findings are contrasted with those obtained using various other existing approaches. To assess its stability and usefulness, a sensitivity analysis is done.
For the problem of slow search and tortuous paths in the Rapidly Exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed based on RRT. Firstly, to improve the sampling efficiency of RRT and shorten the search time, the search area of the random tree is restricted. Secondly, to obtain better information about obstacles and shorten the path length, a feedback-biased sampling strategy is used instead of traditional random sampling: the collision of an expanding node with an obstacle generates feedback information so that the next expanding node avoids expanding within a specific angle range. Thirdly, this paper proposes an inverse optimization strategy to remove redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure the smooth operation of the robot in practice, auxiliary points are used to optimize the cubic Bezier curve so that the smoothed path avoids crossing obstacles. The experimental results demonstrate that, compared to the traditional RRT algorithm, the proposed FS-RRT algorithm performs favorably against mainstream algorithms regarding running time, number of search iterations, and path length. Moreover, the improved algorithm also performs well in narrow obstacle environments, and its effectiveness is further confirmed by experimental verification.
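For readers unfamiliar with the baseline, the sketch below is a hedged, minimal plain RRT in a 2-D world with circular obstacles: uniform sampling with a simple goal bias, a fixed-step steer function, and a point-only collision check kept for brevity. It deliberately omits the feedback-biased sampling, inverse optimization, and Bezier smoothing that FS-RRT adds; the obstacle layout and parameters are assumptions.

```python
import math
import random

# Circular obstacles: (center_x, center_y, radius).  Layout is illustrative only.
OBSTACLES = [(5.0, 5.0, 1.5), (2.5, 7.0, 1.0), (7.5, 2.5, 1.0)]
X_MAX = Y_MAX = 10.0
STEP, GOAL_BIAS, GOAL_TOL = 0.5, 0.1, 0.5

def collision_free(p):
    # Point-only check for brevity; a full planner would also check the edge.
    return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in OBSTACLES)

def steer(q_near, q_rand):
    d = math.hypot(q_rand[0] - q_near[0], q_rand[1] - q_near[1])
    if d <= STEP:
        return q_rand
    t = STEP / d
    return (q_near[0] + t * (q_rand[0] - q_near[0]),
            q_near[1] + t * (q_rand[1] - q_near[1]))

def rrt(start, goal, max_iters=5000, seed=0):
    random.seed(seed)
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        # Goal-biased uniform sampling; FS-RRT instead biases this step with
        # feedback from previous collisions and a restricted search area.
        q_rand = goal if random.random() < GOAL_BIAS else \
                 (random.uniform(0, X_MAX), random.uniform(0, Y_MAX))
        i_near = min(range(len(nodes)),
                     key=lambda i: math.hypot(nodes[i][0] - q_rand[0],
                                              nodes[i][1] - q_rand[1]))
        q_new = steer(nodes[i_near], q_rand)
        if not collision_free(q_new):
            continue
        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near
        if math.hypot(q_new[0] - goal[0], q_new[1] - goal[1]) < GOAL_TOL:
            path, i = [], len(nodes) - 1
            while i is not None:            # backtrack from goal to start
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

print(rrt(start=(0.5, 0.5), goal=(9.5, 9.5)))
```

The tortuous paths and wasted samples visible in such a baseline are exactly what the restricted search area, collision-feedback bias, and redundancy removal of FS-RRT are designed to cut down.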
Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer models. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model’s true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model’s generalization compared to alternatives that include training and validation data in the test data (a trivial approach is to test the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
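A minimal, hedged sketch of the core idea follows: per-class, mutually exclusive train/validation/test pixel indices drawn from a labeled ground-truth vector, so no pixel appears in more than one set. The class count and fractions are assumptions, not the paper's exact protocol.

```python
import numpy as np

def disjoint_splits(labels, train_frac=0.1, val_frac=0.1, seed=0):
    """Split labeled pixel indices into disjoint train/val/test sets, per class.

    labels : 1-D array of class labels for labeled pixels (background removed)
    returns three index arrays with no overlap
    """
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_tr = max(1, int(train_frac * idx.size))
        n_va = max(1, int(val_frac * idx.size))
        train.append(idx[:n_tr])
        val.append(idx[n_tr:n_tr + n_va])
        test.append(idx[n_tr + n_va:])          # everything left, never reused
    return map(np.concatenate, (train, val, test))

# Hypothetical flattened ground-truth vector with 9 classes.
y = np.random.default_rng(1).integers(0, 9, size=10000)
tr_idx, va_idx, te_idx = disjoint_splits(y)
assert not (set(tr_idx) & set(va_idx) | set(tr_idx) & set(te_idx) | set(va_idx) & set(te_idx))
print(len(tr_idx), len(va_idx), len(te_idx))
```

The assertion makes the "no data leakage" property explicit; accuracy reported on `te_idx` is then a fair estimate of generalization, unlike evaluation on the full labeled map.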
We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set. Line integrals on point sets can be efficiently computed using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
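A hedged, simplified reading of the construction is sketched below: sample points uniformly along a set of random rays and, at each ray sample, record the k nearest points of the cloud found with a KD-tree. The ray parameterization and shapes are assumptions for illustration, not the authors' exact sketch definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def raysense_sketch(points, n_rays=16, n_samples=32, k=3, seed=0):
    """Build a simple RaySense-style sketch of a point cloud.

    For each random ray through the unit cube, sample n_samples points along it
    and record the coordinates of the k nearest neighbors in the cloud,
    giving a tensor of shape (n_rays, n_samples, k, dim).
    """
    rng = np.random.default_rng(seed)
    dim = points.shape[1]
    tree = cKDTree(points)

    origins = rng.random((n_rays, dim))
    directions = rng.normal(size=(n_rays, dim))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    t = np.linspace(-0.5, 0.5, n_samples)                 # parameter along each ray
    ray_pts = origins[:, None, :] + t[None, :, None] * directions[:, None, :]

    _, idx = tree.query(ray_pts.reshape(-1, dim), k=k)    # k nearest cloud points
    return points[idx].reshape(n_rays, n_samples, k, dim)

cloud = np.random.default_rng(1).random((5000, 3))        # synthetic 3-D point set
sketch = raysense_sketch(cloud)
print(sketch.shape)                                       # (16, 32, 3, 3)
```

Operations on this tensor, such as averaging over rays or integrating recorded values along the sample axis, correspond to the statistics and line integrals the abstract mentions.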
Tourism is a popular activity that allows individuals to escape their daily routines and explore new destinations for various reasons, including leisure, pleasure, or business. A recent study has proposed a unique mathematical concept called the q-rung orthopair fuzzy hypersoft set (q-ROFHS) to enhance the formal representation of human thought processes and to evaluate tourism carrying capacity. This approach can capture the imprecision and ambiguity often present in human perception. Using the advanced mathematical tools in this field, the study has also incorporated the Einstein aggregation operator and a score function into the q-ROFHS values to support multi-attribute decision-making algorithms. By implementing this technique, effective plans can be developed for social and economic development while avoiding detrimental effects such as overcrowding or environmental damage caused by tourism. A case study of selected tourism carrying capacity will demonstrate the proposed methodology.
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling. Specifically, training methods focus on how to overcome the training difficulties caused by the special PDE residual loss of PINNs, while sampling methods are concerned with the location and distribution of the sampling points at which the PDE residual loss is evaluated. However, a common problem among these original PINNs is that they omit special utilization of temporal information during the training or sampling stages when dealing with an important PDE category, namely time-dependent PDEs, where temporal information plays a key role in the algorithms used. One method, Causal PINN, considers temporal causality at the training level but not special temporal utilization at the sampling level. Incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical solution for incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn–Hilliard, Korteweg–de Vries, Allen–Cahn, and wave equations, show that the proposed sampling method can improve performance. We demonstrate that using such a relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with the results from other methods, especially when sampling points are limited.
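One plausible reading of the sampling-ratio idea is sketched below: each temporal sub-domain gets a causal weight that decays with the accumulated residual of all earlier sub-domains (in the style of Causal PINN), multiplied by its own residual magnitude, and the point budget is allocated proportionally. The decay parameter, residual values, and allocation rule are assumptions, not the authors' exact formula.

```python
import numpy as np

def causal_adaptive_allocation(residuals, total_points, eps=1.0):
    """Allocate collocation points across temporal sub-domains.

    residuals    : mean PDE residual loss per temporal sub-domain (earliest first)
    total_points : total sampling budget for this round

    The causal weight of sub-domain i decays with the accumulated residual of the
    earlier sub-domains, so later times are emphasized only once earlier times are
    well resolved; the residual factor then concentrates points where the PDE is
    still poorly satisfied.
    """
    residuals = np.asarray(residuals, dtype=float)
    cumulative = np.concatenate(([0.0], np.cumsum(residuals)[:-1]))
    causal_w = np.exp(-eps * cumulative)
    score = causal_w * residuals
    ratio = score / score.sum()
    counts = np.maximum(1, np.round(ratio * total_points).astype(int))
    return counts, ratio

# Hypothetical residuals over 5 temporal sub-domains of a time-dependent PDE.
counts, ratio = causal_adaptive_allocation([0.8, 0.6, 0.9, 1.5, 2.0], total_points=2000)
print("sampling ratio per sub-domain :", np.round(ratio, 3))
print("points allocated per sub-domain:", counts)
```

Recomputing the allocation every few training epochs, as the residuals evolve, is what makes the scheme adaptive rather than a fixed temporal weighting.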
基金the financial support of the National Key Research and Development Program of China(2020AAA0108100)the Shanghai Municipal Science and Technology Major Project(2021SHZDZX0100)the Shanghai Gaofeng and Gaoyuan Project for University Academic Program Development for funding。
文摘Decision-making and motion planning are extremely important in autonomous driving to ensure safe driving in a real-world environment.This study proposes an online evolutionary decision-making and motion planning framework for autonomous driving based on a hybrid data-and model-driven method.First,a data-driven decision-making module based on deep reinforcement learning(DRL)is developed to pursue a rational driving performance as much as possible.Then,model predictive control(MPC)is employed to execute both longitudinal and lateral motion planning tasks.Multiple constraints are defined according to the vehicle’s physical limit to meet the driving task requirements.Finally,two principles of safety and rationality for the self-evolution of autonomous driving are proposed.A motion envelope is established and embedded into a rational exploration and exploitation scheme,which filters out unreasonable experiences by masking unsafe actions so as to collect high-quality training data for the DRL agent.Experiments with a high-fidelity vehicle model and MATLAB/Simulink co-simulation environment are conducted,and the results show that the proposed online-evolution framework is able to generate safer,more rational,and more efficient driving action in a real-world environment.
文摘The aim of this study is to investigate the impacts of the sampling strategy of landslide and non-landslide on the performance of landslide susceptibility assessment(LSA).The study area is the Feiyun catchment in Wenzhou City,Southeast China.Two types of landslides samples,combined with seven non-landslide sampling strategies,resulted in a total of 14 scenarios.The corresponding landslide susceptibility map(LSM)for each scenario was generated using the random forest model.The receiver operating characteristic(ROC)curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy.The results showed that higher accuracies were achieved when using the landslide core as positive samples,combined with non-landslide sampling from the very low zone or buffer zone.The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA,which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
基金supported by the Platform Development Foundation of the China Institute for Radiation Protection(No.YP21030101)the National Natural Science Foundation of China(General Program)(Nos.12175114,U2167209)+1 种基金the National Key R&D Program of China(No.2021YFF0603600)the Tsinghua University Initiative Scientific Research Program(No.20211080081).
文摘Global variance reduction is a bottleneck in Monte Carlo shielding calculations.The global variance reduction problem requires that the statistical error of the entire space is uniform.This study proposed a grid-AIS method for the global variance reduction problem based on the AIS method,which was implemented in the Monte Carlo program MCShield.The proposed method was validated using the VENUS-Ⅲ international benchmark problem and a self-shielding calculation example.The results from the VENUS-Ⅲ benchmark problem showed that the grid-AIS method achieved a significant reduction in the variance of the statistical errors of the MESH grids,decreasing from 1.08×10^(-2) to 3.84×10^(-3),representing a 64.00% reduction.This demonstrates that the grid-AIS method is effective in addressing global issues.The results of the selfshielding calculation demonstrate that the grid-AIS method produced accurate computational results.Moreover,the grid-AIS method exhibited a computational efficiency approximately one order of magnitude higher than that of the AIS method and approximately two orders of magnitude higher than that of the conventional Monte Carlo method.
文摘In this paper,we establish a new multivariate Hermite sampling series involving samples from the function itself and its mixed and non-mixed partial derivatives of arbitrary order.This multivariate form of Hermite sampling will be valid for some classes of multivariate entire functions,satisfying certain growth conditions.We will show that many known results included in Commun Korean Math Soc,2002,17:731-740,Turk J Math,2017,41:387-403 and Filomat,2020,34:3339-3347 are special cases of our results.Moreover,we estimate the truncation error of this sampling based on localized sampling without decay assumption.Illustrative examples are also presented.
基金supported in part by the Start-Up Grant-Nanyang Assistant Professorship Grant of Nanyang Technological Universitythe Agency for Science,Technology and Research(A*STAR)under Advanced Manufacturing and Engineering(AME)Young Individual Research under Grant(A2084c0156)+2 种基金the MTC Individual Research Grant(M22K2c0079)the ANR-NRF Joint Grant(NRF2021-NRF-ANR003 HM Science)the Ministry of Education(MOE)under the Tier 2 Grant(MOE-T2EP50222-0002)。
文摘While autonomous vehicles are vital components of intelligent transportation systems,ensuring the trustworthiness of decision-making remains a substantial challenge in realizing autonomous driving.Therefore,we present a novel robust reinforcement learning approach with safety guarantees to attain trustworthy decision-making for autonomous vehicles.The proposed technique ensures decision trustworthiness in terms of policy robustness and collision safety.Specifically,an adversary model is learned online to simulate the worst-case uncertainty by approximating the optimal adversarial perturbations on the observed states and environmental dynamics.In addition,an adversarial robust actor-critic algorithm is developed to enable the agent to learn robust policies against perturbations in observations and dynamics.Moreover,we devise a safety mask to guarantee the collision safety of the autonomous driving agent during both the training and testing processes using an interpretable knowledge model known as the Responsibility-Sensitive Safety Model.Finally,the proposed approach is evaluated through both simulations and experiments.These results indicate that the autonomous driving agent can make trustworthy decisions and drastically reduce the number of collisions through robust safety policies.
基金supported by the National Key Research,Development Program of China (2020AAA0103404)the Beijing Nova Program (20220484077)the National Natural Science Foundation of China (62073323)。
文摘Due to ever-growing soccer data collection approaches and progressing artificial intelligence(AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest from not only the professional sports analytics realm but also the academic AI research community. AI brings gamechanging approaches for soccer analytics where soccer has been a typical benchmark for AI research. The combination has been an emerging topic. In this paper, soccer match analytics are taken as a complete observation-orientation-decision-action(OODA) loop.In addition, as in AI frameworks such as that for reinforcement learning, interacting with a virtual environment enables an evolving model. Therefore, both soccer analytics in the real world and virtual domains are discussed. With the intersection of the OODA loop and the real-virtual domains, available soccer data, including event and tracking data, and diverse orientation and decisionmaking models for both real-world and virtual soccer matches are comprehensively reviewed. Finally, some promising directions in this interdisciplinary area are pointed out. It is claimed that paradigms for both professional sports analytics and AI research could be combined. Moreover, it is quite promising to bridge the gap between the real and virtual domains for soccer match analysis and decision-making.
基金This work was funded by the National Natural Science Foundation of China Nos.U22A2099,61966009,62006057the Graduate Innovation Program No.YCSW2022286.
文摘Humans are experiencing the inclusion of artificial agents in their lives,such as unmanned vehicles,service robots,voice assistants,and intelligent medical care.If the artificial agents cannot align with social values or make ethical decisions,they may not meet the expectations of humans.Traditionally,an ethical decision-making framework is constructed by rule-based or statistical approaches.In this paper,we propose an ethical decision-making framework based on incremental ILP(Inductive Logic Programming),which can overcome the brittleness of rule-based approaches and little interpretability of statistical approaches.As the current incremental ILP makes it difficult to solve conflicts,we propose a novel ethical decision-making framework considering conflicts in this paper,which adopts our proposed incremental ILP system.The framework consists of two processes:the learning process and the deduction process.The first process records bottom clauses with their score functions and learns rules guided by the entailment and the score function.The second process obtains an ethical decision based on the rules.In an ethical scenario about chatbots for teenagers’mental health,we verify that our framework can learn ethical rules and make ethical decisions.Besides,we extract incremental ILP from the framework and compare it with the state-of-the-art ILP systems based on ASP(Answer Set Programming)focusing on conflict resolution.The results of comparisons show that our proposed system can generate better-quality rules than most other systems.
基金supported by the National Natural Science Foundation of China (No.72071150).
文摘Stroke is a chronic cerebrovascular disease that carries a high risk.Stroke risk assessment is of great significance in preventing,reversing and reducing the spread and the health hazards caused by stroke.Aiming to objectively predict and identify strokes,this paper proposes a new stroke risk assessment decision-making model named Logistic-AdaBoost(Logistic-AB)based on machine learning.First,the categorical boosting(CatBoost)method is used to perform feature selection for all features of stroke,and 8 main features are selected to form a new index evaluation system to predict the risk of stroke.Second,the borderline synthetic minority oversampling technique(SMOTE)algorithm is applied to transform the unbalanced stroke dataset into a balanced dataset.Finally,the stroke risk assessment decision-makingmodel Logistic-AB is constructed,and the overall prediction performance of this new model is evaluated by comparing it with ten other similar models.The comparison results show that the new model proposed in this paper performs better than the two single algorithms(logistic regression and AdaBoost)on the four indicators of recall,precision,F1 score,and accuracy,and the overall performance of the proposed model is better than that of common machine learning algorithms.The Logistic-AB model presented in this paper can more accurately predict patients’stroke risk.
基金the Science,Research and Innovation Promotion Funding(TSRI)(Grant No.FRB660012/0168)managed under Rajamangala University of Technology Thanyaburi(FRB66E0646O.4).
文摘This study presents the design of a modified attributed control chart based on a double sampling(DS)np chart applied in combination with generalized multiple dependent state(GMDS)sampling to monitor the mean life of the product based on the time truncated life test employing theWeibull distribution.The control chart developed supports the examination of the mean lifespan variation for a particular product in the process of manufacturing.Three control limit levels are used:the warning control limit,inner control limit,and outer control limit.Together,they enhance the capability for variation detection.A genetic algorithm can be used for optimization during the in-control process,whereby the optimal parameters can be established for the proposed control chart.The control chart performance is assessed using the average run length,while the influence of the model parameters upon the control chart solution is assessed via sensitivity analysis based on an orthogonal experimental design withmultiple linear regression.A comparative study was conducted based on the out-of-control average run length,in which the developed control chart offered greater sensitivity in the detection of process shifts while making use of smaller samples on average than is the case for existing control charts.Finally,to exhibit the utility of the developed control chart,this paper presents its application using simulated data with parameters drawn from the real set of data.
文摘The strategy evolution process of game players is highly uncertain due to random emergent situations and other external disturbances.This paper investigates the issue of strategy interaction and behavioral decision-making among game players in simulated confrontation scenarios within a random interference environment.It considers the possible risks that random disturbances may pose to the autonomous decision-making of game players,as well as the impact of participants’manipulative behaviors on the state changes of the players.A nonlinear mathematical model is established to describe the strategy decision-making process of the participants in this scenario.Subsequently,the strategy selection interaction relationship,strategy evolution stability,and dynamic decision-making process of the game players are investigated and verified by simulation experiments.The results show that maneuver-related parameters and random environmental interference factors have different effects on the selection and evolutionary speed of the agent’s strategies.Especially in a highly uncertain environment,even small information asymmetry or miscalculation may have a significant impact on decision-making.This also confirms the feasibility and effectiveness of the method proposed in the paper,which can better explain the behavioral decision-making process of the agent in the interaction process.This study provides feasibility analysis ideas and theoretical references for improving multi-agent interactive decision-making and the interpretability of the game system model.
文摘Purpose–Material selection,driven by wide and often conflicting objectives,is an important,sometimes difficult problem in material engineering.In this context,multi-criteria decision-making(MCDM)methodologies are effective.An approach of MCDM is needed to cater to criteria of material assortment simultaneously.More firms are now concerned about increasing their productivity using mathematical tools.To occupy a gap in the previous literature this research recommends an integrated MCDM and mathematical Bi-objective model for the selection of material.In addition,by using the Technique for Order Preference by Similarity to Ideal Solution(TOPSIS),the inherent ambiguities of decision-makers in paired evaluations are considered in this research.It goes on to construct a mathematical bi-objective model for determining the best item to purchase.Design/methodology/approach–The entropy perspective is implemented in this paper to evaluate the weight parameters,while the TOPSIS technique is used to determine the best and worst intermediate pipe materials for automotive exhaust system.The intermediate pipes are used to join the components of the exhaust systems.The materials usually used to manufacture intermediate pipe are SUS 436LM,SUS 430,SUS 304,SUS 436L,SUH 409 L,SUS 441 L and SUS 439L.These seven materials are evaluated based on tensile strength(TS),hardness(H),elongation(E),yield strength(YS)and cost(C).A hybrid methodology combining entropy-based criteria weighting,with the TOPSIS for alternative ranking,is pursued to identify the optimal design material for an engineered application in this paper.This study aims to help while filling the information gap in selecting the most suitable material for use in the exhaust intermediate pipes.After that,the authors searched for and considered eight materials and evaluated them on the following five criteria:(1)TS,(2)YS,(3)H,(4)E and(5)C.The first two criteria have been chosen because they can have a lot of influence on the behavior of the exhaust intermediate pipes,on their performance and on the cost.In this structure,the weights of the criteria are calculated objectively through the entropy method in order to have an unbiased assessment.This essentially measures the quantity of information each criterion contribution,indicating the relative importance of these criteria better.Subsequently,the materials were ranked using the TOPSIS method in terms of their relative performance by measuring each material from an ideal solution to determine the best alternative.The results show that SUS 309,SUS 432L and SUS 436 LM are the first three materials that the exhaust intermediate pipe optimal design should consider.Findings–The material matrix of the decision presented in Table 3 was normalized through Equation 5,as shown in Table 5,and the matrix was multiplied with weighting criteriaß_j.The obtained weighted normalized matrix V_ij is presented in Table 6.However,the ideal,worst and best value was ascertained by employing Equation 7.This study is based on the selection of material for the development of intermediate pipe using MCDM,and it involves four basic stages,i.e.method of translation criteria,screening process,method of ranking and search for methods.The selection was done through the TOPSIS method,and the criteria weight was obtained by the entropy method.The result showed that the top three materials are SUS 309,SUS 432L and SUS 436 LM,respectively.For the future work,it is suggested to select more alternatives and criteria.The comparison can also 
be done by using different MCDM techniques like and Choice Expressing Reality(ELECTRE),Decision-Making Trial and Evaluation Laboratory(DEMATEL)and Preference Ranking Organization Method for Enrichment Evaluation(PROMETHEE).Originality/value–The results provide important conclusions for material selection in this targeted application,verifying the employment of mutual entropy-TOPSIS methodology for a series of difficult engineering decisions in material engineering concepts that combine superior capacity with better performance as well as cost-efficiency in various engineering design.
基金the Communication University of China(CUC230A013)the Fundamental Research Funds for the Central Universities.
文摘The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms,yielding outstanding achievements across diverse domains.Nonetheless,self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures.In response,this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network(DSLD),which adopts a diffusion sampling method to capture more comprehensive semantic information of the data.Additionally,themodel leverages the joint correlation information of labels and data to introduce the computation of text representation,correcting semantic representationbiases in thedata,andincreasing the accuracyof semantic representation.Ultimately,the model computes the corresponding classification results by synthesizing these rich data semantic representations.Experiments on seven benchmark datasets show that our proposed model achieves competitive results compared to state-of-the-art methods.
基金Project supported by the National Key Research and Development Program of China(Grant No.2023YFF1204402)the National Natural Science Foundation of China(Grant Nos.12074079 and 12374208)+1 种基金the Natural Science Foundation of Shanghai(Grant No.22ZR1406800)the China Postdoctoral Science Foundation(Grant No.2022M720815).
文摘The rapid advancement and broad application of machine learning(ML)have driven a groundbreaking revolution in computational biology.One of the most cutting-edge and important applications of ML is its integration with molecular simulations to improve the sampling efficiency of the vast conformational space of large biomolecules.This review focuses on recent studies that utilize ML-based techniques in the exploration of protein conformational landscape.We first highlight the recent development of ML-aided enhanced sampling methods,including heuristic algorithms and neural networks that are designed to refine the selection of reaction coordinates for the construction of bias potential,or facilitate the exploration of the unsampled region of the energy landscape.Further,we review the development of autoencoder based methods that combine molecular simulations and deep learning to expand the search for protein conformations.Lastly,we discuss the cutting-edge methodologies for the one-shot generation of protein conformations with precise Boltzmann weights.Collectively,this review demonstrates the promising potential of machine learning in revolutionizing our insight into the complex conformational ensembles of proteins.
基金This present research work was supported by the National Key R&D Program of China(No.2021YFB2700800)the GHfund B(No.202302024490).
文摘Peer-to-peer(P2P)overlay networks provide message transmission capabilities for blockchain systems.Improving data transmission efficiency in P2P networks can greatly enhance the performance of blockchain systems.However,traditional blockchain P2P networks face a common challenge where there is often a mismatch between the upper-layer traffic requirements and the underlying physical network topology.This mismatch results in redundant data transmission and inefficient routing,severely constraining the scalability of blockchain systems.To address these pressing issues,we propose FPSblo,an efficient transmission method for blockchain networks.Our inspiration for FPSblo stems from the Farthest Point Sampling(FPS)algorithm,a well-established technique widely utilized in point cloud image processing.In this work,we analogize blockchain nodes to points in a point cloud image and select a representative set of nodes to prioritize message forwarding so that messages reach the network edge quickly and are evenly distributed.Moreover,we compare our model with the Kadcast transmission model,which is a classic improvement model for blockchain P2P transmission networks,the experimental findings show that the FPSblo model reduces 34.8%of transmission redundancy and reduces the overload rate by 37.6%.By conducting experimental analysis,the FPS-BT model enhances the transmission capabilities of the P2P network in blockchain.
基金the Deanship of Scientific Research at Umm Al-Qura University(Grant Code:22UQU4310396DSR65).
文摘Spherical q-linearDiophantine fuzzy sets(Sq-LDFSs)provedmore effective for handling uncertainty and vagueness in multi-criteria decision-making(MADM).It does not only cover the data in two variable parameters but is also beneficial for three parametric data.By Pythagorean fuzzy sets,the difference is calculated only between two parameters(membership and non-membership).According to human thoughts,fuzzy data can be found in three parameters(membership uncertainty,and non-membership).So,to make a compromise decision,comparing Sq-LDFSs is essential.Existing measures of different fuzzy sets do,however,can have several flaws that can lead to counterintuitive results.For instance,they treat any increase or decrease in the membership degree as the same as the non-membership degree because the uncertainty does not change,even though each parameter has a different implication.In the Sq-LDFSs comparison,this research develops the differentialmeasure(DFM).Themain goal of the DFM is to cover the unfair arguments that come from treating different types of FSs opposing criteria equally.Due to their relative positions in the attribute space and the similarity of their membership and non-membership degrees,two Sq-LDFSs formthis preference connectionwhen the uncertainty remains same in both sets.According to the degree of superiority or inferiority,two Sq-LDFSs are shown as identical,equivalent,superior,or inferior over one another.The suggested DFM’s fundamental characteristics are provided.Based on the newly developed DFM,a unique approach tomultiple criterion group decision-making is offered.Our suggestedmethod verifies the novel way of calculating the expert weights for Sq-LDFSS as in PFSs.Our proposed technique in three parameters is applied to evaluate solid-state drives and choose the optimum photovoltaic cell in two applications by taking uncertainty parameter zero.The method’s applicability and validity shown by the findings are contrasted with those obtained using various other existing approaches.To assess its stability and usefulness,a sensitivity analysis is done.
Funding: Provided by Shaanxi Province's Key Research and Development Plan (No. 2022NY-087).
Abstract: To address the slow search and tortuous paths of the Rapidly-exploring Random Tree (RRT) algorithm, a feedback-biased sampling RRT, called FS-RRT, is proposed. First, to improve sampling efficiency and shorten the search time, the sampling area of the random tree is restricted. Second, to make better use of obstacle information and shorten the path length, a feedback-biased sampling strategy replaces traditional random sampling: when an expanding node collides with an obstacle, the collision generates feedback information so that the next expanding node avoids growing within a specific angular range. Third, an inverse optimization strategy is proposed to remove redundant points from the initial path, making the path shorter and more accurate. Finally, to ensure smooth operation of the robot in practice, auxiliary points are used during Bezier-curve optimization so that the optimized cubic Bezier curve does not cross obstacles. The experimental results demonstrate that, compared with the traditional RRT algorithm and other mainstream algorithms, the proposed FS-RRT algorithm performs favorably in terms of running time, number of search iterations, and path length. Moreover, the improved algorithm also performs well in narrow obstacle environments, and its effectiveness is further confirmed by experimental verification.
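The abstract lists four improvements but no code. As a rough point of reference, the sketch below is a plain 2-D RRT with one of the mentioned ideas, a restricted sampling box spanned by the start and goal. The feedback-biased sampling, inverse optimization, and Bezier smoothing steps are not reproduced, and the obstacle shapes and parameters are illustrative.

```python
import math
import random

class Node:
    def __init__(self, x, y, parent=None):
        self.x, self.y, self.parent = x, y, parent

def collides(p, q, obstacles):
    """True if segment p->q intersects any circular obstacle (cx, cy, r)."""
    for cx, cy, r in obstacles:
        px, py, qx, qy = p[0], p[1], q[0], q[1]
        dx, dy = qx - px, qy - py
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy)
                         / (dx * dx + dy * dy + 1e-12)))
        if math.hypot(px + t * dx - cx, py + t * dy - cy) <= r:
            return True
    return False

def rrt(start, goal, obstacles, step=0.5, iters=5000, goal_tol=0.5):
    nodes = [Node(*start)]
    # Restricted sampling box: the region spanned by start and goal, inflated.
    xmin, xmax = sorted((start[0], goal[0]))
    ymin, ymax = sorted((start[1], goal[1]))
    for _ in range(iters):
        sx = random.uniform(xmin - 1.0, xmax + 1.0)
        sy = random.uniform(ymin - 1.0, ymax + 1.0)
        # Extend the tree from its nearest node towards the sample.
        near = min(nodes, key=lambda n: (n.x - sx) ** 2 + (n.y - sy) ** 2)
        theta = math.atan2(sy - near.y, sx - near.x)
        new = Node(near.x + step * math.cos(theta),
                   near.y + step * math.sin(theta), near)
        if collides((near.x, near.y), (new.x, new.y), obstacles):
            continue
        nodes.append(new)
        if math.hypot(new.x - goal[0], new.y - goal[1]) < goal_tol:
            path, n = [], new
            while n:                 # trace the path back to the start
                path.append((n.x, n.y))
                n = n.parent
            return path[::-1]
    return None

path = rrt((0, 0), (10, 10), obstacles=[(5, 5, 1.5)])
```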
Funding: The Researchers Supporting Project number (RSPD2024R848), King Saud University, Riyadh, Saudi Arabia.
Abstract: Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When the training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents an accurate assessment of a model's true ability to generalize to new examples. This paper presents a disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves generalization compared with alternatives that include training and validation data in the test data (a trivial approach is to test the model on the entire hyperspectral dataset to generate the ground-truth maps; this yields higher accuracy figures but ultimately results in poor generalization). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
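As a concrete illustration of disjoint sampling (not the authors' exact pipeline), the sketch below splits labeled hyperspectral pixels into non-overlapping, stratified train/validation/test index sets; the function name, fractions, and variable names are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def disjoint_pixel_split(labels, train_frac=0.1, val_frac=0.1, seed=0):
    """Split labeled pixel indices into disjoint train/val/test sets.

    labels : (H, W) ground-truth map; 0 marks unlabeled pixels.
    Returns three 1-D arrays of flat pixel indices with no overlap.
    """
    idx = np.flatnonzero(labels.ravel() > 0)
    y = labels.ravel()[idx]
    # Carve out the training pixels first, stratified by class.
    train_idx, rest_idx, _, y_rest = train_test_split(
        idx, y, train_size=train_frac, stratify=y, random_state=seed)
    # Split the remainder into validation and test pixels.
    val_size = val_frac / (1.0 - train_frac)
    val_idx, test_idx = train_test_split(
        rest_idx, train_size=val_size, stratify=y_rest, random_state=seed)
    # Sanity check: the three sets share no pixel.
    assert not set(train_idx) & set(val_idx)
    assert not set(train_idx) & set(test_idx)
    assert not set(val_idx) & set(test_idx)
    return train_idx, val_idx, test_idx
```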
Funding: Supported by the National Science Foundation (Grant No. DMS-1440415); partially supported by a grant from the Simons Foundation, NSF Grants DMS-1720171 and DMS-2110895, and a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada.
Abstract: We propose a new framework for the sampling, compression, and analysis of distributions of point sets and other geometric objects embedded in Euclidean spaces. Our approach involves constructing a tensor called the RaySense sketch, which captures nearest neighbors from the underlying geometry of points along a set of rays. We explore various operations that can be performed on the RaySense sketch, leading to different properties and potential applications. Statistical information about the data set can be extracted from the sketch, independent of the ray set, and line integrals on point sets can be computed efficiently using the sketch. We also present several examples illustrating applications of the proposed strategy in practical scenarios.
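The following toy sketch is only loosely modeled on the description above: it records the nearest neighbors of a point cloud at sample locations along randomly chosen rays. The ray parameterization, sampling domain, and array layout are assumptions, not the paper's construction.

```python
import numpy as np
from scipy.spatial import cKDTree

def raysense_sketch(points, n_rays=16, n_samples=32, k=1, seed=0):
    """Toy RaySense-style sketch of a point cloud.

    points : (n, d) point cloud assumed to lie roughly in the unit cube.
    Returns an array of shape (n_rays, n_samples, k, d) holding, for each
    sample location on each ray, the coordinates of its k nearest points.
    """
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    tree = cKDTree(points)
    sketch = np.empty((n_rays, n_samples, k, d))
    for r in range(n_rays):
        # A ray is parameterised here as the segment between two random
        # endpoints in the unit cube.
        a, b = rng.random(d), rng.random(d)
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        ray_pts = (1.0 - t) * a + t * b
        # Nearest neighbours of each sample point along the ray.
        _, idx = tree.query(ray_pts, k=k)
        idx = idx.reshape(n_samples, k)   # k=1 returns a 1-D array
        sketch[r] = points[idx]
    return sketch

sketch = raysense_sketch(np.random.default_rng(1).random((500, 3)))
```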
Funding: The National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A4A1031509).
Abstract: Tourism is a popular activity that allows individuals to escape their daily routines and explore new destinations for various reasons, including leisure, pleasure, or business. A recent study proposed a mathematical concept called the q-rung orthopair fuzzy hypersoft set (q-ROFHS) to enhance the formal representation of human thought processes and to evaluate tourism carrying capacity. This approach can capture the imprecision and ambiguity often present in human perception. Using advanced mathematical tools in this field, the study also incorporates the Einstein aggregation operator and a score function for q-ROFHS values to support multi-attribute decision-making algorithms. By implementing this technique, effective plans can be developed for social and economic development while avoiding detrimental effects such as overcrowding or the environmental damage caused by tourism. A case study of selected tourism carrying capacities demonstrates the proposed methodology.
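The abstract mentions an Einstein aggregation operator and a score function for q-ROFHS values. The sketch below shows the standard Einstein sum and a commonly used score function for plain q-rung orthopair fuzzy values (without the hypersoft-set structure); it is intended as background under those assumptions, not as the paper's operator.

```python
def qrof_einstein_sum(a, b, q=3):
    """Einstein sum of two q-rung orthopair fuzzy values a = (mu_a, nu_a)
    and b = (mu_b, nu_b), each assumed to satisfy mu**q + nu**q <= 1.

    Membership is combined with the Einstein t-conorm and non-membership
    with the Einstein t-norm, both applied to the q-th powers.
    """
    mu_a, nu_a = a
    mu_b, nu_b = b
    mu = ((mu_a**q + mu_b**q) / (1.0 + mu_a**q * mu_b**q)) ** (1.0 / q)
    nu = (nu_a * nu_b) / ((1.0 + (1.0 - nu_a**q) * (1.0 - nu_b**q)) ** (1.0 / q))
    return mu, nu

def qrof_score(x, q=3):
    """A commonly used score function: mu**q - nu**q (higher is better)."""
    mu, nu = x
    return mu**q - nu**q

# Example: combine two expert evaluations and rank the result by score.
combined = qrof_einstein_sum((0.8, 0.3), (0.6, 0.5), q=3)
print(combined, qrof_score(combined, q=3))
```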
Funding: Project supported by the Key National Natural Science Foundation of China (Grant No. 62136005) and the National Natural Science Foundation of China (Grant Nos. 61922087, 61906201, and 62006238).
Abstract: Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed the initial, boundary, and PDE constraints into the loss function. Their performance is generally affected by both training and sampling: training methods focus on overcoming the difficulties caused by the special PDE residual loss of PINNs, while sampling methods are concerned with the location and distribution of the points at which the PDE residual loss is evaluated. A common shortcoming of the original PINNs, however, is that they neglect temporal information during training or sampling when dealing with an important PDE category, time-dependent PDEs, for which temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level; incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By using this sampling ratio, determined jointly by the residual loss and temporal causality, to control the number and location of sampled points in each temporal sub-domain, we provide a practical way of incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn–Hilliard, Korteweg–de Vries, Allen–Cahn, and wave equations, show that the proposed sampling method improves performance. We demonstrate that this relatively simple sampling method can improve prediction accuracy by up to two orders of magnitude compared with other methods, especially when the number of points is limited.
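The abstract does not give the exact sampling-ratio formula. The sketch below illustrates one plausible reading: a Causal-PINN-style causal weight, which decays with the accumulated residual of earlier sub-domains, is combined with each sub-domain's own residual to allocate new collocation points. The weighting constant and the combination rule are assumptions.

```python
import numpy as np

def allocate_samples(residuals, n_total, eps=1.0):
    """Distribute n_total collocation points across temporal sub-domains.

    residuals : (m,) mean PDE residual loss estimated in each of the m
                temporal sub-domains, ordered from earliest to latest.
    Returns an (m,) integer array of points to sample in each sub-domain.
    """
    residuals = np.asarray(residuals, dtype=float)
    # Causal weights: a sub-domain gets low weight until all earlier
    # sub-domains are well resolved (small accumulated residual).
    cum_prev = np.concatenate(([0.0], np.cumsum(residuals)[:-1]))
    causal_w = np.exp(-eps * cum_prev)
    # Combined (unnormalised) sampling ratio: residual times causal weight.
    ratio = residuals * causal_w
    ratio = ratio / ratio.sum()
    counts = np.floor(ratio * n_total).astype(int)
    counts[np.argmax(ratio)] += n_total - counts.sum()  # absorb rounding remainder
    return counts

# Example: 4 temporal sub-domains, 1000 new collocation points.
print(allocate_samples([0.5, 0.2, 0.9, 0.1], n_total=1000))
```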