Crude oil scheduling optimization is an effective method to enhance the economic benefits of oil refining. However, uncertainties, including uncertain demands of crude distillation units (CDUs), might make the production plans made by traditional deterministic optimization models infeasible. A data-driven Wasserstein distributionally robust chance-constrained (WDRCC) optimization approach is proposed in this paper to deal with demand uncertainty in crude oil scheduling. First, a new deterministic crude oil scheduling optimization model is developed as the basis of this approach. The Wasserstein distance is then used to build ambiguity sets from historical data that describe the possible realizations of the probability distributions of uncertain demands. A cross-validation method is proposed to choose suitable radii for these ambiguity sets. The deterministic model is reformulated as a WDRCC optimization model for crude oil scheduling, which guarantees that the demand constraints hold with a desired high probability even in the worst case over the ambiguity sets. The proposed WDRCC model is transformed into an equivalent conditional value-at-risk representation and further derived as a mixed-integer nonlinear programming counterpart. Industrial case studies from a real-world refinery are conducted to show the effectiveness of the proposed method. Out-of-sample tests demonstrate that the solution of the WDRCC model is more robust than those of the deterministic model and the chance-constrained model.
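On one-dimensional samples, the Wasserstein distance that anchors such ambiguity sets reduces to a comparison of sorted order statistics. The sketch below is illustrative only: the demand numbers are made up and the function name is not from the paper.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-size 1-D samples:
    the mean absolute difference of their order statistics."""
    assert len(xs) == len(ys), "sketch assumes equal sample sizes"
    return sum(abs(x - y) for x, y in zip(sorted(xs), sorted(ys))) / len(xs)

# Hypothetical historical CDU demand samples vs. a candidate scenario set (t/h)
historical = [310.0, 295.0, 320.0, 305.0]
scenario = [300.0, 300.0, 315.0, 310.0]
radius = wasserstein_1d(historical, scenario)  # 3.75
```

An ambiguity set then contains every distribution within such a radius of the empirical one; the paper selects the radius by cross-validation.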
This paper proposes a new goodness-of-fit test for normality based on the L~ Wasserstein distance. The authors first construct a probability through Bootstrap resampling. Although this probability is not distributed uniformly on the interval (0, 1) under the null hypothesis, it is shown that its distribution is free of the unknown parameters, which indicates that the probability can be taken as the test statistic. The simulation study of power shows that the new test is better able to discriminate between the normal distribution and distributions with short tails. For such alternatives, it has substantially better power than existing tests, including the Anderson-Darling test and the Shapiro-Wilk test, two of the best tests for normality. In addition, the sensitivity of the tests is investigated in the presence of moderate perturbation, which shows that the new test is rather robust.
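The bootstrap construction can be sketched in pure Python with the standard library's NormalDist. The statistic and its calibration below are simplified stand-ins for the paper's, and all names are illustrative.

```python
import random
import statistics

def wasserstein_to_fitted_normal(sample):
    """L2-type Wasserstein statistic: mean squared distance between the sample
    order statistics and the quantiles of a normal law fitted to the sample."""
    n = len(sample)
    fitted = statistics.NormalDist(statistics.fmean(sample),
                                   statistics.stdev(sample))
    quantiles = [fitted.inv_cdf((i + 0.5) / n) for i in range(n)]
    return sum((x - q) ** 2 for x, q in zip(sorted(sample), quantiles)) / n

def bootstrap_p_value(sample, n_boot=200, seed=0):
    """Bootstrap calibration under the fitted null hypothesis (a simplified
    stand-in for the paper's construction)."""
    rng = random.Random(seed)
    observed = wasserstein_to_fitted_normal(sample)
    null = statistics.NormalDist(statistics.fmean(sample),
                                 statistics.stdev(sample))
    exceed = sum(
        wasserstein_to_fitted_normal(
            null.samples(len(sample), seed=rng.randrange(2**32))
        ) >= observed
        for _ in range(n_boot)
    )
    return (exceed + 1) / (n_boot + 1)
```

A small statistic relative to the bootstrap replicates (a large p-value) is consistent with normality; the paper's actual test transforms such a probability further so that its null distribution is parameter-free.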
Distributionally robust optimization is a dominant paradigm for decision-making problems where the distribution of random variables is unknown. We investigate a distributionally robust optimization problem with ambiguities in the objective function and countably infinite constraints. The ambiguity set is defined as a Wasserstein ball centered at the empirical distribution. Based on the concentration inequality of the Wasserstein distance, we establish the asymptotic convergence property of the data-driven distributionally robust optimization problem as the sample size goes to infinity. We show that, with probability 1, the optimal value and the optimal solution set of the data-driven distributionally robust problem converge to those of the stochastic optimization problem with the true distribution. Finally, we provide numerical evidence for the established theoretical results.
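The almost-sure convergence of the empirical distribution to the true one in Wasserstein distance can be observed numerically. This toy check against a Uniform(0,1) law is only a sketch of that phenomenon, not the paper's experiment.

```python
import random

def w1_vs_uniform(sample):
    """1-Wasserstein distance between the empirical distribution of a sample
    and the true Uniform(0,1) law, via quantile matching."""
    n = len(sample)
    return sum(abs(x - (i + 0.5) / n) for i, x in enumerate(sorted(sample))) / n

rng = random.Random(42)
d_small = w1_vs_uniform([rng.random() for _ in range(20)])
d_large = w1_vs_uniform([rng.random() for _ in range(20000)])
# d_large should be much smaller: the empirical law concentrates on the truth,
# which is what drives the convergence of the data-driven DRO problem.
```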
We present a multi-phase image segmentation method based on the histogram of the Gabor feature space, which consists of a set of Gabor-filter responses with various orientations, scales and frequencies. Our model replaces the error-function term in the original fuzzy region competition model with the squared 2-Wasserstein distance, a metric that measures the distance between two histograms. The energy functional is minimized by an alternating minimization method, and the existence of closed-form solutions is guaranteed when the exponent of the fuzzy membership term is 1 or 2. We test our model on both simple synthetic texture images and complex natural images with two or more phases. Experimental results are shown and compared with other recent results.
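For one-dimensional histograms, the squared 2-Wasserstein distance can be computed by comparing quantile functions. The grid approximation below is a simplified illustration, not the paper's implementation; the function names are invented.

```python
def hist_quantile(bins, probs, u):
    """Inverse CDF of a discrete histogram: the smallest bin value whose
    cumulative probability reaches level u."""
    c = 0.0
    for b, p in zip(bins, probs):
        c += p
        if c >= u - 1e-12:  # small tolerance against float round-off
            return b
    return bins[-1]

def w2_squared(bins, p, q, grid=1000):
    """Squared 2-Wasserstein distance between two histograms on common bins,
    approximated by averaging squared quantile differences over a grid."""
    us = [(k + 0.5) / grid for k in range(grid)]
    return sum((hist_quantile(bins, p, u) - hist_quantile(bins, q, u)) ** 2
               for u in us) / grid
```

Two point masses one bin apart come out at distance squared 1, and identical histograms at 0, matching the metric property the segmentation energy relies on.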
The inter-cycle correlation of fission source distributions (FSDs) in the Monte Carlo power iteration process results in variance underestimation of tallied physical quantities, especially in large local tallies. This study provides a mesh-free, semiquantitative method for eliminating variance underestimation and obtaining a credible confidence interval for the tallied results. The method comprises two procedures: Estimation and Elimination. The FSD inter-cycle correlation length is estimated in the Estimation procedure using the Sliced Wasserstein distance algorithm. The batch method is then used in the Elimination procedure, where the FSD inter-cycle correlation length is proved to be the optimal batch length for eliminating the variance underestimation problem. We exemplify this method using the OECD sphere array model and the 3D PWR BEAVRS model. The results show that the average variance underestimation ratios of local tallies declined from 37%-87% to within ±5% in these models.
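The Sliced Wasserstein distance used in the Estimation procedure averages one-dimensional Wasserstein distances over random projection directions. The 2-D toy below illustrates only the distance itself, not the correlation-length estimator, and all parameters are illustrative.

```python
import math
import random

def w1_sorted(xs, ys):
    """1-D empirical 1-Wasserstein distance of two equal-size samples."""
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def sliced_wasserstein(points_a, points_b, n_dirs=64, seed=0):
    """Sliced 1-Wasserstein distance between equal-size 2-D point clouds:
    average the 1-D distance over random projection directions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, math.pi)
        c, s = math.cos(theta), math.sin(theta)
        pa = [c * x + s * y for x, y in points_a]
        pb = [c * x + s * y for x, y in points_b]
        total += w1_sorted(pa, pb)
    return total / n_dirs
```

Comparing FSD samples from cycles k and k+lag with such a distance, as a function of the lag, is the kind of signal from which a correlation length can be read off.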
In order to improve the quality of low-dose computed tomography (CT) images, this paper proposes an improved image denoising approach based on WGAN-GP with the Wasserstein distance. To improve training and convergence efficiency, the method introduces a gradient penalty term into the WGAN network. A novel perceptual loss is introduced to preserve the texture information of low-dose images to which the diagnostician's eye is sensitive. The experimental results show that, compared with state-of-the-art methods, the time complexity is reduced and the visual quality of low-dose CT images is significantly improved.
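The gradient penalty term drives the critic's gradient norm toward 1. For a linear critic the gradient is just the weight vector, which makes the penalty visible in closed form; real WGAN-GP evaluates an autodiff gradient at random interpolates between real and generated images, so this is only a toy with an assumed default coefficient.

```python
import math

def gradient_penalty_linear(w, lam=10.0):
    """WGAN-GP penalty lam * (||grad D|| - 1)^2 for a linear critic
    D(x) = w . x + b, whose gradient norm is ||w|| everywhere."""
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2
```

The penalty vanishes exactly when the critic is 1-Lipschitz-tight (unit gradient norm) and grows quadratically as the norm drifts away, which is what stabilizes training relative to weight clipping.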
In this paper, we present our research on building computing machines' consciousness of intuitive geometry based on mathematical experiments and statistical inference. The investigation consists of the following five steps. First, we select a set of geometric configurations and, for each configuration, construct a large amount of geometric data as observation data using dynamic geometry programs together with a pseudo-random number generator. Second, we draw on the geometric predicates used in the algebraic method of machine proof of geometric theorems to construct statistics suitable for measuring approximate geometric relationships in the observation data. In the third step, we propose a geometric relationship detection method based on the similarity of data distributions, where the search space is reduced to small batches of data by pre-searching for efficiency, and hypothesis tests of the possible geometric relationships in the search results are performed. In the fourth step, we additionally explore the integer relations of the line segment lengths in the geometric configurations. In the final step, we carry out numerical experiments on the pre-selected geometric configurations to verify the effectiveness of our method. The results show that a computer equipped with the above procedures can find the hidden geometric relations in randomly generated data from the related geometric configurations, and in this sense, computing machines can actually attain a certain consciousness of intuitive geometry, as early civilized humans did in ancient Mesopotamia.
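A statistic for an approximate geometric relationship can be as simple as an algebraic predicate averaged over the observed data. This collinearity check over sampled point triples is an illustrative sketch of the idea, not the paper's detection pipeline.

```python
def collinearity_statistic(triples):
    """Mean absolute triangle area over observed point triples; values near
    zero suggest the three points satisfy an approximate collinearity
    relation in the randomly generated data."""
    total = 0.0
    for (ax, ay), (bx, by), (cx, cy) in triples:
        # Cross product of edge vectors = twice the signed triangle area
        total += abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
    return total / len(triples)
```

A hypothesis test would then compare this statistic against its distribution under unconstrained random configurations.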
Image denoising is often used as a preprocessing step in computer vision tasks and can help improve the accuracy of image processing models. Due to imperfections in imaging systems, transmission media and recording equipment, digital images are often contaminated with various noises during their formation, which degrades visual quality and even hinders recognition. Noise pollution directly affects image edge detection, feature extraction, pattern recognition, etc., making it difficult to break through this bottleneck by modifying the model alone. Many traditional filtering methods show poor performance since they lack optimal expressiveness and adaptation for specific images. Meanwhile, deep learning technology opens up new possibilities for image denoising. In this paper, we propose a novel neural network for image denoising based on generative adversarial networks. Inspired by U-Net, our method employs a novel symmetric encoder-decoder generator network. The encoder uses convolutional neural networks to extract features, while the decoder outputs the noise in the images through deconvolutional neural networks. In particular, shortcuts are added between designated layers, which preserve image texture details and prevent exploding gradients. Besides, in order to improve the training stability of the model, we add the Wasserstein distance to the loss function as an optimization. We use the peak signal-to-noise ratio (PSNR) to evaluate our model and demonstrate its effectiveness with experimental results. When compared to state-of-the-art approaches, our method presents competitive performance.
The Random Batch Method proposed in our previous work (Jin et al., J Comput Phys, 2020) is not only a numerical method for interacting particle systems and their mean-field limit, but can also be viewed as a model of a particle system in which particles interact, at discrete times, with randomly selected mini-batches of particles. In this paper, we investigate the mean-field limit of this model as the number of particles N→∞. Unlike the classical mean-field limit for interacting particle systems, where the law of large numbers plays a key role and chaos is propagated to later times, the mean-field limit here does not rely on the law of large numbers, and chaos is imposed at every discrete time. Despite this, we not only justify this (discrete-in-time) mean-field limit but also show that, as the discrete time interval τ→0, the limit approaches the solution of the nonlinear Fokker-Planck equation arising as the mean-field limit of the original interacting particle system, in the Wasserstein distance.
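One Random Batch step with batch size p = 2 can be sketched as a random pairing followed by within-pair interaction. The linear attractive kernel, step size and particle count here are illustrative choices, not the paper's model.

```python
import random
import statistics

def random_batch_step(xs, dt, rng):
    """One Random Batch step with batch size p = 2: reshuffle the particles
    into pairs; each particle interacts only with its partner, the force
    averaged over the p - 1 = 1 batch neighbor instead of all N - 1."""
    n = len(xs)
    assert n % 2 == 0, "sketch assumes an even number of particles"
    idx = list(range(n))
    rng.shuffle(idx)
    new = list(xs)
    for k in range(0, n, 2):
        i, j = idx[k], idx[k + 1]
        force = -(xs[i] - xs[j])  # attractive linear interaction kernel
        new[i] += dt * force
        new[j] -= dt * force
    return new

rng = random.Random(7)
xs = [0.0, 1.0, 2.0, 3.0]
ys = xs
for _ in range(20):
    ys = random_batch_step(ys, 0.1, rng)
# The pairwise forces are antisymmetric, so the center of mass is conserved
# while the attractive kernel contracts the cloud.
```

The cost per step is O(N) rather than O(N^2), which is the computational point of the method; the abstract's result concerns what happens to the law of such a system as N→∞ and then τ→0.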
Purpose – Conventional image super-resolution reconstruction with conventional deep learning architectures suffers from hard training and vanishing gradients. To solve these problems, the purpose of this paper is to propose a novel image super-resolution algorithm based on improved generative adversarial networks (GANs) with the Wasserstein distance and a gradient penalty. Design/methodology/approach – The proposed algorithm first introduces the conventional GANs architecture, the Wasserstein distance and the gradient penalty for the task of image super-resolution reconstruction (SRWGANs-GP). In addition, a novel perceptual loss function is designed for SRWGANs-GP to meet the task of image super-resolution reconstruction. The content loss is extracted from the deep model's feature maps, and such features are used to calculate the mean square error (MSE) for the generator loss. Findings – To validate the effectiveness and feasibility of the proposed algorithm, extensive comparison experiments are conducted on three common data sets, i.e. Set5, Set14 and BSD100. Experimental results show that the proposed SRWGANs-GP architecture has a stable error gradient and iterative convergence. Compared with the baseline deep models, the proposed GANs models achieve a significant improvement in performance and efficiency for image super-resolution reconstruction. The MSE calculated from the deep model's feature maps gives more advantages for reconstructing contour and texture. Originality/value – Compared with the state-of-the-art algorithms, the proposed algorithm obtains better performance on image super-resolution and better reconstruction results for contour and texture.
This paper discusses a numerical method for computing the evolution of a large interacting system of quantum particles. The idea of the random batch method is to replace the total interaction of each particle with the N−1 other particles by the interaction with p≪N particles chosen at random at each time step, multiplied by (N−1)/p. This reduces the computational cost of computing the interaction potential per time step from O(N^2) to O(N). For simplicity, we consider in this work only the case p=1; in other words, we assume that N is even and that, at each time step, the N particles are organized in N/2 pairs, with a random reshuffling of the pairs at the beginning of each time step. We obtain a convergence estimate for the Wigner transform of the single-particle reduced density matrix of the particle system at time t that is both uniform in N>1 and independent of the Planck constant ℏ. The key idea is to use a new type of distance on the set of quantum states that is reminiscent of the Wasserstein distance of exponent 1 (or Monge-Kantorovich-Rubinstein distance) on the set of Borel probability measures on R^d used in the context of optimal transport.
We study a class of diffusion processes determined by solutions X(t) to a stochastic functional differential equation with infinite memory and random switching represented by a Markov chain Λ(t). Under suitable conditions, we investigate the convergence and boundedness of both the solutions X(t) and the functional solutions Xt. We show that two solutions (resp., functional solutions) from different initial data living in the same initial switching regime will be close with high probability as time tends to infinity, and that the solutions (resp., functional solutions) are uniformly bounded in the mean-square sense. Moreover, we prove the existence and uniqueness of the invariant probability measure of the two-component Markov-Feller process (Xt, Λ(t)), and establish exponential bounds on the rate of convergence to the invariant probability measure under the Wasserstein distance. Finally, we provide a concrete example to illustrate our main results.
We investigate a particle system with mean-field interaction living in a random environment characterized by a regime-switching process. The switching process is allowed to depend on the particle system. The well-posedness and various properties of the limiting conditional McKean-Vlasov SDEs are studied, and the conditional propagation of chaos is established with an explicit estimate of the convergence rate.
In this work, by constructing optimal Markovian couplings, we investigate the exponential convergence rate in the Wasserstein distance for the transmission control protocol process. Most importantly, we provide a variational formula for the lower bound of the exponential convergence rate.
Due to their intrinsic link with nonlinear Fokker-Planck equations and many other applications, distribution dependent stochastic differential equations (DDSDEs) have been intensively investigated. In this paper, we summarize some recent progress in the study of DDSDEs, including the correspondence of weak solutions and nonlinear Fokker-Planck equations, well-posedness, regularity estimates, exponential ergodicity, long-time large deviations, and comparison theorems.
In order to meet real-time performance requirements, intelligent decisions in Internet of Things applications must take place right at the network edge. Pushing the artificial intelligence frontier to achieve edge intelligence is nontrivial due to the constrained computing resources and limited training data at the network edge. To tackle these challenges, we develop a distributionally robust optimization (DRO)-based edge learning algorithm, where the uncertainty model is constructed to foster the synergy of cloud knowledge and local training. Specifically, the cloud-transferred knowledge takes the form of a Dirichlet process prior distribution for the edge model parameters, and the edge device further constructs an uncertainty set centered around the empirical distribution of its local samples. The edge learning DRO problem, subject to these two distributional uncertainty constraints, is recast as a single-layer optimization problem using a duality approach. We then use an Expectation-Maximization-inspired method to derive a convex relaxation, based on which we devise algorithms to learn the edge model. Furthermore, we show that the meta-learning fast-adaptation procedure is equivalent to our proposed Dirichlet process prior-based approach. Finally, extensive experiments are conducted to showcase the performance gain over standard approaches using edge data only.
Funding for the crude oil scheduling study: National Natural Science Foundation of China (61988101, 62073142, 22178103); National Natural Science Fund for Distinguished Young Scholars (61925305); International (Regional) Cooperation and Exchange Project (61720106008).
Funding for the normality test study: National Natural Science Foundation of China (11201005, 11071015); Natural Science Foundation of Anhui Province (1308085QA13, 1208085MA11); Key Project of Anhui Education Committee (KJ2012A135, 2012SQRL028ZD).
Funding for the distributionally robust optimization study: National Natural Science Foundation of China (11991023, 11901449, 11735011).
Funding for the variance underestimation study: China Nuclear Power Engineering Co., Ltd. Scientific Research Project (KY22104); China Postdoctoral Science Foundation fellowship (2022M721793).
Funding for the low-dose CT denoising study: National Natural Science Foundation of China (61672279); "Six Talents Peak" Project of Jiangsu (2012-WLW-023); Open Foundation of the State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Nanjing Hydraulic Research Institute, China (2016491411).
Funding for the GAN image denoising study: National Natural Science Foundation of China (61872231, 61701297); Major Program of the National Social Science Foundation of China (20&ZD130).
Funding for the Random Batch Method mean-field study: National Natural Science Foundation of China (31571071, 11901389, 11971314); Shanghai Sailing Program (19YF1421300).
Funding: The work of Shi Jin was partly supported by NSFC grants No. 11871297 and No. 31571071. We thank E. Moulines for kindly indicating several references on stochastic approximation.
Abstract: This paper discusses a numerical method for computing the evolution of large interacting systems of quantum particles. The idea of the random batch method is to replace the total interaction of each particle with the N−1 other particles by the interaction with p ≪ N particles chosen at random at each time step, multiplied by (N−1)/p. This reduces the computational cost of evaluating the interaction potential per time step from O(N^2) to O(N). For simplicity, we consider in this work only the case p = 1; in other words, we assume that N is even and that, at each time step, the N particles are organized in N/2 pairs, with a random reshuffling of the pairs at the beginning of each time step. We obtain a convergence estimate for the Wigner transform of the single-particle reduced density matrix of the particle system at time t that is both uniform in N > 1 and independent of the Planck constant ℏ. The key idea is to use a new type of distance on the set of quantum states that is reminiscent of the Wasserstein distance of exponent 1 (or Monge-Kantorovich-Rubinstein distance) on the set of Borel probability measures on R^d used in the context of optimal transport.
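The p = 1 pairing and the (N−1)/p rescaling described above can be sketched classically. This is only an illustration of the combinatorial step, not the quantum dynamics; the kernel and names are assumptions for the example.

```python
import random

def random_pairs(n, rng):
    """Random reshuffling into n/2 disjoint pairs (the p = 1 case):
    each particle interacts with exactly one partner per time step."""
    assert n % 2 == 0
    idx = list(range(n))
    rng.shuffle(idx)
    return [(idx[k], idx[k + 1]) for k in range(0, n, 2)]

def paired_drift(x, pairs, kernel):
    """Single-partner interaction rescaled by (n - 1), so that its
    expectation over the random pairing matches the full n-body sum."""
    n = len(x)
    drift = [0.0] * n
    for i, j in pairs:
        drift[i] = (n - 1) * kernel(x[i], x[j])
        drift[j] = (n - 1) * kernel(x[j], x[i])
    return drift
```

Each evaluation touches n/2 pairs instead of n(n−1)/2, which is exactly the O(N^2) → O(N) saving the abstract points out.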
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant No. 12071031).
Abstract: We study a class of diffusion processes determined by solutions X(t) to stochastic functional differential equations with infinite memory and random switching represented by a Markov chain Λ(t). Under suitable conditions, we investigate convergence and boundedness of both the solutions X(t) and the functional solutions X_t. We show that two solutions (resp. functional solutions) starting from different initial data in the same initial switching regime will be close with high probability as time tends to infinity, and that the solutions (resp. functional solutions) are uniformly bounded in the mean square sense. Moreover, we prove existence and uniqueness of the invariant probability measure of the two-component Markov-Feller process (X_t, Λ(t)), and establish exponential bounds on the rate of convergence to the invariant probability measure under the Wasserstein distance. Finally, we provide a concrete example to illustrate our main results.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 11771327, 11831014).
Abstract: We investigate a particle system with mean-field interaction living in a random environment characterized by a regime-switching process. The switching process is allowed to depend on the particle system. The well-posedness and various properties of the limiting conditional McKean-Vlasov SDEs are studied, and the conditional propagation of chaos is established with an explicit estimate of the convergence rate.
Funding: Supported by the NNSF of China (Grant Nos. 11771327, 2018JJ2478, 11831014, 12071340).
Abstract: In this work, by constructing optimal Markovian couplings, we investigate the exponential convergence rate in the Wasserstein distance for the transmission control protocol process. Most importantly, we provide a variational formula for the lower bound of the exponential convergence rate.
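Several of the abstracts above measure convergence in the Wasserstein distance of exponent 1. For two equal-size empirical samples on the real line, this distance has a simple closed form worth recording: sort both samples and average the absolute gaps. A minimal sketch (function name is an assumption):

```python
def w1_empirical(xs, ys):
    """L1 (exponent-1) Wasserstein distance between two equal-size
    empirical distributions on R: the optimal coupling matches sorted
    samples, so the distance is the mean absolute gap after sorting."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a sample by c shifts the distance by exactly c.
print(w1_empirical([0.0, 1.0], [1.0, 2.0]))  # 1.0
```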
Funding: This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 11771326, 11831014, 11921001, 11801406).
Abstract: Due to their intrinsic link with nonlinear Fokker-Planck equations and many other applications, distribution dependent stochastic differential equations (DDSDEs) have been intensively investigated. In this paper, we summarize some recent progress in the study of DDSDEs, including the correspondence between weak solutions and nonlinear Fokker-Planck equations, well-posedness, regularity estimates, exponential ergodicity, long-time large deviations, and comparison theorems.
Funding: This work was supported in part by NSF under Grant CPS-1739344, ARO under Grant W911NF-16-1-0448, and DTRA under Grant HDTRA1-13-1-0029. Part of this work will appear in the Proceedings of the 40th IEEE International Conference on Distributed Computing Systems (ICDCS), Singapore, July 8-10, 2020.
Abstract: In order to meet real-time performance requirements, intelligent decisions in Internet of Things applications must take place right here, right now, at the network edge. Pushing the artificial intelligence frontier to achieve edge intelligence is nontrivial due to the constrained computing resources and limited training data at the network edge. To tackle these challenges, we develop a distributionally robust optimization (DRO)-based edge learning algorithm, where the uncertainty model is constructed to foster the synergy of cloud knowledge and local training. Specifically, the cloud-transferred knowledge takes the form of a Dirichlet process prior distribution on the edge model parameters, and the edge device further constructs an uncertainty set centered around the empirical distribution of its local samples. The edge learning DRO problem, subject to these two distributional uncertainty constraints, is recast as a single-layer optimization problem using a duality approach. We then use an Expectation-Maximization-inspired method to derive a convex relaxation, based on which we devise algorithms to learn the edge model. Furthermore, we illustrate that the meta-learning fast-adaptation procedure is equivalent to our proposed Dirichlet process prior-based approach. Finally, extensive experiments are implemented to showcase the performance gain over standard approaches that use edge data only.
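The "uncertainty set centered around the empirical distribution" idea can be illustrated with a deliberately simplified sketch. This is not the paper's Dirichlet-process formulation: it uses a total-variation ball restricted to the observed sample support, where the worst-case distribution simply shifts ε probability mass from the lowest-loss samples onto the worst one. All names and the choice of ball are assumptions for illustration.

```python
def tv_worst_case(losses, eps):
    """Worst-case expected loss over a total-variation ball of radius eps
    around the empirical distribution, restricted to the sample support:
    strip eps mass from the smallest losses and pile it on the largest."""
    n = len(losses)
    srt = sorted(losses)
    weights = [1.0 / n] * n
    remaining = min(eps, 1.0)
    for i in range(n):                 # remove mass from low-loss samples first
        take = min(weights[i], remaining)
        weights[i] -= take
        remaining -= take
    weights[-1] += min(eps, 1.0)       # reassign it to the worst sample
    return sum(w, * (l,))[0] if False else sum(w * l for w, l in zip(weights, srt))

# eps = 0 recovers the plain empirical mean; larger eps is more pessimistic.
print(tv_worst_case([0.0, 1.0, 2.0, 3.0], 0.25))  # 2.25 vs. empirical mean 1.5
```

Minimizing this pessimistic objective instead of the empirical mean is the essence of DRO: the learned model hedges against distribution shift within the ball, which matters precisely when local edge data are scarce.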