We study distributed optimization problems over a directed network, where nodes aim to minimize the sum of local objective functions via directed communications with neighbors. Many algorithms are designed to solve this problem for synchronized or randomly activated implementations, which may create deadlocks in practice. In sharp contrast, we propose a fully asynchronous push-pull gradient (APPG) algorithm, where each node updates without waiting for any other node, using possibly delayed information from neighbors. We then construct two novel augmented networks to analyze asynchrony and delays, and quantify the convergence rate from the worst-case point of view. In particular, all nodes of APPG converge to the same optimal solution at a linear rate of O(λ^(k)) if the local functions have Lipschitz-continuous gradients and their sum satisfies the Polyak-Łojasiewicz condition (convexity is not required), where λ ∈ (0,1) is explicitly given and the virtual counter k increases by one whenever any node updates. Finally, the advantage of APPG over its synchronous counterpart and its linear speedup efficiency are numerically validated on a logistic regression problem.
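The push-pull update that APPG builds on can be sketched in a few lines. The sketch below is synchronous and delay-free (the asynchrony and augmented-network analysis of the paper are not reproduced), uses hypothetical quadratic local objectives f_i(x) = (x - b_i)²/2, and for brevity takes the row-stochastic "pull" matrix R and the column-stochastic "push" matrix C equal to one doubly stochastic directed-ring matrix; push-pull in general only needs row-/column-stochasticity.

```python
import numpy as np

# Synchronous push-pull sketch on a 3-node directed ring with self-loops.
# Hypothetical local objectives f_i(x) = 0.5*(x - b_i)^2; the network-wide
# optimum is mean(b). APPG runs this kind of update asynchronously, with delays.
b = np.array([1.0, 2.0, 6.0])
grad = lambda x: x - b                 # stacked local gradients

# This ring matrix is both row- and column-stochastic, which keeps the sketch short.
R = C = np.array([[0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5]])

x = np.zeros(3)
y = grad(x)                            # gradient tracker, initialized at local gradients
alpha = 0.1
for _ in range(500):
    x_new = R @ x - alpha * y          # "pull" consensus step plus tracked-gradient step
    y = C @ y + grad(x_new) - grad(x)  # "push" step keeps sum(y) = sum of local gradients
    x = x_new
print(x)                               # all entries ≈ mean(b) = 3.0
```

At any consensus fixed point the tracking invariant forces the summed gradient to vanish, so the common value must minimize the sum of local objectives; this is the mechanism the paper's worst-case analysis extends to delayed, asynchronous updates.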
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) is characterized by strong coupling, non-convexity, and nonlinearity. Centralized optimization methods incur a high communication cost and complex modeling, while traditional numerical iterative solutions cannot cope with uncertainty or meet efficiency requirements, making them difficult to apply online. For the coordinated optimization problem of the electricity-gas-heat IES, in this study we constructed a model of the distributed IES with a dynamic distribution factor and transformed the centralized optimization problem into a distributed optimization problem in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to account for the impact of real-time supply and demand changes on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving the system economy. Compared with centralized optimization, the distributed model with multiple decision centers achieves similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training. Compared with traditional iterative solution methods, it copes better with uncertainty and enables real-time decision making, which is conducive to online application. Finally, we verify the effectiveness of the proposed method on an example IES coupled with three energy hub agents.
This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution point while their control inputs remain constrained within their own nonconvex regions. Notably, the mechanism is designed to tackle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on a dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. Using nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of the switching communication graphs is jointly strongly connected. Finally, a numerical simulation example is used to demonstrate the theoretical results.
A continuous-time distributed optimization problem was studied for second-order heterogeneous multi-agent systems. The aim of this study is to drive the velocities of all agents to a common value that converges to the optimal value minimizing the sum of the local cost functions. First, an effective distributed controller using only local information was designed. Then, the stability and optimality of the system were verified. Finally, a simulation case was used to illustrate the analytical results.
In this paper, the problem of online distributed optimization subject to a convex set is studied via a network of agents. Each agent only has access to a noisy gradient of its own objective function and can communicate with its neighbors via a network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing works on online distributed algorithms involving stochastic gradients only provide expectation bounds on the regret. Different from them, we study high-probability bounds on the regret, i.e., the sublinear regret bound is characterized in terms of the natural logarithm of the inverse of the failure probability. Under mild assumptions on graph connectivity, we prove that the dynamic regret grows sublinearly with high probability if the deviation of the minimizer sequence grows sublinearly with the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results.
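A minimal synchronous sketch of the distributed mirror descent step, under assumptions not taken from the paper: decisions live on the probability simplex, the mirror map is the negative entropy (so the mirror step becomes an exponentiated-gradient update), the graph is complete with uniform doubly stochastic weights, and gradient noise is Gaussian. The `targets` data is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                                   # agents, simplex dimension
W = np.full((n, n), 1.0 / n)                  # doubly stochastic mixing (complete graph)
targets = rng.dirichlet(np.ones(d), size=n)   # hypothetical local data
# Local objectives f_i(x) = 0.5*||x - targets[i]||^2; their sum is minimized
# over the simplex at mean(targets), an interior point here.

X = np.full((n, d), 1.0 / d)                  # every agent starts at the simplex center
eta = 0.05
for _ in range(3000):
    G = (X - targets) + 0.01 * rng.standard_normal((n, d))  # noisy local gradients
    Y = W @ X                                 # consensus (mixing) step
    X = Y * np.exp(-eta * G)                  # entropic mirror step, i.e. an
    X /= X.sum(axis=1, keepdims=True)         # exponentiated-gradient update
print(X)                                      # each row ≈ targets.mean(axis=0)
```

The multiplicative update keeps every iterate strictly inside the simplex, which is exactly what the mirror map buys over a Euclidean projection.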
In this paper, the distributed optimization problem is investigated for a class of general nonlinear model-free multi-agent systems. The dynamical model of each agent is unknown and only the input/output data are available. A model-free adaptive control method is employed, by which the original unknown nonlinear system is equivalently converted into a dynamic linearized model. An event-triggered consensus scheme is developed to guarantee that the consensus error of the outputs of all agents is convergent. Then, by means of the distributed gradient descent method, a novel event-triggered model-free adaptive distributed optimization algorithm is put forward. Sufficient conditions are established to ensure the consensus and optimality of the addressed system. Finally, simulation results are provided to validate the effectiveness of the proposed approach.
This paper focuses on the online distributed optimization problem over multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents have no access to information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and the real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that, under the proposed algorithm, each agent uses only gradient estimates instead of real gradient information to make decisions. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
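The multi-point estimation idea can be illustrated with a central-difference sketch: each coordinate of the gradient is estimated from two function queries, so a full estimate needs 2d evaluations and no real gradient. The quadratic objective and stepsize below are made up for illustration; the paper's estimator and its use inside the online distributed algorithm differ in detail.

```python
import numpy as np

def multipoint_grad(f, x, delta=1e-4):
    """Estimate grad f(x) from 2*d function values (central differences)."""
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = 1.0
        g[j] = (f(x + delta * e) - f(x - delta * e)) / (2 * delta)
    return g

c = np.array([1.0, -2.0])
f = lambda x: np.sum((x - c) ** 2)        # toy convex objective, minimizer c
x = np.zeros(2)
for _ in range(200):
    x = x - 0.1 * multipoint_grad(f, x)   # descend using function queries only
print(x)                                  # ≈ [1, -2]
```

For a quadratic the central difference is exact up to rounding, which is why this zeroth-order loop matches plain gradient descent here.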
This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of the agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost function is strongly pseudoconvex and the real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving a single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the gradient estimate, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of the dynamic regret. Under mild assumptions on the graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results.
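Unlike the multi-point scheme of the previous abstract, a single-point estimator uses one function query per estimate: ĝ = (d/δ) f(x + δu) u with u drawn uniformly from the unit sphere. It is far noisier, but unbiased for the gradient of a smoothed surrogate of f (and, for the quadratic below, for the gradient of f itself). A Monte Carlo sanity check with a made-up point and objective:

```python
import numpy as np

rng = np.random.default_rng(3)
d, delta, N = 3, 0.1, 1_000_000
f = lambda X: np.sum(X ** 2, axis=-1)          # grad f(x) = 2x
x = np.array([1.0, -2.0, 0.5])

U = rng.standard_normal((N, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)  # uniform directions on the sphere
g_hat = (d / delta) * f(x + delta * U)[:, None] * U   # one f-query per sample
print(g_hat.mean(axis=0))                      # ≈ grad f(x) = [2, -4, 1]
```

The per-sample variance scales like (d·f(x)/δ)², which is why single-point methods need many queries or careful stepsizes; the paper accounts for this through the expected dynamic regret.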
We investigate the distributed optimization problem, where a network of nodes works together to minimize a global objective that is a finite sum of their stored local functions. Since nodes exchange optimization parameters through the wireless network, large-scale training models can create communication bottlenecks, resulting in slower training times. To address this issue, CHOCO-SGD was proposed, which allows compressing information with arbitrary precision without reducing the convergence rate for strongly convex objective functions. Nevertheless, many convex functions are not strongly convex (such as logistic regression or the Lasso), which raises the question of whether this algorithm can be applied to non-strongly convex functions. In this paper, we provide the first theoretical analysis of the convergence rate of CHOCO-SGD on non-strongly convex objectives. We derive a sufficient condition, which bounds the fidelity of compression, to guarantee convergence. Moreover, our analysis demonstrates that within this fidelity threshold, the algorithm can significantly reduce the transmission burden while maintaining the same order of convergence rate as its no-compression equivalent. Numerical experiments further validate the theoretical findings by demonstrating that CHOCO-SGD improves communication efficiency while keeping the same order of convergence rate. The experiments also show that the algorithm fails to converge with low compression fidelity and under time-varying topologies. Overall, our study offers valuable insights into the applicability of CHOCO-SGD to non-strongly convex objectives, and provides practical guidelines for researchers seeking to utilize this algorithm in real-world scenarios.
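CHOCO-SGD is analyzed for compressors with bounded fidelity. A standard example of such a compressor (an illustrative assumption here, not necessarily the paper's choice) is the top-k sparsifier, which is a δ-contraction with δ = k/d:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Fidelity bound: the d-k dropped entries are the smallest in magnitude, so
# ||v - top_k(v)||^2 <= (1 - k/d) * ||v||^2 holds for every v.
rng = np.random.default_rng(1)
d, k = 10, 3
v = rng.standard_normal(d)
print(np.sum((v - top_k(v, k)) ** 2) <= (1 - k / d) * np.sum(v ** 2))  # True
```

Transmitting only k of d coordinates cuts the payload by a factor of d/k; the fidelity threshold in the analysis quantifies how far this can be pushed before convergence is lost.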
This paper considers distributed stochastic optimization, in which a number of agents cooperate to optimize a global objective function through local computations and information exchanges with neighbors over a network. Stochastic optimization problems are usually tackled by variants of projected stochastic gradient descent. However, projecting a point onto a feasible set is often expensive. The Frank-Wolfe (FW) method has well-documented merits in handling convex constraints, but existing stochastic FW algorithms are basically developed for centralized settings. In this context, the present work puts forth a distributed stochastic Frank-Wolfe solver by judiciously combining Nesterov's momentum and gradient tracking techniques for stochastic convex and nonconvex optimization over networks. It is shown that the convergence rate of the proposed algorithm is O(k^(-1/2)) for convex optimization and O(1/log_2(k)) for nonconvex optimization. The efficacy of the algorithm is demonstrated by numerical simulations against a number of competing alternatives.
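The projection-free ingredient is easiest to see in a plain deterministic Frank-Wolfe sketch over an ℓ1 ball, where the linear minimization oracle returns a signed, scaled basis vector instead of a projection. The quadratic objective and radius are made up; the paper's solver additionally uses stochastic gradients, Nesterov momentum, and gradient tracking across the network.

```python
import numpy as np

r = 2.0
c = np.array([1.0, -0.5, 0.3])       # f(x) = 0.5*||x - c||^2; ||c||_1 < r, so min = c
x = np.zeros(3)
for k in range(2000):
    g = x - c                         # gradient (the paper would use a stochastic one)
    j = np.argmax(np.abs(g))
    s = np.zeros(3)
    s[j] = -r * np.sign(g[j])         # LMO over {||x||_1 <= r}: a vertex, no projection
    gamma = 2.0 / (k + 2)             # classic diminishing Frank-Wolfe stepsize
    x = (1 - gamma) * x + gamma * s
print(x)                              # ≈ c, with f(x) - f* = O(1/k)
```

Minimizing a linear function over the ℓ1 ball costs one argmax, versus a full sort-based projection; that cheapness is the usual motivation for FW under structured constraints.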
This paper studies the distributed optimization problem when the objective functions may be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, the authors focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, the authors propose a projected primal-dual dynamics using only approximate subgradients of the objective functions. The authors first prove that the formulated optimization problem can generally be solved with an error depending on the accuracy of the available subgradients. Then, the authors show the exact solvability of this distributed optimization problem when the accumulated approximation error of the inexact subgradients is not too large. After that, the authors also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of the proposed algorithms is verified by a numerical example.
In this paper, the optimization problem subject to N nonidentical closed convex set constraints is studied. The aim is to design a distributed optimization algorithm over a fixed unbalanced graph to solve the considered problem. To this end, by improving the push-sum framework, a new distributed optimization algorithm is designed, and a rigorous convergence analysis is given under the assumption that the involved graph is strongly connected. Finally, simulation results support the good performance of the proposed algorithm.
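The push-sum mechanism underlying the improved framework can be sketched on its own: over an unbalanced digraph with a column-stochastic weight matrix (each node splits its mass over its out-neighbors), the raw iterates x converge to a skewed combination, but the ratio x/w corrects the skew and recovers the exact average. The weights and initial values below are illustrative.

```python
import numpy as np

# Column-stochastic weights for a strongly connected 3-node digraph:
# column j holds node j's splits over its out-neighbors (including itself).
A = np.array([[0.5, 0.0, 1/3],
              [0.5, 0.5, 1/3],
              [0.0, 0.5, 1/3]])
x = np.array([3.0, -1.0, 7.0])    # initial values; network average = 3.0
w = np.ones(3)                    # push-sum correction weights
for _ in range(100):
    x = A @ x                     # both sums are preserved (1^T A = 1^T) ...
    w = A @ w
print(x / w)                      # ... and every ratio converges to 3.0
```

No row-stochasticity (i.e., no balancing of in-weights) is needed, which is exactly what makes the scheme usable on unbalanced graphs.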
In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data dispersed across a multitude of mobile devices. Given this situation, and the great computing power required to train deep learning models, distributed algorithms capable of multi-party joint modeling have attracted wide attention. The distributed training mode relieves the heavy computation and communication pressure that centralized models place on a single machine. However, most current distributed algorithms work in a master-slave mode, often including a central server for coordination, which to some extent causes communication pressure, data leakage, privacy violations, and other issues. To solve these problems, a decentralized fully distributed algorithm based on deep random weight neural networks is proposed. The algorithm decomposes the original objective function into several sub-problems under consistency constraints, combines decentralized average consensus (DAC) with the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and experimental results demonstrate the effectiveness of the proposed algorithm.
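The decomposition-with-consistency-constraints idea can be sketched with global-consensus ADMM on scalar quadratic sub-problems. This is a made-up toy: a central average stands in for the gossip-based DAC step that the proposed algorithm uses to stay fully decentralized.

```python
import numpy as np

b = np.array([0.0, 4.0, 5.0])    # local objectives f_i(x) = 0.5*(x - b_i)^2
rho = 1.0                        # ADMM penalty parameter
x = np.zeros(3)                  # local copies
u = np.zeros(3)                  # scaled dual variables (consistency multipliers)
z = 0.0                          # consensus variable
for _ in range(200):
    x = (b + rho * (z - u)) / (1 + rho)   # local proximal steps (closed form)
    z = np.mean(x + u)                    # averaging step (DAC would replace this)
    u = u + x - z                         # multiplier update for x_i = z
print(z)                                  # ≈ mean(b) = 3.0, minimizer of the sum
```

Each node only ever solves its own small sub-problem; all coupling flows through the consensus variable, which is what makes the ADMM split amenable to decentralized averaging.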
In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze a distributed gradient method which allows each agent to compute its dynamic stepsize by utilizing a time-varying estimate of its local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the diminishing-stepsize requirement of existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.
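A dynamic stepsize built from an estimate of the optimal function value is in the spirit of the Polyak stepsize, α = (f(x) - f̂*)/‖g‖². Below is a single-agent sketch on a made-up nonsmooth objective, with the true optimal value plugged in as the estimate; the paper's contribution is precisely to let each agent form and update such an estimate in a distributed, time-varying way.

```python
import numpy as np

c = np.array([2.0, -1.0])
f = lambda x: np.sum(np.abs(x - c))     # nonsmooth toy objective, f* = 0 at c
subgrad = lambda x: np.sign(x - c)      # a subgradient of f
x = np.zeros(2)
f_star_est = 0.0                        # estimate of the optimal value
for _ in range(100):
    g = subgrad(x)
    if not g.any():                     # reached the minimizer: stop
        break
    x = x - (f(x) - f_star_est) / (g @ g) * g   # Polyak-type dynamic stepsize
print(x)                                # ≈ [2, -1], with no diminishing stepsize
```

On this example the step adapts to the remaining suboptimality and lands exactly on the minimizer in a few iterations, illustrating why good f* estimates remove the need for a diminishing schedule.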
The state-based potential game is discussed and a game-based approach to the distributed optimization problem is proposed in this paper. A continuous-time model is employed to design the state dynamics and learning algorithms of the state-based potential game, with Lagrangian multipliers as the states. It is shown that the stationary state Nash equilibrium of the designed game contains the optimal solution of the optimization problem. Moreover, the convergence and stability of the learning algorithms are established for both undirected and directed communication graphs. Additionally, the application to plug-in electric vehicle management is discussed.
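With Lagrangian multipliers as states, the learning dynamics are saddle-point (primal-descent/dual-ascent) dynamics. Below is an Euler-discretized sketch on a made-up equality-constrained quadratic; the game-theoretic, networked structure of the paper is not reproduced.

```python
import numpy as np

# min x1^2 + x2^2  subject to  x1 + x2 = 2; KKT solution x = (1, 1), lam = -2.
x = np.zeros(2)
lam = 0.0
h = 0.05                                   # Euler discretization step
for _ in range(2000):
    # simultaneous primal descent / dual ascent on L(x, lam) = ||x||^2 + lam*(x1+x2-2)
    x, lam = x - h * (2 * x + lam), lam + h * (x.sum() - 2.0)
print(x, lam)                              # ≈ [1, 1] and -2
```

The multiplier acts as an extra state that steers the primal variables onto the constraint; the stationary point of the coupled dynamics is exactly the KKT pair.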
This paper studies a novel distributed optimization problem that aims to minimize the sum of the non-convex objective functionals of the multi-agent network under privacy protection, which means that the local objective of each agent is unknown to others. The above problem involves complexity simultaneously in the time and space aspects. Yet existing works about distributed optimization mainly consider privacy protection in the space aspect, where the decision variable is a vector with finite dimensions. In contrast, when the time aspect is considered in this paper, the decision variable is a continuous function concerning time. Hence, the minimization of the overall functional belongs to the calculus of variations. Traditional works usually aim to seek the optimal decision function. Due to privacy protection and non-convexity, the Euler-Lagrange equation of the proposed problem is a complicated partial differential equation. Hence, we seek the optimal decision derivative function rather than the decision function. This manner can be regarded as seeking the control input for an optimal control problem, for which we propose a centralized reinforcement learning (RL) framework. In the space aspect, we further present a distributed reinforcement learning framework to deal with the impact of privacy protection. Finally, rigorous theoretical analysis and simulation validate the effectiveness of our framework.
The heating, ventilation, and air-conditioning (HVAC) systems account for about half of building energy consumption. Optimization methods for obtaining optimal control strategies of the chiller plant have always been of great concern, as the chiller plant contributes significantly to the energy use of the whole HVAC system. Given that conventional centralized optimization methods relying on a central operator may suffer from the curse of dimensionality and a tremendous calculation burden, and show poor flexibility when solving complex optimization issues, in this paper a novel distributed optimization approach is presented for chiller plant control. In the proposed distributed control scheme, both trade-offs between coupled subsystems and optimal allocation among devices of the same subsystem are considered by developing a double-layer optimization structure. A non-cooperative game is used to mathematically formulate the interaction between controlled components and to divide the initial system-scale nonlinear optimization problem into local-scale ones. To solve these tasks, strategy-updating mechanisms (PSO and IPM) are utilized. In this way, approximately globally optimal controlled variables of the devices in the chiller plant can be obtained in a distributed, local-knowledge-enabled way without either global information or a central workstation. Furthermore, the existence and effectiveness of the proposed distributed scheme were verified by simulation case studies. Simulation results indicate that, by using the proposed distributed optimization scheme, a significant energy saving can be obtained on a typical summer day (1809.47 kW·h), with a deviation of 3.83% from the central optimal solution.
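One of the strategy-updating mechanisms, particle swarm optimization (PSO), can be sketched generically. The toy objective below stands in for a local chiller subproblem, and the swarm parameters are common textbook values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda P: np.sum((P - 3.0) ** 2, axis=1)    # toy objective, minimum at (3, 3)
n_particles, d = 30, 2
pos = rng.uniform(-10.0, 10.0, (n_particles, d))
vel = np.zeros((n_particles, d))
pbest, pbest_val = pos.copy(), f(pos)           # personal bests
for _ in range(200):
    gbest = pbest[np.argmin(pbest_val)]         # swarm-wide best position
    r1, r2 = rng.random((2, n_particles, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = f(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
print(pbest[np.argmin(pbest_val)])              # ≈ [3, 3]
```

PSO needs only objective evaluations (no gradients or convexity), which is why it suits the nonlinear, black-box local models of a chiller plant.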
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions, using only local information exchange, is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear-speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear-speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
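The linear-speedup rates have a simple statistical core: averaging the stochastic gradients of n workers divides the gradient-noise variance by n. A made-up one-dimensional check:

```python
import numpy as np

rng = np.random.default_rng(2)
true_grad, sigma, n, T = 1.0, 1.0, 8, 100_000
samples = true_grad + sigma * rng.standard_normal((T, n))  # per-worker noisy gradients
single = samples[:, 0]             # one worker's gradient stream
averaged = samples.mean(axis=1)    # the n-worker averaged stream
print(single.var(), averaged.var())   # ≈ sigma^2 = 1 vs sigma^2 / n = 0.125
```

This n-fold variance reduction is what lets n cooperating workers reach a given accuracy in roughly 1/n the iterations of a single worker, the speedup the stated rates formalize.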
In this paper, a distributed chunk-based optimization algorithm is proposed for resource allocation in broadband ultra-dense small cell networks. Based on the proposed algorithm, the power and subcarrier allocation problems are jointly optimized. In order to make the resource allocation suitable for large-scale networks, the optimization problem is first decomposed using an effective decomposition algorithm named optimal condition decomposition (OCD). Furthermore, aiming at reducing implementation complexity, the subcarriers are divided into chunks and are allocated chunk by chunk. The simulation results show that the proposed algorithm outperforms the uniform power allocation scheme and the Lagrange relaxation method, and can strike a balance between the complexity and performance of multi-carrier ultra-dense networks.
Encouraging citizens to invest in small-scale renewable resources is crucial for transitioning towards a sustainable and clean energy system. Local energy communities (LECs) are expected to play a vital role in this context. However, energy scheduling in LECs presents various challenges, including the preservation of customer privacy, adherence to distribution network constraints, and the management of computational burdens. This paper introduces a novel approach for energy scheduling in renewable-based LECs using a decentralized optimization method. The proposed approach uses the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, significantly reducing the computational effort required for solving the mixed integer programming (MIP) problem. It incorporates network constraints, evaluates energy losses, and enables community participants to provide ancillary services, such as a regulation reserve, to the grid utility. To assess its robustness and efficiency, the proposed approach is tested on an 84-bus radial distribution network. Results indicate that the proposed distributed approach not only matches the accuracy of the corresponding centralized model but also exhibits scalability and preserves participant privacy.
Funding (asynchronous push-pull gradient paper): Supported by the National Natural Science Foundation of China (62033006, 62203254).
Funding (electricity-gas-heat IES paper): Supported by the National Key R&D Program of China (2020YFB0905900): Research on artificial intelligence application of power internet of things.
Funding (nonconvex-constraint, switching-topology paper): Project supported by the National Engineering Research Center of Rail Transportation Operation and Control System, Beijing Jiaotong University (Grant No. NERC2019K002).
Funding (second-order heterogeneous multi-agent paper): Sponsored by the National Natural Science Foundation of China (Grant Nos. 61573199 and 61571441).
Funding: Project supported by the National Natural Science Foundation of China (No. 62003213).
Abstract: In this paper, the distributed optimization problem is investigated for a class of general nonlinear model-free multi-agent systems. The dynamical model of each agent is unknown and only the input/output data are available. A model-free adaptive control method is employed, by which the original unknown nonlinear system is equivalently converted into a dynamic linearized model. An event-triggered consensus scheme is developed to guarantee that the consensus error of the outputs of all agents is convergent. Then, by means of the distributed gradient descent method, a novel event-triggered model-free adaptive distributed optimization algorithm is put forward. Sufficient conditions are established to ensure the consensus and optimality of the addressed system. Finally, simulation results are provided to validate the effectiveness of the proposed approach.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62103169 and 51875380) and the China Postdoctoral Science Foundation (No. 2021M691313).
Abstract: This paper focuses on the online distributed optimization problem based on multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to the information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and the real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent only uses gradient estimates instead of the real gradient information to make decisions. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
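A multi-point gradient estimator replaces analytic gradients with function evaluations. The sketch below is a generic 2d-point (central-difference) estimator under our own toy function, not the estimator from the paper; the function `f` and the point are invented for the demo.

```python
import numpy as np

def multi_point_gradient(f, x, delta=1e-4):
    """2d-point gradient estimator: central differences along each coordinate.
    Only function values are queried; no analytic gradient is needed."""
    d = x.size
    g = np.zeros(d)
    for j in range(d):
        e = np.zeros(d)
        e[j] = 1.0
        g[j] = (f(x + delta * e) - f(x - delta * e)) / (2 * delta)
    return g

# For a quadratic, central differences are exact up to rounding:
# true gradient of f at (1, 2) is (2x + 3y, 3x + 4y) = (8, 11).
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 2 * x[1] ** 2
g = multi_point_gradient(f, np.array([1.0, 2.0]))
```

The estimator costs 2d function queries per point; its error for smooth f is O(delta^2), which is why it is a natural building block when real gradients are unavailable.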
Funding: Supported by the National Natural Science Foundation of China (62103169, 51875380) and the China Postdoctoral Science Foundation (2021M691313).
Abstract: This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of the agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost function is strongly pseudoconvex and the real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving a single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the estimate of the gradient, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of the dynamic regret. Under mild assumptions on the graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results.
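Unlike a multi-point estimator, a single-point estimator uses one function query per gradient estimate, which is what makes it usable online. The sketch below shows the classic sphere-smoothing construction under an invented quadratic toy function; it is an illustration of the general technique, not the paper's exact estimator.

```python
import numpy as np

def single_point_gradient(f, x, delta, rng):
    """Single-point gradient estimator: one function query per estimate.
    With u uniform on the unit sphere, (d/delta) * f(x + delta*u) * u is an
    unbiased estimate of the gradient of a smoothed version of f."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

# A single estimate is very noisy; averaging many of them approximates the
# true gradient. For f(x) = x.x the smoothed gradient equals 2x exactly.
rng = np.random.default_rng(1)
f = lambda x: x @ x
x = np.array([1.0, -0.5])
est = np.mean(
    [single_point_gradient(f, x, delta=0.1, rng=rng) for _ in range(100_000)],
    axis=0,
)
```

The price for using one query is variance on the order of (d/delta)^2, which is why online analyses of such estimators bound the regret only in expectation.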
Funding: Supported in part by the Shanghai Natural Science Foundation under Grant 22ZR1407000.
Abstract: We investigate the distributed optimization problem, where a network of nodes works together to minimize a global objective that is a finite sum of their stored local functions. Since nodes exchange optimization parameters through the wireless network, large-scale training models can create communication bottlenecks, resulting in slower training times. To address this issue, CHOCO-SGD was proposed, which allows compressing information with arbitrary precision without reducing the convergence rate for strongly convex objective functions. Nevertheless, most convex functions are not strongly convex (such as logistic regression or Lasso), which raises the question of whether this algorithm can be applied to non-strongly convex functions. In this paper, we provide the first theoretical analysis of the convergence rate of CHOCO-SGD on non-strongly convex objectives. We derive a sufficient condition, which limits the fidelity of compression, to guarantee convergence. Moreover, our analysis demonstrates that within the fidelity threshold, this algorithm can significantly reduce the transmission burden while maintaining the same convergence rate order as its no-compression equivalent. Numerical experiments further validate the theoretical findings by demonstrating that CHOCO-SGD improves communication efficiency while keeping the same convergence rate order. Experiments also show that the algorithm fails to converge with low compression fidelity and in time-varying topologies. Overall, our study offers valuable insights into the potential applicability of CHOCO-SGD to non-strongly convex objectives, and provides practical guidelines for researchers seeking to utilize this algorithm in real-world scenarios.
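The "compression fidelity" in CHOCO-SGD-style analyses is usually formalized as a contraction property of the compression operator. The block below sketches one standard compressor, top-k sparsification, and checks its contraction bound; it is a generic example and not taken from the paper (CHOCO-SGD additionally tracks the compression error through replicated estimates of neighbor iterates, which is omitted here).

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsification: keep the k largest-magnitude entries, zero the
    rest. It satisfies ||v - top_k(v)|| <= sqrt(1 - k/d) * ||v||, i.e., it is
    a contraction with fidelity parameter k/d."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out

v = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
c = top_k(v, 2)                        # only 2 of 5 entries are transmitted
```

Transmitting only the k surviving (index, value) pairs is what cuts the communication cost; the contraction bound is the quantity that the fidelity threshold in the analysis constrains.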
Funding: Supported in part by the National Key R&D Program of China (2021YFB1714800), the National Natural Science Foundation of China (62222303, 62073035, 62173034, 61925303, 62088101, 61873033), the CAAI-Huawei MindSpore Open Fund, and the Chongqing Natural Science Foundation (2021ZX4100027).
Abstract: This paper considers distributed stochastic optimization, in which a number of agents cooperate to optimize a global objective function through local computations and information exchanges with neighbors over a network. Stochastic optimization problems are usually tackled by variants of projected stochastic gradient descent. However, projecting a point onto a feasible set is often expensive. The Frank-Wolfe (FW) method has well-documented merits in handling convex constraints, but existing stochastic FW algorithms are basically developed for centralized settings. In this context, the present work puts forth a distributed stochastic Frank-Wolfe solver by judiciously combining Nesterov's momentum and gradient tracking techniques for stochastic convex and nonconvex optimization over networks. It is shown that the convergence rate of the proposed algorithm is O(k^(-1/2)) for convex optimization and O(1/log_2(k)) for nonconvex optimization. The efficacy of the algorithm is demonstrated by numerical simulations against a number of competing alternatives.
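The reason Frank-Wolfe avoids projections is that its only constraint-related operation is a linear minimization oracle (LMO), which for many sets has a closed form. The sketch below shows the deterministic, single-agent FW template with the l1-ball LMO; the toy least-squares objective and all names are invented for the demo, and the paper's momentum and gradient-tracking components are omitted.

```python
import numpy as np

def lmo_l1(g, radius):
    """Linear minimization oracle for the l1 ball: the minimizer of <g, s>
    over ||s||_1 <= radius is the vertex opposing the largest |g_j|."""
    j = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[j] = -radius * np.sign(g[j])
    return s

def frank_wolfe(grad, x0, radius, T):
    """Projection-free Frank-Wolfe: move toward the LMO vertex with the
    classic step size 2/(t+2). No projection is ever computed, and every
    iterate is a convex combination of feasible points, hence feasible."""
    x = x0.copy()
    for t in range(T):
        s = lmo_l1(grad(x), radius)
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * s
    return x

# Minimize ||x - b||^2 over the l1 ball of radius 1, with b outside the ball;
# the constrained optimum is the vertex (1, 0).
b = np.array([2.0, 0.5])
grad = lambda x: 2 * (x - b)
x = frank_wolfe(grad, np.zeros(2), radius=1.0, T=500)
```

Because the LMO returns a single scaled coordinate vector, each FW update is also cheap to communicate, which is part of the method's appeal in networked settings.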
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61973043.
Abstract: This paper studies the distributed optimization problem when the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, the authors focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, the authors propose projected primal-dual dynamics using only the objective functions' approximate subgradients. The authors first prove that the formulated optimization problem can generally be solved with an error depending upon the accuracy of the available subgradients. Then, the authors show the exact solvability of this distributed optimization problem when the accumulated approximation error of the inexact subgradients is not too large. After that, the authors also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of the proposed algorithms is verified by a numerical example.
Funding: Project supported by the Science and Technology Project of State Grid Zhejiang Electric Power Co., Ltd., China (No. 5211JY20001Q).
Abstract: In this paper, the optimization problem subject to N nonidentical closed convex set constraints is studied. The aim is to design a distributed optimization algorithm over a fixed unbalanced graph to solve the considered problem. To this end, building on an improved push-sum framework, a new distributed optimization algorithm is designed, and a strict convergence analysis is given under the assumption that the involved graph is strongly connected. Finally, simulation results support the good performance of the proposed algorithm.
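The push-sum mechanism is what lets such algorithms work on unbalanced (directed) graphs: nodes only need column-stochastic weights, which each sender can set locally, and the ratio of two iterates cancels the resulting imbalance. The block below is a minimal sketch of plain push-sum averaging on an invented 3-node directed ring, without the optimization step the paper adds on top.

```python
import numpy as np

def push_sum(A, x0, T):
    """Push-sum (ratio) consensus: with a column-stochastic matrix A,
    the ratio x/y converges at every node to the average of the initial
    values, even though A need not be doubly stochastic."""
    x = x0.astype(float)
    y = np.ones_like(x)          # companion "weight" iterate, started at 1
    for _ in range(T):
        x = A @ x
        y = A @ y
    return x / y

# Directed ring on 3 nodes: each node keeps half its mass and pushes half to
# its successor, so every column of A sums to one (column-stochastic).
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
z = push_sum(A, np.array([3.0, 0.0, 6.0]), T=100)   # average is 3
```

Note that A @ x alone would not converge to the average here; it is the division by the companion iterate y that corrects for the unbalanced weights.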
Abstract: In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data dispersed across a multitude of mobile devices. Given this situation, and the great computing power needed to train deep learning models, distributed algorithms that can carry out multi-party joint modeling have attracted wide attention. The distributed training mode relieves the huge computational and communication pressure that a centralized model places on a single machine. However, most current distributed algorithms work in a master-slave mode, often including a central server for coordination, which to some extent causes communication pressure, data leakage, privacy violations, and other issues. To solve these problems, a decentralized, fully distributed algorithm based on deep random weight neural networks is proposed. The algorithm decomposes the original objective function into several sub-problems under consistency constraints, combines decentralized average consensus (DAC) and the alternating direction method of multipliers (ADMM), and achieves the goal of joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and experimental results demonstrate the effectiveness of the proposed algorithm.
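The DAC building block mentioned above is simple to state: repeated mixing with a doubly stochastic weight matrix drives every node's value to the network average, with no central server. The snippet below illustrates just this primitive on an invented 3-node network; the ADMM layer the paper combines it with is not shown.

```python
import numpy as np

# Decentralized average consensus (DAC): x <- W x with a doubly stochastic W
# (rows and columns sum to 1) converges to the average at every node.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x = np.array([1.0, 5.0, 9.0])        # local values; network average is 5
for _ in range(60):
    x = W @ x                        # each node averages with its neighbors
```

The convergence speed is governed by the second-largest eigenvalue magnitude of W (here 0.25, so the disagreement shrinks by a factor of 4 per round), which is why consensus-based training methods care about the network's spectral gap.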
基金supported by the Key Research and Development Project in Guangdong Province(2020B0101050001)the National Science Foundation of China(61973214,61590924,61963030)the Natural Science Foundation of Shanghai(19ZR1476200)。
Abstract: In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze the distributed gradient method, which allows each agent to compute its dynamic stepsize by utilizing the time-varying estimate of the local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.
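The idea of a dynamic stepsize built from an estimate of the optimal function value goes back to the Polyak stepsize, which the sketch below illustrates in the centralized, single-agent case (the paper's DS-UD/AsynDGD algorithms use per-agent, time-varying estimates instead of the exact optimal value). The toy l1 problem and all names are invented for the demo.

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, T):
    """Subgradient method with the Polyak-type dynamic stepsize
    alpha_k = (f(x_k) - f*) / ||g_k||^2: the stepsize adapts to progress,
    so no diminishing schedule has to be tuned, at the price of needing
    (an estimate of) the optimal value f*."""
    x, best_x, best_f = x0.copy(), x0.copy(), f(x0)
    for _ in range(T):
        g = subgrad(x)
        gg = g @ g
        if gg < 1e-12:               # (near-)optimal point reached
            break
        x = x - ((f(x) - f_star) / gg) * g
        if f(x) < best_f:            # track the best iterate so far
            best_f, best_x = f(x), x.copy()
    return best_x

# Nondifferentiable toy problem: f(x) = ||x - b||_1, minimized at b (f* = 0).
b = np.array([1.0, -2.0, 0.5])
f = lambda x: np.abs(x - b).sum()
subgrad = lambda x: np.sign(x - b)
x = polyak_subgradient(f, subgrad, np.zeros(3), f_star=0.0, T=2000)
```

The standard analysis gives sum_k (f(x_k) - f*)^2 <= ||x_0 - x*||^2 * max_k ||g_k||^2, so the best iterate provably approaches the optimum even though f is nondifferentiable.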
Funding: This work was supported by the NNSF of China [grant number 61174071] and by the 973 Program [grant number 2014CB845301/2/3].
Abstract: The state-based potential game is discussed and a game-based approach is proposed for the distributed optimization problem in this paper. A continuous-time model is employed to design the state dynamics and learning algorithms of the state-based potential game, with Lagrangian multipliers as the states. It is shown that the stationary state Nash equilibrium of the designed game contains the optimal solution of the optimization problem. Moreover, the convergence and stability of the learning algorithms are obtained for both undirected and directed communication graphs. Additionally, the application to plug-in electric vehicle management is also discussed.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) (61773260) and the Ministry of Science and Technology (2018YFB130590).
Abstract: This paper studies a novel distributed optimization problem that aims to minimize the sum of the non-convex objective functionals of the multi-agent network under privacy protection, which means that the local objective of each agent is unknown to the others. The above problem involves complexity simultaneously in the time and space aspects. Yet existing works about distributed optimization mainly consider privacy protection in the space aspect, where the decision variable is a vector with finite dimensions. In contrast, when the time aspect is considered in this paper, the decision variable is a continuous function of time. Hence, the minimization of the overall functional belongs to the calculus of variations. Traditional works usually aim to seek the optimal decision function. Due to privacy protection and non-convexity, the Euler-Lagrange equation of the proposed problem is a complicated partial differential equation. Hence, we seek the optimal decision derivative function rather than the decision function. This manner can be regarded as seeking the control input for an optimal control problem, for which we propose a centralized reinforcement learning (RL) framework. In the space aspect, we further present a distributed reinforcement learning framework to deal with the impact of privacy protection. Finally, rigorous theoretical analysis and simulation validate the effectiveness of our framework.
Funding: Supported by the National Natural Science Foundation of China (No. 51978481), with support provided by the China Scholarship Council (No. 202006260140).
Abstract: Heating, ventilation, and air-conditioning (HVAC) systems account for about half of building energy consumption. Optimization methodologies for obtaining optimal control strategies of the chiller plant have always been of great concern, as the chiller plant significantly contributes to the energy use of the whole HVAC system. Given that conventional centralized optimization methods relying on a central operator may suffer from the curse of dimensionality and a tremendous calculation burden, and show poor flexibility when solving complex optimization issues, in this paper a novel distributed optimization approach is presented for chiller plant control. In the proposed distributed control scheme, both trade-offs between coupled subsystems and optimal allocation among devices of the same subsystem are considered by developing a double-layer optimization structure. A non-cooperative game is used to mathematically formulate the interaction between controlled components as well as to divide the initial system-scale nonlinear optimization problem into local-scale ones. To solve these tasks, strategy updating mechanisms (PSO and IPM) are utilized. In this way, the approximate global optimal controlled variables of devices in the chiller plant can be obtained in a distributed and local-knowledge-enabled way, without either global information or a central workstation. Furthermore, the existence and effectiveness of the proposed distributed scheme were verified by simulation case studies. Simulation results indicate that, by using the proposed distributed optimization scheme, a significant energy saving (1809.47 kW·h) can be obtained on a typical summer day. The deviation from the central optimal solution is 3.83%.
Funding: Supported by the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research, the Swedish Research Council, and the National Natural Science Foundation of China (62133003, 61991403, 61991404, 61991400).
Abstract: The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions, and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
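The mechanism behind the linear speedup rate O(1/√(nT)) is variance reduction: averaging the stochastic gradients of n nodes cuts the gradient-noise variance by a factor of n. The snippet below demonstrates just this statistical fact with invented numbers (a scalar "gradient" with additive Gaussian noise); it is not the paper's algorithm.

```python
import numpy as np

# Averaging n independent noisy gradients reduces the noise variance by a
# factor of n -- the source of the linear speedup in distributed SGD rates.
rng = np.random.default_rng(0)
n, trials, sigma, true_grad = 8, 20_000, 1.0, 2.0

# one node's stochastic gradient vs. the average over n nodes, many trials
single = true_grad + sigma * rng.standard_normal(trials)
averaged = true_grad + sigma * rng.standard_normal((trials, n)).mean(axis=1)

ratio = single.var() / averaged.var()   # should be close to n = 8
```

With the per-step noise shrunk by n, the usual SGD analysis turns a 1/√T rate into 1/√(nT), provided consensus error between nodes is controlled, which is what the primal-dual part of such algorithms handles.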
Funding: Supported in part by the Beijing Natural Science Foundation (4152047), the 863 Project No. 2014AA01A701, the 111 Project of China under Grant B14010, and the China Mobile Research Institute under Grant [2014]451.
Abstract: In this paper, a distributed chunk-based optimization algorithm is proposed for resource allocation in broadband ultra-dense small cell networks. Based on the proposed algorithm, the power and subcarrier allocation problems are jointly optimized. In order to make the resource allocation suitable for large-scale networks, the optimization problem is first decomposed using an effective decomposition algorithm named the optimal condition decomposition (OCD) algorithm. Furthermore, aiming at reducing implementation complexity, the subcarriers are divided into chunks and are allocated chunk by chunk. The simulation results show that the proposed algorithm achieves better performance than the uniform power allocation scheme and the Lagrange relaxation method, and that it can strike a balance between the complexity and performance of multi-carrier ultra-dense networks.
Funding: Supported in part by the Ministry of Research, Innovation and Digitalization under Project PNRR-C9-I8-760090/23.05.2023 CF30/14.11.2022.
Abstract: Encouraging citizens to invest in small-scale renewable resources is crucial for transitioning towards a sustainable and clean energy system. Local energy communities (LECs) are expected to play a vital role in this context. However, energy scheduling in LECs presents various challenges, including the preservation of customer privacy, adherence to distribution network constraints, and the management of computational burdens. This paper introduces a novel approach for energy scheduling in renewable-based LECs using a decentralized optimization method. The proposed approach uses the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method, significantly reducing the computational effort required for solving the mixed integer programming (MIP) problem. It incorporates network constraints, evaluates energy losses, and enables community participants to provide ancillary services, such as a regulation reserve, to the grid utility. To assess its robustness and efficiency, the proposed approach is tested on an 84-bus radial distribution network. Results indicate that the proposed distributed approach not only matches the accuracy of the corresponding centralized model but also exhibits scalability and preserves participant privacy.
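What makes L-BFGS cheap enough for such scheduling problems is its two-loop recursion: it approximates the Newton direction from only the m most recent iterate/gradient differences, in O(md) memory instead of the O(d^2) of full quasi-Newton. The block below is a generic textbook-style sketch on an invented ill-conditioned quadratic, not the paper's solver or its MIP formulation.

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """L-BFGS two-loop recursion: approximates H^{-1} g from the stored pairs
    (s, y) = (x_{k+1} - x_k, g_{k+1} - g_k), newest pairs weighted first."""
    q = g.copy()
    alphas = []
    for s, y in reversed(list(zip(s_hist, y_hist))):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q = q - a * y
    if s_hist:  # scale by gamma*I as the initial inverse-Hessian guess
        q = ((s_hist[-1] @ y_hist[-1]) / (y_hist[-1] @ y_hist[-1])) * q
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q = q + (a - b) * s
    return q

# Demo: minimize the ill-conditioned quadratic f(x) = 0.5 x'Ax with memory
# m = 5 and a simple Armijo backtracking line search.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x, m, s_hist, y_hist = np.array([1.0, 1.0, 1.0]), 5, [], []
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:            # converged
        break
    d = lbfgs_direction(g, s_hist, y_hist)
    t = 1.0
    while f(x - t * d) > f(x) - 1e-4 * t * (g @ d):  # Armijo backtracking
        t *= 0.5
    x_new = x - t * d
    s_hist.append(x_new - x)
    y_hist.append(grad(x_new) - g)
    s_hist, y_hist = s_hist[-m:], y_hist[-m:]        # keep only m pairs
    x = x_new
```

The curvature pairs let the method rescale the badly conditioned coordinate (stiffness 100 vs. 1) without ever forming a Hessian, which is the property that keeps the per-iteration cost low in large scheduling models.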