Journal Articles
113 articles found
1. Fully asynchronous distributed optimization with linear convergence over directed networks
Authors: SHA Xingyu, ZHANG Jiaqi, YOU Keyou. 《中山大学学报(自然科学版)(中英文)》, CAS, CSCD, PKU Core, 2023, No. 5, pp. 1-23.
We study distributed optimization problems over a directed network, where nodes aim to minimize the sum of local objective functions via directed communications with neighbors. Many algorithms are designed to solve it for synchronized or randomly activated implementation, which may create deadlocks in practice. In sharp contrast, we propose a fully asynchronous push-pull gradient (APPG) algorithm, where each node updates without waiting for any other node by using possibly delayed information from neighbors. Then, we construct two novel augmented networks to analyze asynchrony and delays, and quantify its convergence rate from the worst-case point of view. Particularly, all nodes of APPG converge to the same optimal solution at a linear rate of O(λ^k) if local functions have Lipschitz-continuous gradients and their sum satisfies the Polyak-Łojasiewicz condition (convexity is not required), where λ ∈ (0,1) is explicitly given and the virtual counter k increases by one when any node updates. Finally, the advantage of APPG over the synchronous counterpart and its linear speedup efficiency are numerically validated via a logistic regression problem. (A simplified sketch of the synchronous push-pull update follows this entry.)
Keywords: fully asynchronous; distributed optimization; linear convergence; Polyak-Łojasiewicz condition
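APPG itself is fully asynchronous with delayed information; as a point of reference, the sketch below implements only its synchronous push-pull skeleton on a directed ring: a row-stochastic matrix mixes (pulls) decision variables while a column-stochastic matrix pushes gradient trackers. The quadratic local objectives, ring weights, and step size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Synchronous push-pull gradient sketch over a directed ring of n nodes.
# APPG is asynchronous with delays; this simplified variant only illustrates
# the push-pull structure. Local objectives f_i(x) = 0.5*||A_i x - b_i||^2
# are illustrative assumptions.
n, d, alpha, iters = 5, 3, 0.05, 500
rng = np.random.default_rng(0)
A = [rng.standard_normal((4, d)) for _ in range(n)]
b = [rng.standard_normal(4) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

R = np.zeros((n, n))   # row-stochastic: mixes (pulls) decision variables
C = np.zeros((n, n))   # column-stochastic: pushes gradient trackers
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5      # receive from in-neighbor i-1
    C[i, i] = C[(i + 1) % n, i] = 0.5      # send to out-neighbor i+1

x = np.zeros((n, d))                                   # one row per node
y = np.stack([grad(i, x[i]) for i in range(n)])        # trackers start at local gradients
for _ in range(iters):
    x_new = R @ (x - alpha * y)                        # pull step on decisions
    y = C @ y + np.stack([grad(i, x_new[i]) for i in range(n)]) \
              - np.stack([grad(i, x[i]) for i in range(n)])  # push step tracks the average gradient
    x = x_new
print("consensus spread:", np.max(np.abs(x - x.mean(axis=0))))
```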
2. Distributed optimization of electricity-gas-heat integrated energy system with multi-agent deep reinforcement learning (Cited by 3)
Authors: Lei Dong, Jing Wei, Hao Lin, Xinying Wang. 《Global Energy Interconnection》, EI, CAS, CSCD, 2022, No. 6, pp. 604-617.
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) is strongly coupled, non-convex, and nonlinear. Centralized optimization methods incur a high communication cost and complex modeling, while traditional numerical iterative solutions cannot handle uncertainty or deliver the solution efficiency required for online application. For the coordinated optimization problem of the electricity-gas-heat IES, we constructed a model of the distributed IES with a dynamic distribution factor and transformed the centralized optimization problem into a distributed optimization problem in a multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to consider the impact of changes in real-time supply and demand on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving the system economy. Compared with centralized optimization, the distributed model with multiple decision centers achieves similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load during training. Compared with the traditional iterative solution method, it copes better with uncertainty and realizes real-time decision making, which is conducive to online application. Finally, we verify the effectiveness of the proposed method on an example of an IES coupled with three energy hub agents.
Keywords: Integrated energy system; Multi-agent system; distributed optimization; Multi-agent deep deterministic policy gradient; Real-time optimization decision
3. Distributed optimization for discrete-time multiagent systems with nonconvex control input constraints and switching topologies
Authors: Xiao-Yu Shen, Shuai Su, Hai-Liang Hou. 《Chinese Physics B》, SCIE, EI, CAS, CSCD, 2021, No. 12, pp. 283-290.
This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution point, while their control inputs are constrained in their own nonconvex region. It is worth noting that the mechanism is performed to tackle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on the dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing the nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of switching communication graphs is jointly strongly connected. Finally, a numerical simulation example is used to demonstrate the acquired theoretical results.
Keywords: multiagent systems; nonconvex input constraints; switching topologies; distributed optimization
4. Distributed Optimization for Heterogeneous Second-Order Multi-Agent Systems
Authors: Qing Zhang, Zhikun Gong, Zhengquan Yang, Zengqiang Chen. 《Journal of Harbin Institute of Technology (New Series)》, EI, CAS, 2020, No. 4, pp. 53-59.
A continuous-time distributed optimization problem is studied for second-order heterogeneous multi-agent systems. The aim of this study is to keep the velocities of all agents the same and make the velocities converge to the optimal value that minimizes the sum of local cost functions. First, an effective distributed controller which only uses local information was designed. Then, the stability and optimality of the systems were verified. Finally, a simulation case was used to illustrate the analytical results.
Keywords: distributed optimization; heterogeneous multi-agent system; local cost function; consensus
5. Online distributed optimization with stochastic gradients: high probability bound of regrets
Authors: Yuchen Yang, Kaihong Lu, Long Wang. 《Control Theory and Technology》, EI, CSCD, 2024, No. 3, pp. 419-430.
In this paper, the problem of online distributed optimization subject to a convex set is studied via a network of agents. Each agent only has access to a noisy gradient of its own objective function, and can communicate with its neighbors via a network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing works on online distributed algorithms involving stochastic gradients only provide the expectation bounds of the regrets. Different from them, we study the high probability bound of the regrets, i.e., the sublinear bound of the regret is characterized by the natural logarithm of the failure probability's inverse. Under mild assumptions on the graph connectivity, we prove that the dynamic regret grows sublinearly with high probability if the deviation in the minimizer sequence is sublinear with the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results. (A simplified sketch of one mirror-descent round follows this entry.)
Keywords: distributed optimization; Online optimization; Stochastic gradient; High probability
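As a concrete reference for the setting above, here is a minimal sketch of one communication-and-update round of distributed stochastic mirror descent with the Euclidean mirror map, for which the mirror step reduces to a projected gradient step. The doubly stochastic weights, Gaussian gradient noise, ball constraint, and cost functions are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

def project_ball(v, radius=1.0):
    """Euclidean projection onto a ball, standing in for the convex constraint set."""
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else radius * v / nrm

def mirror_descent_round(x, W, grads, eta, rng, noise_std=0.1):
    """One round: x is (n, d) agent states, W a doubly stochastic mixing matrix,
    grads[i] the true local gradient oracle, to which noise is added."""
    mixed = W @ x                                        # communication / consensus step
    x_next = np.empty_like(x)
    for i in range(x.shape[0]):
        noisy_g = grads[i](mixed[i]) + noise_std * rng.standard_normal(x.shape[1])
        x_next[i] = project_ball(mixed[i] - eta * noisy_g)   # Euclidean mirror (projected gradient) step
    return x_next

# Illustrative usage: f_i(z) = ||z - c_i||^2 with distinct targets c_i.
rng = np.random.default_rng(0)
n, d = 4, 2
W = np.full((n, n), 1.0 / n)
grads = [lambda z, c=float(i): 2.0 * (z - c) for i in range(n)]
x = rng.standard_normal((n, d))
for t in range(200):
    x = mirror_descent_round(x, W, grads, eta=0.1 / np.sqrt(t + 1), rng=rng)
print(x)
```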
6. Event-triggered distributed optimization for model-free multi-agent systems
Authors: Shanshan ZHENG, Shuai LIU, Licheng WANG. 《Frontiers of Information Technology & Electronic Engineering》, SCIE, EI, CSCD, 2024, No. 2, pp. 214-224.
In this paper, the distributed optimization problem is investigated for a class of general nonlinear model-free multi-agent systems. The dynamical model of each agent is unknown and only the input/output data are available. A model-free adaptive control method is employed, by which the original unknown nonlinear system is equivalently converted into a dynamic linearized model. An event-triggered consensus scheme is developed to guarantee that the consensus error of the outputs of all agents is convergent. Then, by means of the distributed gradient descent method, a novel event-triggered model-free adaptive distributed optimization algorithm is put forward. Sufficient conditions are established to ensure the consensus and optimality of the addressed system. Finally, simulation results are provided to validate the effectiveness of the proposed approach. (A toy sketch of an event-triggered broadcasting rule follows this entry.)
Keywords: distributed optimization; Multi-agent systems; Model-free adaptive control; Event-triggered mechanism
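To make the event-triggered idea concrete, the toy sketch below runs a consensus iteration in which an agent rebroadcasts its value only when it has drifted from its last broadcast by more than a shrinking threshold; neighbors always work with the latest broadcast values. The threshold rule, ring weights, and scalar states are illustrative assumptions and not the paper's model-free adaptive scheme.

```python
import numpy as np

def should_broadcast(x_i, x_hat_i, t, c0=0.5, decay=0.95):
    """Event-trigger: transmit only when the state has moved far enough from
    the last broadcast value; the shrinking threshold is an assumption."""
    return abs(x_i - x_hat_i) > c0 * (decay ** t)

W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])   # doubly stochastic ring weights
x = np.array([0.0, 4.0, 8.0, 2.0])         # agent states
x_hat = x.copy()                            # last broadcast values known to neighbors
broadcasts = 0
for t in range(60):
    for i in range(4):
        if should_broadcast(x[i], x_hat[i], t):
            x_hat[i] = x[i]
            broadcasts += 1
    x = x + 0.5 * (W @ x_hat - x_hat)       # update driven by broadcast values only
print("broadcasts used:", broadcasts, "of", 4 * 60, "| states:", x)
```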
7. Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions
Authors: Xiaoxi Yan, Cheng Li, Kaihong Lu, Hang Xu. 《Control Theory and Technology》, EI, CSCD, 2024, No. 1, pp. 14-24.
This paper focuses on the online distributed optimization problem based on multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to the information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving the multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent only uses the estimation information of gradients instead of the real gradient information to make decisions. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results. (A sketch of a two-point gradient estimator, one common multi-point instance, follows this entry.)
Keywords: Multi-agent system; Online distributed optimization; Pseudoconvex optimization; Random gradient-free method
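The sketch below shows a two-point random gradient estimator, one common instance of the multi-point estimators mentioned above; the paper's exact construction and scaling may differ. The test function, smoothing radius, and sample count are illustrative.

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, rng):
    """Estimate the gradient of f at x from two function queries along a
    random direction on the unit sphere (a smoothed-gradient estimator)."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    d = x.size
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Compare the averaged estimate with the true gradient 2x of f(x) = ||x||^2.
rng = np.random.default_rng(1)
x = np.array([1.0, -2.0, 0.5])
est = np.mean([two_point_grad_estimate(lambda z: z @ z, x, 1e-3, rng)
               for _ in range(20000)], axis=0)
print(est, "vs", 2 * x)
```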
8. Zeroth-Order Methods for Online Distributed Optimization with Strongly Pseudoconvex Cost Functions
Authors: Xiaoxi YAN, Muyuan MA, Kaihong LU. 《Journal of Systems Science and Information》, CSCD, 2024, No. 1, pp. 145-160.
This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of the agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost function is strongly pseudoconvex and real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving the single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the estimate of the gradient, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of dynamic regret. Under mild assumptions on the graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results. (A sketch of a single-point gradient estimator follows this entry.)
Keywords: multi-agent systems; strongly pseudoconvex function; single-point gradient estimator; online distributed optimization
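For contrast with the multi-point estimator above, a single-point estimator queries the cost only once per round, which matters online because the cost function may change before a second query is possible. The generic form below is a sketch; the paper's estimator may be normalized differently, and the test function is illustrative.

```python
import numpy as np

def single_point_grad_estimate(f, x, delta, rng):
    """One function query per round; unbiased for a smoothed version of f,
    at the price of much higher variance than two-point estimators."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                    # random direction on the unit sphere
    return (x.size / delta) * f(x + delta * u) * u

# Averaging many estimates at a fixed point recovers the gradient 2x of
# f(x) = ||x||^2 up to sampling noise.
rng = np.random.default_rng(2)
x = np.array([1.0, -2.0, 0.5])
est = np.mean([single_point_grad_estimate(lambda z: z @ z, x, 0.1, rng)
               for _ in range(200000)], axis=0)
print(est, "vs", 2 * x)
```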
9. Distributed Stochastic Optimization with Compression for Non-Strongly Convex Objectives
Authors: Xuanjie Li, Yuedong Xu. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, No. 4, pp. 459-481.
We investigate the distributed optimization problem, where a network of nodes works together to minimize a global objective that is a finite sum of their stored local functions. Since nodes exchange optimization parameters through the wireless network, large-scale training models can create communication bottlenecks, resulting in slower training times. To address this issue, CHOCO-SGD was proposed, which allows compressing information with arbitrary precision without reducing the convergence rate for strongly convex objective functions. Nevertheless, most convex functions are not strongly convex (such as logistic regression or Lasso), which raises the question of whether this algorithm can be applied to non-strongly convex functions. In this paper, we provide the first theoretical analysis of the convergence rate of CHOCO-SGD on non-strongly convex objectives. We derive a sufficient condition, which limits the fidelity of compression, to guarantee convergence. Moreover, our analysis demonstrates that within the fidelity threshold, this algorithm can significantly reduce the transmission burden while maintaining the same convergence rate order as its no-compression equivalent. Numerical experiments further validate the theoretical findings by demonstrating that CHOCO-SGD improves communication efficiency while keeping the same convergence rate order. Experiments also show that the algorithm fails to converge with low compression fidelity and in time-varying topologies. Overall, our study offers valuable insights into the potential applicability of CHOCO-SGD for non-strongly convex objectives, and we provide practical guidelines for researchers seeking to utilize this algorithm in real-world scenarios. (A sketch of a compressed-gossip step in the style of CHOCO-SGD follows this entry.)
Keywords: distributed stochastic optimization; arbitrary compression fidelity; non-strongly convex objective function
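To illustrate the kind of compressed communication CHOCO-SGD relies on, the sketch below performs one step in that style: a local SGD step, a compressed update of each node's public copy, and gossip on the public copies. Top-k sparsification, the step sizes, and the gradient oracles are illustrative assumptions; consult the paper (and the original CHOCO-SGD work) for the exact recursion.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries; one admissible compression operator."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gossip_step(x, x_hat, W, grads, eta=0.05, gamma=0.5, k=2):
    """x: (n, d) private iterates; x_hat: (n, d) public copies known to neighbors;
    W: row-stochastic mixing weights; grads[i]: local (stochastic) gradient oracle."""
    n = x.shape[0]
    x_half = np.stack([x[i] - eta * grads[i](x[i]) for i in range(n)])            # local gradient step
    x_hat = x_hat + np.stack([top_k(x_half[i] - x_hat[i], k) for i in range(n)])  # transmit only compressed differences
    x_new = x_half + gamma * (W @ x_hat - x_hat)                                  # gossip using public copies
    return x_new, x_hat
```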
10. Distributed Momentum-Based Frank-Wolfe Algorithm for Stochastic Optimization (Cited by 1)
Authors: Jie Hou, Xianlin Zeng, Gang Wang, Jian Sun, Jie Chen. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2023, No. 3, pp. 685-699.
This paper considers distributed stochastic optimization, in which a number of agents cooperate to optimize a global objective function through local computations and information exchanges with neighbors over a network. Stochastic optimization problems are usually tackled by variants of projected stochastic gradient descent. However, projecting a point onto a feasible set is often expensive. The Frank-Wolfe (FW) method has well-documented merits in handling convex constraints, but existing stochastic FW algorithms are basically developed for centralized settings. In this context, the present work puts forth a distributed stochastic Frank-Wolfe solver, by judiciously combining Nesterov's momentum and gradient tracking techniques for stochastic convex and nonconvex optimization over networks. It is shown that the convergence rate of the proposed algorithm is O(k^(-1/2)) for convex optimization, and O(1/log_2(k)) for nonconvex optimization. The efficacy of the algorithm is demonstrated by numerical simulations against a number of competing alternatives. (A single-node sketch of a momentum-based Frank-Wolfe step follows this entry.)
Keywords: distributed optimization; Frank-Wolfe (FW) algorithms; momentum-based method; stochastic optimization
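Below is a single-node sketch of a momentum-based stochastic Frank-Wolfe step over an l1-ball: the gradient estimate is a running average of stochastic gradients, and the update moves toward the output of a linear minimization oracle instead of projecting. The paper's method additionally distributes this with gradient tracking over a network; the least-squares data, radius, and step rules here are illustrative assumptions.

```python
import numpy as np

def lmo_l1(d, radius=1.0):
    """Linear minimization oracle over the l1-ball: argmin_{||v||_1 <= r} <v, d>."""
    i = int(np.argmax(np.abs(d)))
    v = np.zeros_like(d)
    v[i] = -radius * np.sign(d[i])
    return v

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
x, d = np.zeros(10), np.zeros(10)
for t in range(1, 301):
    row = rng.integers(50)                                   # one stochastic sample
    g = A[row] * (A[row] @ x - b[row])                       # stochastic gradient of a least-squares loss
    rho, gamma = 2.0 / (t + 1), 2.0 / (t + 2)
    d = (1 - rho) * d + rho * g                              # momentum gradient estimate
    x = x + gamma * (lmo_l1(d, radius=2.0) - x)              # Frank-Wolfe update stays feasible
print("final ||x||_1 =", np.abs(x).sum())
```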
11. Primal-Dual ε-Subgradient Method for Distributed Optimization
Authors: ZHU Kui, TANG Yutao. 《Journal of Systems Science & Complexity》, SCIE, EI, CSCD, 2023, No. 2, pp. 577-590.
This paper studies the distributed optimization problem when the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, the authors focus on the case when the exact subgradients of the local objective functions cannot be accessed by the agents. To solve this problem, the authors propose a projected primal-dual dynamics using only the objective function's approximate subgradients. The authors first prove that the formulated optimization problem can generally be solved with an error depending upon the accuracy of the available subgradients. Then, the authors show the exact solvability of this distributed optimization problem when the accumulated approximation error of inexact subgradients is not too large. After that, the authors also give a novel componentwise normalized variant to improve the transient behavior of the convergent sequence. The effectiveness of the proposed algorithms is verified by a numerical example.
Keywords: Constrained optimization; distributed optimization; ε-subgradient; primal-dual dynamics
12. Distributed optimization based on improved push-sum framework for optimization problem with multiple local constraints and its application in smart grid
Authors: Qian XU, Chutian YU, Xiang YUAN, Mengli WEI, Hongzhe LIU. 《Frontiers of Information Technology & Electronic Engineering》, SCIE, EI, CSCD, 2023, No. 9, pp. 1253-1260.
In this paper, the optimization problem subject to N nonidentical closed convex set constraints is studied. The aim is to design a corresponding distributed optimization algorithm over a fixed unbalanced graph to solve the considered problem. To this end, the push-sum framework is improved and a new distributed optimization algorithm is designed; its strict convergence analysis is given under the assumption that the involved graph is strongly connected. Finally, simulation results support the good performance of the proposed algorithm. (A sketch of the underlying push-sum ratio consensus follows this entry.)
Keywords: distributed optimization; Nonidentical constraints; Improved push-sum framework
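The sketch below shows the plain push-sum (ratio) consensus that frameworks like the one above build on: with a column-stochastic matrix formed from out-degrees, the ratio of the value and weight sequences converges to the network average even though the digraph is unbalanced. The 4-node digraph and initial values are illustrative assumptions.

```python
import numpy as np

# Push-sum (ratio) consensus on an unbalanced, strongly connected digraph.
n = 4
out_neighbors = {0: [1], 1: [2, 3], 2: [0], 3: [0, 2]}
P = np.zeros((n, n))
for i, outs in out_neighbors.items():
    share = 1.0 / (len(outs) + 1)          # split equally among out-neighbors and self
    P[i, i] = share
    for j in outs:
        P[j, i] = share                     # column i sums to 1 (column-stochastic)

x = np.array([1.0, 5.0, 3.0, 9.0])          # initial values, average is 4.5
w = np.ones(n)                              # push-sum weights
for _ in range(100):
    x, w = P @ x, P @ w
print(x / w)                                # each ratio approaches the average 4.5
```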
13. Fully Distributed Learning for Deep Random Vector Functional-Link Networks
Authors: Huada Zhu, Wu Ai. 《Journal of Applied Mathematics and Physics》, 2024, No. 4, pp. 1247-1262.
In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data dispersed across a multitude of mobile devices. Because of this, and because training deep learning models requires substantial computing power, distributed algorithms that support multi-party joint modeling have attracted wide attention. Distributed training relieves the heavy computing and communication burden that a centralized model imposes. However, most current distributed algorithms work in a master-slave mode, often relying on a central server for coordination, which can cause communication pressure, data leakage, privacy violations, and other issues. To solve these problems, a decentralized, fully distributed algorithm based on deep random-weight neural networks is proposed. The algorithm decomposes the original objective function into several sub-problems under consistency constraints, combines decentralized average consensus (DAC) with the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and experimental results demonstrate its effectiveness. (A sketch of consensus ADMM for the resulting ridge subproblem follows this entry.)
Keywords: distributed optimization; Deep Neural Network; Random Vector Functional-Link (RVFL) Network; Alternating Direction Method of Multipliers (ADMM)
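Since the output weights of a random-weight network such as an RVFL reduce to a ridge-regression problem, the sketch below applies global-consensus ADMM to that problem; in the decentralized algorithm described above, the averaging in the z-update would itself be computed with decentralized average consensus (DAC) rather than by a coordinator. All data, the ridge parameter, and the penalty rho are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d, rho, lam = 4, 6, 1.0, 0.1
H = [rng.standard_normal((30, d)) for _ in range(n_nodes)]          # local hidden-layer outputs
w_true = rng.standard_normal(d)
y = [Hi @ w_true + 0.01 * rng.standard_normal(30) for Hi in H]      # local targets

x = [np.zeros(d) for _ in range(n_nodes)]                           # local copies of the output weights
u = [np.zeros(d) for _ in range(n_nodes)]                           # scaled dual variables
z = np.zeros(d)                                                     # consensus variable
for _ in range(100):
    for i in range(n_nodes):                                        # closed-form local ridge update
        Ai = H[i].T @ H[i] + (lam / n_nodes + rho) * np.eye(d)
        x[i] = np.linalg.solve(Ai, H[i].T @ y[i] + rho * (z - u[i]))
    z = np.mean([x[i] + u[i] for i in range(n_nodes)], axis=0)      # would be computed via DAC
    for i in range(n_nodes):
        u[i] = u[i] + x[i] - z                                      # dual ascent step
print("error vs w_true:", np.linalg.norm(z - w_true))
```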
14. Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize (Cited by 3)
Authors: Xiaoxing Ren, Dewei Li, Yugeng Xi, Haibin Shao. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2021, No. 8, pp. 1451-1464.
In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze the distributed gradient method which allows each agent to compute its dynamic stepsize by utilizing the time-varying estimate of the local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach. (A single-agent sketch of a Polyak-type dynamic stepsize follows this entry.)
Keywords: distributed optimization; dynamic stepsize; gradient method; multi-agent networks
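A dynamic stepsize built from an estimate of the optimal value is easiest to see in a single-agent subgradient step, as sketched below with a Polyak-type rule; the paper's DS-UD update additionally mixes neighbor states and estimates the optimal value online, so this is only an illustration of the stepsize idea. The l1-norm objective and the assumption f_est = 0 are illustrative.

```python
import numpy as np

def polyak_subgradient_step(f, subgrad, x, f_est):
    """Subgradient step whose length adapts to the gap between f(x) and an
    estimate f_est of the optimal value, instead of a diminishing schedule."""
    g = subgrad(x)
    alpha = max(f(x) - f_est, 0.0) / (g @ g + 1e-12)
    return x - alpha * g

f = lambda x: np.abs(x).sum()            # optimal value is 0 at x = 0
subgrad = lambda x: np.sign(x)
x = np.array([3.0, -2.0, 1.5])
for _ in range(50):
    x = polyak_subgradient_step(f, subgrad, x, f_est=0.0)
print(x)
```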
15. Potential game design for a class of distributed optimization problems (Cited by 1)
Authors: Peng Yi, Yanqiong Zhang, Yiguang Hong. 《Journal of Control and Decision》, EI, 2014, No. 2, pp. 166-179.
The state-based potential game is discussed and a game-based approach is proposed for the distributed optimization problem in this paper. A continuous-time model is employed to design the state dynamics and learning algorithms of the state-based potential game with Lagrangian multipliers as the states. It is shown that the stationary state Nash equilibrium of the designed game contains the optimal solution of the optimization problem. Moreover, the convergence and stability of the learning algorithms are obtained for both undirected and directed communication graphs. Additionally, the application to plug-in electric vehicle management is also discussed.
Keywords: distributed optimization; potential game; multi-agent systems; plug-in electric vehicle
16. An Optimal Control-Based Distributed Reinforcement Learning Framework for a Class of Non-Convex Objective Functionals of the Multi-Agent Network (Cited by 2)
Authors: Zhe Chen, Ning Li. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2023, No. 11, pp. 2081-2093.
This paper studies a novel distributed optimization problem that aims to minimize the sum of the non-convex objective functionals of the multi-agent network under privacy protection, which means that the local objective of each agent is unknown to others. The above problem involves complexity simultaneously in the time and space aspects. Yet existing works about distributed optimization mainly consider privacy protection in the space aspect where the decision variable is a vector with finite dimensions. In contrast, when the time aspect is considered in this paper, the decision variable is a continuous function concerning time. Hence, the minimization of the overall functional belongs to the calculus of variations. Traditional works usually aim to seek the optimal decision function. Due to privacy protection and non-convexity, the Euler-Lagrange equation of the proposed problem is a complicated partial differential equation. Hence, we seek the optimal decision derivative function rather than the decision function. This manner can be regarded as seeking the control input for an optimal control problem, for which we propose a centralized reinforcement learning (RL) framework. In the space aspect, we further present a distributed reinforcement learning framework to deal with the impact of privacy protection. Finally, rigorous theoretical analysis and simulation validate the effectiveness of our framework.
Keywords: distributed optimization; multi-agent; optimal control; reinforcement learning (RL)
17. A non-cooperative game-based distributed optimization method for chiller plant control
Authors: Shiyao Li, Yiqun Pan, Qiujian Wang, Zhizhong Huang. 《Building Simulation》, SCIE, EI, CSCD, 2022, No. 6, pp. 1015-1034.
Heating, ventilation, and air-conditioning (HVAC) systems account for about half of building energy consumption. Methodologies for obtaining optimal control strategies of the chiller plant have always been of great concern, as the chiller plant contributes significantly to the energy use of the whole HVAC system. Given that conventional centralized optimization methods relying on a central operator may suffer from dimensionality and a tremendous calculation burden, and show poorer flexibility when solving complex optimization issues, in this paper a novel distributed optimization approach is presented for chiller plant control. In the proposed distributed control scheme, both the trade-offs of coupled subsystems and the optimal allocation among devices of the same subsystem are considered by developing a double-layer optimization structure. A non-cooperative game is used to mathematically formulate the interaction between controlled components as well as to divide the initial system-scale nonlinear optimization problem into local-scale ones. To solve these tasks, strategy-updating mechanisms (PSO and IPM) are utilized. In this way, approximately globally optimal controlled variables of the devices in the chiller plant can be obtained in a distributed, local-knowledge-enabled way without global information or a central workstation. Furthermore, the existence and effectiveness of the proposed distributed scheme were verified by simulation case studies. Simulation results indicate that, by using the proposed distributed optimization scheme, a significant energy saving on a typical summer day can be obtained (1809.47 kW·h), with a deviation of 3.83% from the central optimal solution.
Keywords: chiller plant; operation optimization; distributed optimization; non-cooperative game; double-layer optimization; graph theory
18. A Primal-Dual SGD Algorithm for Distributed Nonconvex Optimization (Cited by 4)
Authors: Xinlei Yi, Shengjun Zhang, Tao Yang, Tianyou Chai, Karl Henrik Johansson. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2022, No. 5, pp. 812-833.
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms. (A generic sketch of a Laplacian-based primal-dual SGD step follows this entry.)
Keywords: distributed nonconvex optimization; linear speedup; Polyak-Łojasiewicz (P-L) condition; primal-dual algorithm; stochastic gradient descent
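As a generic illustration of the primal-dual structure, the sketch below runs a Laplacian-based primal-dual stochastic gradient iteration on three quadratic local costs: the dual variable accumulates consensus violation while the primal step descends the noisy gradient plus a consensus penalty. The graph, step sizes, noise level, and costs are all illustrative assumptions and not the algorithm or tuning from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta, alpha, noise = 3, 2, 0.02, 2.0, 0.05
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])   # path-graph Laplacian
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]

x = np.zeros((n, d))                                            # primal variables, one row per agent
v = np.zeros((n, d))                                            # dual variables for the consensus constraint
for _ in range(4000):
    g = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n)])
    g += noise * rng.standard_normal(g.shape)                   # stochastic gradients
    x, v = x - eta * (g + alpha * (L @ x) + L @ v), v + eta * (L @ x)   # simultaneous primal-dual step
print("consensus spread:", np.max(np.abs(x - x.mean(axis=0))))
```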
19. Distributed Chunk-Based Optimization for Multi-Carrier Ultra-Dense Networks (Cited by 2)
Authors: GUO Shaozhen, XING Chengwen, FEI Zesong, ZHOU Gui, YAN Xinge. 《China Communications》, SCIE, CSCD, 2016, No. 1, pp. 80-90.
In this paper, a distributed chunk-based optimization algorithm is proposed for resource allocation in broadband ultra-dense small cell networks. Based on the proposed algorithm, the power and subcarrier allocation problems are jointly optimized. In order to make the resource allocation suitable for large-scale networks, the optimization problem is first decomposed using an effective decomposition algorithm named the optimal condition decomposition (OCD) algorithm. Furthermore, aiming at reducing implementation complexity, the subcarriers are divided into chunks and are allocated chunk by chunk. The simulation results show that the proposed algorithm achieves better performance than the uniform power allocation scheme and the Lagrange relaxation method, and that the proposed algorithm strikes a balance between the complexity and performance of multi-carrier ultra-dense networks.
Keywords: ultra-dense small cell networks; optimization; chunk; power allocation; subcarrier allocation; distributed resource allocation
20. Distributed Energy and Reserve Scheduling in Local Energy Communities Using L-BFGS Optimization
Authors: Mohammad Dolatabadi, Alireza Zakariazadeh, Alberto Borghetti, Pierluigi Siano. 《CSEE Journal of Power and Energy Systems》, SCIE, EI, CSCD, 2024, No. 3, pp. 942-952.
Encouraging citizens to invest in small-scale renewable resources is crucial for transitioning towards a sustainable and clean energy system. Local energy communities (LECs) are expected to play a vital role in this context. However, energy scheduling in LECs presents various challenges, including the preservation of customer privacy, adherence to distribution network constraints, and the management of computational burdens. This paper introduces a novel approach for energy scheduling in renewable-based LECs using a decentralized optimization method. The proposed approach uses the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method, significantly reducing the computational effort required for solving the mixed integer programming (MIP) problem. It incorporates network constraints, evaluates energy losses, and enables community participants to provide ancillary services, such as a regulation reserve, to the grid utility. To assess its robustness and efficiency, the proposed approach is tested on an 84-bus radial distribution network. Results indicate that the proposed distributed approach not only matches the accuracy of the corresponding centralized model but also exhibits scalability and preserves participant privacy. (A sketch of the L-BFGS two-loop recursion follows this entry.)
Keywords: distributed optimization; flexibility services; L-BFGS method; local energy community; renewables; reserve
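The core of L-BFGS is the two-loop recursion that applies an approximate inverse Hessian using only the last m curvature pairs; the sketch below implements it with a basic Armijo backtracking line search on an illustrative convex quadratic. It is a generic sketch of the optimizer named above, not the paper's scheduling formulation.

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: returns -H_k * grad using the stored (s, y) pairs."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):   # newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_hist:                                              # scale by an initial inverse-Hessian guess
        q *= (s_hist[-1] @ y_hist[-1]) / (y_hist[-1] @ y_hist[-1])
    for (s, y), a in zip(zip(s_hist, y_hist), reversed(alphas)):   # oldest pair first
        beta = (y @ q) / (y @ s)
        q += (a - beta) * s
    return -q

def armijo(f, x, g, d, step=1.0, c1=1e-4, shrink=0.5):
    """Backtracking line search guaranteeing sufficient decrease."""
    fx = f(x)
    while f(x + step * d) > fx + c1 * step * (g @ d):
        step *= shrink
    return step

# Illustrative strongly convex quadratic f(x) = 0.5 x^T Q x - c^T x.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
Q = M @ M.T + np.eye(6)
c = rng.standard_normal(6)
f = lambda x: 0.5 * x @ Q @ x - c @ x
x, m, s_hist, y_hist = np.zeros(6), 5, [], []
g = Q @ x - c
for _ in range(40):
    d = lbfgs_direction(g, s_hist, y_hist)
    t = armijo(f, x, g, d)
    x_new = x + t * d
    g_new = Q @ x_new - c
    s_hist.append(x_new - x); y_hist.append(g_new - g)
    s_hist, y_hist = s_hist[-m:], y_hist[-m:]               # keep only the last m curvature pairs
    x, g = x_new, g_new
print("final gradient norm:", np.linalg.norm(g))
```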