Funding: This work was supported by the Competence Centre for Cognitive Energy Systems of the Fraunhofer IEE and the research group Reinforcement Learning for Cognitive Energy Systems (RL4CES) of the Intelligent Embedded Systems group at the University of Kassel.
Abstract: The operation of electricity grids has become increasingly complex due to the current upheaval in the energy sector and the increase in renewable energy production. As a consequence, active grid management with conventional approaches is reaching its limits. In the context of the Learning to Run a Power Network (L2RPN) challenge, Reinforcement Learning (RL) has been shown to be an efficient and reliable approach with considerable potential for automatic grid operation. In this article, we analyse the agent submitted by Binbinchen and provide novel strategies to improve it, for both the RL and the rule-based approach. The main improvement is an N-1 strategy, in which we consider topology actions that keep the grid stable even if any single line is disconnected. Moreover, we propose a reversion to the original grid topology, which proved to be beneficial. The improvements are tested against reference approaches on the challenge test sets and increase the performance of the rule-based agent by 27%. In a direct comparison between the rule-based and the RL agent, we find similar performance; however, the RL agent has a clear computational advantage. We also analyse the behaviour of the agents in an exemplary case in more detail to provide additional insights. Here, we observe that the N-1 strategy diversifies the actions of both the rule-based and the RL agent.
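The core of the N-1 strategy can be illustrated with a short sketch against Grid2Op, the open-source simulation framework underlying L2RPN. This is a minimal illustration under stated assumptions, not the authors' implementation: the environment name, the candidate-action list, and the loading threshold RHO_MAX are placeholders.

import grid2op

# Minimal sketch of the N-1 screening described in the abstract: a candidate
# topology action is kept only if the one-step-ahead simulation stays stable
# under every single-line outage. Environment name and threshold are assumed.
env = grid2op.make("l2rpn_case14_sandbox")  # placeholder environment
obs = env.reset()

RHO_MAX = 1.0  # keep every line below 100% of its thermal limit (assumption)

def passes_n_minus_1(obs, action, env):
    """True if `action` keeps all line loadings below RHO_MAX under every
    single-line disconnection (the N-1 criterion)."""
    for line_id in range(env.n_line):
        outage = env.action_space({"set_line_status": [(line_id, -1)]})
        # Simulate the candidate action combined with the single-line outage.
        sim_obs, _, sim_done, _ = obs.simulate(action + outage)
        if sim_done or sim_obs.rho.max() >= RHO_MAX:
            return False
    return True

# Keep only candidates that survive the N-1 check; the "do nothing" action
# serves as a stand-in candidate list here.
candidates = [env.action_space({})]
safe_actions = [a for a in candidates if passes_n_minus_1(obs, a, env)]

In this sketch the screening runs over all lines for every candidate, which is exactly where the reported computational advantage of an RL agent matters: a learned policy can propose a small set of promising actions instead of exhaustively simulating many candidates.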