Funding: the Incubation Project of State Grid Jiangsu Corporation of China, "Construction and Application of Intelligent Load Transferring Platform for Active Distribution Networks" (JF2023031).
Abstract: When a line failure occurs in a power grid, a load transfer is implemented to reconfigure the network by changing the states of tie-switches and load demands. Computation speed is one of the major performance indicators in power grid load transfer, as a fast load transfer model can greatly reduce the economic loss of post-fault power grids. In this study, a reinforcement learning method is developed based on the deep deterministic policy gradient. The tedious training process of the reinforcement learning model can be conducted offline, so the model shows satisfactory performance in real-time operation, indicating that it is suitable for fast load transfer. Considering that the reinforcement learning model performs poorly in satisfying safety constraints, a safe action-correction framework is proposed to modify the learning model. In the framework, the load-shedding action is corrected according to sensitivity analysis results under a small discrete increment so as to satisfy the line flow limit constraints. The results of case studies indicate that the proposed method is practical for fast and safe power grid load transfer.
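The correction step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes a linear sensitivity matrix `sens[l, b]` (the change in flow on line `l` per MW of load shed at bus `b`, e.g. from a DC power-flow/PTDF model) and repeatedly applies a small discrete shedding increment at the most effective bus until every line flow is within its limit. All names and the increment strategy are illustrative assumptions.

```python
import numpy as np

def correct_load_shedding(flows, limits, sens, shed, step=1.0, max_iter=1000):
    """Correct an RL-proposed load-shedding action by small discrete
    increments until all line-flow limits are satisfied (illustrative sketch).

    flows  : (L,) current line flows in MW
    limits : (L,) line flow limits in MW
    sens   : (L, B) change in flow on line l per MW of load shed at bus b
    shed   : (B,) load shedding proposed by the learning agent, in MW
    step   : size of one discrete shedding increment, in MW
    """
    flows = np.asarray(flows, dtype=float).copy()
    shed = np.asarray(shed, dtype=float).copy()
    for _ in range(max_iter):
        overload = np.abs(flows) - limits
        worst = int(np.argmax(overload))
        if overload[worst] <= 0:
            break  # every line flow is within its limit
        # Bus whose incremental shedding most reduces the worst overload,
        # accounting for the sign of the flow on that line.
        bus = int(np.argmin(np.sign(flows[worst]) * sens[worst]))
        shed[bus] += step
        flows += sens[:, bus] * step  # linearized flow update
    return shed, flows
```

In this sketch the sensitivity matrix plays the role of the paper's sensitivity-analysis results: each increment is applied where it relieves the binding constraint fastest, so the corrected action stays close to the agent's original proposal.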