International Journal of Energy Research, Vol.43, No.8, 3853-3868, 2019
Optimising residential electric vehicle charging under renewable energy: Multi-agent learning in software simulation and hardware-in-the-loop evaluation
The integration of intermittent renewable energy sources, coupled with the increasing demand from electric vehicles (EVs), poses new challenges to the electrical grid. To address this, many solutions based on demand response have been proposed, but they are typically tested only in software-based simulations. In this paper, we present the hardware-in-the-loop (HIL) application of a recently proposed algorithm for decentralised EV charging, prediction-based multi-agent reinforcement learning (P-MARL), to the problem of optimal residential EV charging under intermittent wind power and variable household baseload demands. P-MARL addresses EV charging objectives in a demand-response-aware manner, avoiding peak power usage while maximising the exploitation of renewable energy sources. We first train and test our algorithm in a residential neighbourhood scenario using GridLAB-D, a software power network simulator. Once the agents have learned optimal EV charging behaviour that avoids peak power demand in the software simulator, we port our solution to HIL emulating the same scenario, in order to reduce the impact of agent learning on the power network. Experimental results from a laboratory microgrid show that our approach makes full use of the available wind power and smooths grid demand while charging EVs for their next day's trips, achieving a peak-to-average ratio of 1.67, down from 2.24 in the baseline case. We also analyse additional demand response effects observed in HIL, such as voltage drops and transients, which can impact the grid and are not observable in the GridLAB-D software simulation.
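The headline metric in the abstract is the peak-to-average ratio (PAR) of grid demand. As an illustration only, the Python sketch below shows how PAR is typically computed from a demand profile and why shifting flexible EV charging out of the evening peak lowers it. The load values, charging power, and hours are illustrative assumptions, not the authors' data, and the shifting rule is a toy heuristic rather than the P-MARL algorithm.

```python
# Hypothetical sketch (not from the paper): computing the peak-to-average
# ratio (PAR) of a neighbourhood demand profile and illustrating how moving
# flexible EV charging into low-demand hours lowers it. All numbers below
# are illustrative assumptions.

import numpy as np


def peak_to_average_ratio(demand_kw: np.ndarray) -> float:
    """PAR = maximum demand divided by mean demand over the horizon."""
    return float(np.max(demand_kw) / np.mean(demand_kw))


# 24-hour household baseload with an evening peak (kW, illustrative values).
hours = np.arange(24)
baseload = 3.0 + 2.0 * np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)

# Uncoordinated charging: the EV plugs in at 18:00 for 4 hours at 3.3 kW,
# stacking on top of the evening peak.
uncoordinated = baseload.copy()
uncoordinated[18:22] += 3.3

# Coordinated charging: the same energy is shifted to the overnight valley,
# where baseload is low and wind output is assumed to be available.
coordinated = baseload.copy()
coordinated[1:5] += 3.3

print(f"PAR uncoordinated: {peak_to_average_ratio(uncoordinated):.2f}")
print(f"PAR coordinated:   {peak_to_average_ratio(coordinated):.2f}")
```

Running the sketch shows the coordinated profile yielding a noticeably lower PAR than the uncoordinated one, which is the same qualitative effect reported in the abstract (1.67 versus 2.24), though the toy numbers here are not meant to reproduce those figures.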