Enhancing Reinforcement Learning for Home Energy Management via Policy Transfer and Prioritized Level Replay

Conference: PESS 2025 - IEEE Power and Energy Student Summit
October 8-10, 2025, Munich, Germany

doi:10.30420/566656007

Proceedings: PESS 2025 – IEEE Power and Energy Student Summit

Pages: 6
Language: English
Type: PDF

Authors:
Bley, Christoph; Peric, Vedran S.

Abstract:
This study investigates reinforcement learning (RL) for Home Energy Management Systems (HEMS) with a focus on transferability across buildings with differing technical parameters including thermal storage size, heat pump characteristics, photovoltaic capacity and insulation level. Using a recurrent Soft Actor-Critic (RSAC) architecture and two robustness extensions—Domain Randomization (RSAC-DR) and Prioritized Level Replay (RSAC-PLR)—agents are pre-trained in a model house and deployed in a mismatched real house. Results show that RL policies can be transferred without major performance loss and ultimately outperform a rule-based controller (RBC) in both cost efficiency and comfort. In particular, RSAC-PLR achieved the best trade-off, reducing electricity costs the most while substantially improving comfort. However, all RL agents exhibited a significant cost increase during their first deployment winter, highlighting the need for further research into strategies that ensure robust early performance.