In recent years, the development of powerful Reinforcement Learning (RL) techniques has positioned this form of machine learning to become an indispensable component of industry (e.g., manufacturing, electric power systems and grid management). In addition, RL will find its way into daily human activities through (semi-)autonomous cars, socially assistive robotics and household solar storage management.
However, powerful RL techniques are built around complex function approximation procedures that turn the framework into a black box. This may prevent its application in future critical domains, given the requirements of the European General Data Protection Regulation (GDPR). It is therefore a crucial moment to shift research efforts toward an Explainable RL (XRL) framework that opens up the learning process to the user: explaining what has been learned, how the learning progressed and how the acquired knowledge is applied. The approach taken is to augment existing reinforcement learning techniques, such that no loss of learning capacity is incurred.