The world is a connected place, in which the cloud plays an increasingly vital role. One example is the Internet of Things, which has become a topic of conversation in virtually all industries. The control of physical devices is no exception: modern wireless sensors allow these devices to move beyond local controllers towards smarter cloud-based architectures. This new ability to aggregate information from similar devices gives a wider view of the problem and enables more effective, flexible and robust control. However, such devices are not always as similar as one might first expect. Local context or discrepancies in hardware cause them to interact with the world differently. The challenge is to organize their control so as to benefit from the similarities, while also identifying the differences and optimizing group performance.
In this project, we focus on the control of a group of similar interconnected devices, namely fleet control. Applications for fleet control range from interconnected vehicles and industrial machines to energy production equipment. For example, consider a wind farm, which is a fleet of wind turbines generating energy on a large scale. The current trend, particularly offshore, is to group turbines together in a farm. Advantages of such a farm include lower transmission costs and maximum energy output from the available space. These farms have power outputs comparable to a conventional gas plant and should therefore be controlled as one. This means that energy production should be predictable and steerable across the farm. Today, each wind turbine makes decisions based only on its own sensed information, rather than on the bigger picture: overall weather conditions, the energy demand at that time and the current health of the turbines. Fleet control will improve the predictability of energy output for the electricity grid and reduce the risk of failure by reducing loads on turbines that are already damaged.
We propose to automate fleet-wide control using Reinforcement Learning. This technique allows an agent to optimize its control strategy by interacting with its environment and assessing the quality of its control decisions. Sharing data among a group of similar agents reduces the interaction time each agent requires, which is desirable for fragile devices and machines that are hard to maintain. We will explore ideas from multi-task reinforcement learning to cluster agents based on their similarities, both in terms of environment dynamics and the effects of control decisions; a minimal illustration of this idea is sketched below.
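To make the idea concrete, the following sketch, written for illustration only, simulates a small fleet of devices as tabular MDPs, clusters them by the similarity of their empirically estimated transition dynamics, and pools experience within each cluster to learn a shared Q-function. The environment sizes, the two hidden "device types", and all hyperparameters are assumptions made for this example; they do not represent the specific multi-task methods the project will develop.

```python
# Minimal sketch: cluster fleet agents by estimated dynamics, then pool
# experience per cluster for tabular Q-learning. All sizes and parameters
# below are illustrative assumptions, not the project's actual method.
import numpy as np

rng = np.random.default_rng(0)
S, A = 5, 2      # states and actions per device (assumed sizes)
STEPS = 2000     # interaction steps collected by each device

def random_mdp():
    """A random tabular MDP standing in for one device type."""
    P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
    R = rng.uniform(0.0, 1.0, size=(S, A))      # deterministic rewards
    return P, R

# Two hidden device types; six fleet members, three of each type.
type_mdps = [random_mdp(), random_mdp()]
true_type = [0, 0, 0, 1, 1, 1]

def rollout(P, R, steps):
    """Collect (s, a, r, s') transitions under a random exploration policy."""
    s, data = 0, []
    for _ in range(steps):
        a = rng.integers(A)
        s2 = rng.choice(S, p=P[s, a])
        data.append((s, a, R[s, a], s2))
        s = s2
    return data

def estimate_dynamics(data):
    """Empirical transition model with Laplace smoothing."""
    counts = np.ones((S, A, S))
    for s, a, _, s2 in data:
        counts[s, a, s2] += 1
    return counts / counts.sum(axis=2, keepdims=True)

def cluster(models, tol=0.1):
    """Greedy clustering: join a cluster if the mean absolute difference
    between transition models is below `tol`, else start a new one."""
    reps, labels = [], []
    for m in models:
        dists = [np.abs(m - r).mean() for r in reps]
        if dists and min(dists) < tol:
            labels.append(int(np.argmin(dists)))
        else:
            labels.append(len(reps))
            reps.append(m)
    return labels

def q_learning(data, gamma=0.9, alpha=0.1, epochs=10):
    """Tabular Q-learning over a (pooled) batch of transitions."""
    Q = np.zeros((S, A))
    for _ in range(epochs):
        for s, a, r, s2 in data:
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    return Q

datasets = [rollout(*type_mdps[t], STEPS) for t in true_type]
labels = cluster([estimate_dynamics(d) for d in datasets])
print("recovered clusters:", labels)  # should group devices 0-2 and 3-5

# Pool experience within each cluster and learn one shared policy per cluster.
for c in sorted(set(labels)):
    pooled = [tr for d, l in zip(datasets, labels) if l == c for tr in d]
    Q = q_learning(pooled)
    print(f"cluster {c}: greedy policy {Q.argmax(axis=1)}")
```

The key design choice illustrated here is that pooling is restricted to agents whose dynamics actually match: each device contributes only STEPS transitions, but a cluster of three learns from three times as much data, reducing the interaction time each individual device needs.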
Research topics: