Probabilistic Learning and Planning for Optimal Management of Wind Farms
Abstract
Wind energy is a renewable energy source that has been growing rapidly in recent years. However, wind farms have relatively high costs, of which operation and maintenance (O&M) accounts for 25-35%. An optimal O&M strategy can significantly reduce this cost. Wind turbines are subject to fatigue-induced degradation and need periodic inspections and repairs. A maintenance strategy for the farm has to be based on prior knowledge of the degradation process and on data collected in the field by sensors and visual inspections. Traditional methods for modeling the O&M process, such as Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs), have limitations that prevent them from properly incorporating the available knowledge, and they may yield non-optimal solutions. Specifically, the conditional probabilities governing the evolution of the degradation state and the precision of the observations are usually affected by epistemic uncertainty. While MDPs and POMDPs are formulated for fixed transition and observation probabilities, the Bayes-Adaptive POMDP (BA-POMDP) framework treats those conditional probabilities as random variables and is therefore suitable for including epistemic uncertainty. In this paper we propose a novel method to learn and update the distribution of these variables during the management process and to select the optimal strategy under model uncertainty. We validate our approach with synthetic data, and we show that the management cost of a wind farm can be significantly lower than that obtained with the POMDP approach, since the BA-POMDP framework allows finding robust optimal solutions and learning from the environment to adapt the policy depending on the observations collected.
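The Bayes-Adaptive idea described above can be sketched in a few lines: the unknown transition probabilities of the degradation process are modeled with a Dirichlet prior whose pseudo-counts are updated as transitions are observed. The three-state discretization, the prior counts, and the function names below are illustrative assumptions for this sketch, not the paper's actual model.

```python
import numpy as np

# Hypothetical degradation states: 0 = intact, 1 = damaged, 2 = failed.
# Rows = current state, columns = next state. The pseudo-counts encode
# assumed prior knowledge of the degradation process.
alpha = np.array([
    [8.0, 2.0, 0.5],   # intact tends to stay intact
    [0.0, 6.0, 3.0],   # damaged tends to degrade toward failure
    [0.0, 0.0, 1.0],   # failed is absorbing until repaired
])

def posterior_mean(alpha):
    """Posterior-mean estimate of the transition matrix under the Dirichlet model."""
    return alpha / alpha.sum(axis=1, keepdims=True)

def update(alpha, s, s_next):
    """Bayesian update: increment the Dirichlet count of the observed transition."""
    alpha = alpha.copy()
    alpha[s, s_next] += 1.0
    return alpha

T_prior = posterior_mean(alpha)
alpha = update(alpha, s=1, s_next=2)   # observe a damaged -> failed transition
T_post = posterior_mean(alpha)
# The estimated probability of damaged -> failed increases after the observation,
# so the maintenance policy can adapt to the data collected.
```

In a full BA-POMDP solver these counts become part of the (hyper-)state, so the planner trades off exploration (learning the degradation model) against exploitation (minimizing O&M cost); the snippet only illustrates the belief-update mechanics.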