Structural Active Control Framework Using Reinforcement Learning
Abstract
To maintain structural integrity and functionality, structures are designed to accommodate operational loads as well as natural hazards during their lifetime. Active control systems are an efficient solution for structural response control when a structure is subjected to unexpected extreme loads. However, the development of these systems through traditional means is limited by their model-dependent nature. Recent advancements in adaptive learning methods, in particular reinforcement learning (RL) for real-time decision-making problems, along with rapid growth in high-performance computational resources, enable structural engineers to transform the classic model-based active control problem into a purely data-driven one. In this paper, we present a novel RL-based approach for designing active controllers by introducing RL-Controller, a flexible and scalable simulation environment. RL-Controller includes the attributes and functionalities necessary to model active structural control mechanisms in detail. We show that the proposed framework is easily trainable for a five-story benchmark linear building, achieving an average 65% reduction in inter-story drift (ISD) when subjected to strong ground motions. In a comparative study with an LQG active controller, we demonstrate that the proposed model-free algorithm learns actuator forcing strategies that yield higher performance, e.g., 25% greater ISD reductions on average relative to LQG, without using prior information about the mechanical properties of the system.
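To illustrate the kind of simulation environment the abstract describes, the sketch below sets up a minimal RL-style environment for a linear n-story shear building. This is a hypothetical illustration, not the paper's RL-Controller implementation: the class name, mass/stiffness/damping values, semi-implicit Euler integrator, and the drift-based reward are all assumptions chosen for brevity.

```python
import numpy as np

class ShearBuildingEnv:
    """Minimal RL-style environment for a linear n-story shear building.

    Hypothetical sketch (not the paper's RL-Controller): the state is
    [floor displacements; floor velocities], the action is a vector of
    actuator forces, and the reward penalizes inter-story drift.
    """

    def __init__(self, n=5, m=1.0e3, k=1.0e6, c=1.0e3, dt=0.01):
        self.n, self.dt = n, dt
        self.M = m * np.eye(n)
        # Tridiagonal stiffness/damping pattern of a shear building
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        T[-1, -1] = 1.0  # top story connects to only one story below
        self.K, self.C = k * T, c * T
        self.Minv = np.linalg.inv(self.M)
        self.reset()

    def reset(self):
        self.x = np.zeros(self.n)  # floor displacements
        self.v = np.zeros(self.n)  # floor velocities
        return np.concatenate([self.x, self.v])

    def step(self, u, ag=0.0):
        """Advance one step with actuator forces u and ground accel ag."""
        # M a = u - C v - K x - M 1 ag  (ground motion enters as -1*ag)
        a = self.Minv @ (u - self.C @ self.v - self.K @ self.x) - ag
        self.v = self.v + self.dt * a       # semi-implicit Euler update
        self.x = self.x + self.dt * self.v
        drifts = np.diff(np.concatenate([[0.0], self.x]))
        reward = -np.mean(np.abs(drifts))   # penalize inter-story drift
        return np.concatenate([self.x, self.v]), reward

env = ShearBuildingEnv()
obs = env.reset()
# One uncontrolled step under a 0.5 m/s^2 ground acceleration
obs, r = env.step(np.zeros(5), ag=0.5)
```

An RL agent would interact with `step` exactly as with any gym-style environment, choosing `u` from the observed state; the comparison to LQG in the paper amounts to replacing a learned policy with the classical state-feedback gain.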
DOI
10.12783/shm2021/36293
Full Text:
PDF