
Image to Image Translation of Structural Damage Using Generative Adversarial Networks

SUBIN VARGHESE, REBECCA WANG, VEDHUS HOSKERE

Abstract


In the aftermath of earthquakes, structures can become unsafe and hazardous for humans to reside in. Automated methods that detect structural damage can be invaluable for rapid inspections and faster recovery. Deep neural networks (DNNs) have proven to be an effective means of classifying damaged areas in images of structures, but they generalize poorly because large, diverse annotated datasets (e.g., covering variations in building properties such as size, shape, and color) are lacking. Given a dataset of paired images of damaged and undamaged structures, supervised deep learning methods could be employed, but the paired correspondences of images required for training are exceedingly difficult to acquire. Obtaining a variety of undamaged images, together with a smaller set of damaged images, is far more viable. We present a novel application of deep learning for unpaired image-to-image translation between undamaged and damaged structures as a means of data augmentation to combat the lack of diverse data. Unpaired image-to-image translation is achieved using Cycle-Consistent Adversarial Network (CCAN) architectures, which can translate images while retaining their geometric structure. We explore the capability of the original CCAN architecture and propose a new architecture for unpaired image-to-image translation (termed Eigen Integrated Generative Adversarial Network, or EIGAN) that addresses shortcomings of the original architecture for our application. We create a new unpaired dataset for translating images between the damaged and undamaged domains, consisting of damaged and undamaged buildings in Mexico City affected by the 2017 Puebla earthquake. Qualitative and quantitative results of the various architectures are presented to compare the quality of the translated images, and the performance of DNNs trained to classify damaged structures using generated images is also compared.
The results demonstrate that targeted image-to-image translation of undamaged to damaged structures is an effective means of data augmentation to improve network performance.
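The cycle-consistency idea behind the CCAN family of architectures can be illustrated with a minimal sketch. The generators below are placeholder invertible linear maps standing in for trained networks (an assumption for illustration only, not the paper's EIGAN or CCAN models); the key point is the loss term that penalizes the reconstruction error after a round trip between the two domains, which is what encourages a translation to preserve image geometry.

```python
import numpy as np

# Hedged sketch of cycle-consistency, assuming toy linear "generators":
# G maps the undamaged domain to the damaged domain, F maps it back.
# In a real CCAN both are convolutional networks trained adversarially.

def G(x):
    # placeholder generator: undamaged -> damaged
    return 0.9 * x + 0.1

def F(y):
    # placeholder generator: damaged -> undamaged
    return (y - 0.1) / 0.9

def cycle_consistency_loss(x, y):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should recover x, and
    y -> F(y) -> G(F(y)) should recover y."""
    forward = np.mean(np.abs(F(G(x)) - x))
    backward = np.mean(np.abs(G(F(y)) - y))
    return forward + backward

rng = np.random.default_rng(0)
x = rng.random((4, 8, 8))  # batch of "undamaged" images
y = rng.random((4, 8, 8))  # batch of "damaged" images
loss = cycle_consistency_loss(x, y)
```

Because the toy `F` exactly inverts `G`, the loss here is near zero; during training, minimizing this term alongside the adversarial losses pushes the learned generators toward the same round-trip property.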


DOI
10.12783/shm2021/36307

