Satellite Imagery Recognition of Damaged Buildings Based on Convolutional Neural Networks with Transfer Learning

Conference: CAIBDA 2022 - 2nd International Conference on Artificial Intelligence, Big Data and Algorithms
06/17/2022 - 06/19/2022 at Nanjing, China

Proceedings: CAIBDA 2022

Pages: 4
Language: English
Type: PDF

Authors:
Yang, Bingjia (School of Internet Finance and Information Engineering, Guangdong University of Finance, Guangzhou, China)

Abstract:
Detecting damaged buildings is necessary for rescue work after a hurricane. To classify damaged and undamaged buildings, applying image recognition algorithms to satellite images of buildings is preferable to human visual inspection, which is time-consuming and unreliable. In this study, pre-trained Convolutional Neural Networks (CNNs) are used to solve this problem and to find an approach with better accuracy than previous studies. Considering that models pre-trained on ImageNet do not carry weights learned from satellite images, this paper also discusses the feasibility of transfer learning and fine-tuning techniques for this task, to help other researchers avoid unnecessary detours in their experiments. This study applies four models: VGG-16 with frozen layers, VGG-16 with fine-tuning, MobileNetV2 with frozen layers, and MobileNetV2 with fine-tuning. Overall, MobileNetV2 performs better than VGG-16. MobileNetV2 reaches 96.95% accuracy on the validation set both with and without fine-tuning, and on both the balanced and unbalanced test sets its accuracy exceeds 98%. VGG-16 with fine-tuning outperforms VGG-16 without fine-tuning on the validation set and on both test sets; on the validation set, VGG-16 achieves 92.75% without fine-tuning and 97.10% with fine-tuning. Fine-tuning, however, does not appear to affect the accuracy of MobileNetV2, probably because its accuracy is already high without it.
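The frozen-backbone versus fine-tuned setup described in the abstract can be sketched in Keras as follows. This is a minimal illustration, not the authors' code: the 150×150 input size, the single-sigmoid binary head, and the Adam learning rate are assumptions, and `weights=None` is used here only to keep the sketch runnable offline (the paper's transfer-learning setup would use `weights="imagenet"`).

```python
import tensorflow as tf

# Assumed input size; the paper does not state the image resolution used.
IMG_SHAPE = (150, 150, 3)

def build_model(fine_tune: bool) -> tf.keras.Model:
    # Backbone: MobileNetV2 without its ImageNet classifier head.
    # The paper's setup would load weights="imagenet"; weights=None
    # keeps this sketch self-contained and runnable offline.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights=None)
    # Frozen variant: backbone weights are not updated during training.
    # Fine-tuned variant: backbone weights are trained along with the head.
    base.trainable = fine_tune
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        # Binary head: damaged (1) vs. undamaged (0) building.
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

frozen = build_model(fine_tune=False)
tuned = build_model(fine_tune=True)
```

The VGG-16 variants follow the same pattern with `tf.keras.applications.VGG16` as the backbone. Freezing leaves only the small classification head trainable, which is why the frozen and fine-tuned variants can behave so differently on VGG-16 while MobileNetV2 is largely insensitive to the choice.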