A comparative analysis of Convolutional Neural Network models on Amazon Cloud Service

Conference: AIIPCC 2022 - The Third International Conference on Artificial Intelligence, Information Processing and Cloud Computing
21.06.2022 - 22.06.2022, online

Proceedings: AIIPCC 2022

Pages: 7
Language: English
Type: PDF

Authors:
Doukha, Rim (Computer Science Department, FPMs, University of Mons, Belgium & National Higher School of Computer Sciences and Systems Analysis, Mohammed V University, Rabat, Morocco)
Mahmoudi, Sidi Ahmed; Manneback, Pierre (Computer Science Department, FPMs, University of Mons, Belgium)
Zbakh, Mostapha (National Higher School of Computer Sciences and Systems Analysis, Mohammed V University, Rabat, Morocco)

Abstract:
Deep Learning (DL) is increasingly used in Cloud Computing services, where almost unlimited computing resources are available to accelerate the training, testing, and deployment of models. However, it is important to mention the challenges that developers may face when using Cloud services, for instance the variation of application requirements over time in terms of computation, memory, and energy consumption. This variation may require migration to higher-performance resources. Indeed, Cloud services dedicated to DL applications, such as Graphics Processing Unit (GPU) resources, are quite expensive, especially for small and medium-sized companies. In this context, it is beneficial for Cloud users to understand the needs of DL applications in order to keep their applications running well over time and to reduce the cost of the allocated resources. Considering the importance and complexity of Convolutional Neural Networks (CNN) in DL, this paper presents a comparative analysis of different CNN models (ResNet50, VGG16, VGG19, Inception-v3, Xception) in order to determine when migrating to more powerful GPUs is advantageous in terms of execution time and cost. The analysis was conducted by extracting GPU usage, execution time, and the associated cost of training each model, using Amazon Elastic Compute Cloud (EC2) instances dedicated to DL and Amazon CloudWatch for monitoring model metrics. Experimental results showed that it is recommended to migrate models using more than 90% of GPU capacity to more powerful infrastructures, whereas models using less than 90% can remain on their current resources.
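The 90% migration criterion reported in the abstract can be sketched as a simple decision rule. This is a minimal illustrative sketch, not code from the paper: the function name, the averaging of utilization samples, and the default threshold are assumptions; in practice the samples would come from a monitoring service such as Amazon CloudWatch.

```python
# Sketch of the reported migration criterion: models whose training keeps the
# GPU above ~90% utilization are candidates for migration to a more powerful
# (and more expensive) instance. The helper averages utilization samples
# (percent values) and applies the threshold. All names here are illustrative.

def should_migrate(gpu_utilization_samples, threshold=90.0):
    """Return True if mean GPU utilization (in percent) exceeds the threshold."""
    if not gpu_utilization_samples:
        raise ValueError("no utilization samples provided")
    mean_util = sum(gpu_utilization_samples) / len(gpu_utilization_samples)
    return mean_util > threshold

# Example: a near-saturated training run versus an underutilized one.
print(should_migrate([95.2, 97.8, 93.1]))  # saturated run -> True
print(should_migrate([60.0, 72.5, 55.0]))  # underutilized run -> False
```

A mean-based rule is only one possible reading of "using more than 90% of GPU capacity"; a percentile of the utilization samples could be substituted without changing the overall decision structure.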