FLBERT: Fast Lite BERT For Bone Point Cloud Self-Supervised Learning

Conference: CAIBDA 2022 - 2nd International Conference on Artificial Intelligence, Big Data and Algorithms
06/17/2022 - 06/19/2022 at Nanjing, China

Proceedings: CAIBDA 2022

Pages: 4
Language: English
Type: PDF

Authors:
Zhou, Changhong; Jiang, Junfeng (College of Internet of Things Engineering, Hohai University, Changzhou, China)

Abstract:
Increasing the model size when pretraining BERT can improve performance on downstream tasks, but it also increases inference time and GPU memory burden. To this end, we propose a novel self-supervised method, called FLBERT, that addresses these two problems with parameter sharing and additive attention. In addition, we represent the point cloud as a set of unordered groups of points with position embeddings, converting the point cloud into a sequence of point proxies. The experiments demonstrate that FLBERT reduces model size by 56% and inference time by 42% with less than 0.5% loss in accuracy.
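To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch. It assumes Fastformer-style additive attention (linear in sequence length), ALBERT-style cross-layer parameter sharing, and a simple grouping scheme for building point proxies; all class names are hypothetical, and random center sampling stands in for the farthest-point sampling typically used in practice. This is an illustrative sketch, not the authors' implementation.

```python
# Hypothetical sketch of the abstract's components; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttention(nn.Module):
    """Additive attention in the Fastformer style: each token's query is
    summarized into one global query via a learned softmax, giving O(N)
    cost instead of the O(N^2) of full self-attention."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)    # scores queries -> global query
        self.w_k = nn.Linear(dim, 1)    # scores keys    -> global key
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                               # x: (B, N, D)
        q, k = self.to_q(x), self.to_k(x)
        alpha = F.softmax(self.w_q(q), dim=1)           # (B, N, 1)
        g_q = (alpha * q).sum(dim=1, keepdim=True)      # (B, 1, D) global query
        p = g_q * k                                     # element-wise interaction
        beta = F.softmax(self.w_k(p), dim=1)
        g_k = (beta * p).sum(dim=1, keepdim=True)       # (B, 1, D) global key
        return self.proj(g_k * q) + x                   # residual connection


class SharedEncoder(nn.Module):
    """ALBERT-style cross-layer parameter sharing: one attention block is
    reused at every layer, so parameter count is independent of depth."""

    def __init__(self, dim, depth):
        super().__init__()
        self.block = AdditiveAttention(dim)             # single set of weights
        self.depth = depth

    def forward(self, x):
        for _ in range(self.depth):
            x = self.block(x)                           # same weights each layer
        return x


class PointProxyEmbed(nn.Module):
    """Turns a raw point cloud into a sequence of point proxies: sample G
    group centers, gather the k nearest points per center, embed each
    group, and add a position embedding of the group center."""

    def __init__(self, dim, groups=64, k=32):
        super().__init__()
        self.groups, self.k = groups, k
        self.feat = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.pos = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pts):                             # pts: (B, N, 3)
        B, N, _ = pts.shape
        # Random centers for brevity; FPS would be used in practice.
        idx = torch.randint(N, (B, self.groups), device=pts.device)
        centers = torch.gather(pts, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
        d = torch.cdist(centers, pts)                   # (B, G, N) pairwise distances
        nn_idx = d.topk(self.k, largest=False).indices  # (B, G, k) nearest neighbors
        nbrs = torch.gather(
            pts.unsqueeze(1).expand(-1, self.groups, -1, -1), 2,
            nn_idx.unsqueeze(-1).expand(-1, -1, -1, 3)) # (B, G, k, 3)
        local = nbrs - centers.unsqueeze(2)             # center-relative coordinates
        tokens = self.feat(local).max(dim=2).values     # per-group max-pool
        return tokens + self.pos(centers)               # (B, G, D) proxy sequence
```

Under these assumptions, a forward pass would chain the two parts, e.g. `SharedEncoder(dim=256, depth=12)(PointProxyEmbed(dim=256)(pts))`: sharing one block across twelve layers is what shrinks the parameter count, while additive attention is what cuts inference time.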