FLBERT: Fast Lite BERT For Bone Point Cloud Self-Supervised Learning

Conference: CAIBDA 2022 - 2nd International Conference on Artificial Intelligence, Big Data and Algorithms
17.06.2022 - 19.06.2022 in Nanjing, China

Proceedings: CAIBDA 2022

Pages: 4
Language: English
Type: PDF

Authors:
Zhou, Changhong; Jiang, Junfeng (College of Internet of Things Engineering, Hohai University, Changzhou, China)

Abstract:
Increasing model size when pretraining BERT can improve performance on downstream tasks, but it also increases inference time and GPU memory consumption. To address these two issues, we propose a novel self-supervised method, called FLBERT, built on parameter sharing and additive attention. In addition, we represent the point cloud as a set of unordered groups of points with position embeddings, converting the point cloud into a sequence of point proxies. Experiments demonstrate that FLBERT reduces model size by 56% and inference time by 42% with less than 0.5% loss in accuracy.
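The abstract does not give the authors' formulation, but the two techniques it names are well known: cross-layer parameter sharing (as in ALBERT) reuses one set of transformer-layer weights at every depth, and additive attention (as in Fastformer) replaces the O(n²) token-pair score matrix with a softmax-weighted global query computed in O(n). A minimal, hypothetical NumPy sketch of the additive-attention step — an illustration of the general idea, not the paper's implementation — might look like this:

```python
import numpy as np

def additive_attention(x, w_q, w_att):
    """Hypothetical sketch of additive (linear-time) attention.

    x:     (seq_len, d) token embeddings
    w_q:   (d, d)       query projection
    w_att: (d,)         learned attention vector producing one score per token
    Returns a single global query vector of shape (d,).
    """
    q = x @ w_q                                  # per-token queries, (seq_len, d)
    scores = q @ w_att                           # one scalar per token, (seq_len,)
    alpha = np.exp(scores - scores.max())        # numerically stable softmax
    alpha /= alpha.sum()                         # weights over the sequence, O(seq_len)
    global_q = (alpha[:, None] * q).sum(axis=0)  # pooled global query, (d,)
    return global_q

# Example usage with random weights (illustrative shapes only)
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
w_q = rng.normal(size=(4, 4))
w_att = rng.normal(size=(4,))
g = additive_attention(x, w_q, w_att)
```

Because every token is scored against a single learned vector rather than against every other token, the cost grows linearly with sequence length, which is consistent with the inference-time savings the abstract reports.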