LeFlow: Enabling Flexible FPGA High-Level Synthesis of Tensorflow Deep Neural Networks
Conference: FSP Workshop 2018 - Fifth International Workshop on FPGAs for Software Programmers
08/31/2018 at Dublin, Ireland
Pages: 8 · Language: English · Type: PDF
Noronha, Daniel H.; Salehpour, Bahar; Wilton, Steven J. E. (Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, B. C., Canada)
Recent work has shown that Field-Programmable Gate Arrays (FPGAs) play an important role in the acceleration of Machine Learning applications. The initial specification of a machine learning application is often done in a high-level Python-oriented framework such as Tensorflow, followed by a manual translation to either C or RTL for synthesis using vendor tools. This manual translation step is time-consuming and requires expertise that limits the applicability of FPGAs in this important domain. In this paper, we present an open-source tool-flow that maps numerical computation models written in Tensorflow to synthesizable hardware. Unlike other tools, which are often constrained by a small number of inflexible templates, our flow uses Google’s XLA compiler, which emits LLVM code directly from a Tensorflow specification. This LLVM code can then be used with a high-level synthesis tool to automatically generate hardware. We show that our flow allows users to generate Deep Neural Networks with very few lines of Python code.
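To illustrate the kind of model the abstract has in mind, the following is a minimal NumPy sketch of a single dense layer with ReLU activation, the sort of numerical computation a user would express in a few lines of Tensorflow before a flow like the one described compiles it (via XLA-emitted LLVM and high-level synthesis) to hardware. All names and values here are illustrative assumptions, not part of the LeFlow API or the paper's benchmarks.

```python
import numpy as np

def dense_relu(x, w, b):
    """One fully connected layer followed by ReLU: max(0, x @ w + b)."""
    return np.maximum(0.0, x @ w + b)

# Tiny fixed example: a 1x4 input, 4x3 weight matrix, and 3-element bias.
x = np.array([[1.0, 2.0, 3.0, 4.0]])
w = np.full((4, 3), 0.5)
b = np.array([-6.0, 0.0, 1.0])

y = dense_relu(x, w, b)  # each pre-activation column is 5.0 before bias
```

In the flow described above, an equivalent Tensorflow graph would be handed to XLA, which lowers it to LLVM IR; the HLS tool then consumes that IR instead of hand-written C or RTL.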