Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml

Ngadiuba, Jennifer; Loncar, Vladimir; Pierini, Maurizio; Summers, Sioni; Di Guglielmo, Giuseppe; Duarte, Javier; Harris, Philip; Rankin, Dylan; Jindariani, Sergo; Liu, Mia; Pedro, Kevin; Tran, Nhan; Kreinar, Edward; Sagear, Sheila; Wu, Zhenbin; Hoang, Duc (2020). Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml. Machine Learning: Science and Technology, 2 (1): 015001. ISSN 2632-2153.

Full text: Ngadiuba_2021_Mach._Learn.__Sci._Technol._2_015001.pdf (Published Version, 1 MB)

Abstract

We present the implementation of binary and ternary neural networks in the hls4ml library, which is designed to automatically convert deep neural network models into digital circuits implemented as field-programmable gate array (FPGA) firmware. Starting from benchmark models trained at floating-point precision, we investigate different strategies for reducing the network's resource consumption by lowering the numerical precision of the network parameters to binary or ternary. We discuss the trade-off between model accuracy and resource consumption and show how to balance latency and accuracy by retaining full precision for a selected subset of network components. As examples, we consider two multiclass classification tasks: handwritten digit recognition with the MNIST data set and jet identification with simulated proton-proton collisions at the CERN Large Hadron Collider. The binary and ternary implementations achieve performance similar to that of the higher-precision implementation while using drastically fewer FPGA resources.
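
To make the quantization schemes concrete, the following sketch (plain NumPy, not the hls4ml implementation itself; the function names and the 0.5 threshold are illustrative assumptions) shows how weights can be mapped to binary {-1, +1} values via the sign function and to ternary {-1, 0, +1} values via a magnitude threshold:

    import numpy as np

    def binarize(w):
        # Binary quantization: map every weight to {-1, +1} via the sign
        # function (zero is sent to +1 so each weight fits in one bit).
        return np.where(w >= 0, 1.0, -1.0)

    def ternarize(w, threshold=0.5):
        # Ternary quantization: weights whose magnitude, relative to the
        # largest weight in the tensor, falls below `threshold` become 0;
        # the rest keep only their sign. The 0.5 threshold is illustrative.
        scaled = w / np.max(np.abs(w))
        q = np.zeros_like(w)
        q[scaled > threshold] = 1.0
        q[scaled < -threshold] = -1.0
        return q

    # Example: quantize a small random weight matrix.
    rng = np.random.default_rng(seed=0)
    w = rng.normal(size=(3, 3))
    print(binarize(w))   # entries in {-1, +1}
    print(ternarize(w))  # entries in {-1, 0, +1}

On an FPGA this pays off because multiplications by {-1, 0, +1} reduce to sign flips and additions, so the multiplier (DSP) resources normally consumed by full-precision arithmetic are largely freed; the zero state of the ternary scheme additionally prunes connections outright.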

Item Type: Article
Subjects: Research Scholar Guardian > Multidisciplinary
Date Deposited: 15 Sep 2023 05:09
Last Modified: 15 Sep 2023 05:09
URI: http://science.sdpublishers.org/id/eprint/1290
