Efficient Approximate Floating-Point Multiplier With Runtime Reconfigurable Frequency and Precision
Abstract:
Deep Neural Networks (DNNs) perform intensive matrix multiplications but can tolerate some degree of inaccuracy in intermediate results. This makes them a natural target for energy reduction through approximate computing. However, current research in this direction requires redesigning the DNN and does not let users flexibly trade accuracy for energy savings. In this brief, we propose a runtime reconfigurable approximate floating-point multiplier and present the details of its hardware implementation. Flexible computation precision is provided by an error correction module controlled by reconfigurable clock signals, and the circuit design addresses the resulting glitch and metastability problems. The proposed approximate multiplier with three precision levels is evaluated with Synopsys Design Compiler and on Xilinx FPGA platforms. Experimental results demonstrate the advantages of our approach in terms of speed, hardware overhead, and power consumption, while keeping the accuracy loss of DNN inference controllable.
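The abstract does not include the circuit details, but the core idea of selectable-precision approximate floating-point multiplication can be illustrated in software. The sketch below is an assumption-laden stand-in, not the authors' design: it models reduced precision by truncating mantissa bits before an FP32 multiply, whereas the brief's hardware instead gates an error correction module with reconfigurable clocks. The three levels and their kept-bit widths (`kept_mantissa_bits`) are hypothetical values chosen for illustration.

```c
/* Minimal sketch (not the authors' RTL): an approximate FP32 multiply
 * where a "precision level" selects how many mantissa bits take part
 * in the multiplication. Level count and bit widths are illustrative
 * assumptions, not values from the brief. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical mapping: level 0 = coarsest, level 2 = full precision. */
static const int kept_mantissa_bits[3] = {4, 8, 23};

static float approx_fmul(float a, float b, int level)
{
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua);
    memcpy(&ub, &b, sizeof ub);

    /* Zero out the low mantissa bits before multiplying; a real
     * design would instead shrink the mantissa multiplier array
     * and correct the residual error in hardware. */
    uint32_t drop = 23u - (uint32_t)kept_mantissa_bits[level];
    uint32_t mask = ~((1u << drop) - 1u);        /* all ones when drop == 0 */
    ua &= 0xFF800000u | (0x007FFFFFu & mask);    /* keep sign+exponent intact */
    ub &= 0xFF800000u | (0x007FFFFFu & mask);

    float ta, tb;
    memcpy(&ta, &ua, sizeof ta);
    memcpy(&tb, &ub, sizeof tb);
    return ta * tb;   /* exact multiply on the truncated operands */
}

int main(void)
{
    float a = 3.14159f, b = 2.71828f;
    for (int lvl = 0; lvl < 3; ++lvl)
        printf("level %d: %.6f (exact %.6f)\n",
               lvl, approx_fmul(a, b, lvl), a * b);
    return 0;
}
```

The coarser the level, the fewer mantissa bits contribute to the product; in hardware this shrinks the partial-product array, which is where the speed and power savings the abstract reports would come from.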