Precision-scalable deep neural network (DNN) accelerator designs have attracted much research interest. Since the computation of most DNNs is dominated by multiply-accumulate (MAC) operations, designing efficient precision-scalable MAC (PSMAC) units is of central importance. This brief proposes two low-complexity PSMAC unit architectures based on the well-known Fusion Unit (FU), which is composed of a few basic units called Bit Bricks (BBs). We first simplify the BB architecture by optimizing redundant logic. Then a top-level PSMAC unit architecture is devised by recursively employing BBs. Accordingly, two low-complexity PSMAC unit architectures are presented for two different kinds of quantization schemes. Moreover, we provide insight into the decomposed multiplications and further reduce the bit widths of the two architectures. Experimental results show that the proposed architectures save up to 44.18% area cost and 45.45% power consumption compared with the state-of-the-art design.
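To illustrate the idea behind bit-brick decomposition (this is a behavioral sketch, not the brief's actual RTL), the following Python snippet shows how a wide multiplication can be broken into small fixed-width partial products and recombined with shifts, which is the role the BBs play in an FU-style PSMAC unit. The 2-bit brick width and 8-bit operand width are assumptions chosen for the example.

```python
BRICK = 2  # bit width handled by one bit brick (assumed for illustration)

def split_chunks(x, width, brick=BRICK):
    """Split an unsigned `width`-bit value into `brick`-bit chunks, LSB first."""
    return [(x >> (i * brick)) & ((1 << brick) - 1) for i in range(width // brick)]

def psmac_multiply(a, b, width=8, brick=BRICK):
    """Multiply two unsigned `width`-bit values by summing shifted
    brick-level partial products (the role of the bit bricks)."""
    acc = 0
    for i, ai in enumerate(split_chunks(a, width, brick)):
        for j, bj in enumerate(split_chunks(b, width, brick)):
            # Each (ai * bj) is a small BB-sized product; the shift places
            # it at the correct bit position in the full-width result.
            acc += (ai * bj) << ((i + j) * brick)
    return acc

# The decomposition is exact: the recombined result equals a direct multiply.
assert psmac_multiply(170, 85) == 170 * 85
```

Because the same small multipliers can either be combined (as above) for a full-precision product or used independently on low-precision operands, the array scales its effective precision, which is the property the proposed architectures optimize.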
Software Implementation:
- ModelSim
- Xilinx
Low-Complexity Precision-Scalable Multiply-Accumulate Unit Architectures for Deep Neural Network Accelerators