Feature matching is a fundamental step in many real-time computer vision applications such as simultaneous localization and mapping, motion analysis, and stereo correspondence. The performance of these applications depends on the distinctiveness of the visual feature descriptors used and the speed at which they can be extracted from video frames. When combined with standard keypoint detectors, the rotation-aware binary robust independent elementary features (rBRIEF) descriptor has been shown to outperform its counterparts. In this paper, we present a deeply pipelined stream processing architecture capable of extracting rBRIEF features from high-throughput video frames. To achieve a high processing rate with low hardware complexity, the proposed architecture incorporates an enhanced moving-summation strategy to calculate the keypoints’ patch moments and employs approximate computations to perform patch rotation. Multiplier-less circuitry is used throughout the architecture to avoid costly multipliers. Implementation on an Altera Arria V device demonstrates that the proposed architecture achieves a 53.3% reduction in hardware resources (adaptive logic modules) and 50% higher accuracy (in terms of average Hamming distance) compared with the state-of-the-art architecture. In addition, the proposed architecture processes high-resolution (1920×1080) images at 60 fps while consuming only 456.15 mW of power.
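The moving-summation strategy mentioned above targets the intensity-centroid patch moments that rBRIEF/ORB uses to orient each keypoint patch before the binary tests are applied. The following is a minimal software sketch of that computation, written for clarity rather than as the paper's hardware design; the function and parameter names (patch_orientation, patch radius r) are illustrative assumptions, not taken from the paper.

# Minimal sketch (assumed names, not the paper's RTL): intensity-centroid
# orientation of a keypoint patch, i.e. the quantity the moving-summation
# hardware accumulates before the patch is rotated.
import numpy as np

def patch_orientation(image, cx, cy, r=15):
    """Orientation (radians) of the circular patch centred at (cx, cy).

    Patch moments: m10 = sum(dx * I), m01 = sum(dy * I) over the patch;
    theta = atan2(m01, m10), as in ORB/rBRIEF.
    """
    m10, m01 = 0.0, 0.0
    for dy in range(-r, r + 1):
        # Limit dx to the circular patch so the moments match ORB's definition.
        dx_max = int(np.sqrt(r * r - dy * dy))
        for dx in range(-dx_max, dx_max + 1):
            pixel = float(image[cy + dy, cx + dx])
            m10 += dx * pixel
            m01 += dy * pixel
    return np.arctan2(m01, m10)

# Example: orientation of the patch around pixel (cx=120, cy=100) of a grayscale frame.
frame = np.random.default_rng(0).integers(0, 256, size=(1080, 1920), dtype=np.uint8)
theta = patch_orientation(frame, cx=120, cy=100)

The pure-Python double loop recomputes every pixel for readability; a moving-summation scheme such as the one described in the abstract would instead maintain running column sums and update them by adding the incoming pixels and subtracting the outgoing ones as the window slides.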
Software Implementation:
ModelSim
Xilinx
Advantages:
Costly multipliers are not required (a short matching sketch follows this list).
Lower power consumption, even at high resolution (1920 × 1080).
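To illustrate the first advantage from the descriptor-matching side, the sketch below (an illustration under assumed names, not the paper's circuit) shows that comparing binary rBRIEF descriptors reduces to XOR plus bit counting, i.e. the Hamming distance quoted as the accuracy metric above, so no multipliers are needed in the matching path either.

# Illustrative only: matching 256-bit rBRIEF descriptors with XOR and popcount,
# which is exactly the Hamming distance used as the accuracy metric above.
import numpy as np

def hamming_distance(desc_a, desc_b):
    """Number of differing bits between two descriptors packed as uint8 bytes."""
    return int(np.unpackbits(np.bitwise_xor(desc_a, desc_b)).sum())

# Example: two random 256-bit descriptors stored as 32 bytes each.
rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=32, dtype=np.uint8)
b = rng.integers(0, 256, size=32, dtype=np.uint8)
print(hamming_distance(a, b))  # around 128 expected for unrelated random descriptors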