DSD-MatchingNet: sparse-to-dense deformable feature matching | EurekAlert!

Visualization of a correspondence map

Image: from left to right: one image with a keypoint (red circle) in (a), intermediate feature maps generated by DSD-MatchingNet in (b, c, d), the final correspondence map in (e), and the predicted correspondence point in the other image in (f)

Credit: Beijing Zhongke Journal Publishing Co., Ltd.

Detection methods based on deep convolutional networks search for interest points by generating response maps using supervised, self-supervised, or unsupervised schemes. Supervised methods use anchors to guide model training; however, model performance is likely limited by the anchor generation method. Self-supervised and unsupervised methods rarely require human annotations; instead, they use geometric constraints between two images to guide the model.

Feature descriptors use local information (i.e., patches) around the detected keypoints to find correct correspondences. Owing to their exceptional information extraction and representation capabilities, deep learning techniques have performed well at describing features. Feature description is often formulated as a supervised learning problem in which the feature space is learned such that matching features are as close as possible, while non-matching features are farther apart. Along this line of research, current methods fall into two categories: metric learning and descriptor learning. The difference between them lies in the output of the descriptors: metric learning methods learn discriminative similarity measures, whereas descriptor learning generates descriptive representations from raw images or patches.

Many methods adopt an integrated approach that combines feature detection, feature description, and feature matching in a single pipeline, which is beneficial for matching performance. Several recent studies have shown competitive results in local feature matching. However, their robustness and accuracy are often limited under challenging conditions, such as illumination and seasonal changes. Local feature matching may fail to establish sufficiently reliable correspondences because of illumination differences and viewpoint changes. Correspondence accuracy plays an important role in the pipelines of computer vision tasks.
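The descriptor-learning objective described above — matching features pulled as close as possible, non-matching features pushed farther apart — is commonly implemented as a triplet (hinge) loss. The sketch below is illustrative, not the paper's actual loss; all names and the margin value are assumptions:

```python
import numpy as np

def triplet_descriptor_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss on descriptor triplets: each matching pair (anchor, positive)
    should be at least `margin` closer (in squared L2 distance) than the
    non-matching pair (anchor, negative). Inputs are (N, D) descriptor arrays."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)  # distance to the true match
    d_neg = np.sum((anchor - negative) ** 2, axis=1)  # distance to a non-match
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

When the non-match is already much farther away than the match, the hinge is inactive and the loss is zero; otherwise the network is penalized until the margin is satisfied.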
The better the detection and matching quality, the more accurate and robust the results. We consider shape awareness to be useful for feature matching. Therefore, in this study, we introduce DSD-MatchingNet for local feature matching. To alleviate the features' lack of shape awareness, we first introduce a deformable feature extraction framework with deformable convolutional networks, which allows us to learn a dynamic receptive field, estimate local transformations, and adjust for geometric differences. Second, to facilitate matching at the pixel level, we develop sparse-to-dense hypercolumn matching for learning correspondence maps. We then adopt the correspondence estimation error and the cycle-consistency error to obtain more accurate and robust correspondences. By making effective use of the above methods, the accuracy of DSD-MatchingNet was enhanced on the HPatches and Aachen Day-Night datasets. The main contributions of this study are summarized as follows:

We propose a new network, DSD-MatchingNet, that takes advantage of sparse-to-dense hypercolumn matching for robust and accurate local feature matching.

We propose a deformable feature extraction framework to obtain dense multi-level feature maps, which are used for subsequent sparse-to-dense matching. Deformable convolutional networks are introduced into our framework to provide a dynamic receptive field, which is useful for feature matching and encourages the network to learn more robust representations.
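The core idea of a deformable convolution — sampling the feature map at learned, fractionally offset positions instead of a fixed grid, which is what yields the dynamic receptive field mentioned above — can be sketched for a single 3x3 tap as follows. This is a simplified NumPy illustration of the mechanism, not the paper's implementation (real networks predict the offsets with an extra convolution and run this densely on GPU):

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate a 2-D feature map at real-valued (y, x)."""
    H, W = fmap.shape
    y = np.clip(y, 0, H - 1); x = np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * fmap[y0, x0] + (1 - wy) * wx * fmap[y0, x1]
            + wy * (1 - wx) * fmap[y1, x0] + wy * wx * fmap[y1, x1])

def deformable_conv_at(fmap, weights, offsets, cy, cx):
    """One output of a 3x3 deformable convolution centered at (cy, cx).

    offsets: (3, 3, 2) learned per-tap (dy, dx) shifts that warp the regular
    sampling grid, so the receptive field can adapt to local geometry."""
    out = 0.0
    for i, dy in enumerate((-1, 0, 1)):
        for j, dx in enumerate((-1, 0, 1)):
            oy, ox = offsets[i, j]
            out += weights[i, j] * bilinear_sample(fmap, cy + dy + oy, cx + dx + ox)
    return out
```

With all offsets set to zero this reduces to an ordinary 3x3 convolution; non-zero offsets let each tap drift toward the locally deformed structure.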

We propose a pixel-level correspondence estimation error and a cycle-consistency error to penalize incorrect predictions, which helps the network find exact matches.
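The cycle-consistency idea above — a correspondence from image A to image B should map back to its starting point when followed from B to A — can be sketched as a forward-backward error. This is a minimal illustration of the principle, not the paper's exact loss; the dictionary-based "flow" representation is an assumption made for brevity:

```python
import numpy as np

def cycle_consistency_error(flow_ab, flow_ba, pts_a):
    """Forward-backward consistency: map each point A -> B, then B -> A,
    and measure how far it lands from where it started.

    flow_ab, flow_ba: dicts mapping (y, x) in one image to (y, x) in the other.
    pts_a: list of (y, x) query points in image A."""
    errors = []
    for p in pts_a:
        q = flow_ab[p]        # predicted correspondence in image B
        p_back = flow_ba[q]   # mapped back into image A
        errors.append(np.hypot(p_back[0] - p[0], p_back[1] - p[1]))
    return float(np.mean(errors))
```

A perfectly symmetric pair of correspondence maps gives zero error; any asymmetry between the A-to-B and B-to-A predictions contributes a positive penalty.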

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
