Feature detectors and descriptors

Functions

Functions and classes to detect and describe image features

Bundles OpenCV feature detectors and descriptors into the FeatureDD class

Also makes it easier to mix and match feature detectors and descriptors from different packages (e.g. skimage and OpenCV). See CensureVggFD for an example

valis.feature_detectors.filter_features(kp, desc, n_keep=20000)[source]

Get keypoints with highest response

Parameters:
  • kp (list) – List of cv2.KeyPoint detected by an OpenCV feature detector.

  • desc (ndarray) – 2D numpy array of keypoint descriptors, where each row is a keypoint and each column a feature.

  • n_keep (int) – Maximum number of features that are retained.

Returns:

  • Keypoints and corresponding descriptors with the n_keep highest responses.
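The selection that filter_features performs can be sketched in plain NumPy (hypothetical responses and descriptors for illustration, not the library's actual implementation):

```python
import numpy as np

# Hypothetical keypoint responses (detection strength) and a matching
# descriptor matrix with one row per keypoint.
responses = np.array([0.9, 0.1, 0.5, 0.7])
desc = np.arange(8).reshape(4, 2)

n_keep = 2
# Indices of the n_keep highest responses, strongest first
keep = np.argsort(-responses)[:n_keep]
filtered_desc = desc[keep]  # rows for keypoints 0 and 3
```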

Classes

Base feature detector

class valis.feature_detectors.FeatureDD(kp_detector=None, kp_descriptor=None)[source]

Abstract class for feature detection and description.

Users can create other feature detectors as subclasses, but each must return keypoint positions in xy coordinates along with the descriptors for each keypoint.

Note that in some cases, such as KAZE, kp_detector can also detect features. However, in other cases, there may need to be a separate feature detector (like BRISK or ORB) and feature descriptor (like VGG).
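The split mirrors OpenCV's detect/compute interface. A toy illustration of how a stand-alone detector and a stand-alone descriptor compose (hypothetical classes with no OpenCV dependency; real pairs like BRISK + VGG follow the same call pattern):

```python
import numpy as np

class ToyDetector:
    """Hypothetical detector: picks the two brightest pixels."""
    def detect(self, image):
        flat = np.argsort(image.ravel())[-2:]
        # return (x, y) positions, brightest last
        return [(int(i % image.shape[1]), int(i // image.shape[1]))
                for i in flat]

class ToyDescriptor:
    """Hypothetical descriptor: pixel intensity plus position."""
    def compute(self, image, keypoints):
        return np.array([[image[y, x], x, y] for x, y in keypoints])

img = np.zeros((4, 4), dtype=np.uint8)
img[1, 2] = 200
img[3, 0] = 150
kp = ToyDetector().detect(img)          # detector finds keypoints
desc = ToyDescriptor().compute(img, kp) # descriptor describes them
```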

kp_detector

Keypoint detector, by default from OpenCV

Type:

object

kp_descriptor

Keypoint descriptor, by default from OpenCV

Type:

object

kp_detector_name

Name of keypoint detector

Type:

str

kp_descriptor_name

Name of keypoint descriptor

Type:

str

detectAndCompute(image, mask=None)

Detects and describes keypoints in image

__init__(kp_detector=None, kp_descriptor=None)[source]
Parameters:
  • kp_detector (object) – Keypoint detector, by default from OpenCV

  • kp_descriptor (object) – Keypoint descriptor, by default from OpenCV

detect_and_compute(image, mask=None)[source]

Detect the features in the image

Detect the features in the image using the defined kp_detector, then describe the features using the kp_descriptor. The user can override this method so they don’t have to use OpenCV’s cv2.KeyPoint class.

Parameters:
  • image (ndarray) – Image in which the features will be detected. Should be a 2D uint8 image if using OpenCV

  • mask (ndarray, optional) – Binary image with same shape as image, where foreground > 0, and background = 0. If provided, feature detection will only be performed on the foreground.

Returns:

  • kp (ndarray) – (N, 2) array of keypoint positions in xy coordinates for N keypoints

  • desc (ndarray) – (N, M) array containing M features for each of the N keypoints
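A custom detector only has to honor this contract: return an (N, 2) xy array of positions and an (N, M) descriptor array. A minimal stand-alone sketch with a dummy grid detector (hypothetical class, not part of valis, and not inheriting the real FeatureDD):

```python
import numpy as np

class GridFD:
    """Dummy detector: keypoints on a regular grid, random descriptors."""

    def __init__(self, step=8, n_features=16, seed=0):
        self.step = step
        self.n_features = n_features
        self.rng = np.random.default_rng(seed)

    def detect_and_compute(self, image, mask=None):
        # Lay keypoints on a grid covering the image
        ys, xs = np.mgrid[0:image.shape[0]:self.step,
                          0:image.shape[1]:self.step]
        kp = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
        if mask is not None:
            # keep only keypoints on the foreground (mask > 0)
            in_fg = mask[kp[:, 1].astype(int), kp[:, 0].astype(int)] > 0
            kp = kp[in_fg]
        # Random stand-in for real descriptors, one row per keypoint
        desc = self.rng.random((kp.shape[0], self.n_features))
        return kp, desc

kp, desc = GridFD().detect_and_compute(np.zeros((16, 16), np.uint8))
```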

BRISK

class valis.feature_detectors.BriskFD(kp_descriptor=<cv2.BRISK object>)[source]

Bases: FeatureDD

Uses BRISK for feature detection and description

KAZE

class valis.feature_detectors.KazeFD(kp_descriptor=<cv2.KAZE object>)[source]

Bases: FeatureDD

Uses KAZE for feature detection and description

AKAZE

class valis.feature_detectors.AkazeFD(kp_descriptor=<cv2.AKAZE object>)[source]

Bases: FeatureDD

Uses AKAZE for feature detection and description

DAISY

class valis.feature_detectors.DaisyFD(kp_detector=<cv2.BRISK object>, kp_descriptor=<cv2.xfeatures2d.DAISY object>)[source]

Bases: FeatureDD

Uses BRISK for feature detection and DAISY for feature description

LATCH

class valis.feature_detectors.LatchFD(kp_detector=<cv2.BRISK object>, kp_descriptor=<cv2.xfeatures2d.LATCH object>)[source]

Bases: FeatureDD

Uses BRISK for feature detection and LATCH for feature description

BOOST

class valis.feature_detectors.BoostFD(kp_detector=<cv2.BRISK object>, kp_descriptor=<cv2.xfeatures2d.BoostDesc object>)[source]

Bases: FeatureDD

Uses BRISK for feature detection and Boost for feature description

VGG

class valis.feature_detectors.VggFD(kp_detector=<cv2.BRISK object>, kp_descriptor=<cv2.xfeatures2d.VGG object>)[source]

Bases: FeatureDD

Uses BRISK for feature detection and VGG for feature description

Orb + Vgg

class valis.feature_detectors.OrbVggFD(kp_detector=<cv2.ORB object>, kp_descriptor=<cv2.xfeatures2d.VGG object>)[source]

Bases: FeatureDD

Uses ORB for feature detection and VGG for feature description

SuperPoint

class valis.feature_detectors.SuperPointFD(keypoint_threshold=0.005, nms_radius=4, force_cpu=False, kp_descriptor=None, kp_detector=None)[source]

Bases: FeatureDD

SuperPoint FeatureDD

Use SuperPoint to detect and describe features (detect_and_compute). Adapted from https://github.com/magicleap/SuperGluePretrainedNetwork/blob/master/match_pairs.py

References

Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperGlue: Learning Feature Matching with Graph Neural Networks. In CVPR, 2020. https://arxiv.org/abs/1911.11763