Feature matching

Functions

Functions and classes to match and filter image features

valis.feature_matcher.filter_matches(kp1_xy, kp2_xy, method='RANSAC', filtering_kwargs=None)[source]

Use RANSAC or GMS to remove poor matches

Parameters:
  • kp1_xy (ndarray) – (N, 2) array containing image 1’s keypoint positions, in xy coordinates.

  • kp2_xy (ndarray) – (N, 2) array containing image 2’s keypoint positions, in xy coordinates.

  • method (str) – method = “GMS” will use filter_matches_gms() to remove poor matches, using Grid-based Motion Statistics. method = “RANSAC” will use RANSAC to remove poor matches

  • filtering_kwargs (dict) –

    Extra arguments passed to filtering function

    If method == “GMS”, these need to include: img1_shape, img2_shape, scaling, thresholdFactor. See filter_matches_gms for details

    If method == “RANSAC”, this can be None, since the ransac value is a class attribute

Returns:

  • filtered_src_points (ndarray) – (M, 2) ndarray of inlier keypoints from kp1_xy

  • filtered_dst_points (ndarray) – (M, 2) ndarray of inlier keypoints from kp2_xy

  • good_idx (ndarray) – (M, 1) array containing indices of inliers
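The shape contract above can be illustrated with a toy, numpy-only RANSAC for a pure-translation model. This is a simplified sketch, not valis’s actual implementation (which filters against a richer transform); the names translation_ransac, thresh, and n_iters are hypothetical.

```python
import numpy as np

def translation_ransac(kp1_xy, kp2_xy, thresh=7.0, n_iters=200, seed=0):
    """Toy RANSAC: one sampled correspondence proposes a shift, and
    correspondences within `thresh` pixels of that model are inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(kp1_xy), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(kp1_xy))           # 1 point determines a translation
        shift = kp2_xy[i] - kp1_xy[i]
        resid = np.linalg.norm(kp1_xy + shift - kp2_xy, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():  # keep the largest consensus set
            best_inliers = inliers
    good_idx = np.flatnonzero(best_inliers)
    # (M, 2), (M, 2), (M,) -- mirrors filtered_src_points, filtered_dst_points, good_idx
    return kp1_xy[good_idx], kp2_xy[good_idx], good_idx
```

Both keypoint arrays shrink from N rows to the same M inlier rows, and good_idx indexes back into the original arrays.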

valis.feature_matcher.match_desc_and_kp(desc1, kp1_xy, desc2, kp2_xy, metric=None, metric_type=None, metric_kwargs=None, max_ratio=1.0, filter_method='RANSAC', filtering_kwargs=None)[source]

Match the descriptors of image 1 with those of image 2 and remove outliers.

Metric can be a string naming a distance in scipy.spatial.distance.cdist(), or a custom distance function

Parameters:
  • desc1 (ndarray) – (N, P) array of image 1’s descriptors for N keypoints, each keypoint having P features

  • kp1_xy (ndarray) – (N, 2) array containing image 1’s keypoint positions (xy)

  • desc2 (ndarray) – (M, P) array of image 2’s descriptors for M keypoints, each keypoint having P features

  • kp2_xy (ndarray) – (M, 2) array containing image 2’s keypoint positions (xy)

  • metric (string, or callable) – Metric to calculate distance between each pair of features in desc1 and desc2. Can be a string to use as distance in spatial.distance.cdist, or a custom distance function

  • metric_kwargs (dict) – Optional keyword arguments to be passed into pairwise_distances() or pairwise_kernels() from the sklearn.metrics.pairwise module

  • max_ratio (float, optional) – Maximum ratio of distances between first and second closest descriptor in the second set of descriptors. This threshold is useful to filter ambiguous matches between the two descriptor sets. The choice of this value depends on the statistics of the chosen descriptor, e.g., for SIFT descriptors a value of 0.8 is usually chosen, see D.G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints”, International Journal of Computer Vision, 2004.

  • filter_method (str) – “GMS” will use Grid-based Motion Statistics; “RANSAC” will use RANSAC

  • filtering_kwargs (dict) –

    Dictionary containing extra arguments for the filtering method. kp1_xy, kp2_xy, feature_d are calculated here, and don’t need to be in filtering_kwargs. If filter_method == “GMS”, then the required arguments are: img1_shape, img2_shape, scaling, thresholdFactor. See filter_matches_gms for details.

    If filter_method == “RANSAC”, then the required arguments are: ransac_val. See filter_matches_ransac for details.

Returns:

  • match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results have undergone filtering, and so contain good matches

  • match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results have undergone filtering, and so contain good matches
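As an illustration of the matching step described above (not valis’s internals), the distance matrix can be computed with scipy.spatial.distance.cdist and ambiguous matches dropped with Lowe’s ratio test, mirroring the metric and max_ratio parameters. The helper name ratio_test_matches is hypothetical, and this sketch returns (Q, 2) index pairs rather than the MatchInfo records valis produces.

```python
import numpy as np
from scipy.spatial import distance

def ratio_test_matches(desc1, desc2, max_ratio=0.8):
    """Brute-force matching with Lowe's ratio test."""
    d = distance.cdist(desc1, desc2, metric="euclidean")  # (N, M) distance matrix
    nearest = np.argmin(d, axis=1)                        # best match per row
    d_sorted = np.sort(d, axis=1)
    # Ratio of best to second-best distance; ambiguous matches have ratio near 1
    ratio = d_sorted[:, 0] / np.maximum(d_sorted[:, 1], 1e-12)
    keep = np.flatnonzero(ratio < max_ratio)
    return np.column_stack([keep, nearest[keep]])         # (Q, 2) index pairs
```

A match survives only when its nearest descriptor is clearly closer than the runner-up, which is why 0.8 works well for SIFT-like descriptors.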

Classes

MatchInfo

class valis.feature_matcher.MatchInfo(matched_kp1_xy, matched_desc1, matches12, matched_kp2_xy, matched_desc2, matches21, match_distances, distance, similarity, metric_name, metric_type, img1_name=None, img2_name=None)[source]

Class that stores information related to matches. One per pair of images

All attributes are set as parameters during initialization

__init__(matched_kp1_xy, matched_desc1, matches12, matched_kp2_xy, matched_desc2, matches21, match_distances, distance, similarity, metric_name, metric_type, img1_name=None, img2_name=None)[source]

Stores information about matches and features

Parameters:
  • matched_kp1_xy (ndarray) – (Q, 2) array of image 1 keypoint xy coordinates after filtering

  • matched_desc1 (ndarray) – (Q, P) array of matched descriptors for image 1, each of which has P features

  • matches12 (ndarray) – (1, Q) array of indices of features in image 1 that matched those in image 2

  • matched_kp2_xy (ndarray) – (Q, 2) array containing Q matched image 2 keypoint xy coordinates after filtering

  • matched_desc2 (ndarray) – (Q, P) containing Q matched descriptors for image 2, each of which has P features

  • matches21 (ndarray) – (1, Q) array containing indices of features in image 2 that matched those in image 1

  • match_distances (ndarray) – Distances between each of the Q pairs of matched descriptors

  • n_matches (int) – Number of good matches (i.e. the number of inlier keypoints)

  • distance (float) – Mean distance of features

  • similarity (float) – Mean similarity of features

  • metric_name (str) – Name of metric

  • metric_type (str) – “distance” or “similarity”

  • img1_name (str) – Name of the image that kp1 and desc1 belong to

  • img2_name (str) – Name of the image that kp2 and desc2 belong to
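The matches12/matches21 pair above supports a standard cross-check: a correspondence is trustworthy when image 1’s best match in image 2 points back to the same keypoint. A minimal numpy sketch, treating the match arrays as flat index vectors (the helper name mutual_match_idx is hypothetical):

```python
import numpy as np

def mutual_match_idx(matches12, matches21):
    """Indices of image 1 keypoints whose match in image 2 points back to them."""
    back = matches21[matches12]                  # follow 1 -> 2 -> 1
    return np.flatnonzero(back == np.arange(len(matches12)))
```

Non-mutual matches, where two image 1 keypoints compete for the same image 2 keypoint, are discarded.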

Matcher

class valis.feature_matcher.Matcher(metric=None, metric_type=None, metric_kwargs=None, match_filter_method='RANSAC', ransac_thresh=7, gms_threshold=15, scaling=False)[source]

Class that matches the descriptors of image 1 with those of image 2

Outliers removed using RANSAC or GMS

metric

Metric to calculate distance between each pair of features in desc1 and desc2. Can be a string to use as distance in spatial.distance.cdist, or a custom distance function

Type:

str, or callable

metric_name

Name of the metric used. Will be the same as metric if metric is a string. If metric is a function, this will be the name of the function.

Type:

str

metric_type

String describing what the custom metric function returns, e.g. ‘similarity’ or ‘distance’. If None and metric is a function, it is assumed to be a distance, but a warning will recommend providing this variable, either to declare that it is a similarity, or to silence the warning with metric_type=’distance’. In the case of similarity, the number of features will be used to convert the values to distances

Type:

str

ransac

The residual threshold to determine if a match is an inlier. Only used if match_filter_method is “RANSAC” (the default)

Type:

int

gms_threshold

Used when filter_method is “GMS”. The higher, the fewer matches.

Type:

int

scaling

Whether or not image scaling should be considered when filter_method is “GMS”

Type:

bool

metric_kwargs

Keyword arguments passed into the metric when calling spatial.distance.cdist

Type:

dict

match_filter_method

Method used to remove poor matches. “GMS” will use filter_matches_gms(), which applies Grid-based Motion Statistics; “RANSAC” will use RANSAC.

Type:

str

__init__(metric=None, metric_type=None, metric_kwargs=None, match_filter_method='RANSAC', ransac_thresh=7, gms_threshold=15, scaling=False)[source]
Parameters:
  • metric (str, or callable) – Metric to calculate distance between each pair of features in desc1 and desc2. Can be a string to use as distance in spatial.distance.cdist, or a custom distance function

  • metric_type (str, or callable) – String describing what the custom metric function returns, e.g. ‘similarity’ or ‘distance’. If None, and metric is a function it is assumed to be a distance, but there will be a warning that this variable should be provided to either define that it is a similarity, or to avoid the warning by having metric_type=’distance’ In the case of similarity, the number of features will be used to convert distances

  • metric_kwargs (dict) – Keyword arguments passed into the metric when calling spatial.distance.cdist

  • match_filter_method (str) – “GMS” will use filter_matches_gms(), which applies Grid-based Motion Statistics, to remove poor matches; “RANSAC” will use RANSAC.

  • ransac_thresh (int) – The residual threshold to determine if a match is an inlier. Only used if match_filter_method is “RANSAC”.

  • gms_threshold (int) – Used when filter_method is “GMS”. The higher, the fewer matches.

  • scaling (bool) – Whether or not image scaling should be considered when filter_method is “GMS”.
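Because the metric parameter accepts either a string or a callable, the two forms are interchangeable wherever the docs mention spatial.distance.cdist. A short sketch (cosine_distance is an illustrative name; the built-in "cosine" string is the equivalent shortcut):

```python
import numpy as np
from scipy.spatial import distance

def cosine_distance(u, v):
    """Custom callable metric: 1 - cosine similarity of two feature vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

desc1 = np.array([[1.0, 0.0], [0.0, 1.0]])
desc2 = np.array([[2.0, 0.0], [1.0, 1.0]])

d_str = distance.cdist(desc1, desc2, metric="cosine")        # string metric
d_fun = distance.cdist(desc1, desc2, metric=cosine_distance)  # callable metric
```

A callable is useful when the descriptor calls for a distance cdist does not ship, e.g. a learned or domain-specific measure.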

match_images(desc1, kp1_xy, desc2, kp2_xy, additional_filtering_kwargs=None, *args, **kwargs)[source]

Match the descriptors of image 1 with those of image 2; outliers are removed using match_filter_method. Metric can be a string naming a distance in scipy.spatial.distance.cdist(), or a custom distance function. Sets attributes of the Matcher object

Parameters:
  • desc1 ((N, P) array) – Image 1’s 2D array containing N keypoints, each of which has P features

  • kp1_xy ((N, 2) array) – Image 1’s keypoint positions, in xy coordinates, for each of the N descriptors in desc1

  • desc2 ((M, P) array) – Image 2’s 2D array containing M keypoints, each of which has P features

  • kp2_xy ((M, 2) array) – Image 2’s keypoint positions, in xy coordinates, for each of the M descriptors in desc2

  • additional_filtering_kwargs (dict, optional) – Extra arguments passed to the filtering function. If self.match_filter_method == “GMS”, these need to include: img1_shape, img2_shape. See filter_matches_gms for details. If self.match_filter_method == “RANSAC”, this can be None, since the ransac value is a class attribute

Returns:

  • match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results have undergone filtering, and so contain good matches

  • match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results have undergone filtering, and so contain good matches

SuperPointAndGlue

class valis.feature_matcher.SuperPointAndGlue(weights='indoor', keypoint_threshold=0.005, nms_radius=4, sinkhorn_iterations=100, match_threshold=0.2, force_cpu=False, metric=None, metric_type=None, metric_kwargs=None, match_filter_method='RANSAC', ransac_thresh=7, gms_threshold=15, scaling=False)[source]

Use SuperPoint + SuperGlue to match images (match_images)

Implementation adapted from https://github.com/magicleap/SuperGluePretrainedNetwork/blob/master/match_pairs.py

References

Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperGlue: Learning Feature Matching with Graph Neural Networks. In CVPR, 2020. https://arxiv.org/abs/1911.11763

__init__(weights='indoor', keypoint_threshold=0.005, nms_radius=4, sinkhorn_iterations=100, match_threshold=0.2, force_cpu=False, metric=None, metric_type=None, metric_kwargs=None, match_filter_method='RANSAC', ransac_thresh=7, gms_threshold=15, scaling=False)[source]
Parameters:
  • weights (str) – SuperGlue weights. Options= [“indoor”, “outdoor”]

  • keypoint_threshold (float) – SuperPoint keypoint detector confidence threshold

  • nms_radius (int) – SuperPoint Non Maximum Suppression (NMS) radius (must be positive)

  • sinkhorn_iterations (int) – Number of Sinkhorn iterations performed by SuperGlue

  • match_threshold (float) – SuperGlue match threshold

  • force_cpu (bool) – Force pytorch to run in CPU mode

  • scaling (bool) – Whether or not image scaling should be considered when filter_method is “GMS”.

match_images(img1=None, desc1=None, kp1_xy=None, img2=None, desc2=None, kp2_xy=None, additional_filtering_kwargs=None)[source]

Match the descriptors of image 1 with those of image 2; outliers are removed using match_filter_method. Metric can be a string naming a distance in scipy.spatial.distance.cdist(), or a custom distance function. Sets attributes of the Matcher object

Parameters:
  • desc1 ((N, P) array) – Image 1’s 2D array containing N keypoints, each of which has P features

  • kp1_xy ((N, 2) array) – Image 1’s keypoint positions, in xy coordinates, for each of the N descriptors in desc1

  • desc2 ((M, P) array) – Image 2’s 2D array containing M keypoints, each of which has P features

  • kp2_xy ((M, 2) array) – Image 2’s keypoint positions, in xy coordinates, for each of the M descriptors in desc2

  • additional_filtering_kwargs (dict, optional) – Extra arguments passed to the filtering function. If self.match_filter_method == “GMS”, these need to include: img1_shape, img2_shape. See filter_matches_gms for details. If self.match_filter_method == “RANSAC”, this can be None, since the ransac value is a class attribute

Returns:

  • match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results have undergone filtering, and so contain good matches

  • match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results have undergone filtering, and so contain good matches

SuperGlueMatcher

class valis.feature_matcher.SuperGlueMatcher(weights='indoor', keypoint_threshold=0.005, nms_radius=4, sinkhorn_iterations=100, match_threshold=0.2, force_cpu=False, metric=None, metric_type=None, metric_kwargs=None, match_filter_method='RANSAC', ransac_thresh=7, gms_threshold=15, scaling=False)[source]

Use SuperGlue to match images (match_images)

Implementation adapted from https://github.com/magicleap/SuperGluePretrainedNetwork/blob/master/match_pairs.py

References

Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperGlue: Learning Feature Matching with Graph Neural Networks. In CVPR, 2020. https://arxiv.org/abs/1911.11763

__init__(weights='indoor', keypoint_threshold=0.005, nms_radius=4, sinkhorn_iterations=100, match_threshold=0.2, force_cpu=False, metric=None, metric_type=None, metric_kwargs=None, match_filter_method='RANSAC', ransac_thresh=7, gms_threshold=15, scaling=False)[source]

Use SuperGlue to match images (match_images)

Adapted from https://github.com/magicleap/SuperGluePretrainedNetwork/blob/master/match_pairs.py

Parameters:
  • weights (str) – SuperGlue weights. Options= [“indoor”, “outdoor”]

  • keypoint_threshold (float) – SuperPoint keypoint detector confidence threshold

  • nms_radius (int) – SuperPoint Non Maximum Suppression (NMS) radius (must be positive)

  • sinkhorn_iterations (int) – Number of Sinkhorn iterations performed by SuperGlue

  • match_threshold (float) – SuperGlue match threshold

  • force_cpu (bool) – Force pytorch to run in CPU mode

  • scaling (bool) – Whether or not image scaling should be considered when filter_method is “GMS”.

match_images(img1=None, desc1=None, kp1_xy=None, img2=None, desc2=None, kp2_xy=None, additional_filtering_kwargs=None)[source]

Match the descriptors of image 1 with those of image 2; outliers are removed using match_filter_method. Metric can be a string naming a distance in scipy.spatial.distance.cdist(), or a custom distance function. Sets attributes of the Matcher object

Parameters:
  • desc1 ((N, P) array) – Image 1’s 2D array containing N keypoints, each of which has P features

  • kp1_xy ((N, 2) array) – Image 1’s keypoint positions, in xy coordinates, for each of the N descriptors in desc1

  • desc2 ((M, P) array) – Image 2’s 2D array containing M keypoints, each of which has P features

  • kp2_xy ((M, 2) array) – Image 2’s keypoint positions, in xy coordinates, for each of the M descriptors in desc2

  • additional_filtering_kwargs (dict, optional) – Extra arguments passed to the filtering function. If self.match_filter_method == “GMS”, these need to include: img1_shape, img2_shape. See filter_matches_gms for details. If self.match_filter_method == “RANSAC”, this can be None, since the ransac value is a class attribute

Returns:

  • match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info12 (MatchInfo) – Contains information regarding the matches between image 1 and image 2. These results have undergone filtering, and so contain good matches

  • match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results haven’t undergone filtering, so contain many poor matches.

  • filtered_match_info21 (MatchInfo) – Contains information regarding the matches between image 2 and image 1. These results have undergone filtering, and so contain good matches