Johan Edstedt · Qiyu Sun · Georg Bökman · Mårten Wadenbäck · Michael Felsberg
RoMa is a robust dense feature matcher capable of estimating pixel-dense warps and reliable certainties for almost any image pair.
In your Python environment (tested on Linux with Python 3.12), run:
```bash
uv pip install -e .
```
or
```bash
uv sync
```
You can also install romatch directly as a package from PyPI:
```bash
uv pip install romatch
```
We provide two demos in the demos folder. Here's the gist of it:
```python
import cv2
import torch
from romatch import roma_outdoor

device = "cuda" if torch.cuda.is_available() else "cpu"
roma_model = roma_outdoor(device=device)
# Match (imA_path and imB_path are paths to your two images)
warp, certainty = roma_model.match(imA_path, imB_path, device=device)
# Sample matches for estimation
matches, certainty = roma_model.sample(warp, certainty)
# Convert to pixel coordinates (RoMa produces matches in [-1,1]x[-1,1]);
# H_A, W_A and H_B, W_B are the original image sizes
kptsA, kptsB = roma_model.to_pixel_coordinates(matches, H_A, W_A, H_B, W_B)
# Find a fundamental matrix (or anything else of interest)
F, mask = cv2.findFundamentalMat(
    kptsA.cpu().numpy(), kptsB.cpu().numpy(),
    ransacReprojThreshold=0.2, method=cv2.USAC_MAGSAC,
    confidence=0.999999, maxIters=10000,
)
```
New: You can also match arbitrary keypoints with RoMa. See `match_keypoints` in `RegressionMatcher`.
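The exact interface lives in `RegressionMatcher`; below is a hypothetical sketch, under the assumption that `match_keypoints` takes keypoints in normalized [-1, 1] coordinates together with the dense warp and certainty (check the source for the exact signature and return format):
```python
# Hypothetical usage sketch -- not verified against the codebase.
# kptsA_norm, kptsB_norm: (N, 2) keypoints, assumed to be in normalized [-1, 1] coordinates.
warp, certainty = roma_model.match(imA_path, imB_path, device=device)
matches = roma_model.match_keypoints(kptsA_norm, kptsB_norm, warp, certainty)
```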
By default, RoMa uses an initial resolution of (560, 560), which is then upsampled to (864, 864). You can change this at construction (see the `roma_outdoor` kwargs), or later by setting `roma_model.w_resized`, `roma_model.h_resized`, and `roma_model.upsample_res`.
`roma_model.sample_thresh` controls the threshold used when sampling matches for estimation. In certain cases a lower or higher threshold may improve results.
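A minimal sketch of adjusting these settings after construction (attribute names as above; the resolution values are the stated defaults, and the `sample_thresh` value is illustrative):
```python
roma_model = roma_outdoor(device=device)
# Coarse resolution used for the initial match (default 560x560)
roma_model.w_resized = 560
roma_model.h_resized = 560
# Resolution the warp is upsampled to (default 864x864)
roma_model.upsample_res = (864, 864)
# Certainty threshold used when sampling matches for estimation
# (illustrative value; tune for your use case)
roma_model.sample_thresh = 0.05
```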
The experiments in the paper are provided in the experiments folder.
- First follow the instructions provided at https://github.com/Parskatt/DKM for downloading and preprocessing datasets.
- Run the relevant experiment, e.g.,
  ```bash
  torchrun --nproc_per_node=4 --nnodes=1 --rdzv_backend=c10d experiments/roma_outdoor.py
  ```
- To run evaluation only, e.g.,
  ```bash
  python experiments/roma_outdoor.py --only_test --benchmark mega-1500
  ```
All our code except DINOv2 is MIT licensed. DINOv2 is licensed under Apache 2.0.
Our codebase builds on the code in DKM.
If you find that RoMa is too heavy, you might want to try Tiny RoMa, which is built on top of XFeat.
```python
import torch
from romatch import tiny_roma_v1_outdoor

device = "cuda" if torch.cuda.is_available() else "cpu"
tiny_roma_model = tiny_roma_v1_outdoor(device=device)
```
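Tiny RoMa is intended as a drop-in for the matching step; a sketch, under the assumption that it exposes the same `match` interface as the full model (see the demos folder for a verified example):
```python
# Assumption: Tiny RoMa's match mirrors the full model's interface.
warp, certainty = tiny_roma_model.match(imA_path, imB_path)
```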
Mega1500:

| Method | AUC@5 | AUC@10 | AUC@20 |
|---|---|---|---|
| XFeat | 46.4 | 58.9 | 69.2 |
| XFeat* | 51.9 | 67.2 | 78.9 |
| Tiny RoMa v1 | 56.4 | 69.5 | 79.5 |
| RoMa | - | - | - |
Mega-8-Scenes (see DKM):

| Method | AUC@5 | AUC@10 | AUC@20 |
|---|---|---|---|
| XFeat | - | - | - |
| XFeat* | 50.1 | 64.4 | 75.2 |
| Tiny RoMa v1 | 57.7 | 70.5 | 79.6 |
| RoMa | - | - | - |
IMC22 :'):

| Method | mAA@10 |
|---|---|
| XFeat | 42.1 |
| XFeat* | - |
| Tiny RoMa v1 | 42.2 |
| RoMa | - |
There are a few differences in the current codebase compared to the original repo used to run the experiments.
- The `scale_factor` used in the `match` method is now relative to the original training resolution of 560. Previously it was based on the set coarse resolution (which might or might not be 560).
- Newer PyTorch; the original code used something like 2.1.
- Stochastic evaluation: both RANSAC and the sampled correspondences can affect results on Mega1500.
- The matrix inverse in the GP has been replaced with a Cholesky decomposition (see the sketch below).
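As a rough illustration of the last point, here is a generic sketch of swapping an explicit matrix inverse for a Cholesky solve in PyTorch (not the actual GP code in this repo):
```python
import torch

# Stand-ins for the GP quantities: a small SPD kernel matrix K and a right-hand side y.
X = torch.randn(100, 64)
K = X @ X.T + torch.eye(100)
y = torch.randn(100, 1)

# Explicit inverse (what the original code effectively did; less numerically stable)
alpha_inv = torch.linalg.inv(K) @ y

# Cholesky factorization + triangular solve
L = torch.linalg.cholesky(K)
alpha_chol = torch.cholesky_solve(y, L)

print(torch.allclose(alpha_inv, alpha_chol, atol=1e-3))
```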
That being said, if the difference in results is significant, please open an issue.
If you find our models useful, please consider citing our paper!
```bibtex
@article{edstedt2024roma,
  title={{RoMa: Robust Dense Feature Matching}},
  author={Edstedt, Johan and Sun, Qiyu and Bökman, Georg and Wadenbäck, Mårten and Felsberg, Michael},
  journal={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```