OpenCV Orb not finding matches once rotation/scale invariances are introduced
Problem description
I am working on a project using the Orb feature detector in OpenCV 2.3.1. I am finding matches between 8 different images, 6 of which are very similar (20 cm difference in camera position, along a linear slider, so there is no scale or rotational variance), plus 2 images taken from about a 45 degree angle on either side. My code finds plenty of accurate matches between the very similar images, but few to none for the images taken from a more different perspective. I've included what I think are the pertinent parts of my code; please let me know if you need more information.
Recommended answer
I ended up getting enough useful matches by changing my process for filtering matches. My previous method was discarding a lot of good matches based solely on their distance value. This RobustMatcher class, which I found in the OpenCV 2 Computer Vision Application Programming Cookbook, ended up working great. Now that all of my matches are accurate, I've been able to get good enough results by bumping up the number of keypoints the ORB detector is looking for. Using the RobustMatcher with SIFT or SURF still gives much better results, but I'm getting usable data with ORB now.
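The RobustMatcher chains three filters: a ratio test that discards matches whose best distance is not clearly better than the second-best (ambiguous matches), a symmetry test that keeps only matches found in both matching directions, and a RANSAC test that keeps only matches consistent with a fundamental matrix estimated by cv::findFundamentalMat.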
// Includes needed to compile this listing against OpenCV 2.3.1
// (highgui is for cv::imread in the driver code further down).
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/highgui/highgui.hpp>

class RobustMatcher {
private:
    cv::Ptr<cv::FeatureDetector> detector;       // keypoint detector
    cv::Ptr<cv::DescriptorExtractor> extractor;  // descriptor extractor
    cv::Ptr<cv::DescriptorMatcher> matcher;      // descriptor matcher
    float ratio;       // max ratio of best to second-best match distance
    bool refineF;      // if true, re-estimate the fundamental matrix from inliers
    double distance;   // max distance to epipolar line for RANSAC inliers
    double confidence; // RANSAC confidence level (probability)

public:
    RobustMatcher() : ratio(0.65f), refineF(true),
                      confidence(0.99), distance(3.0) {
        // ORB by default, matched by brute force on Hamming distance.
        detector = new cv::OrbFeatureDetector();
        extractor = new cv::OrbDescriptorExtractor();
        matcher = new cv::BruteForceMatcher<cv::HammingLUT>;
    }

    void setFeatureDetector(cv::Ptr<cv::FeatureDetector>& detect) {
        detector = detect;
    }
    void setDescriptorExtractor(cv::Ptr<cv::DescriptorExtractor>& desc) {
        extractor = desc;
    }
    void setDescriptorMatcher(cv::Ptr<cv::DescriptorMatcher>& match) {
        matcher = match;
    }
    void setConfidenceLevel(double conf) { confidence = conf; }
    void setMinDistanceToEpipolar(double dist) { distance = dist; }
    void setRatio(float rat) { ratio = rat; }

    // Reject matches whose best distance is not clearly better than the
    // second-best distance; returns the number of matches removed.
    int ratioTest(std::vector<std::vector<cv::DMatch> >& matches) {
        int removed = 0;
        for (std::vector<std::vector<cv::DMatch> >::iterator
                 matchIterator = matches.begin();
             matchIterator != matches.end(); ++matchIterator) {
            if (matchIterator->size() > 1) {
                // Best match too close to second best: ambiguous, drop it.
                if ((*matchIterator)[0].distance /
                        (*matchIterator)[1].distance > ratio) {
                    matchIterator->clear();
                    removed++;
                }
            } else {
                // Fewer than two neighbours: the test cannot be applied.
                matchIterator->clear();
                removed++;
            }
        }
        return removed;
    }

    // Keep only matches found in both directions, i.e. image1->image2 and
    // image2->image1 agree on the same keypoint pair.
    void symmetryTest(
            const std::vector<std::vector<cv::DMatch> >& matches1,
            const std::vector<std::vector<cv::DMatch> >& matches2,
            std::vector<cv::DMatch>& symMatches) {
        for (std::vector<std::vector<cv::DMatch> >::const_iterator
                 matchIterator1 = matches1.begin();
             matchIterator1 != matches1.end(); ++matchIterator1) {
            if (matchIterator1->size() < 2)
                continue; // rejected by the ratio test
            for (std::vector<std::vector<cv::DMatch> >::const_iterator
                     matchIterator2 = matches2.begin();
                 matchIterator2 != matches2.end(); ++matchIterator2) {
                if (matchIterator2->size() < 2)
                    continue;
                // Symmetrical pair: each match is the other's reverse.
                if ((*matchIterator1)[0].queryIdx ==
                        (*matchIterator2)[0].trainIdx &&
                    (*matchIterator2)[0].queryIdx ==
                        (*matchIterator1)[0].trainIdx) {
                    symMatches.push_back(
                        cv::DMatch((*matchIterator1)[0].queryIdx,
                                   (*matchIterator1)[0].trainIdx,
                                   (*matchIterator1)[0].distance));
                    break;
                }
            }
        }
    }

    // Keep only matches consistent with a fundamental matrix estimated by
    // RANSAC; returns that fundamental matrix.
    cv::Mat ransacTest(
            const std::vector<cv::DMatch>& matches,
            const std::vector<cv::KeyPoint>& keypoints1,
            const std::vector<cv::KeyPoint>& keypoints2,
            std::vector<cv::DMatch>& outMatches) {
        // Convert the matched keypoints to Point2f for findFundamentalMat.
        std::vector<cv::Point2f> points1, points2;
        cv::Mat fundamental;
        for (std::vector<cv::DMatch>::const_iterator it = matches.begin();
             it != matches.end(); ++it) {
            points1.push_back(keypoints1[it->queryIdx].pt);
            points2.push_back(keypoints2[it->trainIdx].pt);
        }
        std::vector<uchar> inliers(points1.size(), 0);
        if (points1.size() > 0 && points2.size() > 0) {
            // Assign to the outer matrix (not a fresh local) so the RANSAC
            // estimate is returned even when refineF is false.
            fundamental = cv::findFundamentalMat(
                cv::Mat(points1), cv::Mat(points2),
                inliers,       // output inlier mask
                CV_FM_RANSAC,  // RANSAC estimation
                distance,      // max distance to epipolar line
                confidence);   // confidence probability
            // Copy the surviving (inlier) matches.
            std::vector<uchar>::const_iterator itIn = inliers.begin();
            std::vector<cv::DMatch>::const_iterator itM = matches.begin();
            for (; itIn != inliers.end(); ++itIn, ++itM) {
                if (*itIn)
                    outMatches.push_back(*itM);
            }
            if (refineF) {
                // Re-estimate the fundamental matrix from all inliers
                // using the 8-point method.
                points1.clear();
                points2.clear();
                for (std::vector<cv::DMatch>::const_iterator
                         it = outMatches.begin();
                     it != outMatches.end(); ++it) {
                    points1.push_back(keypoints1[it->queryIdx].pt);
                    points2.push_back(keypoints2[it->trainIdx].pt);
                }
                if (points1.size() > 0 && points2.size() > 0) {
                    fundamental = cv::findFundamentalMat(
                        cv::Mat(points1), cv::Mat(points2),
                        CV_FM_8POINT);
                }
            }
        }
        return fundamental;
    }

    // Full pipeline: detect, describe, match both ways, then filter with the
    // ratio test, the symmetry test, and the RANSAC test.
    cv::Mat match(cv::Mat& image1, cv::Mat& image2,
                  std::vector<cv::DMatch>& matches,
                  std::vector<cv::KeyPoint>& keypoints1,
                  std::vector<cv::KeyPoint>& keypoints2) {
        detector->detect(image1, keypoints1);
        detector->detect(image2, keypoints2);
        cv::Mat descriptors1, descriptors2;
        extractor->compute(image1, keypoints1, descriptors1);
        extractor->compute(image2, keypoints2, descriptors2);
        // k-nearest-neighbour matching (k = 2) in both directions.
        std::vector<std::vector<cv::DMatch> > matches1;
        matcher->knnMatch(descriptors1, descriptors2, matches1, 2);
        std::vector<std::vector<cv::DMatch> > matches2;
        matcher->knnMatch(descriptors2, descriptors1, matches2, 2);
        ratioTest(matches1);
        ratioTest(matches2);
        std::vector<cv::DMatch> symMatches;
        symmetryTest(matches1, matches2, symMatches);
        return ransacTest(symMatches, keypoints1, keypoints2, matches);
    }
};
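A porting note: cv::BruteForceMatcher was dropped from the main API in the OpenCV 2.4 series, so if you build against 2.4 or later, the equivalent matcher would be cv::BFMatcher with the Hamming norm. A minimal sketch, assuming OpenCV 2.4+ (not needed for the 2.3.1 setup used in this answer):

// OpenCV 2.4+ replacement for BruteForceMatcher<cv::HammingLUT>.
cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BFMatcher(cv::NORM_HAMMING);

Here is how the class is wired up for ORB, with the keypoint count bumped up: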
int numKeyPoints = 1500;

RobustMatcher rmatcher;
cv::Ptr<cv::FeatureDetector> detector = new cv::OrbFeatureDetector(numKeyPoints);
cv::Ptr<cv::DescriptorExtractor> extractor = new cv::OrbDescriptorExtractor;
cv::Ptr<cv::DescriptorMatcher> matcher = new cv::BruteForceMatcher<cv::HammingLUT>;
rmatcher.setFeatureDetector(detector);
rmatcher.setDescriptorExtractor(extractor);
rmatcher.setDescriptorMatcher(matcher);

cv::Mat img1;
std::vector<cv::KeyPoint> img1_keypoints;
cv::Mat img2;
std::vector<cv::KeyPoint> img2_keypoints;
// match() takes a flat std::vector<cv::DMatch> and computes the descriptors
// internally, so no descriptor Mats are needed here.
std::vector<cv::DMatch> matches;

// fList is the caller's list of image paths, as in the original post.
img1 = cv::imread(fList[0].string(), CV_LOAD_IMAGE_GRAYSCALE);
img2 = cv::imread(fList[1].string(), CV_LOAD_IMAGE_GRAYSCALE);
rmatcher.match(img1, img2, matches, img1_keypoints, img2_keypoints);
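Two follow-ups that may help when judging the output. First, cv::drawMatches gives a quick visual check of the surviving matches. Second, since SIFT produces float descriptors, swapping it into the RobustMatcher also means swapping the Hamming matcher for an L2 one. A minimal sketch against OpenCV 2.3.1 (the output filename is a placeholder, not from the original post):

// Visualize the filtered matches side by side.
cv::Mat matchImg;
cv::drawMatches(img1, img1_keypoints, img2, img2_keypoints, matches, matchImg);
cv::imwrite("matches.png", matchImg);  // hypothetical output path

// Swapping SIFT into the RobustMatcher: float descriptors need an
// L2 matcher rather than a Hamming one.
cv::Ptr<cv::FeatureDetector> siftDetector = new cv::SiftFeatureDetector();
cv::Ptr<cv::DescriptorExtractor> siftExtractor = new cv::SiftDescriptorExtractor();
cv::Ptr<cv::DescriptorMatcher> siftMatcher =
    new cv::BruteForceMatcher<cv::L2<float> >();
rmatcher.setFeatureDetector(siftDetector);
rmatcher.setDescriptorExtractor(siftExtractor);
rmatcher.setDescriptorMatcher(siftMatcher);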