After doing some more research, I am fairly sure that this is currently not possible. That is why I implemented the first proposition I gave in my question - use the match-vector that contains the most matches to determine the trackbar's maximum, and then use some checks to avoid out-of-range exceptions. Below is a more or less detailed description of how it all works. Since the matching process in my code involves some additional checks that have nothing to do with the problem at hand, I will skip it here. Note that for a given set of images we want to match, I call an image an object-image when that image (example: a card) is currently matched against a scene-image (example: a set of cards). The top-level index of the match-vector (see below) is equal to the index of the scene-image in processedImages (see below). I find the train/query notation in OpenCV somewhat confusing; the scene/object notation here is taken from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. You can change or swap the notation to your liking, but make sure you change it accordingly everywhere, otherwise you may get some strange results.
// stores all the images that we want to cross-match
std::vector<cv::Mat> processedImages;
// stores keypoints for each image in processedImages
std::vector<std::vector<cv::KeyPoint> > keypoints;
// stores descriptors for each image in processedImages
std::vector<cv::Mat> descriptors;
// fill processedImages here (read images from files, convert to grayscale, undistort, resize etc.), extract keypoints, compute descriptors
// ...
// I use brute-force matching since I also used ORB, which has binary descriptors, so NORM_HAMMING is the way to go
cv::BFMatcher matcher(cv::NORM_HAMMING);
// matches contains the match-vectors for each image matched to all other images in our set
// top level index matches.at(X) is equal to the image index in processedImages
// middle level index matches.at(X).at(Y) gives the match-vector for the Xth image and some other Yth from the set that is successfully matched to X
std::vector<std::vector<std::vector<cv::DMatch> > > matches;
// contains images that store visually all matched pairs
std::vector<std::vector<cv::Mat> > matchesDraw;
// fill all the vectors above with data here, don't forget about matchesDraw
// stores the highest number of matched object-images across all scene-images - determined by simply comparing the size() of each matches.at(X) (the vector of match-vectors for scene-image X) with the previous value of this variable
long int sceneWithMaxMatches = 0;
// ...
// after all is ready do some additional checking here in order to make sure the data is usable in our GUI. A trackbar for example requires AT LEAST 2 for its range since a range (0;0) doesn't make any sense
if(sceneWithMaxMatches < 2)
return -1;
// in this window show the image gallery (scene-images); the user can scroll through all image using a trackbar
cv::namedWindow("Images", CV_GUI_EXPANDED | CV_WINDOW_AUTOSIZE);
// just a dummy to store the state of the trackbar
int imagesTrackbarState = 0;
// create the first trackbar that the user uses to scroll through the scene-images
// IMPORTANT: use processedImages.size() - 1 since indexing in vectors is the same as in arrays - it starts from 0 and not reducing it by 1 will throw an out-of-range exception
cv::createTrackbar("Images:", "Images", &imagesTrackbarState, static_cast<int>(processedImages.size()) - 1, on_imagesTrackbarCallback, NULL);
// in this window we show the matched object-images relative to the selected image in the "Images" window
cv::namedWindow("Matches for current image", CV_WINDOW_AUTOSIZE);
// yet another dummy to store the state of the trackbar in this new window
int imageMatchesTrackbarState = 0;
// IMPORTANT: again since sceneWithMaxMatches stores the SIZE of a vector we need to reduce it by 1 in order to be able to use it for the indexing later on
cv::createTrackbar("Matches:", "Matches for current image", &imageMatchesTrackbarState, sceneWithMaxMatches - 1, on_imageMatchesTrackbarCallback, NULL);
while(true)
{
char key = cv::waitKey(20);
if(key == 27)
break;
// from here on the magic begins
// show the image gallery; use the position of the "Images:" trackbar to call the image at that position
cv::imshow("Images", processedImages.at(cv::getTrackbarPos("Images:", "Images")));
// store the index of the current scene-image by calling the position of the trackbar in the "Images:" window
int currentSceneIndex = cv::getTrackbarPos("Images:", "Images");
// we have to make sure that the match of the currently selected scene-image actually has something in it
if(matches.at(currentSceneIndex).size())
{
// store the index of the current object-image that we have matched to the current scene-image in the "Images:" window
int currentObjectIndex = cv::getTrackbarPos("Matches:", "Matches for current image");
cv::imshow(
"Matches for current image",
matchesDraw.at(currentSceneIndex).at(currentObjectIndex < (int)matchesDraw.at(currentSceneIndex).size() ? // is the current object index within the range of the matches for the current object and current scene
currentObjectIndex : // yes, return the correct index
matchesDraw.at(currentSceneIndex).size() - 1)); // if outside the range show the last matched pair!
}
}
// do something else
// ...
The tricky part is the trackbar in the second window, which is responsible for accessing the images matched to the image currently selected in the "Images" window. As I explained above, I set the trackbar "Matches:" in the "Matches for current image" window to go from 0 to (sceneWithMaxMatches - 1). However, not every image has the same number of matches with the other images in the set (a lot fewer if you apply some additional filtering to ensure reliable matches, for example by exploiting the properties of the homography, a ratio test, min/max distance checks etc.). Since I could not find a way to adjust the trackbar's range dynamically, I needed to validate the index; otherwise, for some images and their matches, the application would throw an out-of-range exception. This is due to the simple fact that for some matches we try to access a match-vector at an index greater than its size minus 1, because cv::getTrackbarPos() goes all the way up to (sceneWithMaxMatches - 1). If the trackbar's position goes outside the range of the currently selected match-vector, I simply set the matchesDraw-image in "Matches for current image" to the last one in the vector. Here I also exploit the fact that neither the indexing nor the trackbar position can go below zero, so there is no need to check for that; only what comes after the initial position 0 needs checking. If that is not the case in your scenario, make sure you check the lower bound as well, and not only the upper one.
Hope this helps!