I am trying to match Clash of Clans "buildings" in screenshots.
Considering that some buildings move around, I don't think a straight matchTemplate will work (correct me if I'm wrong).
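For reference, the kind of direct matchTemplate call I am ruling out would look roughly like this (a minimal sketch; the file names and the 0.8 threshold are placeholders, not values from my project):

import cv2
import numpy as np

# Hypothetical file names, for illustration only.
screenshot = cv2.imread('layout.png')
template = cv2.imread('cannon.png')
h, w = template.shape[:2]

# Slide the template over the screenshot and score every position.
result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)

# Keep every location whose score clears an arbitrary threshold; at best
# this finds identical, unrotated copies of the template.
threshold = 0.8
for y, x in zip(*np.where(result >= threshold)):
    cv2.rectangle(screenshot, (x, y), (x + w, y + h), (0, 255, 0), 2)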
Consider the image below:
It will not work in a case like the following:
As you can see, the image is indeed there, but the cannon is facing a different direction.
In another example, I only get a single match for an air defense, even though identical ones sit in several different locations:
Here is the code I am using:
import numpy as np
import cv2
from matplotlib import pyplot as plt

imageToMatch = ''  # path to the building image to look for (queryImage)
trainImage = ''    # path to the screenshot to search in
MIN_MATCH_COUNT = 4

img1 = cv2.imread(imageToMatch)  # queryImage
img2 = cv2.imread(trainImage)    # trainImage

# Initiate SIFT detector
sift = cv2.xfeatures2d.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN-based matcher with a KD-tree index
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# store all the good matches as per Lowe's ratio test.
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()
    h, w = img1.shape[:2]  # images are loaded in colour, so drop the channel dimension
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
    img2 = cv2.polylines(img2, [np.int32(dst)], True, 255, 3, cv2.LINE_AA)
else:
    print("Not enough matches are found - %d/%d" % (len(good), MIN_MATCH_COUNT))
    matchesMask = None

draw_params = dict(matchColor=(0, 255, 0),  # draw matches in green color
                   singlePointColor=None,
                   matchesMask=matchesMask,  # draw only inliers
                   flags=2)
img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
plt.imshow(img3, 'gray'), plt.show()
My questions are:
Is there a way to match the cannon image against all of the cannons shown in the example layout?
Is there a way to make sure the object is matched even without rotating it?
Any help or pointers would be greatly appreciated!
My environment: Python 3.6, OpenCV 3.3.0, OSX.