How to use mask_kaze method in Airtest

Best Python code snippet using Airtest

keypoint_base.py

Source: keypoint_base.py Github


...
        self.im_source = im_source
        self.im_search = im_search
        self.threshold = threshold
        self.rgb = rgb

    def mask_kaze(self):
        """Find multiple target regions based on KAZE."""
        # After extracting feature points, cluster the matched points in self.im_source
        raise NotImplementedError

    def find_all_results(self):
        """Find multiple target regions based on KAZE."""
        # After extracting feature points, cluster the matched points in self.im_source
        raise NotImplementedError

    @print_run_time
    def find_best_result(self):
        """KAZE-based image recognition that keeps only the best region."""
        # Step 1: check that the images are valid:
        if not check_image_valid(self.im_source, self.im_search):
            return None
        # Step 2: get the keypoint sets and match keypoint pairs; returns good, pypts, kp_sch, kp_src
...
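Note that `mask_kaze` is only declared here: it raises `NotImplementedError`, and its docstring says the matched keypoints in `self.im_source` should be clustered into multiple target regions. As a rough illustration of that clustering step (this is a hypothetical sketch, not Airtest's actual implementation; the function names `cluster_match_points` and `cluster_bbox` are invented for this example), matched keypoint coordinates can be grouped by single-linkage distance and each group reduced to a bounding box:

```python
def cluster_match_points(points, max_dist=50.0):
    """Group 2D points; two points share a cluster when a chain of
    points connects them with steps no longer than max_dist."""
    clusters = []
    for x, y in points:
        # find every existing cluster within max_dist of this point
        near = [c for c in clusters
                if any((x - cx) ** 2 + (y - cy) ** 2 <= max_dist ** 2
                       for cx, cy in c)]
        # merge the point and all nearby clusters into one cluster
        merged = [(x, y)]
        for c in near:
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters

def cluster_bbox(cluster):
    """Reduce one cluster to a candidate region (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    return min(xs), min(ys), max(xs), max(ys)
```

With `max_dist=50`, the points `(0, 0)` and `(10, 0)` form one region while `(200, 200)` and `(205, 210)` form another, mirroring how one template matched in several places would yield several keypoint clusters.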


VisualOdometry.py

Source: VisualOdometry.py Github


import os
import glob
import numpy as np
import math
import cv2 as cv
from matplotlib import pyplot as plt

# ################# plot graph #######################
position_figure = plt.figure()
position_axes = position_figure.add_subplot(1, 1, 1)
# error_figure = plt.figure()
# rotation_error_axes = error_figure.add_subplot(1, 1, 1)
# rotation_error_list = []
# frame_index_list = []

position_axes.set_aspect('equal', adjustable='box')

# ################ ground truth ######################
gt_file = glob.glob('./poses/01.txt')
ground_truth_exist = True
ground_truth = []
with open(*gt_file) as f:
    gt_lines = f.readlines()

    for gt_line in gt_lines:
        pose = np.array(gt_line.split()).reshape((3, 4)).astype(np.float32)
        ground_truth.append(pose)

# parameters for Lucas-Kanade optical flow
lk_params = dict(winSize=(21, 21),
                 criteria=(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 30, 0.03))

# ############ camera setting information ############
camera_matrix = np.array([[718.8560, 0.0, 607.1928],
                          [0.0, 718.8560, 185.2157],
                          [0.0, 0.0, 1.0]])
f_l = camera_matrix[0][0]
f_r = camera_matrix[0][0]
c_x_l = camera_matrix[0][2]
c_y_l = camera_matrix[1][2]
c_x_r = camera_matrix[0][2]
c_y_r = camera_matrix[1][2]
baseline = -3.861448000000e+02

prev_img = None

current_pos_akaze = np.zeros((3, 1))
current_rot_akaze = np.eye(3)
current_pos_orb = np.zeros((3, 1))
current_rot_orb = np.eye(3)
current_pos_kaze = np.zeros((3, 1))
current_rot_kaze = np.eye(3)
current_pos_sift = np.zeros((3, 1))
current_rot_sift = np.eye(3)

output = len(glob.glob('./image_0/*.png'))  # number of frames

for index in range(output):
    img_file = os.path.join('./image_0/', '{:06d}.png'.format(index))
    img = cv.imread(img_file)
    # no cvtColor call: the images are already grayscale

    # keypoint detection and feature description
    # 1. AKAZE
    akaze = cv.AKAZE_create()
    kp_akaze = akaze.detect(img, None)                  # find keypoints
    kp_akaze, des_akaze = akaze.compute(img, kp_akaze)  # compute descriptors with AKAZE
    # kp_akaze, des_akaze = akaze.detectAndCompute(img, None)

    # 2. ORB
    orb = cv.ORB_create()
    kp_orb = orb.detect(img, None)
    kp_orb, des_orb = orb.compute(img, kp_orb)

    # 3. KAZE
    kaze = cv.KAZE_create()
    kp_kaze = kaze.detect(img, None)
    kp_kaze, des_kaze = kaze.compute(img, kp_kaze)

    # 4. SIFT
    sift = cv.SIFT_create()
    kp_sift = sift.detect(img, None)
    kp_sift, des_sift = sift.compute(img, kp_sift)

    if prev_img is None:
        prev_img = img
        prev_keypoint_akaze = kp_akaze
        prev_keypoint_orb = kp_orb
        prev_keypoint_kaze = kp_kaze
        prev_keypoint_sift = kp_sift

        prev_des_akaze = des_akaze
        prev_des_orb = des_orb
        prev_des_kaze = des_kaze
        prev_des_sift = des_sift
        continue

    points_akaze = np.array(list(map(lambda x: [x.pt], prev_keypoint_akaze)), dtype=np.float32).squeeze()
    points_orb = np.array(list(map(lambda x: [x.pt], prev_keypoint_orb)), dtype=np.float32).squeeze()
    points_kaze = np.array(list(map(lambda x: [x.pt], prev_keypoint_kaze)), dtype=np.float32).squeeze()
    points_sift = np.array(list(map(lambda x: [x.pt], prev_keypoint_sift)), dtype=np.float32).squeeze()

    # ################### points matching ####################
    # Alternative (commented out): brute-force matching with Lowe's
    # ratio test; it slows down as the descriptor count grows.
    # bf = cv.BFMatcher()
    # matches_akaze = bf.knnMatch(prev_des_akaze, des_akaze, k=2)
    # good_matches_akaze, pt1_akaze, pt2_akaze = [], [], []
    # for m, n in matches_akaze:
    #     if m.distance < 0.75 * n.distance:
    #         good_matches_akaze.append([m])
    #         pt1_akaze.append(prev_keypoint_akaze[m.queryIdx].pt)
    #         pt2_akaze.append(kp_akaze[m.trainIdx].pt)
    # pt1_akaze, pt2_akaze = np.float32(pt1_akaze), np.float32(pt2_akaze)
    # (the same ratio-test loop is repeated for ORB, KAZE, and SIFT)
    # ########################################################

    pt1_akaze, st, err = cv.calcOpticalFlowPyrLK(prev_img, img, points_akaze, None, **lk_params)
    pt2_akaze = points_akaze
    pt1_orb, st, err = cv.calcOpticalFlowPyrLK(prev_img, img, points_orb, None, **lk_params)
    pt2_orb = points_orb
    pt1_kaze, st, err = cv.calcOpticalFlowPyrLK(prev_img, img, points_kaze, None, **lk_params)
    pt2_kaze = points_kaze
    pt1_sift, st, err = cv.calcOpticalFlowPyrLK(prev_img, img, points_sift, None, **lk_params)
    pt2_sift = points_sift

    # essential matrix
    E_akaze, mask_akaze = cv.findEssentialMat(pt1_akaze, pt2_akaze, camera_matrix, cv.RANSAC, 0.999, 1.0, None)
    E_orb, mask_orb = cv.findEssentialMat(pt1_orb, pt2_orb, camera_matrix, cv.RANSAC, 0.999, 1.0, None)
    E_kaze, mask_kaze = cv.findEssentialMat(pt1_kaze, pt2_kaze, camera_matrix, cv.RANSAC, 0.999, 1.0, None)
    E_sift, mask_sift = cv.findEssentialMat(pt1_sift, pt2_sift, camera_matrix, cv.RANSAC, 0.999, 1.0, None)

    # recoverPose returns the inlier count first, not points
    retval, R_akaze, t_akaze, mask_akaze = cv.recoverPose(E_akaze, pt2_akaze, pt1_akaze, camera_matrix)
    retval, R_orb, t_orb, mask_orb = cv.recoverPose(E_orb, pt2_orb, pt1_orb, camera_matrix)
    retval, R_kaze, t_kaze, mask_kaze = cv.recoverPose(E_kaze, pt2_kaze, pt1_kaze, camera_matrix)
    retval, R_sift, t_sift, mask_sift = cv.recoverPose(E_sift, pt2_sift, pt1_sift, camera_matrix)

    scale = 1.0

    # compute the scale from the ground truth when it exists;
    # without ground truth, no scale is computed (it stays 1.0)
    if ground_truth_exist:
        gt_pose = [ground_truth[index][0, 3], ground_truth[index][2, 3]]
        pre_gt_pose = [ground_truth[index - 1][0, 3], ground_truth[index - 1][2, 3]]
        scale = math.sqrt(math.pow(gt_pose[0] - pre_gt_pose[0], 2.0) + math.pow(gt_pose[1] - pre_gt_pose[1], 2.0))

    current_pos_akaze += current_rot_akaze.dot(t_akaze) * scale
    current_rot_akaze = R_akaze.dot(current_rot_akaze)

    current_pos_orb += current_rot_orb.dot(t_orb) * scale
    current_rot_orb = R_orb.dot(current_rot_orb)

    current_pos_kaze += current_rot_kaze.dot(t_kaze) * scale
    current_rot_kaze = R_kaze.dot(current_rot_kaze)

    current_pos_sift += current_rot_sift.dot(t_sift) * scale
    current_rot_sift = R_sift.dot(current_rot_sift)

    # ground truth plot
    position_axes.scatter(ground_truth[index][0, 3], ground_truth[index][2, 3], s=2, c='red')

    # odometry plot
    position_axes.scatter(-current_pos_akaze[0][0], -current_pos_akaze[2][0], s=2, c='gray')
    position_axes.scatter(-current_pos_orb[0][0], -current_pos_orb[2][0], s=2, c='purple')
    position_axes.scatter(-current_pos_kaze[0][0], -current_pos_kaze[2][0], s=2, c='blue')
    position_axes.scatter(-current_pos_sift[0][0], -current_pos_sift[2][0], s=2, c='black')
    plt.pause(.01)

    plot_img = cv.drawKeypoints(img, kp_akaze, None)

    # cv.imshow('feature', plot_img)
    # cv.waitKey(1)

    prev_img = img
    prev_keypoint_akaze = kp_akaze
    prev_keypoint_orb = kp_orb
    prev_keypoint_kaze = kp_kaze
    prev_keypoint_sift = kp_sift

    prev_des_akaze = des_akaze
    prev_des_orb = des_orb
    prev_des_kaze = des_kaze
    prev_des_sift = des_sift
...
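The heart of this visual-odometry loop is the per-frame pose update: the relative translation `t` from `cv.recoverPose` is rotated into the world frame, scaled, and accumulated, and the relative rotation `R` is chained onto the accumulated rotation. That step can be isolated and checked on its own with plain NumPy (the helper name `update_pose` is invented for this sketch; it is not part of the snippet above or of OpenCV):

```python
import numpy as np

def update_pose(pos, rot, R, t, scale=1.0):
    """Chain a relative camera motion (R, t) onto an accumulated pose.

    pos: (3, 1) accumulated position; rot: (3, 3) accumulated rotation.
    R and t play the role of the cv.recoverPose outputs for one frame
    pair: t is rotated into the world frame, scaled, and accumulated,
    then the rotation is composed.
    """
    pos = pos + rot.dot(t) * scale
    rot = R.dot(rot)
    return pos, rot

# With an identity relative rotation and a unit forward step, the
# position advances by `scale` along the translation direction:
pos, rot = np.zeros((3, 1)), np.eye(3)
t = np.array([[0.0], [0.0], [1.0]])
pos, rot = update_pose(pos, rot, np.eye(3), t, scale=2.0)
```

This also makes the role of `scale` explicit: monocular `recoverPose` returns a unit-length translation, so without the ground-truth distance between consecutive poses the trajectory is only recovered up to scale.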


Blogs

Check out the latest blogs from LambdaTest on this topic:

QA's and Unit Testing - Can QA Create Effective Unit Tests

Unit testing is typically software testing within the developer domain. As the QA role expands in DevOps, QAOps, DesignOps, or within an Agile team, QA testers often find themselves creating unit tests. QA testers may create unit tests within the code using a specified unit testing tool, or independently using a variety of methods.

LIVE With Automation Testing For OTT Streaming Devices

People love to watch, read and interact with quality content — especially video content. Whether it is sports, news, TV shows, or videos captured on smartphones, people crave digital content. The emergence of OTT platforms has already shaped the way people consume content. Viewers can now enjoy their favorite shows whenever they want rather than at pre-set times. Thus, the OTT platform’s concept of viewing anything, anytime, anywhere has hit the right chord.

Why Agile Is Great for Your Business

Agile project management is a great alternative to traditional methods, to address the customer’s needs and the delivery of business value from the beginning of the project. This blog describes the main benefits of Agile for both the customer and the business.

Options for Manual Test Case Development & Management

The purpose of developing test cases is to ensure the application functions as expected for the customer. Test cases provide basic application documentation for every function, feature, and integrated connection. Test case development often detects defects in the design or missing requirements early in the development process. Additionally, well-written test cases provide internal documentation for all application processing. Test case development is an important part of determining software quality and keeping defects away from customers.

Test strategy and how to communicate it

I routinely come across test strategy documents when working with customers. They are lengthy (100 pages or more) and packed with monotonous text that is reused from one project to the next: the test halt and resume conditions, the defect management procedure, entry and exit criteria, unnecessary generic risks, and, all too often, a template that simply replicates textbook testing requirements, from stress testing to systems integration.

Automation Testing Tutorials

Learn to execute automation testing from scratch with the LambdaTest Learning Hub, from setting up the prerequisites and running your first automation test to following best practices and diving deeper into advanced test scenarios. The LambdaTest Learning Hubs compile step-by-step guides to help you become proficient with different test automation frameworks such as Selenium, Cypress, and TestNG.

LambdaTest Learning Hubs:

YouTube

You can also refer to the video tutorials on the LambdaTest YouTube channel for step-by-step demonstrations from industry experts.

Run Airtest automation tests on LambdaTest cloud grid

Perform automation testing on 3000+ real desktop and mobile devices online.

Try LambdaTest Now!

Get 100 minutes of automation testing FREE!

Next-Gen App & Browser Testing Cloud
