Best Python code snippet using Airtest
keypointMatching.py
Source: keypointMatching.py
...
    # Check that the images are valid
    if not check_image_valid(img_source, img_search):
        raise Exception(img_source, img_search, "empty image")
    # Get the feature point sets and match the features
    kp1, kp2, matches = _get_key_points(img_source, img_search, ratio)
    print(len(kp1), len(kp2), len(matches))
    # Number of keypoint matches, and the match mask
    (matchNum, matchesMask) = getMatchNum(matches, ratio)
    print(matchNum, len(matchesMask))
    # Keypoint match confidence
    matcheRatio = matchNum / len(matchesMask)
    if 0 <= matcheRatio <= 1:
        return matcheRatio
    else:
        raise Exception("SIFT Score Error", matcheRatio)

# ---------------------------surf--------------------------#
def _init_surf():
    '''
    Make sure that there is a SURF module in OpenCV.
    '''
    if cv2.__version__.startswith("3."):
        # In OpenCV 3.x, SURF is in the contrib module;
        # you need to compile it separately.
        try:
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        except AttributeError:
            print("to use SURF, you should build contrib with opencv3.0")
            raise Exception(
                "There is no SURF module in your OpenCV environment !")
    else:
        # OpenCV 2.x, just use it.
        surf = cv2.SURF(hessianThreshold=100)
    return surf

def find_surf(img_source, img_search, ratio=FILTER_RATIO):
    '''
    Image recognition based on SURF.
    :param img_source: source image
    :param img_search: image to compare against
    :param ratio: filter ratio for good feature points
    :return: similarity score
    '''
    # Check that the images are valid
    if not check_image_valid(img_source, img_search):
        raise Exception(img_source, img_search, "empty image")
    # Get the feature point sets and match the features
    surf = _init_surf()
    kp1, des1 = surf.detectAndCompute(img_source, None)
    kp2, des2 = surf.detectAndCompute(img_search, None)
    # When applying knnMatch, make sure that the number of features in both
    # the test and query images is greater than or equal to the number of
    # nearest neighbors in the knn match.
    if len(kp1) < 2 or len(kp2) < 2:
        raise Exception("Not enough feature points in input images !")
    # Match the feature point sets of the two images; k=2 returns the
    # 2 best matches for each feature point:
    matches = FLANN.knnMatch(des1, des2, k=2)
    print(len(matches))
    # Number of keypoint matches, and the match mask
    (matchNum, matchesMask) = getMatchNum(matches, ratio)
    print(matchNum, len(matchesMask))
    # Keypoint match confidence
    matcheRatio = matchNum / len(matchesMask)
    if 0 <= matcheRatio <= 1:
        return matcheRatio
    else:
        raise Exception("SURF Score Error", matcheRatio)

def _get_key_points(im_source, im_search, ratio):
    ''' Compute all feature points of the input images and match them into point pairs. '''
    # Initialize the SIFT detector
    sift = _init_sift()
    # Get the feature point sets
    kp_sch, des_sch = sift.detectAndCompute(im_search, None)
    kp_src, des_src = sift.detectAndCompute(im_source, None)
    # When applying knnMatch, make sure that the number of features in both
    # the test and query images is greater than or equal to the number of
    # nearest neighbors in the knn match.
    if len(kp_sch) < 2 or len(kp_src) < 2:
        raise Exception("Not enough feature points in input images !")
    # Match the feature point sets of the two images; k=2 returns the
    # 2 best matches for each feature point:
    matches = FLANN.knnMatch(des_sch, des_src, k=2)
    return kp_sch, kp_src, matches
...
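The excerpt above relies on two helpers it does not show: a module-level FLANN matcher and getMatchNum. A minimal sketch of what they typically look like, assuming common OpenCV FLANN parameters and Lowe's ratio test; the constant values here are assumptions, not taken from the snippet:

import cv2

# Assumed module-level setup; the excerpt does not show these definitions.
FILTER_RATIO = 0.59     # hypothetical ratio-test threshold
FLANN_INDEX_KDTREE = 0  # KD-tree index, suitable for float SIFT/SURF descriptors
FLANN = cv2.FlannBasedMatcher(
    dict(algorithm=FLANN_INDEX_KDTREE, trees=5),  # index parameters
    dict(checks=50),                              # search parameters
)

def getMatchNum(matches, ratio):
    """Apply Lowe's ratio test and count the matches that survive it."""
    matchesMask = [[0, 0] for _ in range(len(matches))]
    matchNum = 0
    for i, pair in enumerate(matches):
        if len(pair) != 2:  # knnMatch can return fewer than k neighbors
            continue
        m, n = pair
        # Keep a match only if the best neighbor is clearly closer
        # than the second-best one.
        if m.distance < ratio * n.distance:
            matchesMask[i] = [1, 0]
            matchNum += 1
    return matchNum, matchesMask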
sift.py
Source: sift.py
...
    # Step 1: check whether the images are valid:
    if not check_image_valid(im_source, im_search):
        return None
    # Step 2: get the feature point sets and match them into point pairs; returns good, pypts, kp_sch, kp_src
    kp_sch, kp_src, good = _get_key_points(im_source, im_search, good_ratio)
    # Step 3: extract the recognized region from the matched point pairs (good):
    if len(good) == 0:
        # 0 matched pairs: the recognized region cannot be extracted:
        return None
    elif len(good) == 1:
        # 1 matched pair: assign the preset confidence and return directly:
        return _handle_one_good_points(kp_src, good, threshold) if ONE_POINT_CONFI >= threshold else None
    elif len(good) == 2:
        # 2 matched pairs: derive the target region from the pair, then compute the confidence:
        origin_result = _handle_two_good_points(im_source, im_search, kp_src, kp_sch, good)
        if isinstance(origin_result, dict):
            return origin_result if ONE_POINT_CONFI >= threshold else None
        else:
            middle_point, pypts, w_h_range = _handle_two_good_points(im_source, im_search, kp_src, kp_sch, good)
    elif len(good) == 3:
        # 3 matched pairs: pick point pairs, derive the target region, then compute the confidence:
        origin_result = _handle_three_good_points(im_source, im_search, kp_src, kp_sch, good)
        if isinstance(origin_result, dict):
            return origin_result if ONE_POINT_CONFI >= threshold else None
        else:
            middle_point, pypts, w_h_range = _handle_three_good_points(im_source, im_search, kp_src, kp_sch, good)
    else:
        # >= 4 matched pairs: derive the target region with a homography mapping, then compute the confidence:
        middle_point, pypts, w_h_range = _many_good_pts(im_source, im_search, kp_sch, kp_src, good)
    # Step 4: compute the final confidence for the recognized region and return the result:
    # Sanity-check the result: a region smaller than 5 pixels, or scaled by more
    # than a factor of 5, is treated as invalid and raises directly.
    _target_error_check(w_h_range)
    # Resize the cropped result to the template size before computing the confidence
    x_min, x_max, y_min, y_max, w, h = w_h_range
    target_img = im_source[y_min:y_max, x_min:x_max]
    resize_img = cv2.resize(target_img, (w, h))
    confidence = _cal_sift_confidence(im_search, resize_img, rgb=rgb)
    best_match = generate_result(middle_point, pypts, confidence)
    print("[aircv][sift] threshold=%s, result=%s" % (threshold, best_match))
    return best_match if confidence >= threshold else None

def _get_key_points(im_source, im_search, good_ratio):
    """Compute all feature points of the input images and obtain the matched point pairs."""
    # Preparation: initialize the SIFT detector
    sift = _init_sift()
    # Step 1: get the feature point sets and match them into point pairs; returns good, pypts, kp_sch, kp_src
    kp_sch, des_sch = sift.detectAndCompute(im_search, None)
    kp_src, des_src = sift.detectAndCompute(im_source, None)
    # When applying knnMatch, make sure that the number of features in both the test and
    # query images is greater than or equal to the number of nearest neighbors in the knn match.
    if len(kp_sch) < 2 or len(kp_src) < 2:
        raise NoSiftMatchPointError("Not enough feature points in input images !")
    # Match the feature point sets of the two images; k=2 returns the 2 best matches per feature point:
    matches = FLANN.knnMatch(des_sch, des_src, k=2)
    good = []
    # good is the preliminary selection of feature points: pairs whose two best
    # matches are too close together are discarded, keeping only distinctive
    # feature points (so this is not suitable for multi-target recognition)
...
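The sift.py excerpt is cut off just as the good list starts being filled. The filtering its final comment describes is the standard Lowe ratio test; a minimal sketch of how that loop plausibly continues, assuming good_ratio is the usual distance-ratio threshold (this is not the verbatim source):

for m, n in matches:
    # Keep only distinctive feature points: the best match must be
    # significantly closer than the second-best match.
    if m.distance < good_ratio * n.distance:
        good.append(m)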
DepthMapImage.py
Source: DepthMapImage.py
...
class DepthMapImage:
    def __init__(self, img1, img2):
        self.img1 = cv.cvtColor(img1, cv.COLOR_BGR2GRAY)
        self.img2 = cv.cvtColor(img2, cv.COLOR_BGR2GRAY)

    def _get_key_points(self):
        sift = cv.SIFT_create()
        points1 = sift.detectAndCompute(self.img1, None)
        points2 = sift.detectAndCompute(self.img2, None)
        return points1, points2

    def _get_good_matches(self, points):
        kp1, des1 = points[0]
        kp2, des2 = points[1]
        FLANN_INDEX_KDTREE = 1
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
        search_params = dict(checks=50)
        flann = cv.FlannBasedMatcher(index_params, search_params)
        matches = flann.knnMatch(des1, des2, k=2)
        matchesMask = [[0, 0] for _ in range(len(matches))]
        good = []
        pts1 = []
        pts2 = []
        for i, (m, n) in enumerate(matches):
            if m.distance < 0.7 * n.distance:
                matchesMask[i] = [1, 0]
                good.append(m)
                pts2.append(kp2[m.trainIdx].pt)
                pts1.append(kp1[m.queryIdx].pt)
        return pts1, pts2

    def _stereo_ratification(self, data):
        pts1, pts2 = data
        pts1 = np.int32(pts1)
        pts2 = np.int32(pts2)
        fundamental_matrix, inliers = cv.findFundamentalMat(pts1, pts2, cv.FM_RANSAC)
        pts1 = pts1[inliers.ravel() == 1]
        pts2 = pts2[inliers.ravel() == 1]
        h1, w1 = self.img1.shape
        h2, w2 = self.img2.shape
        _, H1, H2 = cv.stereoRectifyUncalibrated(
            np.float32(pts1), np.float32(pts2), fundamental_matrix, imgSize=(w1, h1)
        )
        img1_rectified = cv.warpPerspective(self.img1, H1, (w1, h1))
        img2_rectified = cv.warpPerspective(self.img2, H2, (w2, h2))
        block_size = 11
        min_disp = -128
        max_disp = 128
        num_disp = max_disp - min_disp
        uniquenessRatio = 5
        speckleWindowSize = 200
        speckleRange = 2
        disp12MaxDiff = 0
        stereo = cv.StereoSGBM_create(
            minDisparity=min_disp,
            numDisparities=num_disp,
            blockSize=block_size,
            uniquenessRatio=uniquenessRatio,
            speckleWindowSize=speckleWindowSize,
            speckleRange=speckleRange,
            disp12MaxDiff=disp12MaxDiff,
            P1=8 * 1 * block_size * block_size,
            P2=32 * 1 * block_size * block_size,
        )
        disparity_SGBM = stereo.compute(img1_rectified, img2_rectified)
        disparity_SGBM = cv.normalize(disparity_SGBM, disparity_SGBM, alpha=255,
                                      beta=0, norm_type=cv.NORM_MINMAX)
        disparity_SGBM = np.uint8(disparity_SGBM)
        return disparity_SGBM

    def get_ratification_disparity_map(self):
        points = self._get_key_points()
        matchs = self._get_good_matches(points)
...
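A hypothetical usage sketch for the class above; the file names are placeholders, and it assumes get_ratification_disparity_map goes on to feed the matches into _stereo_ratification and return the resulting disparity map (the excerpt is truncated before that point):

import cv2 as cv

# Hypothetical stereo pair on disk; substitute your own images.
left = cv.imread("left.png")
right = cv.imread("right.png")

dmi = DepthMapImage(left, right)
# Assumes the truncated method ultimately returns the SGBM disparity map.
disparity = dmi.get_ratification_disparity_map()
cv.imwrite("disparity.png", disparity)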