XP Series Webinar

Transforming Test Automation: Power of GenAI in Reducing Maintenance & Enhancing Speed

In this XP Webinar, you'll explore how Gen AI revolutionizes test automation by reducing maintenance efforts and boosting speed. Uncover innovative strategies to enhance software quality and accelerate development cycles.

Watch Now


Artem Golubev

Co-Founder & CEO, testRigor

Artem Golubev

Co-Founder & CEO, testRigor

Artem Golubev, one of the brains behind testRigor, is passionate about enhancing companies’ QA efficiency and speeding up their software delivery. His journey in the tech field began 25 years ago with software development for logistics firms. Artem’s career path led him through tech behemoths like Microsoft and Salesforce, where he gained invaluable insights into QA best practices and cutting-edge technologies. Leveraging this knowledge, he aims to level the playing field for other companies, enabling them to stand toe-to-toe with software giants by boosting their testing efficiency. With the help of testRigor, names such as Netflix, Cisco, and Burger King, among others, have revolutionized their approach to test automation, achieving faster development and reducing the time invested in test upkeep.

Mohit Juneja

VP of Strategic Sales and Partnerships, LambdaTest

Mohit Juneja, VP of Strategic Sales and Partnerships, is driven by the ambition to eliminate interruptions in the test automation workflow. Working with technology partners such as cloud OEMs and hyperscalers, test automation SaaS platforms, and CI/CD platforms, his team delivers seamless test automation experiences, eliminating the need for context switching. With over a decade of experience in go-to-market and sales for developer tools and cloud AppDev services, Mohit played a pivotal role in GitHub's growth across Asia. At Microsoft, he worked across various functions, including Microsoft Consulting Services and Corporate Strategy, and later served as the Business Lead for the Azure Application Development portfolio.

The full transcript

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Hello, everyone, and welcome to another exciting session of the LambdaTest Experience Series, which we also call the XP Series. You already know that on this forum, we dive into the insightful world of software test automation, but we specifically look at it through the lens of change-makers and business leaders who are really revolutionizing software test automation.

So in today's webinar, we will shed light on the transformative power of generative AI and the promises it holds to revolutionize the software testing world. I have one of the best guests to speak on this topic, my dear friend Artem Golubev, the Co-founder and CEO of testRigor.

Artem has been the driving force behind testRigor, which is dedicated to elevating companies' QA efficiency and streamlining their software delivery processes. With over 25 years of experience, Artem began his journey developing software for logistics firms. Having navigated industry giants like Microsoft, Salesforce, and many other enterprises, he brings a wealth of knowledge in QA best practices and cutting-edge technologies to fundamentally help enterprises with this very, very important objective.

So, without any further ado, I would love to welcome Artem to really introduce himself, but I would take one liberty since I know Artem is a dear friend. I remember our first conversation where Artem said, hey, I’m here to ensure that manual testers are able to do automation testing. And I think that's a very, very powerful vision.

You don't need rip and replace as a methodology to achieve efficiency. All you need is a pivot and the right tools to make it happen with the same resources, with the kind of people who have been building the enterprise over the years. So, Artem, please go ahead and introduce yourself to our audience.

Artem Golubev (Co-Founder & CEO, testRigor) - Hello, my name is Artem Golubev. I'm the CEO and Co-founder of testRigor. And Mohit, you are absolutely correct. A lot of organizations right now already have huge manual testing teams and they are struggling to be able to achieve higher levels of test automation.

And what we are helping our customers like Netflix, Cisco, Burger King, and many others achieve is to reuse that manual testing force in order to be able to build automation very quickly and effectively for end-to-end tests spanning web, mobile, native desktop, emails, two-factor authentication, phone calls, text messages and so on and so forth.

Anything that a human can do, you should be able to automate as soon as you can write it in English. Then it's good to go.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Great, and I think that's a great segue into the most curious question about generative AI, specifically in the context of testing. So here we go, Artem, with the first question.

How does generative AI specifically address the challenges of test automation and maintenance, and what makes it significantly more efficient than traditional methods? All ears for your thoughts.

Artem Golubev (Co-Founder & CEO, testRigor) - So we use generative AI to enable people to build test cases using just plain English. As soon as something is expressible in English, you should be able to make it work on the testRigor platform. Surprisingly, we even allow people to automate a lot of things they thought were not automatable, unless something is truly not automatable, like dealing with physical objects.

And it actually works by solving the largest problem of test automation. The largest problem, the one that has prevented people from achieving on average 90% test coverage today, is test maintenance, followed of course by the gigantic effort of even building tests in the first place.

But test maintenance is what truly stops it in its tracks, because the challenge is that at a lot of companies, automation engineers are almost, or entirely, full-time on just maintaining existing tests, with close to zero, or literally zero, bandwidth to build any net new test automation.

So organizations just get by with manual testers. And manual testers are actually extremely valuable resources, because they hold the knowledge of how the system works. Enabling manual testers to participate in the process, and product managers to review the tests and say, hey, this is exactly what we expect,

or, no, no, no, this is not how it should work, empowers organizations not only to move faster, but to drop reliance on details of implementation, which makes the tests more stable, encourages collaboration, and so on and so forth. So why is test maintenance such a huge issue?

And that's mostly because of reliance on details of implementation. If you look into traditional tools like Selenium and Appium, what you do there is write code to validate how engineers wrote the code yesterday, as opposed to how it should function from the end user's perspective, end to end.

And this is the largest issue that we're helping our customers to solve because we allow them to write tests in plain English expressing everything purely from the end user's perspective. So there is no need to hook into any details of implementation whatsoever.

So when those details of implementation, like XPaths and CSS selectors, change, it doesn't matter. If your description did not involve locators at all, then it doesn't matter that the locators changed; as long as your specification is still true, this is how things should work.
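To make the contrast concrete, here is a minimal Python/Selenium sketch of the locator coupling described above, alongside a paraphrase of the plain-English alternative. The URL, XPath, and CSS selector are illustrative assumptions, not taken from the demo:

    # Locator-coupled Selenium steps: they encode HOW the page is built today.
    # If developers rename an id or restructure the DOM tomorrow, the test fails
    # even though the "search and add to cart" flow still works for a real user.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://www.bestbuy.com")  # site used in the demo

    # Brittle: both locators are tied to today's markup (hypothetical values).
    driver.find_element(By.XPATH, "//input[@id='gh-search-input']").send_keys("Kindle")
    driver.find_element(By.CSS_SELECTOR, "button.header-search-button").click()

    # An intent-level description (testRigor-style plain English, paraphrased)
    # carries no locators, so a markup-only change cannot break it:
    #   enter "Kindle" into search
    #   click "Add to Cart"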

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - I couldn't agree more with what you said, Artem. You very precisely explained that manual testers hold a lot of contextual knowledge of user behavior and user expectations, as well as contextual knowledge of the enterprise application. Right?

So enabling that knowledge, that talent, that capability with a tool which helps move faster towards automation, that's a great way to look at solving this problem. Now, with all the promises of generative AI, there are also misconceptions; we all know that.

So Artem, I'm sure it sometimes sounds too good to be true, and that's what you would have heard from some of the customers as well. All these promises and efficiencies are great, super exciting. But what can we see now as evidence, something that could really be insightful for our audience, showing what they can use right away as of today?

Artem Golubev (Co-Founder & CEO, testRigor) - Yes, I know the number one concern from people who hear what we're doing is that they don't believe it's actually possible. So let me quickly show a simple demo; I'll do it on Best Buy. Basically, we're creating a test suite from scratch. This is everything: we will be providing the URL.

Usually we'd provide a username and password. And here we give the description of what this system under test does. The second sentence I usually add to make sure it prioritizes testing without logging in, because I didn't provide a username and password here.

But of course, it's important here to provide a good description; you can usually copy-paste what your product managers have. Let's create a test suite. We're asking it to generate one end-to-end test, and so it did: a purchase flow, let's say. Yeah, sure, let's build that test.

And going forward, it is possible to generate test cases. This is a button; you can do the same thing. You copy-paste the feature description here. This is how you would upload wireframes, charts, and diagrams. However, the most important one is the ability to copy-paste test cases.

Let's say you have a manual test case, and it will look something like this: find and select a Kindle, add it to the shopping cart, or something like that, and then of course you can later add the validation. So for a task where we don't know what it is, can we use AI? Yes, please use AI.

And while it's doing that, this is what I was talking about, right? The name of the task is "verify purchase of Kindle as a guest user", which is used as a prompt. Then AI, based on this prompt, tries to go through the application and figure out what steps specifically need to be taken to make it work.

In this particular example, it started by entering Kindle into the search bar here, and then it tried to click Add to Cart, which is incorrect because that button is overlapped by this dropdown here. So what it ends up doing is going to the list of Kindles and so on and so forth, but we can correct that later on.

So as you can see, it did not do a good job here, and I'll show you how to deal with this in a second. When you use more specific prompts, this is exactly what I was talking about: with "find and select the Kindle" it did a little bit better. It entered Kindle here and then clicked on search, so it got to the search results, and then it got to a specific Kindle.

It found a specific Kindle, and then it will go ahead and add it to the cart and so on and so forth. But let's say, okay, let's finish it, and say, hey, we didn't like what it was doing. We can easily correct it and say, hey, let's just click on any Kindle and then go to the cart, and we don't even need the rest of it here.

And this is how you can correct the steps, because as you can see, AI sometimes goes on a tangent and does something weird. You can also correct the same things when you're dealing with more specific prompts, like this one: find and select the Kindle. So you'll be able to, well, it's a single-page application, so it loads the whole thing.

But the point being, you can correct the steps for this specific prompt. And this prompt can be used as a function. I'll show you: once you provide a prompt, it becomes a function, and then you can reuse that function.

With corrected steps, whether you corrected it in another test or in the same test, and so on and so forth, you can uncheck AI and say, hey, what I want to do is click on any Kindle and then click Add to Cart, and that's it. This is how you correct the steps here, and these are all the functions in here, which you can correct.

If a function is small enough, you can keep it AI-based, or you can correct the steps, or you can fix the prompt to make it clearer so AI can understand what needs to be done. But once again, this is how it works: you let AI do the job before you, then you review it and correct what it did.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - So the next question to you, Artem, is: what are some of the common misconceptions about generative AI in test automation? And how is testRigor planning to address those misconceptions?

Artem Golubev (Co-Founder & CEO, testRigor) - Yes, so the misconception we most often see is that AI will do everything for you and you don't have to do anything at all whatsoever. That is not true. Look into manual test cases, for example; testRigor has that feature, where you can import manual test cases for execution.

Oftentimes, manual test cases contain things that are unnecessary, unclear, or require domain knowledge that AI in itself just does not have. They are often written by people who are not only in that industry but specifically working on those projects, familiar with the jargon, with the internal communication, and with all the context that exists when people work on a project.

And of course, when you give such a case to AI, it's like giving it to a fresh new person; neither a human nor AI would have that context. Therefore, the test cases oftentimes need to be cleaned up. Moreover, if people are trying to build full end-to-end tests just from a small prompt, completely end-to-end, you might often need to correct certain things, because AI still has hallucinations.

It can go on a tangent and do significantly more than you asked it to, or the other way around: it just wasn't able to complete all the steps you wanted it to complete. And this is where manual testers come in to clean it up. You review the steps and see, okay, here the system didn't do well. Seeing what it's supposed to do is very easy with the screenshots; you'll see immediately what's missing and just fix the steps.

My point is, AI will do a large portion of the job for you and significantly simplify the process, but it will not replace humans. It will just make your humans about 10 to 20 times more efficient.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - So even before somebody figures out how to use generative AI, what you are suggesting is that they first need to know where to use it correctly, identifying the right areas. And of course, there would be places where it will not be applicable.

The how comes later; first is figuring out where to use it. Great. I think the next question is really an extension of what we have been discussing so far. Can generative AI be customized to fit specific organizational needs? Every organization is different; their tech landscape and development are different.

So can generative AI be customized for their specific testing requirements, or is it a one-size-fits-all sort of generic capability, which in portions might work in a similar way for all? What's your take on this?

Artem Golubev (Co-Founder & CEO, testRigor) - Yeah, generative AI can be used across the board, anywhere. When we're talking about our system, which uses generative AI, organizations would be dealing with the same model across the board, one that we have prepared and trained; actually not one model, it's a set of models.

And it's not a hands-off tool, in the sense that it's not 100% autonomous. Yes, there are certain things where the system can do testing autonomously. For example, it can automatically test public pages on the website.

But when we're talking about complex functionality, then yes, you can import the cases or copy-paste them into the system, but then somebody would need to review them, clean them up, and make sure they are doing exactly what you would expect them to do, and so on and so forth.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Great. No, I think it's true that there is a real possibility of leveraging it for specific needs. So thanks for sharing those insights. Let's switch gears to your journey with AI and generative AI. I know that testRigor has been one of the early adopters of AI-led capabilities in test automation.

So I would love your insights and learnings: what were the most significant challenges or limitations you encountered while implementing generative AI in test automation? And how did you go about overcoming them?

Artem Golubev (Co-Founder & CEO, testRigor) - Well, I guess the challenges are a mix of misconceptions that people might have and overblown expectations. Generative AI will help you build, I don't know, maybe anywhere between 30 and 80% of the steps in your test cases on average, depending on how much context is required.

But then people would still need to come in and clean it up, make sure it is doing the right thing; that's probably the largest challenge there. There are also certain techniques to deal with hallucinations. One is that after AI generates the steps for you, you can correct them and say, hey, this is how I would see it going forward. And second, in some cases, just make the prompts more specific to be able to address it.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Fantastic. With the kind of areas you mentioned, let's dive right into some of the specific objectives you have been driving with enterprise customers, especially reliability and accuracy. These are the two most important pillars of test automation.

So, specifically in the context of complex or critical functionality, how do you see generative AI ensuring the reliability and accuracy of test automation?

Artem Golubev (Co-Founder & CEO, testRigor) - Yeah, it actually helps to significantly improve them. As I mentioned, let me share the screen. On this screen, there is a comparison between the same test in Selenium and the exact same test in testRigor on the right. In Selenium, there is reliance on details of implementation, specifically XPaths, IDs, names, and so on and so forth.

This is what basically drives inaccuracies and makes tests unreliable: those details of implementation constantly change, which makes the tests fail, whereas from the end user's perspective everything might be working all right. Compare that with the test on the right.

So there is a login, right? And a login would work here regardless of how it is implemented. Maybe today it is a one-step login where you enter username and password and press the login button. Tomorrow it might be two-step, where you first provide a username, click continue, then provide your password and click login. It doesn't matter.

If you can log in as a human, then the system will be able to figure it out for you and log in on your behalf. That's the whole point. And this is what significantly elevates the reliability of test cases; we're talking about 300 times.
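As a sketch of the point about flow changes, consider this hypothetical Selenium login in Python: it hard-codes the one-step flow, so the two-step variant breaks it even though a human could still sign in. All element IDs, credentials, and the URL are assumptions for illustration:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # hypothetical application

    # Hard-codes the ONE-STEP flow: username, password, and Login on one page.
    driver.find_element(By.ID, "username").send_keys("tester@example.com")
    driver.find_element(By.ID, "password").send_keys("s3cret")  # breaks here if the
    driver.find_element(By.ID, "login-button").click()          # flow becomes two-step

    # An intent-level instruction, e.g. the single plain-English step
    #   login
    # leaves the tool free to discover the current flow (one step or two) at run
    # time, so the test stays green as long as a human could still log in.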

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Great, I think those were clear examples, and I see a direct impact of these kinds of capabilities on some of the metrics that we measure. For example, bugs caught by automation versus bugs caught manually, or reducing the bugs in production, etc. So there is a clear impact which can be inferred from the examples that you just provided.

But let's talk beyond increasing efficiency on these metrics. What are the other areas where you think generative AI can actually improve product quality? Are there any indirect benefits that one should consider as they invest in this technology?

Artem Golubev (Co-Founder & CEO, testRigor) - So basically, yes. One of the benefits I have mentioned is better processes, right? It's not just about who literally writes the tests. It's about the efficiency of the whole organization overall, including product managers and engineers, not just QA automation engineers or manual testers. Moving test cases into the domain of plain English basically allows everyone to chime in and participate in the process.

First of all, we have a feature where product managers can copy-paste their specifications and upload wireframes and diagrams to generate test cases, which will then most probably be taken over by the QA organization, which would clean them up to make sure they are doing exactly what they are supposed to before handing them over to engineering. And most importantly, product managers can review the results and say, yes, this is exactly what I meant, or, no, no, let's correct that.

And then, when engineers are writing features, sometimes they want to add a new test, which is no problem, or sometimes they need to modify a test to keep the test suite green; it's still very much possible with plain English, plain text. Moreover, it even enables a significantly more efficient overall end-to-end process.

The traditional process, if you look into how people usually do it today, is that product managers write specifications, engineers write the code, then QA automation engineers write automation, then there is testing, and then they finally release.

What we enable our customers to do is use the specifications that product managers are writing anyway to generate test cases, to help speed up test coverage. But most importantly, QA can start working right away, before engineers write the code.

An additional benefit is that it reduces misunderstanding between product managers and engineers, so the likelihood of engineers creating a feature that product managers didn't intend is significantly reduced. Right? There are always misunderstandings and so on and so forth, and especially if you write too large a specification, engineers might not even want to read it all. Right?

They might skim through, miss some details, and invent some things instead, based on how the system works and how they feel, as opposed to what product managers exactly specified in the specifications.

So we help to reduce the probability of this happening, of delivering something to production that would later need to be corrected, by providing these end-to-end tests in advance, so that engineers, when they write the code, can run the tests and see them turn from red to green, to make sure that, yes, the feature was delivered as expected. And this is how you can become significantly more efficient as an organization overall.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Great insights, and clearly very significant, big promises; who would not want these kinds of capabilities, as long as the ROI is justified, right? So while we talk about all these big promises and efficiencies, there is always a cautious question: what's the ROI of implementing generative AI? What's your take on the lens that one should apply while analyzing the ROI of such investments?

Artem Golubev (Co-Founder & CEO, testRigor) - Well, there are multiple ways to calculate ROI, and it's hard sometimes. But one way to do it is to calculate the efficiency of the overall organization: how quickly you can deliver features, with fewer bugs, to production, from a quality assurance perspective.

Probably number one should be how much less the organization is losing because of bugs in production. Bugs in production lead to churn or to lost sales and upsell, and you can actually measure that, get an estimate, and calculate basically the price of bugs.

If you track the number of bugs that leaked to production, this is how you can estimate the price of a bug. Now say the number of bugs leaked to production has decreased, let's say, two times; therefore, we expect to lose two times less revenue compared to before.

It might be a little bit more nuanced and complex, because as the organization grows, the impact of the same bug can be amplified. If the whole company grew two times over the course of a year, then the number of users is 2x, and the price of the bugs will most probably double as well. So it's a little bit more dynamic, but you can start by calculating ROI based on the price of bugs.
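A back-of-the-envelope sketch of the price-of-bugs model described above, in Python; every number is an illustrative assumption, not customer data:

    # Price-of-bugs ROI: estimate revenue lost per production bug, then credit
    # the reduction in leaked bugs against the cost of the tooling and effort.
    bugs_leaked_before = 120         # production bugs per year, before (assumed)
    bugs_leaked_after = 60           # after: "two times less", as in the example
    revenue_lost_per_bug = 5_000     # estimated churn / lost-sales cost, $ (assumed)
    tooling_cost_per_year = 100_000  # licences plus people running it, $ (assumed)

    savings = (bugs_leaked_before - bugs_leaked_after) * revenue_lost_per_bug
    roi = (savings - tooling_cost_per_year) / tooling_cost_per_year
    print(f"Annual savings: ${savings:,}")  # $300,000
    print(f"ROI: {roi:.0%}")                # 200%

    # Artem's caveat: if the user base doubles in a year, revenue_lost_per_bug
    # roughly doubles too, so the estimate should be refreshed as the company grows.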

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Great insights, Artem, and as you rightly said, the mid-term and long-term ROI analysis has to reflect the nuances of what enterprises are looking for from this investment, what matters to them with respect to their mid-term and long-term aspirations. And this brings us to the end of today's podcast, Artem. Any parting thoughts before you leave us and our audience today?

Artem Golubev (Co-Founder & CEO, testRigor) - I probably want to share one interesting insight from our customers that leverage generative AI features. Usually, we saw that when people develop test cases on our platform, manual testers are able to build around 1,000 end-to-end test cases when allocated full time.

However, recently we started to see people who significantly outperform that. There were a couple of people who, within two months, automated 4,000 test cases: two months, two people. So we asked them, hey, what's going on? How were you able to achieve that? That's unprecedented; I have not seen that before.

And they said, hey, we just followed the process of leveraging the AI: we copy-pasted our manual test cases and we just cleaned them up. And that's it; it just took off from there, out of the box. That is amazing.

Mohit Juneja (VP of Sales & Partnerships, LambdaTest) - Wow. I'm sure there are many such stories, Artem, many such customers who are taking advantage of the great work that is happening back there. So, as we wrap up today's session, Artem, like always, it's delightful to talk to you. Your vision of how generative AI can transform testing is truly inspirational. So more power to you, more power to testRigor.

I'm sure our audience will pick up some takeaways, or many more takeaways, from today's talk and implement them in their day-to-day jobs, because generative AI is here; it's right here already. So, to our audience: stay tuned; many more such insightful conversations are lined up for you.

And if you haven't already, go subscribe to the LambdaTest YouTube Channel, where you can find all our XP episodes. Thank you once again. Thanks a lot for joining us. It's been a pleasure hosting you all. Thanks, bye-bye.

Artem Golubev (Co-Founder & CEO, testRigor) - Thank you, bye-bye.

Past Talks

Optimize Issue Tracking: Integrating SpiraTeam with LambdaTest

In this XP Webinar, you'll discover how integrating SpiraTeam with LambdaTest optimizes issue tracking, enhancing your QA workflow. Learn about seamless collaboration, improved efficiency, and advanced features that streamline test management and defect tracking for superior software quality.

Watch Now ...
Innovation Accelerated: The Intersection of AI and Quality Engineering

In this XP Webinar, you'll discover how AI and Quality Engineering intersect to drive innovation. Learn about intelligence-driven approaches that enhance testing methodologies, boost productivity, and redefine user experiences.

Watch Now ...
Impact and Potentials of GenAI to the IT Engineers

In this XP Webinar, you'll discover how GenAI empowers IT engineers, revolutionizing their approach to software development with unparalleled efficiency and potential. Explore the transformative impact of GenAI on testing in the digital landscape of tomorrow.

Watch Now ...