February 20th, 2025
43 Mins
Saurabh Mitra (Guest)
Vice President & Global Head of Testing, Ramco Systems

Murli Mohan (Host)
MD & VP - APAC, LambdaTest

The Full Transcript
Murli Mohan (MD & VP - APAC, LambdaTest) - Hi, everyone. So with the introduction of GenAI, the quality engineering landscape is undergoing a tremendous shift, right? Traditionally, automation frameworks required extensive manual intervention. But nowadays, these frameworks have evolved into intelligent AI-driven systems capable of delivering unmatched reliability, scalability, efficiency, and all that.
This transformation is not just about enhancing automation but actually redefining how QA professionals approach testing in today's era, where speed and precision are paramount. In this session, we'll explore how AI is revolutionizing test automation, ensuring better reliability, scalability, and ROI as we prepare for a future where AI is an integral part of QA.
Welcome to another session of the ever-popular LambdaTest XP Podcast Series, the show where we deep dive into the latest trends and innovations shaping the world of software testing. I'm your host, Murli Mohan, recently appointed as Managing Director and Vice President for the Asia Pacific Region at LambdaTest.
And today we are joined by a senior executive from the world of software testing and, should I use the cliché, a veteran: Saurabh Mitra, Vice President & Global Head of Testing at Ramco Systems. I've had a few conversations with Saurabh in the run-up to this podcast, and I can tell you he is a true engineering evangelist with over two decades of rich experience across multiple domains.
He holds two US patents for his groundbreaking contributions in the area of automation frameworks, which is really the core of what we'd like him to explain to us today. And no surprise that he's an avid speaker at a lot of international conferences. So in today's session, Saurabh will hopefully shed some light on how AI can enhance the reliability and robustness of test frameworks, analyze automation suite effectiveness, and, if time permits, predict some future trends in AI-driven testing.
So the title of today's discussion is Building AI-Driven Test Automation Frameworks for QA Excellence. So whether you're a seasoned tester or just exploring AI's impact, this session promises tremendous value to you. So let's jump straight into it. Thank you once again for joining us from your very busy schedule and having to travel to so many cities across the world. So thank you once again for making the time.
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Thank you, Murli. It was great to connect with you as well as the LambdaTest team, and it would be great to share my bit of experience on the testing and test automation front. So looking forward to this podcast.
Murli Mohan (MD & VP - APAC, LambdaTest) - Excellent. So let's get straight into it, Saurabh. So let's start with the big picture. So we are surrounded, as you know, by a lot of deep tech influences today, the blockchains, automation, this artificial intelligence across the board. And these influences are no different to our field of testing.
So if you can just unravel this puzzle in terms of how are these influences coming together, how will the testing fraternity, in fact, benefit from some of these? So just help us put into context where we are in this whole myriad of tech influences if you can start with that.
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Yeah, sure. So I believe in a very fundamental philosophy that necessity is the mother of all invention, right? And I think that's the same with automation coming into the picture around two or two and a half decades back.
Prior to that, people were doing testing manually, and prior to that, people were not even testing before releasing software. Then we saw the advent of multiple ways of test automation, test automation tools and frameworks, then blockchain, IoT, big data, data science, AI and GenAI, and now agentic AI, and so on and so forth.
So all of those technological advances have happened, and they are still happening as we speak. But I think we always have to look at how we can leverage them to our advantage and to solve a specific business problem, right? Now, in this case, we are talking about test automation.
Now, from my experience, and I've worked across eight industries or sectors, the common challenge of test automation, and it's not just now, it was there 20 years back and it's there even today, right, is how do we maximize the return on investment of test automation, right?
And in most cases, what I've seen is that we get an initial return on investment, but in the long run, the ROI fizzles out, and that reduces the SDLC stakeholders' confidence in the effectiveness of test automation.
Murli Mohan (MD & VP - APAC, LambdaTest) - So let's peel that one level at a time, because I would be very keen to understand the what, the how, the why, some deeper insight on the challenges, and what the key message is to our testing fraternity, whether it's an organization or an individual tester.
Let's just take this one at a time. To extend the earlier questions, Saurabh, what does an AI-driven test automation framework actually look like from an enterprise context? And how does it adapt to diversity across different domains? What's your thought?
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Yeah, so from my experience, I have seen that test automation is, in general, domain-agnostic. And in order to prove this point, I have, on purpose, jumped organizations and worked across eight to nine domains just to see. And I've seen that it's actually truly domain-agnostic.
Now, leveraging test automation involves multiple aspects, and I will talk about five major aspects. The first one is how do we script or code the tests in the fastest and most effective manner? The second one is how do we execute the scripts in a very efficient way? The third one is how do we analyze the script failures, okay, to know if it's a false positive or a false negative.
And the fourth one, which is a pretty challenging one, is how do we keep the scripts updated so that we can keep getting the same level of ROI as we keep running them, and avoid script flakiness. And last but not least is the coverage of test automation. So these are the five pillars on which test automation fundamentally relies, and fortunately, we can apply AI in all of those five areas.
And this is again domain-agnostic; it doesn't matter whether it's healthcare or pharma, whether it's the telecom domain or a gaming domain, it can work across them. If we can apply AI on all of those five pillars, we can reap significant benefits in terms of return on investment.
Murli Mohan (MD & VP - APAC, LambdaTest) - Right. So it's very interesting you say that. You know, now that we kind of have a general idea of what automation frameworks are in the enterprise context, let's go one level deeper and explore the how, right? How do we go about this?
How should organizations leverage AI to enhance their testing processes? There's a lot of documentation out there, a lot of points of view, but specific to automation frameworks, how would you advise organizations to leverage AI?
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Sure, so I can go one by one through the five areas which I talked about, which are the scripting part, the execution part, the analysis part, the maintenance and updating part, and last but not least, the coverage part. So let me go one by one. So as far as scripting is concerned, which is the primary part of any automation, right?
So we have an automation tool which normally works specific to an application, and we create a framework on top of it. And then we have to use a code or a no-code kind of way to script. Now, scripting takes quite a bit of time, right? Now, with the advent of AI, we can generate the test cases automatically to a great extent.
So I'm not even talking about just scripting; I'm saying that the test cases themselves can be automatically generated, because the domains are mostly finite in number, right? The domains are not huge. The only thing is that within a particular domain, the features may vary, right?
If you compare, let's say, flight booking in IndiGo or MakeMyTrip versus Goibibo, there's not much of a difference in the flight booking. Maybe one or two fields here and there. The whole intent is that whether you are booking a flight or a railway ticket, at the end you are selecting your journey, selecting a date, and then getting a PNR number, right, which is the end part of it.
So we can leverage AI to generate these test cases automatically for our domain. And similar to Copilot, we can also use AI to script the test cases. So the scripts are created automatically; we don't have to do BDD or no-code, the scripts just get created. And once the test cases are created, the scripts can run fine on a specific tool and a specific framework, right?
So basically, we might have to create a particular AI agent which works on Selenium, one which works on Cypress, one which works on Playwright, because once the script is created, I need an underlying runtime environment, right, to run the scripts.
And the runtime environment is specific to the automation tool in question, which can be Selenium, Cypress, Playwright and so on, or maybe an API like a SOAP API, a REST API, or a GraphQL kind of thing. So basically, once I have the test cases, I have an agentic AI which kind of creates the scripts automatically, and I run them specific to the tool or the environment in question.
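To make that concrete, here is a minimal sketch of such a script-generating step, assuming a hypothetical call_llm helper that stands in for whichever LLM endpoint or agent an organization uses; the prompt wording, the Playwright target, and the output file layout are illustrative, not a description of any particular product.

```python
# Sketch: turn a plain-English test case into a script for a chosen runtime.
# call_llm is a hypothetical placeholder for any LLM client (a hosted API or an
# in-house model); it returns a canned stub here so the example runs end to end.

from pathlib import Path

PROMPT_TEMPLATE = """You are a test automation engineer.
Write a {tool} test in Python for the following test case.
Return only code.

Test case:
{test_case}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: wire this up to your LLM provider of choice.
    return "# TODO: generated test body goes here\n"

def generate_script(test_case: str, tool: str = "Playwright") -> Path:
    """Generate a script for the given runtime (Selenium, Cypress, Playwright, ...)."""
    code = call_llm(PROMPT_TEMPLATE.format(tool=tool, test_case=test_case))
    out = Path(f"generated_{tool.lower()}_test.py")
    out.write_text(code)
    return out

if __name__ == "__main__":
    generate_script(
        "Book a flight from Hyderabad to Delhi on 1 March and verify a PNR is shown.",
        tool="Playwright",
    )
```

In a fuller setup, the same generator would simply be parameterized per runtime, one agent per tool, as described above.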
So this hugely reduces the turnaround time it takes to create the scripts. So that's the first pillar I talked about. Let me move on to the second pillar, which is the execution of the scripts. Now, while executing the scripts, there are a couple of things we have to keep in mind. One is the data variations; every test case, when it is automated, runs with specific data, right?
As I mentioned, I was giving an example of travel. If you're traveling from Hyderabad to Delhi, so your Hyderabad and Delhi are your data. And let's say you're traveling over the weekend, then your specific date, let's say the first of March, is your travel date. So those are the data, and the data varies.
Murli Mohan (MD & VP - APAC, LambdaTest) - That’s Right.
And the behavior, even if you look at production usage for any enterprise application, the customers, when they're using it, will be using different data, and different data will have different outcomes once it goes through the branches and conditions of the code, right?
So basically, data plays a very crucial role in the test case as well as in the test automation, right? Now, the other part of it, apart from the data, is the sequencing of the scripts. How do you sequence the scripts in the most effective way so that they can run concurrently?
It doesn't have to run sequentially; it can run concurrently through multi-threading or multi-processing, and that can save a lot of time. The next aspect of execution is how it can run with low hardware cost, because, as you know, the number of tests increases exponentially over time for a legacy code base, and the time to market keeps shrinking day by day.
So you have to have your automation tests run as soon as possible, and for that you need high-end hardware, but hardware costs money. So how do you balance the cost and the execution so that you can get the tests running without keeping the hardware lying idle, because then you are losing money?
If you're running on the cloud and you keep everything on, you are going to go bankrupt because the AWS bill keeps mounting. So how you balance all of that is a very important thing, and that's where AI plays a very important role in testing. Another part of it is the pesticide paradox, right? As we all know, not all tests stay effective, right?
When we create test cases, certain test cases lose their edge over a period of time, and we need to replace them with better scripts. So we can leverage AI and data science together to, you know, find out the effectiveness of the scripts: the scripts which regularly catch bugs versus a script, or a test case, which has not even generated a bug in the last six months.
Probably that portion of the code is so rock solid that the test is not going to give you much value, right? So how do we know which scripts are effective and which scripts mimic more of a customer kind of scenario? That is also an analysis where AI and data science can play a very good role in this execution aspect and get us those insights.
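As a rough illustration of that kind of effectiveness analysis, the sketch below flags tests that have not surfaced a real defect recently or that fail mostly for invalid reasons; the record shape, the six-month window, and the flakiness threshold are all assumptions made for the example.

```python
# Sketch: flag tests that may have hit the pesticide paradox, i.e. they no longer
# catch real defects, or they mostly fail for invalid (flaky) reasons.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TestRecord:
    name: str
    last_defect_found: datetime | None  # when this test last surfaced a real bug
    flaky_failures_90d: int             # recent failures later judged invalid

def review_candidates(history: list[TestRecord], window_days: int = 180) -> list[str]:
    cutoff = datetime.now() - timedelta(days=window_days)
    candidates = []
    for t in history:
        stale = t.last_defect_found is None or t.last_defect_found < cutoff
        if stale or t.flaky_failures_90d > 5:
            candidates.append(t.name)
    return candidates

history = [
    TestRecord("test_flight_booking_pnr", datetime(2024, 12, 10), 0),
    TestRecord("test_legacy_report_export", None, 7),
]
print(review_candidates(history))  # e.g. ['test_legacy_report_export']
```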
The next pillar which I was talking about is analyzing the failures. Now, this is the most neglected area in my experience, right? We all get very hyped about creating the scripts, running them, and seeing those green and red results in whatever reporting tool we leverage. But then, in most cases, the tests that fail are not given much attention. We are very happy with a 90% or 98% pass percentage.
But let's take an example: we are running a regression suite comprising over 10,000 automated tests, which is very normal for a legacy code base, primarily an ERP kind of application, an enterprise application. And 5% of the automated tests fail, which means 95% is the pass percentage, which means that the 500 test cases which failed have to be analyzed and run manually, right, to rule out cases of false positives and false negatives, right?
This becomes a tedious manual exercise. In most cases, neglecting it leads to defects in production. People think, okay, 95% has passed, this 5% is fine. But then those failures have to be analyzed, right? So we can easily leverage AI to create a classification algorithm, which is a very standard AI approach, so that it can classify the failures into valid failures and invalid failures.
What are the failures that have happened because of a product defect, and what are the failures that have happened due to a product behavior change, or what are the failures that have happened due to a flaky script or due to an environment configuration, right?
So if I can bucket this, or classify this, into multiple buckets, and AI can easily do that classification, then I don't have to spend any manual effort in analysis; automatically, for the genuine failures, the bugs can be logged in whatever bug management or defect management system we are using.
So basically we have saved the entire time spent on analysis, and we can probably invest that somewhere more meaningful. So that's where AI can play an important role in the analysis of failures, and it can completely remove the manual effort, because I would say probably 95% of the organizations out there are analyzing failures manually right now, as we speak.
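A minimal sketch of such a failure classifier is shown below, using an off-the-shelf TF-IDF plus logistic regression pipeline from scikit-learn; the buckets mirror the ones described above, but the training examples are invented for illustration, and a real system would learn from an organization's own triage history.

```python
# Sketch: classify automated-test failures into buckets from their error text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical, manually triaged failures (toy data for illustration).
logs = [
    "ElementNotInteractableException after 30s wait on checkout button",
    "AssertionError: expected total 120.00 but was 119.00",
    "New confirmation dialog displayed before payment step",
    "Connection refused: test environment database not reachable",
]
labels = ["flaky_script", "product_defect", "behavior_change", "environment_issue"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(logs, labels)

# New failures from tonight's run get bucketed automatically; only those classified
# as product_defect would be logged into the defect management system.
print(clf.predict(["AssertionError: PNR missing in booking confirmation"]))
```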
Then the next pillar is how do we keep the scripts updated? So we have created the scripts, which is great. We are very happy: they are stabilized, they are running fine. But the product is not a fixed or static product. The product will change due to customer enhancements, due to bug fixes, due to taking cues from competitors, and so on and so forth, right?
So how do we keep the scripts up to date, right? And this is another area which I have seen is also neglected sometimes: whenever the scripts are running fine, it's fine, and for whatever has failed, because the analysis has not been done, we also don't know whether it is because of a real change, an intended change. And how can I update the script for that change?
Earlier I was talking about classifying a failure as being due to a product behavior change. Now I am saying, can I update the script for that change, right? This is actually called an auto-heal feature in the automation world, right? So with AI, I can implement an auto-heal feature, where AI can look into my application and see whether a behavior is valid or invalid, right?
Let's say I'm getting a pop-up which I was not getting earlier, right? And using natural language processing, the pop-up text looks pretty normal, a valid pop-up. Now the script is failing because the pop-up has to be dismissed, OK has to be clicked, before I can go to the next step. But then the script can get automatically updated, I can add that OK click, and the world is not going to fall apart, right?
But if I see that a pop-up is coming with an error message, right, which says error code or something like that, then again, using AI and NLP, natural language processing, it knows that this is not the right behavior under any condition and it should be reported as a bug. So it can automatically update the script, or not update the script, on a case-to-case basis. So the cost of updating the scripts is essentially gone, right?
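Here is a small sketch of that auto-heal decision, with a simple keyword check standing in for the NLP model described above; the function names, markers, and pop-up texts are illustrative only.

```python
# Sketch: decide whether an unexpected pop-up is benign (auto-heal the script by
# dismissing it) or an error (leave the script alone and log a defect).

ERROR_MARKERS = ("error", "exception", "failed", "error code")

def classify_popup(popup_text: str) -> str:
    """Return 'auto_heal' for benign dialogs, 'log_bug' for error dialogs."""
    text = popup_text.lower()
    return "log_bug" if any(marker in text for marker in ERROR_MARKERS) else "auto_heal"

def handle_popup(popup_text: str, script_steps: list[str]) -> list[str]:
    if classify_popup(popup_text) == "auto_heal":
        # Patch the script: add an extra step that clicks OK, then continue.
        return script_steps + ["click('OK')"]
    # Otherwise raise it as a defect instead of touching the script.
    raise RuntimeError(f"Unexpected error dialog, logging bug: {popup_text}")

steps = ["open_booking_page()", "select_flight('HYD', 'DEL')"]
print(handle_popup("Your session will expire in 5 minutes. OK to continue.", steps))
```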
So that's where we can use AI, because the automation suite keeps growing, which is a perennial problem in every enterprise organization. If you see, I've worked in Oracle, I've worked in Amadeus and other organizations where we have seen automation regression suites of 25,000 tests running overnight across multiple VMs.
Murli Mohan (MD & VP - APAC, LambdaTest) - Right.
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - And now, if you have to update 25,000 tests, it is a nightmare, right? So we can leverage AI to solve this problem in a very effective way. Last but not least, the fifth pillar is the coverage of test automation. What I mean by that is that we cannot capture test coverage on static code.
So what a lot of organizations try to do is, if it is a Java-based enterprise application, they run JaCoCo, the Java code coverage analyzer, on the code and try to see how much of the code the test cases have actually hit. Great. But in most cases, the code base is a legacy code base. And I have seen this is also a common trend across the industry: they don't remove the dead code. The dead code just lies there, and they create new code on top of it.
So when you run a code coverage analyzer on top of that, it says the test coverage is 30% or 40%, which is shocking. But actually it is much higher, because if I consider the dead code plus the new code, it is 30%, whereas if I consider only the new code, it would probably be 80 or 90%, right? But that's the challenge: you cannot just take out the old code and run the code coverage analyzer.
So then how do we know what the coverage of test automation is, right? I think this is another area where we can use AI-based analytics tools to give an accurate percentage of code coverage as the automation tests get executed. So this is a very, very important area. And this is applicable to every enterprise organization, because they always kind of think, okay, we will create a requirements traceability matrix.
What is a requirements traceability matrix? It's a static table where we have our requirements and we say we have created five test cases against an epic or a story. It doesn't give any sense of the code coverage, right?
And we tend to be happy about it, saying, wow, we have an RTM and the RTM is great, but that doesn't actually show you anything about coverage. So we need to have code coverage, but we cannot directly go ahead with the standard code coverage tools because of the problem I stated, the dead code. So basically, we can actually leverage AI along with automation to figure out what tests are being executed and whether they mimic the exact, you know, customer usage.
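One way to approximate that "live code only" coverage number is sketched below: it reads a standard JaCoCo XML report but skips classes flagged as dead code, however that dead-code list is produced (static analysis, usage data, or an AI model); the report path and class names are illustrative.

```python
# Sketch: line coverage from a JaCoCo XML report, ignoring known dead-code classes.
import xml.etree.ElementTree as ET

def live_line_coverage(jacoco_xml: str, dead_classes: set[str]) -> float:
    covered = missed = 0
    root = ET.parse(jacoco_xml).getroot()
    for pkg in root.iter("package"):
        for cls in pkg.iter("class"):
            if cls.get("name") in dead_classes:
                continue  # skip legacy code nobody executes anymore
            for counter in cls.iter("counter"):
                if counter.get("type") == "LINE":
                    covered += int(counter.get("covered", 0))
                    missed += int(counter.get("missed", 0))
    total = covered + missed
    return 100.0 * covered / total if total else 0.0

# Usage (paths and class names are illustrative):
# print(live_line_coverage("target/site/jacoco/jacoco.xml",
#                          {"com/example/legacy/OldReportGenerator"}))
```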
And also, in terms of the different possible cases which also need to be covered based on the front end of the application. So, marrying both, an AI can tell me, okay, Saurabh, why are you not running this particular flow which is available, because at the end of it, an end user can only interact through the interface.
So, if the interface showcases 100 permutations and combinations which are possible, and I see my automation has only covered 75 or 80, the AI will tell me these 20 permutations and combinations are pending, why are you not trying them, right?
And then, automatically, using an AI-based auto-scripter, I can just write those scripts, and that gives me 100% coverage, because a normal user is only going to play around with whatever is given in front of them as an interface.
To summarize, these are the five areas, and I can repeat them: we can leverage AI in scripting the tests, we can leverage AI in the execution part of it, we can leverage AI very much in analyzing the failures and make that completely automated with zero manual intervention, and we can keep the scripts updated, again using AI, with auto-heal mechanisms and all of that.
And most importantly, as a QA organization, before releasing software, for a go or no-go decision, we can know the coverage of the automation through AI. So these are the five areas where we can leverage AI.
Murli Mohan (MD & VP - APAC, LambdaTest) - Excellent. I think that was a masterclass in taking us through all the key stages and where specifically AI can play a very proactive role in enhancing your ability to work through all those risks and challenges that would otherwise have slowed you down or created more defects, as the case may be.
So actually, while you were speaking, I was scribbling some notes, and incidentally, across this entire journey that you mentioned, LambdaTest too has come up with a lot of innovations over a period of time. You mentioned scripting and authoring; there are a lot of AI-infused innovations that we have brought, and KaneAI is a classic example of that.
So it's an exciting time, and I'm so glad that we are completely aligned on these being the essential stages for excellence in automation. Wonderful. It's incredible how far we have come. Clearly, with every advancement, there will be challenges, right?
So, I mean, this is something that I've also wondered about. Could you provide some insights on specifically what challenges organizations face when they leverage AI and how can AI kind of help overcome some of these challenges? So let's focus this segment purely on the challenges associated with AI adoption.
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - In most organizations, there are two major testing and test automation challenges, from what I have seen. One is impact-based test selection under time constraints, and let me explain that a bit. Any enterprise architecture we have is kind of spaghetti; I think that's the best way to describe it.
There are so many intricacies and dependencies that I have never seen any organization, and I'm openly saying this, I've never seen any organization where people can automatically say, okay, I have changed these many lines of code, and these are the corresponding tests that need to be run. I've never seen that. Okay.
So it's been like, you know, the holy grail, right? So in most organizations, what happens now is that you have a hotfix, right? For a customer who has faced an issue, you have changed a bunch of areas and you have to roll it out. Now everybody starts biting their nails because they don't know what's going to regress, what's going to fail that was working fine.
So basically, you have to select tests, and you don't have sufficient time to run the entire regression suite. Your regression suite mostly runs overnight for 10 or 12 hours. You don't have 10 or 12 hours; you have a maximum of one to two hours before the customer pulls the plug, right?
So in these one to two hours, how do you know which are the right tests you need to run, which are going to catch the impact, right? So that's a very, very challenging thing, right, for any enterprise organization: how do we know, under the time constraint, what are the right tests to run? Okay.
The second is that, in spite of executing the regression suite that we have, the in-house developed regression suite, we observe customers logging defects in production. And it is extremely frustrating for the QA team to see that, right?
So let's talk about the first part, right? How can we address the first challenge which I was talking about, impact-based test selection? For identifying the impact, rather than manually identifying the impact areas, AI can play a more accurate role in selecting the right test cases, the right automated test cases, that need to be run to catch any regression.
So just think about it: we have the enterprise architecture, and I have an AI on top of it which kind of understands the implications of each change on the others and on the entire system together. And then we have married that with my automation system and my automation repository, so it knows that if you change these lines of code across three modules, you have to run these 50 tests, automatically.
So the bottom line is, I can have a CI/CD pipeline developed where, the moment I change those three lines of code, those 50 tests are dynamically selected and run automatically. Whether it has to be 50 or 5 or 500 or 5,000 is all dynamically determined, which can be done using AI, right? This is one of the major challenges which I have seen across organizations.
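A stripped-down sketch of that dynamic selection step might look like the following, assuming a pre-built mapping from source files to the tests that exercise them (such a map could come from per-test coverage data or an AI model); the file names and the mapping format are assumptions made for the example.

```python
# Sketch: in a CI/CD step, select only the tests impacted by the files changed
# since the base branch, using a file-to-tests mapping built elsewhere.

import json
import subprocess

def changed_files(base: str = "origin/main") -> set[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def impacted_tests(mapping_path: str, base: str = "origin/main") -> set[str]:
    # mapping example: {"src/booking/FareCalc.java": ["test_fare_rounding", ...]}
    with open(mapping_path) as f:
        file_to_tests = json.load(f)
    selected: set[str] = set()
    for path in changed_files(base):
        selected.update(file_to_tests.get(path, []))
    return selected

if __name__ == "__main__":
    # A real pipeline would hand this list to the test runner instead of printing it.
    print(sorted(impacted_tests("test_impact_map.json")))
```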
Okay, now let's talk about the second challenge: AI can also be leveraged to analyze production customers' usage patterns and data. Because, as I was mentioning, all organizations create their own regression suites, but they get extremely frustrated when, after running them, they see a customer come on day one or day two of going live and log three defects, and they go, my god, what the hell, right?
So why that happens is because we sometimes are not paying attention to the customer usage scenarios and the data that they are using in production. Now, in some cases, I have seen people talk to customer support, try to look at the production data, take a dump of it, and do some data mining.
All of that, I've mostly seen people do once or twice or thrice. Once the noise dies down, once the war room is closed, everybody goes back to their own mode. It is not a self-sustaining approach, right? Now, what we can have is AI being leveraged to automatically analyze production customer usage scenarios, okay, as well as the data variations.
And we can compare that against our automation runs, where we are also executing specific test scenarios against test data. And AI can kind of do a heat map or an overlap or a Venn diagram kind of map of what is being covered and what is not being covered; and for what is not covered, we can again go back and leverage an auto-scripter to automatically create the scripts to reinforce the automation regression.
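The overlap itself can be as simple as a set comparison; in the sketch below the scenario names are invented, and how they are mined from production logs and from the automation repository is left out.

```python
# Sketch: compare scenarios seen in production against scenarios the suite runs,
# and list the gaps that an auto-scripter could pick up.

production_scenarios = {
    "book_flight_one_way", "book_flight_round_trip",
    "cancel_booking_same_day", "change_passenger_name",
}
automated_scenarios = {
    "book_flight_one_way", "book_flight_round_trip", "cancel_booking_same_day",
}

covered = production_scenarios & automated_scenarios
gaps = production_scenarios - automated_scenarios  # candidates for auto-scripting

print(f"Covered: {len(covered)}/{len(production_scenarios)}")
print("Missing from the regression suite:", sorted(gaps))
```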
So in an automatic manner, with no manual intervention, based on production, right? We can even take it further: if you have, let's say, enterprise platinum and gold customers, right, whose SLAs are so high that we cannot even have a blocker or critical issue.
We can even use this to create custom automation suites, based on automatic analysis of the production customer's usage patterns and data using AI together with our automation suite, and run that custom suite on a separate environment for the customer. The customer is also happy, and we are very confident that we won't have to wake up at midnight to fix a bug, right?
And customers will be experiencing the world-class quality that they have been anticipating. So I think these are the two major challenges which I've seen, again, across industries, and I feel that we can definitely use AI for both impact-based test selection as well as for having a regression suite which mimics real customers' behavior and data.
Murli Mohan (MD & VP - APAC, LambdaTest) - With your permission, let's take that one step further. Can you actually help us line up the steps involved here? Especially for the benefit of those in the early stages of their journey in AI-driven testing, what steps would you recommend they take now to bring AI into their existing QA process? So let's start with the steps.
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - 25 years ago, test automation was a buzzword, right? Everybody would say, you know, in my organization we are doing test automation. I think today AI is a similar buzzword, right? At that time, we also saw organizations jumping into test automation without brainstorming and realizing what business outcome they wanted to achieve out of it, right?
And they were jumping in and burning their fingers and then saying, you know, test automation is good for nothing. Similarly, today, organizations need to identify what business problems or productivity challenges they are facing, and whether those can only be solved using AI or through non-AI means, right?
Now, once the problem statement has been finalized, the next goal is to train the team or reinforce the team, whichever is applicable, so that the AI solution can be envisioned either in-house or leveraged through a commercial solution, to solve the specific challenge or problem the organization has been facing. Right?
Now, this is very, very crucial, because if we don't do that, we end up jumping in, grabbing a particular AI or a particular LLM, and saying, okay, let's build a wrapper around it. But what problem are we trying to solve, right? Does it really require a probabilistic approach or a kind of procedure-oriented, deterministic approach, right?
If we don't know that, then, as I said in the beginning, necessity is the mother of all invention: first we need to identify the problem. So once we identify the problem and we are very confident that AI can solve it, then the next step is how do we train the team, because within AI also, you know, we now see there are so many options available, right?
So many customized options, you know, and then you have to do a make-or-buy analysis. Do you want to build it in-house? Now, for each of those calls to be made, your team needs to be trained enough to take the call, right? So I think training is the most important thing: how do we train the team to make them ready?
And then take the call on whether we want to make it or buy it, right? And then, when we compare multiple AI solutions, decide which one best fits, which is budget-friendly, which is effective. Those kinds of choices and selections we have to make.
Now, once we are good with that, the AI implementation has to be run like any software project, right? Project management is very, very crucial, because there is a lot of funding going into AI. AI is not cheap to implement; I'm not talking about DeepSeek or OpenAI kinds of things, I'm saying that implementing it in an enterprise context is not a cheap endeavor, right?
So we need to track the investment in the AI implementation project like any other software project, with a proper project plan as well as thorough testing of the precision and the accuracy of the AI framework, because the AI framework will take time to get better. It is not going to be there on day one; it may have 55, 60, 65 percent accuracy.
And so we have to constantly calculate the precision and the accuracy of the AI framework using tools like the confusion matrix. And then we have to ensure that the AI solution that we have selected truly meets and solves our business need. So basically, these would be the steps which I would definitely recommend for any enterprise organization if they want to adopt AI in their existing QA process.
Murli Mohan (MD & VP - APAC, LambdaTest) - You know, a lot of these steps require significant investments in tools, in training, in time, right? So from your experience, it all comes down to what metrics we are looking to use to evaluate the ROI while implementing AI in test automation. So what kind of metrics would you recommend organizations use to evaluate this ROI?
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Again, as I was mentioning, while implementing AI in test automation, first we need to identify what business problem or productivity challenge we want to address, right? Then, based on that, the AI solution has to be envisioned. Now, the effectiveness of AI in automation can be measured using metrics that compare an AI-based test automation framework with a non-AI-based test automation framework, right?
And I will talk about a few of the metrics which people normally follow. For example, the percentage of false positives and false negatives of the automated suite, comparing the framework with AI and the one without AI. We really want to see that, yes, the number of false positives and false negatives has come down.
Then there is the efficacy of the automation suite in terms of valid bugs identified, which relates to the pesticide paradox thing I mentioned some time back, right? Now we have to see whether my suite, which was earlier maybe 75% effective, has become more effective with AI coming in, where we are constantly checking the production customer scenarios and the data and bringing that back into the suite.
By enhancing and reinforcing the suite, maybe its efficacy has increased to 90-95%. That is a very good indicator, an increase from 75% to 95%, something like that, right? Then another metric can be the percentage of effort saved in developing the suite if we compare an AI-based framework to a non-AI-based framework, which is the script development part I was mentioning, right?
If I can develop the test cases automatically based on a domain, and if I can develop the scripts automatically, there's a huge amount of cost saving that we can do, and we can reinvest that in other areas, right? So what is the amount of effort and time that we can save? It boils down to how fast we are releasing the software to the market, right? Which is the, you know, financial number, right?
You know, how soon we can monetize our products, how soon we can make the revenue and the profitability, right? So that is again a direct indicator or metric, based on how much effort we can save and how that boils down to reinvesting those efforts in other areas, or faster time to market and quicker monetization, all of that.
Another metric is the ratio of time taken to heal a suite created using AI versus non-AI, right? As I mentioned, we create the suite, and then we spend a lot of effort updating the scripts as the product behavior changes, right? Or maybe because a script has become flaky for whatever reason, maybe the application is taking a little more time and there is a timeout happening, and all of that, right?
So now, manually, you have to spend effort constantly, like maintaining a garden, right, to keep it weed-free, right? That's what we are spending to keep the automation running and effective, right? Now, if we can replace that manual effort completely with an AI-based system, an agentic system, then again, it comes down to how much time it is taking to heal the suite, right?
Because only when the suite is running fine can I leverage the suite; if the suite is broken, it is non-leverageable. So it's also a very good indicator to compare the time, or the ratio of time, taken to heal an AI-based framework versus a non-AI-based framework. Now, the most important metric is the precision and the accuracy percentage of the specific AI algorithm.
Now, across the five pillars of automation which I mentioned, we are using multiple AI algorithms for specific needs, right? Each of these AI algorithms will have its own precision and accuracy. So we have to constantly capture the precision and the accuracy and see whether it is biased or not, because there's a very high chance of the model being biased towards certain cases.
So how do we calculate the bias? How do we, using the confusion matrix, check the precision and accuracy and make sure that, yes, it meets the target that we have been anticipating, right? There can be more metrics, but last but not least, what I would like to capture is, as I said, the percentage of automation coverage with regard to the test cases and the code executed, right, which is not possible to capture otherwise.
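For that precision and accuracy tracking, a minimal sketch using scikit-learn's standard metrics is shown below; the labels reuse the failure buckets from earlier, and the triage data is invented for illustration.

```python
# Sketch: track the failure-classification model with a confusion matrix plus
# accuracy and per-class precision, comparing human triage against AI predictions.

from sklearn.metrics import accuracy_score, confusion_matrix, precision_score

labels = ["product_defect", "behavior_change", "flaky_script", "environment_issue"]

y_true = ["product_defect", "flaky_script", "behavior_change", "product_defect",
          "environment_issue", "flaky_script"]          # human triage decisions
y_pred = ["product_defect", "flaky_script", "flaky_script", "product_defect",
          "environment_issue", "behavior_change"]        # classifier predictions

print(confusion_matrix(y_true, y_pred, labels=labels))
print("accuracy:", accuracy_score(y_true, y_pred))
print("precision per class:",
      precision_score(y_true, y_pred, labels=labels, average=None, zero_division=0))
```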
Now, if I can get a direct sense, based on the production customer usage and based on the different paths my interface provides by which a user can play around with the system, and if I can marry all of that together with my automation scripts, and it tells me that my automation scripts cover, maybe, 100% of customer cases and 75% of the interface cases, I think I'm in a very comfortable place to roll out the software.
So it gives the confidence to roll out the software. And if I keep a bar that only when 100% of production customer use cases and data variations are covered, and 75% of my front-end interactions, all the combinations, are covered, only then do I release it, then automatically, once I do the testing, it will give a numeric indicator of whether we are good to go or not good to go.
It is not subjective anymore, where somebody says, OK, let's release, and everyone is nail-biting. If it has made the bar, it has made it; if it hasn't, probably we are still not ready and we need more time to test. So these are very crucial metrics around AI and the adoption of AI, and if we can capture them, it will be fantastic to know whether the investment and the funding that we have put into AI has given us the return on investment that we anticipated in the first place.
Murli Mohan (MD & VP - APAC, LambdaTest) - That's invaluable advice. You've taken us through all the phases and the influence that AI has, the good and the bad, and the challenges and what organizations need to look out for. And, of course, the metrics that one needs to be keeping a very keen eye on to make sure that you're not deviating from the purpose and the investment case.
This session is not over unless I ask you for your specific advice on the practitioner community. We heard about what organizations can do, but our viewership has a lot of individual practitioners, not just from India but around the globe.
So based on your decades of experience and expertise, could you lay out some recommendations on what our practitioner community should focus on, whether it's on the learning side, skill building, or which aspect of AI they should sharpen their knives on? So what is your advice to them?
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - My advice would be that, basically, a testing practitioner's goal should be very clear: how do we roll out high-quality software that customers love, right? Now there are two parts to it. One is, how do we release software that has fewer bugs? I will not say bug-free; I don't think any software is bug-free.
The second part of it: when I was working for EA Sports, which is a gaming company, one of the largest gaming companies in the world, we used to have something called an X factor. How do we wow the customer? Because there's no such thing as bug-free gaming, right? We have not heard that term.
So, the only thing is, you're going to buy the game if you love it, if you're addicted to the game. So how do we create that addiction? And that's where Apple has nailed it, right? We see long queues whenever Apple launches a new product, you know, serpentine queues, right?
Because their QA team, their product team, and their development team all work together to create that addiction, right? Starting from the way the product is unpacked, even the unboxing is tested out, to the actual product coming out and working seamlessly without any hassles; that's what it's all about.
So I would say to all the test practitioners: the technical side is definitely one part of it, but don't lose your DNA as a quality person, right? Think about it: you have to be the consumer of the same software. If you were consuming the software that you are testing, how would you raise the bar? Do whatever it requires, invent something like the next level of AI or a new model, a new way that nobody has ever envisioned, and constantly challenge yourself.
Don't get into a complacency mode, thinking everything is OK. Constantly learn, you know, try out other software, see what is happening. You know, we have seen DeepSeek coming into the picture. Try it out, compare it, and figure out how you can leverage it. Do we really need it?
OK, so I think this constant learning, this constant keeping up with what is happening, and that constant eye for quality are very, very important for a quality person to be the best person to ensure that the customer gets the best quality product, right?
So I think that's one side: don't lose your quality DNA. And the second side is, don't lose your technical hat, no matter how many years of experience you have, right? Whether you are on the verge of retirement or whether you have just passed out of grad school, remain a technical person, constantly research, and, I know a lot of organizations will hate me for saying this, don't just spend your entire 70 hours or 90 hours, or however many hours, on your work.
Keep aside a certain time every week that you can spend for your own benefit, for your own learning, for your own upgradation. Out of 100 experiments, 90 might fail and 10 might pass, but those 10 might be game changers, might be patentable, might transform your company, might give you that insight which you probably wouldn't have received even in 10 years, right?
So, the stay hungry, stay foolish kind of thing; well, don't stay foolish, stay smart, stay hungry, and constantly learn, constantly try to adapt, apply, and experiment, and see what works for you, for your organization, for your product, because it's not going to be the same for everyone. And as you keep doing it, you will see that you have already raised your bar and you will be an expert in your field.
Murli Mohan (MD & VP - APAC, LambdaTest) - That's golden advice for all our listeners tuning in to learn. I will also leave it here that, you know, should there be any areas that you feel you would like a little more detail on and, you know, you'd like to seek Saurabh's advice, feel free to drop us a text or speak to one of our representatives around the globe.
And I'll be happy to take that to Saurabh for an expanded answer. In any case, Saurabh, as we decided, you owe me a cup of filter coffee when I'm in Chennai next time. Thank you so much, Saurabh. This has been a great learning for me as well.
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Absolutely.
Murli Mohan (MD & VP - APAC, LambdaTest) - I have a whole book of notes, and I'm sure our listeners, our viewers, have had a session of great value as we promised in the beginning. Thank you again so much to all our listeners and all those who have tuned in, and look forward to seeing you in our next episode. Until then, thank you. Stay healthy, take care. Thank you so much. Thank you, Saurabh!
Saurabh Mitra (Vice President & Head of Global Testing, Ramco Systems) - Thank you, Murli. Thanks for giving me the opportunity to speak. I really appreciate that. It was great to connect with you guys and great to share my experience. Thanks a lot. Thank you. Bye.
Murli Mohan (MD & VP - APAC, LambdaTest) - Thank you. Bye!
Guest
Saurabh Mitra
Vice President & Global Head of Testing, Ramco Systems
Saurabh Mitra is a seasoned test engineering evangelist with over two decades of experience leading QA teams across multiple domains in startups and MNCs, including Yodlee, Amadeus, Electronic Arts, Oracle, Envestnet, and Ramco in the USA, Europe, and India. He has expertise in functional testing, test automation, performance engineering, and tool development. A certified PMP and CSM, he has also filed two solo U.S. patents on automation frameworks. Passionate about advancing test engineering, Saurabh frequently speaks at international conferences, sharing insights on automation and quality assurance to drive innovation in the industry.
Host
Murli Mohan
MD & VP - APAC, LambdaTest
Murli Mohan is a seasoned technology leader with over 25 years of experience in scaling B2B SaaS companies across India, the Middle East, and Africa. As the Managing Director & Vice President - APAC at LambdaTest, he drives go-to-market strategy, revenue growth, and adoption of software testing solutions across the Asia Pacific region. Previously, he led businesses at CoreStack and UiPath, shaping multi-cloud governance and automation strategies. With expertise spanning computing evolution, sales, and operations, Murli is known for his transparent leadership, disciplined execution, and ability to drive business success in competitive technology markets.