XP Series Webinar

Quality First: Implementing Shift-Left Testing for Future-Ready Products

April 3, 2025

48 Mins

Srinivasan Santhanam (Guest)

Senior Principal Technologist, Singapore Airlines

Mudit Singh (Host)

VP of Product & Growth, LambdaTest

The Full Transcript

Mudit Singh (VP of Product & Growth, LambdaTest) - Hello, everyone. Welcome to another episode of the LambdaTest XP Podcast Series. Hope you are having a great week and a great day wherever you are. If you are watching for the first time, the XP Series is our way of connecting industry leaders with community folks like you.

Creating a dialogue and helping people understand how large organizations do their quality assurance and engineering is the core aim behind the XP Series. We have been doing this for more than a year now, with a lot of episodes, so feel free to check out our past episodes as well.

Joining us today is Srinivasan. He is a Senior Principal Technologist at the Singapore Airlines group of companies, not just Singapore Airlines; I just realized how many other responsibilities he has on his shoulders as well. He has a combined experience of more than 25 years.

For the past five and a half years, he has been working with Singapore Airlines. He is an expert in quality assurance and, at the same time, in accelerating development processes, and that is exactly the area of interest for today's conversation: how exactly is shift-left testing happening in the world of AI-first development, and how are companies like Singapore Airlines accelerating their product roadmaps with all of these technologies?

First of all, AI-based technologies are, again, not a new thing. If you have been following me on LinkedIn, you will have seen that I've been traveling across the US, meeting a lot of folks, and one very cool insight I had is that this year everybody has either adopted or is planning to adopt AI-first technologies.

The discussion that was happening in 2024 was very theoretical in nature. People were still exploring, InfoSec reviews were still happening, and people were still figuring out how to add AI-based technologies. But this year, people have started to adopt. A lot of cool technologies are part of the developer ecosystem, and people are using them to accelerate their development velocity.

And shift-left testing, again, is not a new topic; people have been practicing it for some time. But the intersection of the two, accelerating testing with AI-first development and bringing shift-left testing into that ecosystem, creates new challenges and a new paradigm, which definitely has to be discussed. And Srinivasan will be helping us out with that.

So Srinivasan, first of all, thanks for taking the time to join us in the XP Series today and share your valuable insights. I'm really eager to dive into the conversation. My first point of discussion would be: how exactly is AI-first product development accelerating development processes, and what are the biggest challenges it is creating today, specifically across quality assurance?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Mudit, thank you so much for the intro, and thank you to you and LambdaTest for having me on this call. It's a pleasure to interact with the audience at any opportunity, so thanks so much for that. OK, let's dive into your question.

So I'll take it in two parts. First, how is AI-driven or AI-centric development accelerating some of the outcomes of the product development lifecycle? Second, we can touch upon, while it accelerates, what are some of the challenges we typically see that we need to take into account and counter as we go along. We are at an interesting point in time in the software engineering lifecycle.

A little over two years back, ChatGPT was introduced, very much as a conversational model. If we really look at where to place it on Gartner's hype cycle, we have gone past the peak of inflated expectations.

That means we are moving much closer toward reality, or the slope of enlightenment, as they call it, right? So we are seeing more and more practical applications and use cases, and more value coming out of this whole AI-centric product development. So the question is, what is it doing to accelerate time to market on the product side? When I say time to market, that includes both agility and quality; there's no point shipping a product that users have issues with.

Both these dimensions are something we need to take into account. So if we just look at the product level, or something like the SDLC, the product development lifecycle itself, the first phase is ideation. What is happening with ideation? A lot of interesting things.

People are ideating using AI as their assistant as well. A lot of these things used to happen around whiteboards, stickies, etc. While this is still happening, one little change that I have noticed is that these are being recorded in digital formats. The reason being, if you record that, then there is a very good opportunity for us to just feed that into AI to just generate those transcripts, generate lateral ideas.

You can keep asking: this is the discussion, what requirements do you think are relevant? This is our customer base, is it relevant to the discussion? So right from ideation, requirements, and refinement, everything is getting accelerated. The core of the acceleration, where we have seen the most impact to start with, is the software engineering piece: design, development, and testing.

This is where a lot of acceleration is happening. So where is acceleration happening in design? Take UX and UI, for example: designers give you a wireframe. Previously, developers would take that and try to work out the CSS, write the code, et cetera, to match the expectations of the design.

All of that will probably go into history, either now or in the future. The reason is, you can just throw in a wireframe and say, OK, get me code that matches this. Some of the commercial tools now offer design-to-code: you have a wireframe, here is the code, directly given to you.

So that's it, there you go, you get started. OK, after that, what happens in the development lifecycle? You have these coding assistants. A lot of them are on the market right now, and quite a good number of models have been fine-tuned for engineering as well.

So you can combine these two. Previously, in the chat-and-RAG kind of mode of AI, we would just ask, OK, generate me this, this is my requirement, please give me code, and then copy-paste it into the IDE. All of that is also going into history, because agent mode is now available within the IDE for some of these commercial tools as well.

What that means is, instead of giving it just a story, we can give it a story plus an implementation plan and say, go ahead and start writing the code, start writing the test cases for this, and then come back and check with me. So now we are seeing another level of acceleration through the agentic process that's getting embedded in the IDE as well, right?

So you are now getting better unit tests and better test cases, because, of course, how you feed your input is a very important aspect; we'll talk a little bit about prompt engineering later on. But as you fine-tune these things, the cycle of development and testing, with sufficient guardrails, will accelerate.

Otherwise, you will get a bunch of code that you don't understand and cannot work with. That's the caution, and that's the challenge part as well: how do you develop code that is ready and fit for production use, is secure, and complies with your organization's standards?

All of that goes into it. Of course, deployment is also accelerating, because today most of these are cloud-native or containerized applications, which means you will be generating infrastructure as code or generating configurations. All of that can be generated using either industry standards or, if your organization has its own standards, those can be leveraged as well, right?

So in short, AI is slotting in nicely across all these phases of the product development lifecycle and accelerating them for you. That's the first part. Then you asked me, OK, what are the challenges, right?

Mudit Singh (VP of Product & Growth, LambdaTest) - Yep, right, so let me unpack what we discussed so far, and that will also give a little bit of context to the challenges. As you mentioned, AI is helping us accelerate across all parts of the software development lifecycle, but there is a catch in that, right? Why? Because, for example, AI tools help designers, and they only have to design one feature or one sprint.

Developers have to develop that one sprint, and even the deployment covers that one sprint, but the life of a tester is a little more difficult. Why? Because they don't just have to test that one sprint; they also have to test all the rest of the sprints to ensure that everything still works.

So if the life of a developer is being accelerated, or their productivity increased, through AI-first development technologies, something similar must come for the testing part of the cycle as well, right? At least, that is what I feel is now another layer of challenge in the overall shift-left ecosystem.

So we talked about AI-first development accelerating the lifecycle, but then a little bit of a tweak is required for the testing side of things. We also discussed unit testing, but end-to-end testing is a challenge of its own. So how is AI helping to overcome these challenges, and before it overcomes them, what new challenges are coming into the ecosystem that we have never seen before?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Fair point, right? So this is where what we call a single context is going to help. If you look at how testing has been done in the past, people would have written their own frameworks, and the test code itself is independent of the source code in most organizations, right? If we keep that paradigm, the overall acceleration is very much reduced. So we have to get to the point where everybody works on a single source of truth.

In other words, a monorepo kind of scenario, right? So when we have the requirements and we generate the code, our action should be such that a unit test is also generated as part of that, and an end-to-end test can also be generated. You can ask an assistant, OK, give me a Playwright test for this. Of course, if it's the web, that's the fastest and easiest way to get started.
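
For illustration only, a generated Playwright check of the kind described here might look like the sketch below; the URL, locators, and assertions are hypothetical placeholders, not anything from the discussion.

```ts
// Illustrative sketch: the page URL, locators, and expected results are placeholders.
import { test, expect } from '@playwright/test';

test('booking search shows results for a valid route', async ({ page }) => {
  // Navigate to a (hypothetical) booking page.
  await page.goto('https://example.com/booking');

  // Fill in the search form using accessible locators.
  await page.getByLabel('From').fill('SIN');
  await page.getByLabel('To').fill('NRT');
  await page.getByRole('button', { name: 'Search flights' }).click();

  // Assert that at least one result card is rendered.
  await expect(page.getByTestId('flight-result').first()).toBeVisible();
});
```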

Or you can say, I'm using WebDriver or Selenium, and this is my framework. That is the beauty of this: if you have custom frameworks, you can say, OK, look at my framework here and generate code in line with that. Otherwise it will generate vanilla Selenium code and you will have to integrate that into your framework yourself.

So that becomes difficult as well, right? And one of the paradigm shifts that could happen, which I'm already seeing become more of a reality, is that maintaining all these heavyweight frameworks would give way to a combined code base, with end-to-end tests also generated as part of this whole AI-centric process.

So the product maintains that code base for both development and testing, everything together. We have to get into that mindset, because that is when, if something changes, you can tell AI, go and update the code, go and update the tests. That is when the acceleration comes. If you keep it separate, then AI does something to the code and you have to look at it and say, now I have to go and update this test myself.

So if you want even a little bit of an autonomous cycle, your automation and your testing should be part and parcel of this overall AI-centric development cycle, right? That's where the challenge is: how do testers, or engineers in general, get used to it? That's where skills also play a very important role, and keeping up to date with market developments is a very important skill.

So from being a developer or a tester, you have to graduate to being an AI engineer, meaning you keep asking AI to do the task until you perfect it, so that when you want to change something, you just tell AI to go and change it, and it changes everything. That's one fundamental mindset change people have to make. There's a temptation for me to write the code myself, but I don't. Even if the code is a little imperfect, I keep asking, and I keep perfecting it with patience and improved prompt engineering to get a better output.

That's one challenge. The other challenge I mentioned during code generation as well: if you do not have sufficient guardrails, your code could be anything. It will work, but it won't be of enterprise quality, or it won't meet your organization's security guidelines. That is where you need to bring all of that into the context, and that is very important if you think of it as part of the QA role as well.

It's very important to bring in that context. And the last thing I want to say, because this might be something people drop their guard on: while AI can generate very good test cases, and if you say, include negative testing, include edge cases, it does produce that, I still don't see it replacing exploratory testing as a whole.

So while it is accelerating things, please don't drop your guard on that part. Exploratory testing is a very important piece; keep it alive and going. That's the value of humans in the whole loop.

Mudit Singh (VP of Product & Growth, LambdaTest) - Definitely, that makes sense. And again, coming to the prompt engineering side, a very interesting thing I heard during my recent US travels is that humans are also not very good at writing prompts; you need AI to write those prompts as well. And in fact, that's something a lot of companies are doing right now.

So they are combining high-level prompts and expanding them into more articulated prompts that can be used with the final AI. But there is a guardrail element involved: whatever you are building also has to be tested before it gets fed to the AI. So another layer of quality assurance comes into place, where you now also have to do quality assurance of the prompts themselves, to make sure that whatever prompt you are writing is also up to a quality standard, right?
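
As a loose illustration of what "quality assurance of prompts" could mean in practice, one option is to treat the expanded prompt like any other artifact and assert that it carries the required guardrail clauses before it goes to the model. Everything in this sketch, the expandPrompt helper and the clauses, is hypothetical.

```ts
// Hypothetical sketch: expandPrompt() and the guardrail clauses are placeholders, not a real API.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Stand-in for whatever service expands a short prompt into a detailed, guarded one.
async function expandPrompt(highLevel: string): Promise<string> {
  // In a real setup this would call the prompt-expansion model or service.
  return `${highLevel}. Follow the coding standards in the standards repository. ` +
         `Do not include secrets or credentials. Include negative and edge-case tests.`;
}

test('expanded prompt carries the mandatory guardrail clauses', async () => {
  const expanded = (await expandPrompt('Generate unit tests for the payment module')).toLowerCase();

  // The exact clauses are team-specific; these are illustrative placeholders.
  for (const clause of ['coding standards', 'do not include secrets', 'negative and edge-case tests']) {
    assert.ok(expanded.includes(clause), `Expanded prompt is missing guardrail: "${clause}"`);
  }
});
```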

So that's another layer in the quality assurance space. Coming back, and shifting a little bit to the process side of things: now we are talking about people having accelerated their development processes, and of course AI enables them to do that. I would wager that leaders will prioritize more aggressive timelines precisely because there is a new technology to help them meet those timelines.

But then sometimes people lose focus on the quality aspect because of this accelerated speed. This is not a new thing; it has happened again and again. New technology enables aggressive timelines, people lose focus because of the speed, and then that balance has to come back. So, from your experience, how can leaders prioritize meeting aggressive timelines while ensuring that quality is still being met?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Okay, yeah, thanks for that. You could call this a challenge even before AI; it was a bit of a challenge then, and it has become more of one now because a lot of output is being generated using AI, right? So this brings us back to fundamental first principles: continuous testing is the only way to keep up with the volume of work that you're doing.

A few things. When I talk about continuous testing, I mean testing at every phase and embedding quality into every phase of the lifecycle. Take a requirement, for example: use AI to refine it thoroughly and get the acceptance criteria very clear, so that we get the work done right the first time as much as possible. Where you can recover some of the time to cater to this increased velocity is by avoiding rework, right?

So build quality gates into every stage of the lifecycle, so that an artifact or a process doesn't move on until it meets the quality gate. That's something we have to be very clear about. As much as possible, have automation and other checks in your pipelines. You have to build quality into every phase of the lifecycle.

For example, if testing in an organization is treated as an end-of-the-lifecycle activity, that's not good enough in this kind of high-velocity mode, right? You have to engage the developers as quality gatekeepers as well. For example, as and when they write code, while AI is also writing code, you can do a few things.

You can put in your quality requirements as a guardrail and say, okay, use this as context and check whether the code already meets it. That's one part, the AI part. But do not give up on the traditional tools: you have linting tools, you have SAST tools. The beauty now is that to ensure the quality of the code coming out of developers, even at very high velocity, you can combine AI with the output of the SAST and linting tools.

Because now, in agent mode, AI can read your terminal very well. It can understand the errors that are coming up, and you can say, okay, go ahead and fix these, and make the code compliant with the standard rules you're applying. So that's that part. Then unit test cases are also getting generated, right?

This is where it gets interesting. Unit tests are generated, but how do you know the quality of those unit tests? Sure, you can run them, and they run. That is where the testers, or the developers, or engineers in general, have to get creative as well.

So I will thoroughly test my unit tests using mutation testing, to see whether the unit tests generated by AI are of sufficient quality to catch my regressions, because until you get to a state where the regression suite is of high quality, you cannot carry on in that kind of mode. Maybe the MVP will go very well, but after that things will slow down. So it's very important that the unit tests, and also the end-to-end tests, can withstand fault seeding and mutation testing.

That gives you the confidence that even if something, AI or human, breaks the application, my testing is going to catch it, right? That's one part. The other part is, as much as possible, try to incorporate as much testing into the pipelines as possible. DAST is still time-consuming, so the approach could be that DAST runs every day against the new build offline, off the pipeline; that's absolutely okay. IAST, okay, maybe similar things.

You can do similar things, but one very important thing is to have some sanity test cases as part of the pipeline. Most teams, even if they have automation, run it at the end of the day or once a week on the main branch, and that's probably not good enough these days. Have a subset of sanity tests that run as part of the pipeline and break the build if the core flows don't pass.
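
One common way to implement that sanity subset, sketched here with Playwright (the flow, URL, and tag convention are illustrative), is to tag a handful of critical-path tests and run only those on every build, leaving the full regression to an offline job.

```ts
// Illustrative sketch: the flow, URL, and credentials are placeholders.
import { test, expect } from '@playwright/test';

// Convention: put "@smoke" in the title so the pipeline can select just these tests,
// e.g. `npx playwright test --grep @smoke` on every commit, with the full suite run offline.
test('login lands on dashboard @smoke', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('not-a-real-password');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page).toHaveURL(/dashboard/);
});
```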

So, a few things, if you want me to summarize, and I might repeat myself: start building quality at every stage of the lifecycle, start introducing automation at every stage of the lifecycle, and have confidence that the guardrails and the automated regression you have in place are good enough even as you generate that volume of code. Those are some of the things to take care of.

Mudit Singh (VP of Product & Growth, LambdaTest) - Awesome, that's pretty much gold. In fact, there's a lot to unpack just from this, and I think it opens the way for at least three more questions from this part alone. Let's start with three things. We talked about code quality, or rather, the better term would be coding standards. So we talked about coding standards, test coverage, and automation of the automation pipeline.

Let's take coding standards first. Again, AI is helping us create more code, but that means the coding-standards side of quality assurance comes even more into play, because it could be garbage in, garbage out, right? That's a phrase people use a lot. And there has never been a better time than today to have rigorous coding standards, or industry coding standards, just so that AI does not mess everything up, right?

So which coding standards should we really worry about or focus on, that's one. And how do enterprises evolve as AI evolves, to ensure that their coding standards stay in line with industry standards as well?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Okay, a very good question, thanks for asking that. If you look at standards as a whole, most organizations keep them in Word documents or binary files, in PDFs and other formats, right? When we move to AI-centric development, we should start thinking that the format everything should be in is whatever AI can understand best.

So, before we even define what those standards are, you have to define how you are going to make those standards available. It's like getting the metadata right, right? First principle: put everything in a Markdown file, because that is what AI is able to read, in my opinion, very well and make correct decisions from, and it's also easier to feed in as context.

A lot of legacy could be sitting in Word documents or binary formats. Your architecture diagrams could be stored as images, or in some of the common, popular diagramming platforms people have used. Today, some coding assistants can understand an image: if you throw one in and say, this is my architecture, what are the different components, it will try to answer that. But then you want to make some adjustments.

And it's very hard to make those adjustments, because it will not easily generate an updated image. So my second point is: have everything as code, or in some readable text format. For example, how do I do that for my diagrams? There is the PlantUML format, so I convert all my diagrams to PlantUML. The reason I'm saying this, from my own experience working with this whole setup, is that I have everything either in a Markdown file or in PlantUML format for the diagrams.

It is easier to pull that into context directly from the repository, either to generate code or to generate tests. That's the first point. Second, if I want to make any changes to the architecture, I just have to tell it, go and add this between this and this and update my PlantUML. That's all. In two minutes my architecture diagram is done, I can visualize the PlantUML in any of the UIs, and my diagram is up to date. That is the key.

The third key part is that, on the metadata front, if you have these two things, the documentation that the AI generates is going to stay up to date as well. There is no manual intervention, no manual overhead of, okay, it has generated something, let me copy-paste it into my central repository of coding standards; nothing like that.

So let's look at it this way. In the new age, if you want to have coding standards, have a code repository, just a code repository where all your standards are in Markdown files, whether they are your coding standards, security standards, or architecture standards; get them into this central repository.

Then what could happen across the enterprise is that everybody links this repository to their code repository and tells AI, hey, my coding standards are in this repo, go look at the reference and then generate code in my workspace. That is the most important preparatory work you have to do before you can say, okay, now I'm ready to put in the guardrails and feed the standards to the enterprise, right?

Certain principles don't change. Whether you have AI or no AI, certain coding standards, like the DRY principle, clean-code principles, or even your testing principles, are not going to change at all. Those can be the baseline for the organization. But there are certain standards, rules, and guidelines which will vary with technology. For example, if you are using React, I would probably do one thing.

If you are using Java, I might have a certain set of rules. Those are things you need to segregate, so that people using that technology can apply them on top of the base coding and assurance standards, and then derive that level of compliance. That's one thing. The second and most important thing: even if you have all this, do not write standards as long sentences that are open to interpretation and very ambiguous. If you feed that to AI, the result will not be very optimal.

Let's say, as an example, you want to check the variables. You just say: ensure all variables are in camelCase in the source files. That's all. A simple, very direct, action-oriented sentence. So you will have to start looking at your coding standards and begin to rewrite them that way, which makes them actionable.

Start with the verb and say, do this, do this, do this, in very simple sentences. It might end up as a number of separate items, you have to break it up, but that's fine. That is when you are fully leveraging the power of AI. And the last thing I want to add is: don't throw away the conventional wisdom that comes with your traditional tools.

For example, some of the code review tools have well-built rules. Kindly include them: you can include them as linting tools, or you can include them as context if you can export the rules. All of these are possibilities. And when you have this in place, it becomes very clear: anybody developing code just clones this repository, uses it as guardrails, and starts developing.
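
To connect the "camelCase" example above with the point about traditional linting tools, here is a minimal sketch of the same rule expressed as an ESLint flat-config entry; the file glob, options, and severity are illustrative choices, and a real TypeScript setup would also wire in the typescript-eslint parser.

```ts
// eslint.config.ts — minimal sketch; glob, options, and severity are team choices.
// A real TypeScript project would also register the typescript-eslint parser/plugin.
export default [
  {
    files: ['src/**/*.ts'],
    rules: {
      // ESLint's built-in rule: flags identifiers that are not camelCase.
      camelcase: ['error', { properties: 'always' }],
    },
  },
];
```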

Mudit Singh (VP of Product & Growth, LambdaTest) - Awesome, awesome. That's really great advice, something I hadn't thought of before: create Markdown files of all the standards, which become context for the AI, valid guardrails, whether industry-wide, team-wide, or even company-wide, with specific rules that any AI-generated code has to follow.

And I think this is something that can be added not just at a team level but at a horizontal level, so that for all the AI-generated code being created, this context comes first, no matter who is using it, right? That also makes it easier for teams to implement that guardrail at a higher level.

Now coming to the second point, test coverage, which we were discussing earlier as well. How do you increase test coverage using AI-based technologies without, again, significantly slowing down the development processes?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Okay, with AI assistants, you don't even have to write a prompt to get it. It's like a context menu these days: go, click, generate, that's all. It takes even the prompt out of it; you just have to select the code you want. That is how fast it is, right? Great, well and good. You get the unit tests, you run them, they run fine.

But as I said, how good are your unit tests? I've seen this a lot in my past experience. There's a great amount of test coverage, but the quality is still bad. Why? Developers say, oh, I have 70% code coverage. Of course, 100% is something we'll never achieve; we aim for 70 to 80 as a good practice. This is great.

So let's assume there's 70% code coverage at the statement level for the tests. Great, but then the overall product is still bad. Of course, there are many other dimensions, but there are also coding errors and bugs within this. How can that happen? For a very simple reason: the automated tests, the tests that we have written, are not good enough and are not catching the bugs, right?

This brings me to an important topic: while test coverage is very important, mutation coverage is also something we should take into account alongside it. These two parameters have to be looked at in unison. My test coverage could be 70% while my mutation coverage is 20%, which literally means that if I introduce a bug in the code, the unit tests are not catching it at all.
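
A tiny, contrived illustration of the gap being described: the test below executes every line of the function, so statement coverage looks perfect, yet without the boundary assertion a mutant that flips the comparison operator would survive. The function, rule, and values are hypothetical.

```ts
// Contrived illustration: full line coverage alone does not kill mutants.
import { test } from 'node:test';
import assert from 'node:assert/strict';

// Hypothetical business rule: 10,000 points or more qualifies for an upgrade.
function isEligibleForUpgrade(points: number): boolean {
  return points >= 10_000;
}

test('upgrade eligibility', () => {
  // This call executes every line, so statement coverage is already 100%...
  assert.equal(isEligibleForUpgrade(50_000), true);

  // ...yet a mutant flipping ">=" to ">" would still pass without the next check.
  // Asserting the boundary value is what kills that mutant.
  assert.equal(isEligibleForUpgrade(10_000), true);
});
```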

That is what a 20% mutation score says: only 20% of the time is the mutant killed. The rest of the time, the tests don't even notice that an error is sitting in the program, right? The fun part is that you can get to high coverage very quickly these days with all these coding assistants. And the good part is that there are tools like Stryker which can automate your mutation tests as well.

You give it a code base, and it will try to fault-seed it and run your tests to see if they really catch the faults or not. With that output, you can ask AI: okay, the tests are not catching this mutant, look at this, what more tests do I need, or what improved assertion do I need in the existing test so that my mutation score gets better, right? So while it has become easier to get more coverage, it has become more challenging to validate that the coverage is actually good enough.

Mudit Singh (VP of Product & Growth, LambdaTest) - So how does one go about strategizing this mutation testing? In which areas of the code or the tests should we introduce mutations first, before moving on to other parts? Say I'm new to it and just want to get started with mutation testing; how do I get started?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - You could get started with it manually as well. But coming back to the question of where to start: it's a somewhat intensive process. I've tried making it part of the pipeline, and it doesn't work well, because some of these automated tools take quite a bit of time on a large code base to generate output, and there can be some false positives as well.

You have to be very aware of that. For a small code base, yes, it can work, but for a code base that has been around for some time, putting this into your build process is a little difficult and cumbersome. And the whole exercise takes some effort from the developers as well as the testers, to look at the results and see what improvements they need to make.

So take a very risk-based approach. Introduce faults in the areas you think are critical and see whether the tests catch them. You can prioritize by the criticality of the functionality, and also by the type of system you have. For example, if it's external-facing, if it's a system holding PII data, or if it's something consumed by a B2B process, et cetera, which is very critical.

Those are some of the areas where you can get started. You can probably start small: deliberately introduce a bug in the code, you don't even need any tools, just plant a bug, run the unit tests, and see whether it gets detected. Then slowly you can start to use tools; free open-source tools are available.

There could be some commercial tools in this space as well, so the organization may choose to go into the higher-order things and start investing in those tools. But if you want a quality-assurance equivalent for mutation tests: they are your negative tests. When everything is right, the test passes; but when a fault is introduced, does it fail? That is what mutation testing is trying to catch, actually.
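
As one concrete way to get started with an open-source tool, a minimal StrykerJS configuration might look like the sketch below; the globs, test runner, and thresholds are assumptions to adapt, not prescriptions from the discussion.

```ts
// stryker.config.mjs — a minimal sketch; adjust globs, runner, and thresholds to your project.
// Run with: npx stryker run  (assumes the matching Stryker test-runner plugin is installed)
export default {
  mutate: ['src/**/*.ts', '!src/**/*.spec.ts'],  // what to mutate (exclude the tests themselves)
  testRunner: 'jest',                            // assumes a Jest-based unit test suite
  coverageAnalysis: 'perTest',                   // only rerun the tests that cover each mutant
  reporters: ['clear-text', 'html', 'progress'],
  thresholds: { high: 80, low: 60, break: 50 },  // fail the run if the mutation score drops below 50
};
```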

Mudit Singh (VP of Product & Growth, LambdaTest) - Thanks, got it, that's awesome. And coming back to the last point we were unpacking earlier: continuous testing at every stage. In fact, including testing, whether automation tests, unit tests, or any other type of tests, at every stage of the pipeline. And as you were mentioning, it's not just about the end of the day or the end of the week; ideally, after every commit, we should be running those tests.

And this is also pretty interesting for us, because when we started to build our HyperExecute platform internally here at LambdaTest, that was one of the principles we wanted to target. We did a big survey with nearly 1,600-plus enterprises, and what we learned was that more than 89% of those organizations were using CI/CD tools, but only 46% of them were triggering their automation tests through CI/CD.

And here I'm talking about organizations that all had automation tests in place, yet only 46% of them were triggering those tests through the CI/CD pipeline. So, in effect, the pipeline was just developers and DevOps folks building and deploying, and the testing happened in between as a separate, manually triggered process, where somebody in an IDE or some cloud-based platform was kicking off the tests. The automation of automation was not there.

That helped us build the HyperExecute platform more as a CI layer on top. But now that AI is adding more productivity, how important does this become, how important is the automation-of-automation part, and how soon should companies start thinking and building with an AI-first, or rather a testing-first, mentality?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - As I said, code generation has become pretty quick, and it is also easy to generate from various sources; generating the source code is not an issue. How good it is becomes the fundamental question. I think the concept of server-side execution, which I believe LambdaTest has, helps here: I don't have to interact with the client-server round trip for every element call, et cetera.

You just throw it in, it executes on the server, and it comes back. This is great, okay? So that is one part. The second part, you are very right: only 40-plus percent of enterprises have integrated that. And even where they have, it's far from optimal: they are not executing the tests in parallel, they are not getting feedback in a timely manner, et cetera, right?

So one of the things you can do, let's say for mobile as an example: if you really want to speed up, hitting real devices for everything is very time-consuming, so you can run the executions on simulators and emulators, which give you pretty quick feedback. These are the things we need to think about. It may not be ideal, but here is the other thing: this is where you can also employ a concept called service virtualization extremely well.

We are trying to validate our application, so don't get bogged down with, oh, this end system is not available, I can't test; or, my payment gateway cannot handle more than 20 users in UAT, how will I performance-test this? None of these are excuses in today's world, right? You have the virtualization concept, and there are virtualization platforms that can support a large number of TPS, as well as parallel tests.

So what you need to do is: do not run every regression against the target UAT or pre-prod environment; it's not required. Think of it this way: where can I include simulators and emulators, and where can I employ virtualization, so that I can quickly validate my component along the pipeline and get a very good insight into how good the build is? These are some of the accelerators people can think about.
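
One lightweight way to apply that idea in UI-level tests, sketched here with Playwright's network interception as a stand-in for a full service-virtualization platform, is to stub the dependency at the network layer so the flow can be validated without the real payment gateway. The route pattern and payload are illustrative.

```ts
// Illustrative sketch: the endpoint pattern, page, and response body are placeholders.
import { test, expect } from '@playwright/test';

test('checkout succeeds against a virtualized payment gateway', async ({ page }) => {
  // Intercept calls to a (hypothetical) payment API and return a canned response,
  // so the test does not depend on the real gateway or its UAT capacity limits.
  await page.route('**/api/payments/**', (route) =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ status: 'AUTHORIZED', reference: 'TEST-REF-001' }),
    }),
  );

  await page.goto('https://example.com/checkout');
  await page.getByRole('button', { name: 'Pay now' }).click();
  await expect(page.getByText('Payment confirmed')).toBeVisible();
});
```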

And the other thing: if you have a large regression suite, say 800 automated tests, don't throw everything into the pipeline. If you want to run all of it, let it trigger as an offline job so that you don't block the developers' subsequent builds. You can still break the build if that offline run fails; that's absolutely fine. But please don't put it all inline, because that will disrupt the overall development velocity.

Mudit Singh (VP of Product & Growth, LambdaTest) - Awesome, awesome. So we are just about hitting time, and we have what I'll call the last question to discuss. It's a very broad one, and I won't just call it a question; it's more about your outlook on the next phase of AI-first development.

We talked about a lot of things: what AI is doing right now, and practical use cases that people can apply today. But what's in store for us in the future? What more is happening in the world that cutting-edge teams like yours are exploring?

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Okay, I'll scope it to quality assurance alone so that we can keep this focused. One important thing people have to realize is that, with AI getting better and better, manual and mundane tasks will go away. Whether in testing or in other areas, expect them to give way, to be disrupted or superseded by AI.

Mudit Singh (VP of Product & Growth, LambdaTest) - Yep.

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - That's something we need to be very, very clear about, right? That's one trend I'm seeing: the simple testing that happens today is giving way to automation or other mechanisms to validate those things. Now let's look at test automation. How is it changing? One part is generating code, et cetera, which is fine.

It is also, to some extent, generating automated tests, which is fine, but that is not the truly disruptive part. The disruption that is coming, or is already happening, is that AI has lowered the bar for automation testers so much that, in the next six months or so, even business users can become automation testers. The reason I say this with confidence is that today I can use natural language to drive automation.

I don't need any scripts. It's still a little rough in some areas, but it is a concept that is beginning to work, right? And this is one trend where, along with you, some of your competitors are also launching something similar. This is a disruptive piece. I'm not sounding an alarm for manual testers; manual testers, please graduate to expert and exploratory testing, where you can definitely add value. That's a good space to be in.

Mudit Singh (VP of Product & Growth, LambdaTest) - We have a product for that called KaneAI, very recently we launched that. Yeah.

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Automation testers, don't just rely on testing; you have to graduate to being AI engineers. At some point, if not today, you will be pushed to make that move, because that is the direction the industry is heading. So what is happening is, along with my acceptance criteria, let's say I used to write some steps which I passed to an automation engineer to automate. That is not required anymore.

You just feed it to the platforms you have, and that drives both the browser and the mobile device: this is my input, just perform all the actions I specify, and here is your report. Great. What is also happening is, okay, it gives me some failed tests. Now, as a business person, I'm wondering: is it an error in the code, or is my test bad? You don't have to worry about it.

AI is giving you a lot of insights in that space as well. It looks at the failure and says, okay, this failed because this control didn't come up in a timely manner, or this icon is not visible. It is already generating all those insights, even for business users, or rather for developers, to look at and say, I know what the problem is. The self-healing capabilities are getting better and better, and execution driven by natural language is getting better and better.

So it has lowered the bar quite a bit, and in future, expect that scriptless, natural-language automation to take over your complete testing. I won't be surprised; it is just my prediction, but you can see the industry going in that direction. That's one part. The other part is that, based on historic defects, AI is going to give you a lot of insights.

Today, let's say I have 1,000 test cases. I blindly run everything for every regression because I don't have a mechanism to tell me what is most important to run, or whether a given area is impacted. Put AI in the middle, and it can say: if you are changing these classes, just run this 20% of the regression, that is good enough; if that passes, I have full confidence that the parts you have not touched don't need regression.

So one thing that is going to get faster and better is having AI choose the tests that need to be run for you. Of course, you keep those guardrails, but it is going to give you an initial subset and say, okay, this is good enough. There are also commercial tools coming up which can reduce your volume of tests and say, this is the impacted area, this subset is good enough for you to run. That's one.
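
As a rough sketch of the underlying idea, independent of any particular commercial or AI tool, change-based test selection boils down to mapping changed files to the tests that exercise them. Everything here, the dependency map, file names, and fallback rule, is hypothetical and hand-written for illustration.

```ts
// Hypothetical sketch of change-based test selection; the map and paths are illustrative.
import { execSync } from 'node:child_process';

// In practice this mapping would come from coverage data, static analysis, or a tool's
// impact model; here it is hand-written purely to show the shape of the idea.
const testsBySourceFile: Record<string, string[]> = {
  'src/booking/fareCalculator.ts': ['tests/fare.spec.ts', 'tests/checkout.e2e.ts'],
  'src/loyalty/points.ts': ['tests/loyalty.spec.ts'],
};

// Files changed on this branch compared with main.
const changedFiles = execSync('git diff --name-only origin/main...HEAD', { encoding: 'utf8' })
  .split('\n')
  .filter(Boolean);

// Select only the tests impacted by the change; fall back to the full suite when unsure.
const selected = new Set<string>();
let unknownChange = false;
for (const file of changedFiles) {
  const tests = testsBySourceFile[file];
  if (tests) tests.forEach((t) => selected.add(t));
  else if (file.startsWith('src/')) unknownChange = true;
}

console.log(unknownChange ? 'Unmapped source change: run full regression' : [...selected].join(' '));
```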

The second thing I'm seeing organizations like GitHub try is this: when you raise a defect, they are exploring autonomous ways of handling it. You don't even ask, I have a defect, where do I fix the code? You just drop the defect, and AI, like a monitored process, picks it up: a new defect has been created, what do I need to do, what does the defect say?

Where is the code? What is the bug? It tries to analyze it, and to an extent it tries to fix it as well. We are getting there; it's almost there, not exactly there, but we will get there eventually. So the process of raising defects, the process of writing test cases, AI has turned everything upside down, right? Which means the value that humans add moves to the very top end of that testing space, where intelligence still plays a vital part. So that is my take on quality assurance.

Mudit Singh (VP of Product & Growth, LambdaTest) - Yes, and a little bit more to add on this: the analysis part you mentioned, RCA or root cause analysis, understanding what defect is happening, why it is happening, and what can be done to fix it, becomes another good AI use case. And this is something that we have, again, built up. Another aspect, I'll say, is intelligence or test analytics.

So you run thousands and thousands of tests multiple times, but finding insights about what is breaking is the hard part. Having a red-and-green dashboard does not necessarily tell you what is breaking and why, but AI can help you analyze that.

And again, thank you for taking the time out and sharing your insights about AI-accelerated development and AI-accelerated quality assurance. I think we really learned a lot. To summarize, we talked about code quality, coding standards, test coverage, a very interesting discussion on mutation testing, and continuous testing.

And overall, we looked at how shift-left, or accelerated, quality assurance is happening in the AI space. I think we've learned a lot. Thank you again for taking the time out and joining us today for the XP Series.

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - Thank you so much, Mudit. It's a pleasure discussing these topics with you and the audience. So fantastic. Thank you so much for the opportunity.

Mudit Singh (VP of Product & Growth, LambdaTest) - Yeah, thank you. And also for validating a lot of concepts about what we are building. I do not want to make this a sales pitch, but a lot of the things we discussed are things we have been building here at LambdaTest: the KaneAI platform, the NLP-based testing capability we launched very recently, and the HyperExecute platform for continuous testing.

And the last part we were discussing is analytics. One of the first use cases of AI, I'd say not even GenAI, was the analytics aspect: finding insights from past test data, which is something we also have. And for the users who are listening, thank you for joining us so far. We have a lot of other episodes as well; do check them out.

If you want to keep yourself updated on these episodes and the insights we share regularly, feel free to hit that subscribe button. We are also looking for your feedback and your stories. If you have any feedback on what more you would like to hear, or any questions for us, feel free to drop them in the comments and we'll be happy to address them. Also, if you have a similar story to tell the audience,

do reach out to us. We'll be happy to give you a platform for your stories, and we hope to create another video pretty soon. We're looking forward to your support in all of this. Again, Srinivasan, thank you for taking the time out to join us today, and I'm looking forward to meeting you again.

Srinivasan Santhanam (Senior Principal Technologist, Singapore Airlines) - My pleasure. Thank you everyone.

Mudit Singh (VP of Product & Growth, LambdaTest) - Alright, have a great day everyone, bye-bye.

Guest

Srinivasan Santhanam

Senior Principal Technologist

Srinivasan Santhanam specializes in enhancing software engineering capabilities, improving developer experience, and accelerating software delivery for large enterprises. His current areas of interest include AI-driven SDLC, DevSecOps orchestrations, code quality & security, system testing, observability, and performance engineering.

Host

Mudit Singh

VP of Product & Growth, LambdaTest

Mudit is a seasoned marketer and growth expert, boasting over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.
