XP Series Webinar

Testing Tomorrow: Unravelling the AI in QA Beyond Automation

In this webinar, you'll discover the future of QA beyond automation as Pradeep Soundararajan explores the realm of AI in testing. Unravel the potential of AI to revolutionize QA practices beyond conventional automation.

Watch Now


Pradeep Soundararajan

Founder & CEO, Moolya & Bugasura


Pradeep Soundararajan helps organizations grow their customers through testing and by helping them prevent bugs and tech debt. He serves as CEO of Moolya (Services) and Bugasura (Product). He is the author of two books on testing, Buddha in Testing and Growth Driven Testing. He focuses on holistic growth and helps people around him grow holistically too.

Kavya

Director of Product Marketing, LambdaTest

At LambdaTest, Kavya leads various aspects, including Product Marketing, DevRel Marketing, Partnerships, Field Marketing, and Branding. Previously, Kavya has worked with Internshala, where she managed PR & Media, Social Media, Content, and marketing initiatives. Passionate about startups and software technology, Kavya excels in creating and executing marketing strategies.

The full transcript

Kavya (Director of Product Marketing, LambdaTest) - Hi, everyone. Welcome to another exciting session of the LambdaTest XP Series. Through XP Series, we dive into a world of insights and innovation, featuring renowned industry experts and business leaders in the testing and QA ecosystem.

I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's a pleasure to have you with us today. Before we begin, let me introduce you to our guest on today's show: Pradeep Soundararajan, Founder & CEO of Moolya and Bugasura.

Pradeep is a distinguished figure in the testing realm, renowned for his expertise in helping organizations prevent bugs and tech debt. With a focus on holistic growth, he's authored two influential books on testing, shaping the future of QA with his innovative approach.

In today's session, Pradeep will unravel the future of testing as he dives into AI-driven strategies reshaping the testing landscape and explores real-world examples of successful AI implementations.

Pradeep will also share insights into how AI is reshaping traditional methodologies and the challenges and opportunities it presents. Furthermore, he'll talk about context-driven testing and share exciting new developments in the testing space by Moolya and Bugasura.

So, let's jump into the heart of today's show. Pradeep, why don't you tell our audience more about yourself, and then we can jump into the questions.

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Well, I'm very happy to be here. Thanks to LambdaTest for continuously prioritizing the community. I have continuously seen posts from Asad about how both LambdaTest and he stand for the community of software testers.

And I'm really excited about being a part of the community initiative of LambdaTest. I think you've given a sufficient introduction to me, and I'd just like to leave it there. May my work speak the rest.

Kavya (Director of Product Marketing, LambdaTest) - Absolutely. I just want to tell the audience that Pradeep is a great speaker, so you are in for a ride. Jumping onto the very first question: how can AI-driven strategies enhance traditional software testing methodologies, and what are some key challenges associated with their implementation?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Great. I think all of us by now have realized that AI, in one form or another, has been here for a while. And with any technology that has been here for a while, people across the globe, especially software engineers, have tried bringing it in to solve some of the problems that we are continuously facing. By virtue of being engineers, we are problem solvers. So when we encounter a problem at our own work, we can't live with that obstacle.

So essentially, we have been solving problems: some of our customers' problems, some of our company's problems, and some of our own. While this journey has been going on, we have also continued to create problems, and sometimes we become complacent living with them.

So over the years, a lot of the innovation that has happened has also created a new set of problems. For instance, if there's a new framework that people are working on, the new framework creates both new opportunities and new problem statements. So when we look at it from that journey, new technologies are constantly emerging.

And as engineers, we are scouting for the real use case to apply them to. Like generative AI, which we are talking about today all over the world: what's the real use case? What's the life-changing use case?

I recently spoke at a conference, and I asked the audience what life-changing experience they're looking for from generative AI. And most of them realized that their personal lives are not going to change much; it's the way their work gets done that changes.

For instance, putting out content used to be difficult for a lot of people; today, AI has made it possible for us to craft nicer emails and put out nicer content. But all of those things have sufficient training material. In the testing space, in order to address a use case, we have been scouting for problems that are worthy of being solved using AI.

In my journey, what I've realized is that when we look at a use case to be solved through AI, 50% of the time it doesn't require AI at all. It could have been solved even before AI came into the picture.

And the second thing is the training data itself. We have an absolute lack of training data. What I have seen, and what we have done in the past, is to first create that training data; only then are we able to make some use of AI and address real use cases. So, to me, some problems have remained traditionally.

If you look at it, the regression testing problem has remained over the decades, even after great improvements in the infrastructure to run tests and in the efficiency of running those tests.

But the problem of regression testing is plaguing a lot of organizations even today. Now, if we dive deep into this, the problem may not just be about regression testing; the source of the problem could be something else.

And what I'm really excited about for the next few years is that people are going to use AI as an excuse to end up solving problems that should have been solved even before AI came into the picture.

Kavya (Director of Product Marketing, LambdaTest) - Great insights, Pradeep. I'm definitely going to ask our audience to think about what that interesting problem statement is, the life-changing use case that you mentioned, that they're looking to address using AI. Absolutely a great question for our audience to think about and come back on.

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Absolutely. I would also like the audience to go out onto LinkedIn and other places and post real use cases for AI. That's one; the second is to talk about what problems should have been solved even before AI came in, and what the source of those problems is. I think that would be a good, progressive conversation that our industry absolutely needs.

So, great point by you as well, asking the audience to talk about it and think it through.

Kavya (Director of Product Marketing, LambdaTest) - Yes, absolutely. So moving on to the next question that we have, how does context-driven testing differ from traditional testing approaches?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Where the world is today, using the words traditional and context-driven testing is a little bit controversial in itself, right?

So traditional testing focuses on looking at the requirements, assuming that the requirements are good enough and somebody has really crystallized them, and then deriving the tests out of them.

Whereas a context-driven testing approach says that the context depends on many things beyond just the requirement of what the product is supposed to be doing: there is a product context, a tech context, a customer context, a user context, a business context.

All of these contexts drive what tests should be done, when, and how much should be tested. So essentially, the difference is that the traditional approach assumes a tester needs a requirement document, or let's say a piece of code, to derive tests from.

But if you look at what customers are facing, their context is different from the testing that we are doing. Let's say we have a hundred percent code coverage and 90% test coverage, and yet customers are unable to do a web check-in, for example. I was really struggling with this web check-in thing, and I'm sure a lot of the audience listening to this will relate to it.

Those companies may have great test coverage, code coverage, and all of that, but as customers and users, we face a different set of challenges. That context not driving the testing is traditional testing. That context driving the testing, driving what tests to retire, what new tests to add, what kind of production data to use, is context-driven testing.

All of these things matter; that's the context-driven testing approach. Context-driven testing is actually just like driving a car in a place like Bangalore, right? There are so many things happening. I've driven both in the US and in India. In the US, if somebody jumps the red signal, there's a 3% chance that they will survive it.

Here it's not that way, so far at least, right? There are so many things happening, and being able to process that is the reason why the population of a country like India is still huge. It's because we're able to process so many different, varying contexts.

There might be an auto rickshaw that comes in between, or somebody jumping the red signal; all of these things continue to happen. Now, if we drive thinking that nobody is going to jump the red signal, that's the traditional approach to testing. If we think that many other things could come in between us, and we know how much to accelerate and when to brake, that's context-driven driving.

So context-driven driving and context-driven testing are not too different; I've just drawn a parallel between driving and testing.

Kavya (Director of Product Marketing, LambdaTest) - Absolutely a gem of an analogy, to be honest. What it brings to my mind is that the context-driven approach has more breadth; there are more variables involved in it. So yeah, very interesting to hear that. Adding on to that, what are the advantages of adopting a context-driven mindset in today's testing landscape?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Yes, so there has been a plethora of new startups in the space of big data. And there are so many more insights that businesses are deriving out of one simple transaction that we as customers do. For instance, several years ago, the founders of Flipkart, now owned by Walmart, said, oh, we have so much data that we know who is dating whom.

Now, if you look at the kind of data that companies like Flipkart, Zomato, or Swiggy have, it's tremendous. Swiggy today can tell what Pradeep's favorite food is, when he gets the craving, when he doesn't order, how much he weighs, whether he's on a diet. There is a plethora of data right now that companies a decade ago did not have, even though they were in the same business.

Now, the point is this: what are we really doing with this data? Today, most testing happens minus all of this data, minus all of these insights. Even today, 90% of the world, not 10%, still tests the way they used to test 10 years ago.

There is plenty of data that has come out, plenty of insights the business people have, and plenty of product analytics. If a tester were to look at Mixpanel and see what the analytics say, the kind of tests they would come up with would be totally different from just looking at the requirement document.

So the biggest benefit of the context-driven testing approach is that it gets as close as possible to reality. Note the words: as close as possible. It's very difficult to get to real users. Maybe if I'm testing Swiggy and I'm also a customer of Swiggy, I know certain contexts, but even within Swiggy, there are so many different personas.

For me, for instance, I can't predict what women like or what they go through during their periods. So there are so many different personas. But the traditional approach to testing treats it like there's no persona at all.

Somebody orders a biryani, it gets delivered, and they just do the functional testing of that. That's ignoring the complete context of why people love the product and what people don't like about it. And there's the other thing, right?

If we are frustrated as users, we go onto Twitter and say, hey, this is shitty, this is not working. Now, the whole purpose of why we are hired to test is, let's say, to prevent bugs. If we are letting those bugs out and catching something else,

What's the purpose of testing? So if context-driven testing is applied in a way that really brings out its value, it gets us closer to the reality of users; that's one. Second, it adds more value to the business. It helps people.

It helps testers become more influential and guides the product and tech teams in the right direction to make relevant changes that benefit the customers. Ultimately, we are all serving paying customers. If they're not taken care of, they're going to go away. Today, there are two things happening in this world.

One is that attention spans have come down significantly. Second, there is absolutely no loyalty unless somebody is a monopoly. There are certain monopoly products that I use where, even if I don't like the product, even if the quality is not good, I have no other choice; I have to use them. But where there are choices, people are going to switch to a better experience, a better service, a better value.

So in a world with shrinking attention spans and reducing loyalty, an approach to testing that does not connect extremely well to customers, users, and their context is going to produce very negative results for the business.

The reason why 90% of the world still does it is that they're not measuring the value of traditional testing, or even the value of context-driven testing. That is why I am passionate about an approach called growth-driven testing, which I've documented in the book, where I talk about how the testing you do can be mapped to the growth of customers and the growth of the business.

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much. I was about to ask why that 90% are still not adopting the context-driven mindset, but you answered that. And interestingly, you also mentioned growth-driven testing; it would be great if you could quickly highlight that as well.

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Cool, I'm going to use that opportunity. This is the book that I have authored, co-authored with DS: growth-driven testing. As I mentioned, I was looking at why the world still consumes not just traditional testing but any low-value testing. Why does the world still consume it?

Why are people still obsessed with a lot of documented test cases? Why has the testing world moved at a slower pace than the rest of the world? Why are testers the least adopters of new technology, and why are developers much better adopters of it?

Why are testers less influential compared to developers and the product team? Why is this happening in the world? That's when I realized that the source of the problem is that there are some tangibles and some intangibles. Again, I'm going to give you an analogy for this, right?

So we're going on a flight, the flight takes off, you have a nice mid-flight experience with no turbulence, we land smoothly, and we just get off the plane and go to our destinations. Now, here's another scenario.

The flight takes off, there's a lot of turbulence, and the pilot says, oh, there's an emergency, we have an engine out, and we're trying to do our best. The plane lands safely. Now what do all the passengers do? They clap for the pilot and say, great job, you saved my life.

Now, even on the first flight, the same thing could have happened; it's just that the pilot did not make the announcement and landed the plane safely. Everybody thinks it's business as usual and moves on. But on this flight, the passengers got to know there was an issue and that the pilot used all their great skills to save their lives.

So an immense amount of gratitude comes. This is about the tangibles and the intangibles: sometimes the intangibles are visible to us, and sometimes they are not. And here is the little understanding I have from seeing testing, doing testing, leading testing teams, and interacting with a lot of C-level folks.

I see that testing is largely intangible; they're not able to put a dollar figure on its ROI, even if they're doing automation. So nobody knows how much more to invest to get an extra ounce of ROI.

For instance, people invest in the stock market, people invest in mutual funds, and they get to see the number going up or down. Now assume somebody put money into a particular fund, and the fund said, I'll let you know after three years how your investment is performing.

Nobody wants to invest in that, because they want to at least know if it's going up or down. Unfortunately, testing is a sub-function of the whole business, and nobody at the C-level has any visibility into what kind of testing is happening. Is it helping my growth or not?

So the idea of growth-driven testing is to go back and map everything that we are doing in testing to the growth of the company, and to use the growth of the company and the growth of customers, improved value to customers, lifetime value, lower churn, better retention, as the driving factor for testing.

If the concept of growth drives testing, the people who are focused on growth, the C-level folks, will care more about testing. Otherwise it continues to be a sub-sub-sub-function, and even in multi-billion dollar companies there will be testers struggling without a test environment or the tools that they need.

And that's the problem I want to address. Over the years, wherever we have done growth-driven testing, I've documented those things in the book and tried to share what it takes to build a culture of growth-driven testing and what kinds of approaches we need in it.

That's what is put out. And the reason I put it out is that if just a few people do it, if the 10% does it, the world doesn't change. The 90% need to do it. Just like this example, right?

This world is where it is today because 90% of this world knows, or pretends to know, how to drive a car. We're all moving around; we're all enjoying holidays because there are sufficient pilots taking us places safely. If there were just five pilots, you wouldn't need so many planes, and you couldn't have so many people travel.

So, what really changes the world is when the 90% is able to do things, not the 10%. The 10% will always keep doing it; what really matters is the 90%. So I put this book and this whole concept out so that the 90% can benefit from it.
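The growth-driven mapping described above, tying each test to a growth measure like retention or lifetime value, can be sketched as simple bookkeeping. The test names and metric names below are illustrative assumptions, not anything from the book or from Moolya's actual tooling.

```python
# Illustrative sketch of growth-driven bookkeeping: each test is tagged
# with the business metric it is meant to protect, so coverage can be
# reported in growth terms rather than raw test counts.
TEST_TO_METRIC = {
    "test_checkout_flow": "customer_lifetime_value",
    "test_password_reset": "customer_retention",
    "test_onboarding_wizard": "activation_rate",
    "test_invoice_pdf": "customer_retention",
}

def coverage_by_metric(test_to_metric):
    """Count how many tests guard each growth metric."""
    counts = {}
    for metric in test_to_metric.values():
        counts[metric] = counts.get(metric, 0) + 1
    return counts

print(coverage_by_metric(TEST_TO_METRIC))
```

Reported this way, a C-level reader sees "two tests guard customer retention" rather than an opaque test-case count.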

Kavya (Director of Product Marketing, LambdaTest) - Absolutely, great points, Pradeep; really appreciate that. And yes, we would definitely like to share the link to this book with our audience. Coming to the next question: what role does impact-oriented testing play in ensuring the quality of software products? And how can AI technologies be leveraged to optimize this approach?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Great. Along the lines of what I just spoke about, we are working towards an impact. Each one of us wants to be in a place where we are delivering impact. In the product that we're building, Bugasura, we wanted to use AI for the use case of mapping the tests that we're doing to the impact they're having: on the customer, on the user, on the business, on the product, on the timelines, on the quality of the software, on the schedule of the teams. When this impact is mapped, I think people will start making very different decisions. Today, this impact is not mapped.

So we need a project manager to keep telling us, oh, by the way, this is what happened, and then doing the course correction; we all will move slow. Again, I'm going back to the traffic analogy, and it's 11:30 right now; none of us wants to be stuck in traffic.

We all want to move fast. This nature is there both in driving and at work. We get frustrated in a traffic jam, and similarly, there are traffic jams that happen at work.

It could be dependencies, or it could be something else. We all want to have a nice ride, reach a beautiful destination, spend some good time, and take some nice Instagram pictures there. Now, what's the similar thing at work? It's that we are all in the business of delivering impact: impact on all of those things.

And every small decision that we take changes the impact we are actually able to deliver. This, today, is not visible to the world. So when we started to think about use cases of AI, we thought about life-changing use cases.

For instance, even right now, we have some components of AI in our product: you write the summary of the bug, and our AI fills in the description, the expected results, the impact, and all of that, which is good, better than what people used to do previously.

Because for me, I was frustrated every time I had to review a poorly written bug report. So it was a kind of relief that somebody can write a summary and everything gets populated. But this use case, while very exciting, is not life-changing. It's not something that moves the industry; it's not like the industry woke up and said, oh my god, Pradeep, you built this, it's so fantastic, I want to adopt this.
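The autofill feature Pradeep describes can be sketched as a thin prompt-building layer in front of a language model. The prompt wording, the section names, and the `complete` callback are all illustrative assumptions; Bugasura's actual implementation is not public.

```python
def build_bug_expansion_prompt(summary: str) -> str:
    """Assemble a prompt asking a language model to expand a one-line
    bug summary into a structured report (sections are illustrative)."""
    return (
        "Expand this bug summary into a full report with the sections: "
        "Description, Steps to Reproduce, Expected Result, Actual Result, "
        "and Impact.\n"
        f"Summary: {summary}\n"
    )

def autofill_bug_report(summary: str, complete) -> str:
    """`complete` is any callable that sends a prompt to a language model
    and returns its text, e.g. a thin wrapper around an LLM API."""
    return complete(build_bug_expansion_prompt(summary))

# A stub "model" to show the plumbing end to end without a real API call.
report = autofill_bug_report(
    "Checkout button unresponsive on slow networks",
    complete=lambda prompt: "Description: button stays disabled...",
)
```

In practice the stub would be replaced by a real model call; the point is that the tester types only the summary and the tool fills in the rest.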

So, having done those things, we started to scout for a really industry-changing use case, and an industry-changing use case cannot come from automating the automation that's already done. We have to look beyond the frontier, beyond the horizon that we can see. And that's when we realized what that horizon is.

What 99% of the world is not able to see is the impact they're creating. For instance, you might say a good morning to somebody every day.

The impact that creates on them, you will never know unless they tell you: oh my God, the nice good morning that you say every day brightens up my life every single day. So this is an impact you don't know you're creating, but you're continuously creating it.

What if someone came and told you this is the impact you're creating? You would have a wider smile, you would feel better and prouder about yourself, and you would say the best good morning to everybody in your office and in your house. That is the kind of impact we want to bring out and showcase to people, to tell them: your tests create this kind of impact.

This bug in production creates this kind of impact. And we use that positive and negative impact to drive the testing. I'm going to give you an example. Let's say somebody wants to run a test on a certain device.

And they get notified that the last time a similar test or use case ran on a different device, there was a revenue loss. They are more likely to pick up that device and test. That's a good thing.
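The notification Pradeep describes amounts to a prioritization heuristic: weight candidate devices by the revenue loss past incidents caused on them. Here is a minimal sketch; the incident records, field names, and device names are hypothetical examples, not real Bugasura data.

```python
from collections import defaultdict

def prioritize_devices(candidates, incidents):
    """Rank candidate test devices so that devices tied to the largest
    historical revenue losses come first; devices with no recorded
    incidents sort last."""
    loss_by_device = defaultdict(float)
    for incident in incidents:
        loss_by_device[incident["device"]] += incident["revenue_loss"]
    return sorted(candidates, key=lambda device: -loss_by_device[device])

# Hypothetical incident history for illustration.
incidents = [
    {"device": "Pixel 7", "revenue_loss": 12000.0},
    {"device": "iPhone 14", "revenue_loss": 3000.0},
    {"device": "Pixel 7", "revenue_loss": 5000.0},
]
ranked = prioritize_devices(["iPhone 14", "Galaxy S23", "Pixel 7"], incidents)
print(ranked)  # Pixel 7 first: it carries the largest past revenue loss
```

A real system would fold in more context than revenue alone, but even this simple weighting nudges the tester towards the device where a bug would hurt most.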

Now, if they don't know this context, it's not that people deliberately ignore it or deliberately don't want to create a positive impact. It's just way too overwhelming. Each of us is overwhelmed with the things we already know, with the information we already have in our heads.

I'm 43 years old right now; I have 43 years of data garbage sitting in my head, and it's absolutely overwhelming. I can only process it if I can simplify it, and the way for me to simplify it is, one, to get triggers, and two, to have it visualized in such a way that I can look at it and say, oh my God, this sector of mine is garbage.

I need to junk this sector and focus more on that sector. Unfortunately, there is no such visualization. One more example I can share: smartwatches have changed the way we look at our health.

I continuously monitor my heart rate. If my father or grandfather had wanted to monitor their heart rate, they would have had to go to the hospital every day to get it checked.

So how has this changed things? It has made me a more and more active person, which has changed some part of my biology: my body mass index, my energy levels, everything. If I did not have this, I can't imagine what my life would be like.

Now, what is the equivalent of the smartwatch, its data, and its metrics in the testing space? That's a fascinating problem. It has been there for a while, and that's the problem we picked up: we wanted to focus on impact driving the whole testing, growth driving the whole testing, and context driving the whole testing.

So that's where we are. We have moved from a concept phase to a POC phase, and it's progressing quite well.

Kavya (Director of Product Marketing, LambdaTest) - Absolutely, great to hear this, Pradeep. So the gist of it, if I were to put it in one line for our audience, is leveraging AI to focus testing on high-impact areas. That seems to be the approach and the way forward.

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Thank you. I'm sure the audience will love your summary better than my explanation.

Kavya (Director of Product Marketing, LambdaTest) - No, absolutely. The analogies that you've been giving, I'm hooked for sure. So moving on, can you discuss the ethical considerations surrounding the use of AI in software testing, particularly with respect to data privacy, bias mitigation, and so on?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Yes, this has been a very important topic when it comes to AI, because AI continues to learn, and it's just like a kid. What I've discovered about AI is that it's like our kid: if we teach it something wrong, it doesn't know how to differentiate right from wrong.

And it will just go out and speak those words. For instance, many parents discover, oh my God, how did my child learn this bad word? Well, it was used in the house, and the child picked it up from there.

So we have to look at AI like a new child who is continuously learning from everything that surrounds it and does not know how to differentiate between the good and the bad.

Given this, the child is going to say things that are controversial sometimes, and things that are offensive sometimes. But because it's a child, we all know to be compassionate towards it, right?

It's similar with AI, except AI is built by adults, and there is no excuse there. AI still behaves like a child, but the people building AI are adults. Now, if these adults are oblivious to the kinds of biases that might come out, that's an offense. And when the world talks about the ethics and biases of AI, they're not talking about the people building AI.

AI is just the front interface. So I would say it's important for influential leaders in the industry to come out with guidelines for what should not be trained on, how the training should be assessed, and how the training should be tested for biases, to see if there is any racism, for instance.

Here is an interesting thing that happened. I was conducting a leadership workshop with a few men and women, and I asked them to describe their ideal leader. Both the men and the women used "he" for the leader.

I waited for everybody to finish, and then I asked all of them: why is it that the moment you talk about a leader, you say "he"? When such people build software, they will be oblivious to what they are doing to it. This is why good parents create good kids.

But it's not a hard and fast rule; the kid of good parents can still turn out bad. Good parents have, say, a fifty-fifty chance of raising good kids. And even a good parent with a good kid, in a bad environment, can still end up with a bad kid.

Likewise, AI needs good parents with good kids in the right kind of environment to remove as many cases of bias as possible. Now, on the privacy part of it: even before AI came in, most of our data was already leaked.

One of my colleagues showed me the list of pizzas I had ordered at Domino's, and I was shocked. He told me it was all out there: my phone number, my address, the pizzas I ordered.

So privacy, even before AI, has been a fundamental problem that the world has not solved. AI is just coming in and multiplying that problem. If we want to solve the privacy problem with AI, we have to go back to the source of the problem and adopt far more secure coding and infrastructure practices to prevent that data from getting into AI.

That said, a lot of training data is being fed in, and that is where people need to be conscious of what they are feeding, because AI can pull from the public web. Ask, "What are the six pizzas Pradeep ordered?" and out come the top six pizzas Pradeep ordered. Boom, it's gone.

It has just made it a lot easier for people to extract that data; it no longer takes a hacker to get it. So what AI has done to privacy is multiply the problem and showcase to people how broken the world already was before AI came in. That's all I have on that.

Kavya (Director of Product Marketing, LambdaTest) - Of course, great points. It is important to navigate the ethical considerations, and yes, as you said, a lot of the onus falls on the ones creating AI. So moving on to the next question: could you elaborate on the exciting new projects that Moolya and Bugasura are developing? I know you touched upon a few of those at a high level.

When it comes to the context-driven testing space, there are a lot of exciting activities happening in your orgs. How do they aim to address the current testing challenges?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Great, thank you. I don't want the audience to consider this a promotion of Moolya or Bugasura; I'm just sharing this as a fellow practitioner. I love to solve testing problems, and what I'm continuing to do along with my colleagues is build these things. Like, for instance, how do I know the impact...

Kavya (Director of Product Marketing, LambdaTest) - Thank you.

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - ...AI can have on the testing space? By building a product on it. So, for my teams at Bugasura, we are actually looking into human behavior and the extent to which AI can replicate it.

For example, there is an autopilot that drives the car, and the same autopilot that works in one context fails in another. Humans are always going to be the best when it comes to solving problems. That is the foundation of how we are going to solve problems using AI.

We're not saying AI is going to replace people. We're saying humans are going to be the best problem solvers forever. But the best problem solvers need the best tools to become the best problem solvers. What are those tools?

For Iron Man to succeed, there needs to be the jet pack and all that super-duper technology; only then is Iron Man actually Iron Man. Otherwise, Iron Man is a fractured man. So with that as our foundation, we are looking at: what do the smartest software testers need?

They need a system that helps them think deeper, go deeper, and, first of all, understand the requirement much more deeply. So one of the pieces we are building into Bugasura is a system that can help testers understand requirements far better than even the best testers do today.

So we begin with that. Now, ChatGPT and a lot of other players are already trying to generate test cases using AI, but most of those test cases lack context, which is where we bring in the context-driven testing approach. So first is the analysis of the requirement; second is the context-driven approach, by connecting to different sources of data.

We say: here is a requirement, here is your understanding of the requirement, here is the context, and we combine all of these to generate test cases. Then we highlight the risks related to each test case, the events related to it, its criticality, and the value if it is executed or not executed.

Finally, we pull insights from production, map them to executed test cases that gave a false positive or a false negative, do that analysis, and measure the impact testing has created against what the production data is telling us.

That end-to-end system is the vision we have: to use AI to amplify the value of testing and to make that value visible to product heads, technology heads, and business heads. That's a high-level view of what we are working on.
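To make the pipeline Pradeep describes concrete, here is a minimal sketch of one stage of it: combining a requirement with contextual signals (production incidents, target platforms) to generate test cases ranked by criticality. All names and structures here are hypothetical illustrations, not Bugasura's actual API or implementation.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    title: str
    risk: str          # e.g. "regression", "compatibility"
    criticality: int   # 1 (low) .. 5 (high)

def generate_test_cases(requirement: str, context: dict) -> list[TestCase]:
    """Combine a requirement with contextual signals to produce
    context-aware, risk-ranked test cases."""
    cases = []
    # Production incidents are the strongest context signal: anything that
    # already failed in production gets the highest criticality.
    for incident in context.get("production_incidents", []):
        cases.append(TestCase(
            title=f"Verify '{requirement}' against incident: {incident}",
            risk="regression",
            criticality=5,
        ))
    # Platform context broadens coverage at a lower criticality.
    for platform in context.get("platforms", []):
        cases.append(TestCase(
            title=f"Verify '{requirement}' on {platform}",
            risk="compatibility",
            criticality=3,
        ))
    # Surface the highest-criticality cases first for the tester.
    return sorted(cases, key=lambda c: c.criticality, reverse=True)

cases = generate_test_cases(
    "checkout applies discount codes",
    {"production_incidents": ["double discount applied"],
     "platforms": ["Android", "iOS"]},
)
```

In this sketch, the ranking step stands in for the "highlighting the criticality" idea from the conversation; a real system would also attach the value-if-executed analysis and the production false-positive/false-negative mapping described above.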

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Pradeep; I'm pretty sure that gives a lot of insight to our audience. As we come to the end of the episode, do you have any final thoughts or key takeaways to share on the intersection of AI and quality assurance, and how testers can best prepare for the ever-evolving landscape of software testing?

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - None of us can be prepared for everything that's going to come. If you ask the founders of a company, "Were you prepared for all of this?", the answer is absolutely no. But while there is no such thing as preparation, there is the concept of adaptability.

Here is what I think people are not doing well. Look, I'm not really bothered about the top 10% of the world; they're not the ones to worry about. But the other 90% of the world are becoming more and more rigid, and hence less and less adaptable.

I'll give an example. People say, "I want to learn this tool, and I want to build my career out of this tool." That's a very bad idea; it's quite rigid. You can't base a career on a framework or a tool. Or people say, "This is the programming language of my choice; this is what I do." But there are different tech stacks.

Different tech stacks and different problem contexts need different kinds of frameworks, different programming languages, and so on, right? So people becoming rigid in their thought process and boxing themselves in is killing multitudes of possibilities. And I understand why: so far, this world has celebrated specialists.

But the world has evolved to a point where it is demonstrating that while you certainly need some level of specialization, you also need some level of generalization. What we have seen in the last ten years is the rise of generalists.

By generalists, I mean people who adapt to new things at a faster pace, are good across many things, and are very good at something. If people take a very long time to become good at one thing, they will find it extremely hard to become even average at the rest.

So I would say that not being rigid, being flexible, being adaptable, is going to become a core differentiator for people. That's one. The second thing is this: the world is constantly going to change. For example, if somebody's dream phone was the iPhone 13, today it's very affordable.

At the time, it was a dream phone because it was unaffordable. Today, many people dream of an S23 Ultra, which costs about a lakh and a half, roughly $2,000. Four years from now it's going to become very affordable, because by then there will be an S45, and the S23 will have become very cheap.

So ultimately, everybody is going to have a Samsung Galaxy S23; it just depends on when. Do they have it when the Samsung Galaxy S23 is launched, or after it's completely outdated? Everybody will eventually be pulled toward a Samsung Galaxy S23, an iPhone 15, or an Apple Vision Pro. The only question is when.

I think this world progresses when a lot more people move on faster. Otherwise, the world anyway has a plan to pull all of them toward it, albeit a few years later. So that's what I would say about it.

I wouldn't call this advice; I would say that's the nature of the world now. It's a choice everybody makes about where they want to be. Do they want to have a Samsung Galaxy S23 when it's launched and still new, or five years later? The good news is that they will still have a Samsung Galaxy S23. That's the best I can say right now. Thank you.

Kavya (Director of Product Marketing, LambdaTest) - Thank you, Pradeep. So adaptability seems to be the motto you've shared with the audience, and I'm sure our audience is going to find all of these insights super useful. As we wrap up today's episode, I would first like to thank Pradeep for joining us and sharing his insights on the impact of AI on the future of software testing.

Pradeep has been a master of analogies and a great storyteller. It has been a very interesting episode, Pradeep. So thank you once again for being a part of the XP Series, where we bring podcasts on the latest trends, innovations, and conversations from the world of testing and quality assurance.

And for those who are listening, please subscribe to the LambdaTest YouTube Channel for more XP episodes. Thank you once again, Pradeep, for joining us today; it's been a pleasure hosting you. This is Kavya signing off. Happy testing, and have a great day, everyone.

Pradeep Soundararajan (Founder & CEO, Moolya & Bugasura) - Thank you.
