XP Series Webinar

AI-Readiness: Are You Building the Future or Falling Behind?

In this XP Webinar, you'll discover strategies to leverage AI for future success, explore innovative approaches to automation, and learn how to stay ahead in the rapidly evolving tech landscape.


Abilash Narahari

VP - Head of Technology & Digital Natives, QualityKiosk

At QK, he specializes in empowering digital natives with cutting-edge AI-augmented testing solutions. His focus is on delivering robust testing strategies that enhance product resilience, accelerate time-to-market, and elevate user satisfaction. With a proven track record of success, he has led teams in developing innovative testing methodologies that drive business outcomes. His experience spans across diverse industries, from startups to large enterprises, where he has collaborated with clients to deliver tailored solutions.


Mudit Singh

Head of Marketing, LambdaTest

Mudit Singh, Head of Growth and Marketing at LambdaTest, is a seasoned marketer and growth expert, boasting over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.

The full transcript

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Artificial intelligence and AI-based tools have been making waves across the whole software development industry. They're changing the way people build and ship applications. But a lot of people are still wondering how enterprises and even high-tech startups are tackling this new technology and how they are adopting AI tooling in their dev processes.

Most importantly, how are quality engineering roles being redefined because of artificial intelligence? Today, we're going to find out all about this in our session with Abilash. Hello, everyone! Welcome to the LambdaTest XP Series. Through the XP Series, we bring you the latest innovations and industry best practices in the field of quality engineering and software development as a whole.

Our aim is to connect with industry experts and business leaders in the ecosystem and learn from them. I'm your host, Mudit, Head of Growth and Marketing at LambdaTest, and it's a pleasure to have all of you with us today. Joining me today as a guest of the show is Abilash. Abilash is the Vice President and Head of Technology at QualityKiosk.

He has more than a decade of experience in the software quality engineering space. Right now, he's working with digital-first companies, specifically on integrating AI-augmented testing into their processes, with the aim of enhancing product resilience, improving user satisfaction, and accelerating software development pipelines.

Now, in today's XP Series session, we'll learn from Abilash how big organizations and startups alike are adopting these emerging technologies, specifically to enhance productivity, optimize costs, and improve operational efficiency. So Abilash, first of all, thanks for being on the show and for taking the time to join us today. I did get a chance to tell our audience a little bit about you, but it would be great if you could introduce yourself in your own words.

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Thanks, Mudit, for this opportunity. It's a nice way to connect with the community, especially since I know LambdaTest has a huge community coming together to contribute to quality engineering, and there's a lot of AI work happening. So that's exciting. Thanks to you and your team.

Let me spend a little bit of time boasting about myself. I'm Abilash, Head of the Technology and Digital Natives vertical at QK. QK is a 23-year-old organization focused on reliability engineering, leveraging AI. Our idea is to look at areas where we can plug AI into different SDLC processes, and not just limit ourselves to that, but go both very left and very right in the life cycle of product engineering and product development.

I come with a lot of experience, having worked with several unicorns in India, companies that are now examples of how to build great products, and it's been a pleasure working there. Now I'm looking to leverage my experience in AI to enable a lot more companies across the globe in the coming days. I'm looking forward to it.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome, so let's talk about AI, right? Everybody has been using some kind of AI-based tooling, ChatGPT being one of the most popular. But let's talk about the use of artificial intelligence-based technologies specifically in the engineering life cycle. What does modern product development look like in the AI era, and why is it important for enterprises to at least start thinking about using AI?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Yes, I think AI has become accessible to a lot of people, and that makes it easier to adopt in a lot of different ways, including in enterprises. But I just want to step back a little bit and ask how we should look at AI in the first place. For me, AI is more of a knowledge graph. I would say AI is still far from mature in a lot of ways.

But for us, the question is how we are going to break the silos between the many teams in a company or a product engineering life cycle. Imagine we have the development team, the quality engineering team, the ops team, the support team, and a lot of other teams coming into the picture.

But everybody uses their own tools, and everybody comes up with their own KPIs and metrics and their own ways of collecting and utilizing data. For me, where AI is going to help is when we leverage all of this and build a bridge between these different teams to gather the data and make sense of it. That's what we call a knowledge graph.

That's what's happening with OpenAI and other LLMs: building knowledge graphs of the data. And that's where an enterprise is going to leverage these techniques to use its own data very specifically, train on that data, make sure it stays secure, and make it enterprise-ready. So what we're talking about here is basically a knowledge graph that is customized to a specific enterprise.

And here you don't have to make a huge leap. Imagine you're a tester, and the only way you're testing right now is that you have PRDs, you write your test ideas or test cases, you test, and then you push it out. But then we see a lot of leaks into production, customers complaining about things going wrong, and the customer experience comes down. We are always looking at ways to improve customer experience.

One of the examples that we have been recently exploring is how we can leverage customer behavior data to do testing. The way we are looking at it is that if we can understand how customers are utilizing the product, or the things that are breaking their journey while using the product, we can do better testing.

That also enables us to bring in observability and a lot of other aspects. But the overall idea is to utilize this data lake that is being generated and leverage it for testing. We could do much better testing, is what I believe.

We become extremely powerful because you have the very-left data and the very-right data, and you're putting them together and coming up with test strategies. That's going to change the way you test products and, potentially, the way you see products.

That's where I feel the knowledge graph is very powerful and where it is going to drive us in the future. But yeah, I mean, the potential is tremendous and unlimited, is what I see at this point in time.
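As a rough sketch of the idea above, assuming user-journey events have already been exported from an analytics or RUM tool into a simple log, test effort could be weighted by how often real users hit each flow and how often it fails for them. The event fields and file name here are hypothetical:

```python
import json
from collections import Counter

# Hypothetical export of user-journey events from an analytics/RUM tool.
# Each line looks like: {"journey": "checkout", "outcome": "success" | "error"}
def prioritize_journeys(path="user_events.jsonl", top_n=10):
    usage = Counter()
    errors = Counter()
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            usage[event["journey"]] += 1
            if event.get("outcome") == "error":
                errors[event["journey"]] += 1

    # Rank journeys by how often real users hit them and how often they fail,
    # so the test strategy follows actual customer behavior, not only PRDs.
    def score(journey):
        failure_rate = errors[journey] / usage[journey]
        return usage[journey] * (1 + failure_rate)

    return sorted(usage, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    for journey in prioritize_journeys():
        print("Prioritize tests for:", journey)
```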

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So that's a pretty interesting proposition here. We have been talking about AI, and people have been talking mainly about GenAI, how to use GenAI technologies like OpenAI, ChatGPT, and all of this. The thing that you talked about is something that AI has been helping out with since 2016, since the IBM Watson days, right?

Cognitive AI, finding the right key insights from a large set of data. Imagine a product that has millions of users. We always talk about user stories when we start our quality assurance processes, and it's always predictive in nature. That means somebody goes back to the drawing board, assumes these could be the user stories, and then creates test cases and test scenarios.

But if you look at it the other way around, as you mentioned, starting from actual user stories and building up test cases from those would be a game changer, right? Usually it's a challenge because there are so many user journeys across a company's website and app views. So finding the right user stories, categorizing them, and building test cases from them is something AI can definitely help with, right?

Another very important point that you highlighted was around observability, which is a very big point for enterprises right now, i.e., we have a large set of data, either in the form of user stories, clicks, views, events, like thousands of different metrics that are usually there as graphs in Grafana or Datadog or these kinds of dashboards.

But how do you make sense of it? That's something AI can definitely help with. So, in your industry experience, is this something that people have started adopting, using AI to find those kinds of insights from this huge set of data? How is it going? Is adoption strong, or are people still working toward it?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Yeah, I would say that adoption is picking up, because we are right now at a very early juncture of having AI as part of the product engineering lifecycle and of customer behavior analysis overall. The maturity is still low in a lot of ways, and that's one of the parts we will eventually talk about.

But the idea is that if your data sanity, data structure, and data quality are significantly better, you can do a lot more by leveraging that data. You do better product engineering and quality engineering overall. Observability these days is a fairly quick plug-and-play because there are so many tools out there, including open-source tools.

All you need to do is manage your data lakes better, and then you will be able to drive insights out of them. The first step is observability, and the next step is AIOps. If you put a maturity curve to it, people are at an early stage of adopting observability, though I think many have already started integrating observability as part of their product.

But how do you utilize the data and drive decisions from it? People are still figuring that out, because it's very contextual in nature. Every company has a different architecture and different views on how they build their systems. So it's always contextual, but yes, there is a lot of emphasis on it.

We see a lot of momentum in adopting observability and also transitioning towards AIOps and using real user monitoring to drive incident management and a lot of other things. So yes, I would say that we are in a growth stage of observability. We have crossed the seed stage, or the early stage of adopting observability.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - But to be fair, observability is one thing; coming back to the AI part, people are still steps behind, still figuring it out. OpenAI and AI-based technologies have been around for four or five years, some of these technologies even longer, but they still don't have very widespread adoption, specifically in the enterprise setting. So, what is holding these companies back? Why is it still so hard to adopt technology like this, even when we know there are practical use cases it can solve?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Yeah, so there are two different kinds of companies, right? If you split this world into two parts, one is the pure-play financial services and banking domains, which are driving everything. And then you have the digital and technology companies, which are much more open to adoption because they themselves utilize a lot of AI-based products, right?

The biggest concern for a lot of companies is how the data gets used or trained on. Are you going to use the data to train GenAI or, I would say, general AI components, or are you going to use it specifically for your customers, in which case your data sets become really limited, right? Even a big company like OpenAI trains on a huge number of parameters and data points.

But if you put the same thing inside an enterprise, the data set becomes very limited, too small for anything to be trained on, and the accuracy comes down drastically. That's why we see such a huge black-and-white split. It comes down to how much data is enough for training AI, right?

And that's why we are looking at MLOps, for example, starting to move towards MLOps so that you drive your training and a lot of the data-centric work around AI. I believe right now there are no hard restrictions, because there are a lot of companies that also offer on-prem or private cloud instances.

It also has to do with the way people want to implement it, because most of the time you don't have the practices or a framework around implementation, so you don't know how to go about it. Most of the time people sell you licenses and products, but they don't sell the services around them to enable it for you.

So you usually need thought leadership around how to drive implementation for enterprises. I think we are halfway there. We are still cracking how to train with a smaller set of data, or how to take certain techniques from a wide range of data and use them for the enterprise, even just in the semantic search part of it and so on.

But yeah, my opinion would be that it's the practices around it, not the tool or the product itself. We'll have to build a lot more practices, quality engineering practices, and more companies have to start contributing towards them to make it easier for customers to adopt and utilize these tools right away.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome, so let's fine-tune the conversation to quality engineering right now, right? We have seen that there is still a challenge in adopting AI-based technologies, but as a company that wants to start off in this journey, what do you feel are the low-hanging fruits, the quickest ways that people can start using AI, specifically in the quality engineering field? So, what use cases should we focus on that can at least get us started in adopting AI technologies?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Yeah, I think there are two use cases we can start with immediately, especially on the quality engineering side, to utilize the data you already have, building a copilot, for example. In most places we have seen, developers are usually looking at dashboards, Slack channels, and a lot of other places.

One of the things we have experimented with at one of our customers is: how about giving them a chatbot so they can understand what we are doing, instead of having to talk to us directly? We tried to build that conversation. We put all our QA documents and knowledge base into a knowledge graph and then exposed a copilot to a wide range of people, so they can ask, hey, where are we on the sprint?

How many bugs have we identified in this particular module? How long is it going to take us to do something? You avoid a lot of calls that would otherwise happen and improve the overall efficiency of how people consume the quality engineering team's work as well.

That's a very big use case I see, because it does not require a lot of modifications; you don't have to ask your development teams and other teams to make changes in the product, or say, do this, we need access to that. All of that gets out of the equation, and you just focus on knowledge-based copilots.

How can you build a knowledge copilot for the set of data that you already have? You can do this using a lot of tools. One of the tools we are using is DevRel, where we put the entire knowledge base and data inside it and create a copilot that gets exposed to our customers, so they can ask questions in natural language instead of asking us.
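A minimal sketch of what such a knowledge-base copilot can look like under the hood, assuming QA documents are embedded and searched before an LLM answers; the documents, model name, and helper functions below are illustrative, not any specific product's API:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # local embedding model

# Illustrative QA knowledge base: sprint status, bug counts, test plans, etc.
documents = [
    "Sprint 14: payments module regression is 80% complete, 3 blockers open.",
    "Login module: 12 bugs found this sprint, 9 fixed, 3 pending retest.",
    "Checkout test plan covers 45 scenarios across web and mobile.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(question, k=2):
    """Return the k knowledge-base entries most relevant to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity (vectors are normalized)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question):
    """Assemble the grounded prompt that a copilot would send to an LLM."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only this QA knowledge base:\n"
        f"{context}\n\nQuestion: {question}"
    )

print(build_prompt("Where are we on the sprint for the payments module?"))
```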

That copilot has, to a large extent, reduced how much people need to come to us directly, and discussions have become extremely focused and efficient overall. The second part is the observability piece. I think if you are able to stitch together both back-end and front-end observability, you will be able to drive your testing and insights from it.

Those insights are for the quality engineering team to consume and to drive their test strategies, instead of just debugging incidents after they have happened. These are the two primary use cases where you don't have a lot of dependencies, beyond getting access and some basic approvals for the tools you want to get hands-on with.

They make life easier and build in a lot of efficiency, and your confidence in how you test the product keeps growing. That's something we have seen happening up close, and it's something we have experimented with at a few customers. It seems to be working, so I hope it works for others as well. People should let us know if it does.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So for a company that is planning to adopt these technologies, I'm sure they will come across a lot of challenges and issues along the way. But based on your experience, what are the things they should be very cautious about? What precautions should they take before diving headfirst into these technologies?

For example, you have been talking about data. A lot of data is required to train your models, but data quality becomes a challenge as well, right? So, what do you feel are the things people should be worried about, or should prepare for, before adopting these technologies?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Yeah. Incidentally, I was talking to someone just before this podcast about data quality and the readiness you need before getting into it. One thing that came up is that this customer has been changing their architecture, the way they store data, the way they utilize data, their services, a lot of things, over the last 10 years.

And if you try to build a mapping around it, it's just too complex, because you've been jumping versions from whatever Java you had to what Java is today, and you have a database that went from relational to something else. Things have changed too drastically over the last five or ten years. I think that's the biggest shortcoming, right?

Because you don't have a good picture of how your systems have grown and transitioned over the last few years, I would say one of the activities even a tester should do is sit with the teams and draft a proper service mapping: how things have changed, what has happened over the years, what future changes will look like, and then build readiness around it, right?

Because if you try to plug something in today, there's a 90% chance it won't give you any results, because you will see data that's not mature, data that's not accurate. But if you try to learn what has happened over the years, the transitions that have taken place, then your life becomes easier in what you do today.

I'm not talking only about the past, but about how you can start structuring your data, your practices, and your processes around enabling that data. That's one important part, and that's where we are helping a lot of customers, by first helping them picture what has happened over the years. Because they themselves go, did we really do all of this? We weren't even aware of it. Yeah, welcome to product engineering; this is what it looks like.

But if we can help them visualize this particular piece, it's going to be extremely valuable and helpful, and we can give them suggestions on how to build, just as we have quality engineering, what we should call AI engineering: how you enable yourselves to adopt AI technologies in the years to come in a significantly better way, and how you map yourselves to what is happening in the AI journey.

So that's specifically an important discussion we should have, and that's what I believe. And what was the other question? I completely forgot the follow-up question.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - No worries. So, let's switch gears a little bit. We have been talking about companies, enterprises, and how organizations in general should think about AI. But let's talk about people. Let's talk about the testers, as you mentioned. People have been talking about moving from a quality assurance to a quality engineering mindset.

This is something that has been happening and, I'd say, is still maturing. But how is the role of the quality engineer being enhanced or, let's say, redefined because of AI technologies? And if you are in this field, or let's say just starting off in quality engineering, what readiness should you have as a person so that you can grow in this field?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Yeah, I think the tester's role has changed drastically over the years, especially with AI coming in. And that's good for a lot of reasons, right? Because it's just going to help us do a lot more things, much more efficiently. But I would rather talk about the things that we, as testers, should not change.

I used to tell my people that you have to leverage AI for 50% of your efficiency, but you have to be extremely creative for the other 50%. So creativity should remain the same for testers.

We should continue to keep the bar up because it's just going to get even more difficult as we continue to progress because most of the things are going to be autonomous. I would not call it automation; it's going to be autonomous in a lot of ways.

See, with the cloud you can create an application within a few seconds, so you don't particularly need engineering expertise to do that. But with an enterprise, it's a very different thought process, how the data gets populated, and so on. However, testers should continue to think from a customer experience perspective and be as creative as possible.

That is one thing we should not change, because being creative is what lets us break the system, right? And we can still question the value of the product. On the other side, what we should also be doing is thinking about instrumentation.

I think people should move towards building quality management systems, or quality visibility overall, because we have been too dependent on tools that just build very bad dashboards: this is my number of test cases, and these are the passes and fails. That's definitely not going to help.

You have to build a lot more visibility around the metrics that actually move and impact the customer experience and the product engineering experience. Take developer productivity, for example: you can build a dashboard around DORA metrics. Do we have something like that for quality? Can we build a version of it to understand our efficiency and what we are really capable of?
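As a loose illustration of that idea, a DORA-style view of quality could start from a few computed metrics such as escaped defects and time to detect; the record structure and numbers below are invented for the example:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Defect:
    # Hypothetical defect record pulled from a tracker or incident tool.
    introduced: datetime        # when the offending change shipped
    detected: datetime          # when the defect was found
    found_in_production: bool   # True if it escaped past testing

def quality_metrics(defects, releases):
    """Compute a few DORA-style metrics reframed for quality engineering."""
    escaped = [d for d in defects if d.found_in_production]
    detect_hours = [
        (d.detected - d.introduced).total_seconds() / 3600 for d in defects
    ]
    return {
        # Escaped production defects per release (a change-failure-rate analogue).
        "escapes_per_release": len(escaped) / releases if releases else 0.0,
        # How quickly quality feedback arrives, analogous to lead time.
        "mean_time_to_detect_hours": (
            sum(detect_hours) / len(detect_hours) if detect_hours else 0.0
        ),
        "escaped_defects": len(escaped),
    }

defects = [
    Defect(datetime(2024, 3, 1), datetime(2024, 3, 2), False),
    Defect(datetime(2024, 3, 5), datetime(2024, 3, 9), True),
]
print(quality_metrics(defects, releases=4))
```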

And also move towards observability and AIOps. Observability right now is a bare minimum for every tester, because if they can get data points from the customer and make changes to their test strategies, it's going to significantly improve the way they test and let them see the product from a different perspective.

And they should be able to build small utilities around it. I think it's time for us to build a lot more automation. If you remember IFTTT, it sort of combines a lot of tools and builds knowledge, insights, and actions around them. Testers should start doing that. We should build utilities. We should build small automations.

We should not think about automation as basically Selenium and Cypress. We are talking about automation being an enabler in a lot of places where manual effort shouldn't exist, making your process efficient rather than just your automation efficient, right? So we should be very technically sound. We should understand data.
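For instance, a tiny IFTTT-style utility in that spirit might watch a test-run result file and turn failures into an action; the webhook URL and result format here are placeholders:

```python
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"  # placeholder URL

def notify_on_failures(results_path="results.json"):
    """If a test run has failures, post a short summary to a chat channel."""
    with open(results_path) as f:
        # e.g. [{"name": "checkout_flow", "status": "failed"}, ...]
        results = json.load(f)

    failed = [r["name"] for r in results if r["status"] == "failed"]
    if not failed:
        return  # the "if this" condition: nothing to report

    message = {"text": f"{len(failed)} test(s) failed: {', '.join(failed)}"}
    request = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # the "then that": push the insight out

if __name__ == "__main__":
    notify_on_failures()
```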

We should understand the customer, because there's a very nice convergence happening right now. We used to be quite far from the technical aspect of things, because testers were usually asked to think customer-centricity. But it's time we break that and go both ways on the spectrum and build a lot more value.

I would say we are like the drivers, right? You have passengers, you are the driver, but you also need a map to know where you are going. So we should be the maps; we don't have to be the driver, but we should be the maps. That's how we should enable ourselves. Testers should definitely move towards building utilities and being a lot more technical when trying out apps and products.

And they should be able to utilize observability overall. I think they should also be able to implement observability, rather than it being a developer KPI; it should be a quality engineering KPI now. And then start moving towards AIOps and other things. That's where we should be spending our time.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So the role will transition from where it is right now, which is very automation-centric, where quality engineering teams are required to create a lot of automation, while tooling, AI for example, can help bridge that gap and create automation fast. Now the role will transition to a more scientific nature.

As you said, read data points and user stories and build up quality assurance metrics, quality metrics. These are the standards, and to measure those standards we need the right observability in place. So just creating automation, or the tooling around automation frameworks, won't be sufficient.

You have to move beyond that and be more quality-oriented, customer-oriented, and scientific in nature. Those are pretty awesome insights. The last thing, since we are almost out of time: some final advice for quality assurance and quality engineering teams, and the people behind them.

What should they upskill in? We have talked about AI, specifically observability and tooling. What about GenAI? What tools and technologies do you think people should upskill in to become, let's say, ready for the future?

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Well, I don't want to advise anybody, because to advise you should have done a lot more yourself. It's like generations, right? The previous generation did not have access to mobile, the next generation did, and the generation after that is talking about AI. So there's always a gap between the advice you give and how much of it actually applies to them.

In this case, I think people should be able to use these techniques and use GenAI, for example OpenAI, ChatGPT, and others. If you are able to do prompt engineering really well, you will be able to get really good output for whatever you're going to use it for. We've been training people on prompt engineering because that's one of the core pieces we want our testers to be really good at.

One example I would give is a customer who wanted to build and test a user permission system. You have different users, roles, responsibilities, and all of that, so it came to 1,200 to 1,300 combinations. You can't sit and write test ideas for every single combination. These are the places where you can utilize ChatGPT immediately.

So the team took it on. It was supposed to be a two-week task, and they used ChatGPT to do it in about five minutes, put the output together nicely, and presented it to the customer. The customer was like, wow, this is a great use case for leveraging AI. So prompt engineering is a must. I think people should learn prompt engineering deeply and learn to customize it.
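A sketch of how that permission-matrix exercise could be scripted, assuming the OpenAI Python SDK; the roles, resources, and model name below are placeholders invented for the example:

```python
from itertools import product

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

# Invented role/resource/action sets; the real system ran to ~1,200-1,300 combinations.
roles = ["admin", "manager", "analyst", "guest"]
resources = ["reports", "user-management", "billing", "audit-logs"]
actions = ["view", "edit", "delete"]

combinations = [
    f"{role} tries to {action} {resource}"
    for role, action, resource in product(roles, actions, resources)
]

prompt = (
    "You are a QA engineer. For each user-permission combination below, "
    "write one concise test idea with the expected allow/deny outcome:\n"
    + "\n".join(f"- {c}" for c in combinations)
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```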

If you are able to consume the output quickly, that's a great plus. So prompt engineering is number one; everybody should learn prompt engineering immediately. The second is: how do you leverage open-source tools to build utilities?

You don't have to start from scratch. A lot of people have the misconception that they need a lot of knowledge to do this. You have AI; you have open source. Can you imagine marrying the two? You don't have to do much yourself, because people are contributing to a lot of open-source communities and projects.

Even a small, tiny utility we have used has significantly impacted a customer's way of looking at quality. So how do we start doing that? I have a dedicated team whose job is to uncover good open-source projects, take the techniques out of them, combine them with GenAI, and give testers a way to test certain things that are otherwise too complex for them to understand or utilize.

So those two, I think, are the immediate skills; there's no question about that. If you're able to do good prompt engineering and utilize that with open-source utilities, you're ready to do a great job. The other part, I think, is data quality, which is still a major piece. People should learn data quality, understand data, and interpret data.

It's going to significantly improve the way they talk to customers. I'm not talking about testing per se; it's about how much customers open up to you, how much they can offer, how many insights they can share, and so on. So yes, data quality is a big topic, but prompt engineering and utilities are very accessible to us these days, and we should start there.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome. Unfortunately, we have hit time. It was a really great discussion, Abilash. Thanks for taking the time to share insights on how AI is changing the quality engineering space. And to all the people who have joined us today to listen to this podcast:

Thank you for listening in, and feel free to add your comments below. We'll be happy to hear from you and what you thought about our insights, and do subscribe to the Experience Series. Again, Abilash, thanks for taking the time out. Thanks for joining us today.

Abilash Narahari (VP - Head of Technology & Digital Natives, QualityKiosk) - Thank you, Mudit. It was a pleasure talking to you, and I hope we do a lot more such activities. Thanks, everyone, for joining this podcast, and see you soon.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - All right, thank you, everyone. Happy testing!!

Past Talks

Upskilling Quality Engineers for the New Age: A Success Story in SDET Transformation

In this XP Webinar, you'll explore how SDET transformation is reshaping the skills of quality engineers, empowering teams to meet modern software demands with agility, innovation, and enhanced expertise.

Creating Reliable and Scalable Test Automation Frameworks

In this XP Webinar, you'll explore best practices for building reliable and scalable test automation frameworks, ensuring faster development cycles, enhanced code quality, and improved test coverage across diverse platforms and applications.

GenAI for Quality Transformation

In this XP Webinar, you'll explore how Generative AI is revolutionizing quality transformation by automating testing, enhancing accuracy, and driving innovation in software development. Discover cutting-edge techniques for achieving unparalleled software quality.