January 09th, 2025
31 Mins
Bhupesh Mittal (Guest)
AVP Software Quality Assurance, tiket.com

Kavya (Host)
Director of Product Marketing, LambdaTest

The Full Transcript
Kavya (Director of Product Marketing, LambdaTest) - In today's rapidly evolving digital landscape, AI is transforming industries and functions, including quality assurance. And while AI tools promise increased productivity and efficiency, they also bring challenges and misconceptions.
However, the true opportunity lies in harnessing AI to enhance QA processes, optimize repetitive tasks, and enable deeper value-driven testing. Companies like tiket.com exemplify this paradigm shift by integrating AI into their QA workflows, driving innovation while maintaining a customer-centric approach.
Today's session dives into the strategies, challenges and learnings from tiket's GenAI journey, showcasing how QA professionals can thrive in an AI-empowered future. Hi, everyone. Welcome to another exciting session of the LambdaTest XP Podcast Series.
Through XP Series, we dive into a world of insights and innovations featuring renowned industry experts and business leaders in the testing and QA ecosystem. I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's a pleasure to have you with us today.
As I said, today's discussion will revolve around GenAI in QA, namely tiket's approach to evolving quality engineering. We are going to explore how AI is reshaping the QA landscape and uncover strategies to stay ahead in this dynamic space.
Before we start today's session, let me just introduce you to our guest on the show, Bhupesh Mittal, AVP Software Quality Assurance at tiket.com. With over 17 years of experience in the testing domain, Bhupesh has consistently driven high-quality digital experiences while championing QA as the ‘Voice of the Customer’.
Having held some impressive roles at organizations like MakeMyTrip, GrapeCity and HT Digital, Bhupesh brings a wealth of expertise in testing, process optimization and aligning quality with business goals. Currently, he's leading tiket.com's quality initiatives.
He's at the forefront of integrating AI into QA workflows, transforming processes and delivering exceptional customer value. And I'm sure that he has a lot to share with our audience today. As I said, today's session promises to be an in-depth exploration of how tiket.com is leveraging AI to redefine its QA practices.
So let's jump straight into the heart of today's discussion, Bhupesh. It would be great if you could share more about yourself and your journey in testing with our audience.
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - Many thanks to Kavya and the LambdaTest team for hosting me. It's a nice experience, and I would love to share some initiatives that we are driving on the AI front at tiket.com; like any other company, AI is a hot topic now.
It's booming, and a lot of people got onto the bus very early. So that's the angle for tiket.com as well. We want to explore it in the best way possible so that everything we do with human effort, we can make smarter, and increase QA productivity.
So that's the angle here; we have been leveraging it for, I think, now 8 or 9 months. We have done some explorations and, yeah, keen to share those certain things with the audience here so that maybe they can also jump in and try out those things.
Kavya (Director of Product Marketing, LambdaTest) - Absolutely. Good to hear that you have already been working on AI for the last 8 to 9 months, especially since it became a key topic; you've jumped straight into it. So the first question that I have is how can QA teams overcome the fear of AI and leverage it as a tool for growth?
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - I think this is pretty much trendy now. I have also heard this from a lot of you; I sensed it when I talked to different people. There's a fear out there in the QA arena, right? People think about what will happen to their future since AI is coming up, a lot of companies are trying it, and a lot of third-party companies are also coming in with products built around AI.
So there's definitely a fear that I sense. But I think that's not the right direction they should think about. So instead, what they should think about is all the things they do in daily life during their testing cycle. And what AI is giving them. So they should basically map whatever they are doing with whatever they can leverage from AI.
So it's about mixing their work with AI instead of running away from it. It's similar to, I mean, maybe you are an expert in testing manually, doing all the functional testing, etc. Then, 10 years back, when automation was booming, a few people jumped on board and a few people did not.
So now, I mean, if you look at that, in most cases, people who know automation are kind of more competent; they had more growth in the last few years, right? So basically, they added up one feather in their cap, which is automation; now AI is another significant feather that people should be adding up, so the idea here is to become a one-stop shop for testing related things when it comes to you and your skill-set, right?
So you know testing, know automation, then know AI. So that's the angle people should look into. So instead of running away, they should ride on that wave.
Kavya (Director of Product Marketing, LambdaTest) - Very interesting point, Bhupesh. I think it is definitely a topic that everyone's been discussing, especially on LinkedIn. I do see so many posts around how up-skilling, especially in the age of AI, is required and how we have to work along with AI and not run away from it.
So a great point. It's great to hear that QA professionals can shift their mindset, and of course, you folks at tiket.com are a live example of that. Thanks for bringing that up. The next question is, can you please elaborate on how tiket has used AI to streamline repetitive tasks and free up bandwidth for your QA team?
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - So, as I mentioned, we started with AI a few months back. We started slow; we started understanding all the things happening in the AI space, right? A lot of people were exploring at that time.
So, we also keep a certain bandwidth for AI exploration in the team. To give you a brief sense of what areas we are touching on the AI front in QA at tiket.com: we are using it in test case generation, right? So we are shifting away from creating the test cases by humans only.
We are basically mixing it with the AI first approach and then filling in the gaps with the human part, right? So it's a mixed blend approach that we are using for test case generation. And then we are using it in automation scripts.
So not the entire code base that we are writing, but bits and pieces of the automation scripts we are writing via AI, right? There can be certain examples that we can use like you are doing a UI automation, and then there's a gesture that is there, and you want to create a function around it instead of writing that code itself.
So maybe you can go to AI and ask for that code function, right? Of course, you have to be competent enough to understand whether that code actually fits or whether you need to refactor it for your usage, right? So yes, that is the other thing that we are doing. Code reviews, etc., are also one of the areas where we are using AI, right?
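[Editor's note: the kind of AI-drafted helper described here can be illustrated with a small sketch. This is a hypothetical example, not tiket.com's actual code; the function name `swipe_coords` and its signature are assumptions. It is the sort of gesture utility one might ask a GenAI tool to draft, which a reviewer would then verify and refactor before merging.]

```python
def swipe_coords(width, height, direction, edge_margin=0.1):
    """Compute (start, end) points for a swipe gesture on a screen
    of the given pixel size. Illustrative of an AI-drafted helper;
    the reviewer must still check it fits the automation framework.
    """
    cx, cy = width // 2, height // 2                      # screen centre
    near_y, far_y = int(height * edge_margin), int(height * (1 - edge_margin))
    near_x, far_x = int(width * edge_margin), int(width * (1 - edge_margin))
    if direction == "up":        # finger moves from bottom edge to top edge
        return (cx, far_y), (cx, near_y)
    if direction == "down":
        return (cx, near_y), (cx, far_y)
    if direction == "left":      # finger moves from right edge to left edge
        return (far_x, cy), (near_x, cy)
    if direction == "right":
        return (near_x, cy), (far_x, cy)
    raise ValueError(f"unknown direction: {direction!r}")
```

The coordinates would then be fed to whatever driver API the team actually uses; that wiring is exactly the part a competent engineer has to review and adapt.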
There are multiple tools in the market. I'm not saying that this tool is better or that tool is better. The point is we tried a couple of tools. We went ahead with one of the tools, and we are constantly exploring. Apart from this, there are also certain use cases which are currently in the exploration stage, which I cannot speak about because it's too early.
But there are some use cases around commercials and around the product teams where we are trying to use AI to help them build something useful. So we are not focusing only on QA's bandwidth; we are also focusing on how we can help the other teams save some bandwidth via these initiatives.
Kavya (Director of Product Marketing, LambdaTest) - Thank you for sharing these practical examples. So just to sum it up on the usage front, you have so far used it for test case generation, automation scripts, code review and so on. So essentially, across the entire QA cycle, you're figuring out which places AI can come in and help free up your team's bandwidth, more or less, right?
And just out of curiosity, how did you go about structuring your team because that's again a question that a lot of leaders in the quality assurance space tend to ask about that. Along with the existing work, how do you also manage freeing up time for AI-related tasks and so on? Did you have a structure in place, or was it more like an initiative that someone took up?
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - So initially, it was an isolated initiative. We have multiple sub-teams under the QA charter at tiket; there are 10 or 11 small teams inside the whole QA arena. So we started by picking up a couple of folks from one sub-team where we identified our first use case, right?
So we picked them up and started with that particular use case. We did a POC, generated some results, made some improvements, and generated results again, right? Until it became a bit of a success story, right? It gave us something to work with, right?
So then, with those results, we showcased to folks in other teams, right? That basically built up curiosity in those team members: okay, somebody within tiket.com itself is working on this AI initiative in QA; I should also jump in and try out something, right?
So it built a little curiosity within the team, and then we picked up a few more folks from a couple more teams and identified some more use cases, right? So, I mean, it started as an initiative across different teams, right?
And at that time, basically, I was acting as a product manager for what all we needed to explore, what all we needed to build, et cetera, right? But then after, I think, three or four months, I shifted that hat from myself to them, right?
Okay, now you are the product manager; you have to identify the use cases and come back with the ideas, and then we can work on them. So that enthusiasm slowly builds up. And the reason we started slow is that we were exploring. When you explore something and you ask everyone to explore, there's a chance that a lot of people are exploring similar things and might get into similar challenges, etc. And you also have to keep the deliverables on pace.
As a leader, you have to be cautious about when to put more pressure on the initiative and how to align it. So yeah, we started with isolated initiatives, and now we are implementing it across the board.
Kavya (Director of Product Marketing, LambdaTest) - Great, thank you so much, Bhupesh. I asked that out of curiosity, but I think it was a very thoughtful approach that you have implemented within the team. And moving on to the next question, how do you measure the ROI of implementing AI initiatives within your QA team?
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - That's a very important question. I think this aspect, the ROI aspect, is not related only to AI initiatives. It should be a focus area for every QA leader, every lead, and every engineer, whatever initiative or tool they are using, be it AI, non-AI, etc.
So it should be the first area where they focus: what am I getting out of this initiative, right? They should have some clarity around it, because that is basically the goal, right? You have to run towards a goal. You cannot run just for the sake of it.
So, yes, at tiket also, for the AI part, when we were exploring and started with the initiatives, the ROI factor is majorly the productivity increase of the members, how much time we will save. That's the primary goal when we pick up AI initiatives.
So for any initiative, for any use case that we tried, we started with benchmarking the time that is currently getting consumed when humans do it alone, right? Let's say you pick up the test case generation part. It's not like you pick the time for today only or for this sprint only, right? You focus on the trendline, average it out, and that becomes your benchmark.
Now you start putting the AI initiative in place, right? Then you monitor it; you note the timelines there. And over the course of a period, you will make some improvements that will further improve your ROI.
So the delta between the original time that was getting consumed when AI was not there versus the time getting consumed now, after you went down the AI road: that delta is basically your ROI. And for us, for now, the major ROI factor is the time that we are saving through this initiative.
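[Editor's note: the benchmark-then-delta calculation described here can be sketched in a few lines. This is a minimal illustration with made-up numbers, not tiket.com's actual tooling: average a trendline of pre-AI timings into a benchmark, then compare against post-AI timings.]

```python
def roi_time_saved(baseline_minutes, current_minutes):
    """Estimate ROI as time saved per task.

    baseline_minutes: per-task timings collected over several sprints
                      BEFORE AI (averaged into a trendline benchmark,
                      never a single day's reading).
    current_minutes:  per-task timings AFTER the AI-assisted workflow.
    Returns (benchmark, current_avg, delta, pct_saved).
    """
    benchmark = sum(baseline_minutes) / len(baseline_minutes)
    current = sum(current_minutes) / len(current_minutes)
    delta = benchmark - current          # minutes saved per task
    pct = 100 * delta / benchmark        # productivity gain in percent
    return benchmark, current, delta, pct
```

For example, if test case writing averaged 55 minutes over the last few sprints and now averages 30 minutes with AI assistance, the delta is 25 minutes per case, roughly a 45% saving.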
Kavya (Director of Product Marketing, LambdaTest) - Great, thank you for sharing those insights. It's good to hear how ROI has been carefully tracked within tiket.com, not just for the AI initiatives you're implementing, but also for the non-AI initiatives that have been implemented.
So moving on to the next question that we have, what were the biggest challenges that you faced while implementing AI, and how did you overcome them both as a QA leader and as a team, you know, it would be good to hear both sides of it.
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - So, yeah, whenever you pick up a new thing, you have to bump into some challenges, right? Some are easy, some are complex, some get solved early, and some take a lot of time to solve, right? So yes, we also ran into a few challenges in whatever use cases we tried, right?
I think there were two or three major challenges which we overcame over the period of time. The very first one: when we talk about AI, it's majorly GenAI that you use to explore, right? And, I mean, a lot of people now know that its result is only as good as your prompting, right?
Now, you might get a good understanding of the prompts for general use cases, right? But when you go deep into the different verticals of your team, the different kinds of features (some internal, some external, some exposed, some not exposed), then the challenge becomes: okay, what kind of prompt should I use, which is more effective, right? So deciding on prompts, and that too based on the teams, the work, the features, right?
So I think that was a bit time-consuming and a bit challenging. We did a lot of to and fro there, right? And in between, new versions of the GenAI models keep coming, so it's this version versus that version, right? That was one of the biggest challenges, and it took a lot of time. And it is a continuous process, by the way; we still keep exploring those problems to make it better.
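[Editor's note: per-team, per-feature prompt tailoring like this is often handled with a shared template whose vertical-specific slots are filled in per use. A minimal sketch for a test-case-generation use case; all names and the prompt wording are illustrative, not tiket.com's actual prompts.]

```python
from string import Template

# One shared skeleton; the team-, feature- and exposure-specific slots
# carry the vertical context that makes the prompt effective.
TEST_CASE_PROMPT = Template(
    "You are a QA engineer for the $team team.\n"
    "Feature under test: $feature ($exposure-facing).\n"
    "Requirements:\n$requirements\n"
    "Write test cases as a numbered list covering positive, negative "
    "and edge scenarios."
)

def build_prompt(team, feature, exposure, requirements):
    """Fill the shared skeleton with one vertical's specifics."""
    return TEST_CASE_PROMPT.substitute(
        team=team,
        feature=feature,
        exposure=exposure,
        requirements="\n".join(f"- {r}" for r in requirements),
    )
```

Keeping the skeleton in one place means a prompt improvement discovered by one sub-team immediately benefits every other vertical, which also eases the to-and-fro when model versions change.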
The second challenge, I would say, would be which GenAI tool to use. That's also a very important question to tackle, because there are multiple GenAI tools. A lot of them are out there now, and they are constantly evolving.
One tool that might be working well for you today might be behaving better than other tools for your use case today, right? That situation might change one month down the line, correct?
And yes, you can create fallbacks, etc.: okay, this tool is primary and this one secondary. But in some use cases, you might have to go towards the commercial side of those tools, right? Okay, I need to purchase some licenses here, right?
Now, which tool's license to purchase, this one or that one? I think that's a conscious effort and a challenging part, because then you have to do a lot of analysis; you have to keep doing it for, let's say, one or two months to understand which one is better.
So that's additional energy and time that you've got to spend, right? Especially when you are doing it as an initiative, not within a specifically carved-out bandwidth, right?
So that was the second major challenge. One other challenge: you did some exploration in a particular use case, you got some POC results, and looking at those POC results, you created some anticipation that, okay, if this holds, then when I actually implement it, most probably my ROI will be such and such, like XYZ. Correct?
But when you start doing those implementations and monitoring them, you figure out that the anticipated ROI versus the current ROI has a stark difference. The important thing to understand at that time, I mean, for us, was: okay, we utilized AI, but somehow we've got to understand that AI alone is not solving that math. Right?
So we tried some customizations on top of the AI implementation. We did some customizations. We improved certain things over the input part, over the output part, over the pre-processing part, etc.
And then we reached that anticipated level after doing those customizations. In some cases, it was not straightforward. So I think that's one of the major challenges that we got into and resolved over the period of time.
Kavya (Director of Product Marketing, LambdaTest) - Thank you, Bhupesh. It sounds like these challenges were also very valuable learning moments for the team. With every version of AI, as you said, something new comes up, a new challenge comes up. And that, at the same time, seems to be a learning moment for the team.
So, moving on to the next question that we have: how did you build a team and foster a culture that embraced AI within your QA organization? You did touch upon it in one of the previous questions I asked, but it would be great to hear more in-depth insights.
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - So I think one, like I told you earlier, so we started with the initiatives, right? Then we started showcasing those POC results. We started creating some curiosity in the team, right? And it actually pushed people in a healthy way that I should also do something, right? So that's one thing, right?
The second thing: we started preparing certain sessions on whatever the team has done till now, to deliver in-house, right? Not only to QA but to the entire tiket.com: dev, product managers, QA and all.
That helps in creating curiosity not only in QA but across the entire organization as a whole: okay, this particular team is doing something, they have achieved something, right? So why don't we also try something, correct? So that is the second thing.
Third thing, so let's say somebody did something, right? We were able to get some successful ROI into some use cases. So then, appreciating that in a bigger forum, right? So it actually creates a healthy competition between the team that, okay, I also need to do it, right?
Kavya (Director of Product Marketing, LambdaTest) - Yep.
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - The fourth and most important thing is you've got to sit with your folks, right? You've got to discuss with them: okay, this is the area where we need to tap in, right?
So why don't you guys do some brainstorming, right? Next week, we will meet again and try to bring out some ideas, right? We will listen to your idea, and if it is workable, we will definitely work on it, and you will be the key person for that particular idea.
So that gives you a sense to the people that, okay, I will be the front runner of this particular idea in the organization. I will be the one who will be leading the flag for this particular idea using AI and all.
It actually creates a sense of achievement if they do it. So I think those are a few things, because when you are going into the exploration and innovation side, you cannot just say, okay, from next year onwards, this is your OKR.
You cannot do that; it is not fruitful. It has to come from within. So, yeah, shaping their focus and creating a space for their possible achievements, right? That is the key thing we should be working on to create a culture where they go in that direction.
Kavya (Director of Product Marketing, LambdaTest) - Good to hear that because it seems that culture really is the key, as we also say a lot of times within our team at LambdaTest. And what I'm getting a sense of is that you have taken the time to basically nurture this mindset of innovation and collaboration within your team at tiket.com and that's an impressive job.
Moving on to the last question of the day: for our QA listeners who are interested in exploring AI but are unsure about where to start, what are the two to three key recommendations that you would have for them?
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - So, well, I would say I, as an individual, and we, as tiket.com, have just started the journey. So we are not experts on this AI thing. A lot of people are doing a lot of things, and we are also trying a lot of things, right?
But I do understand, because when we started, we were also in the same space. We had to figure out how to get going, right? So maybe from my journey, I can tell you the very first thing that you should be doing if you have not started yet.
Tap into the industry: go to social media channels, go to different conferences, and try to understand all the different things people are doing on the AI front, right? You watch them, you understand what use cases they are solving, right? Then you figure out which use cases in your ecosystem map to those use cases.
And then, identify what are your top three areas where you want to work on accordingly. So now you have some industry insights, you have some mapping on the workflows that you have, and then you can actually start with the top three key initiatives.
And since people are already doing it, just in case you need some help or some documentation, etc., you might easily find that. So I think that's the key point to start with. Then there are a couple of things I want to say for the newbies, and for people like me also.
So we need to understand that AI is not a silver bullet. So it will solve a lot of problems, but can it solve all of your problems? We don't know. So whenever you are picking up some initiatives, you have to really understand what problem statement you are solving, right?
And what ROI will you be generating, correct? Without that part, you might end up putting in a lot of cost, a lot of energy, a lot of time, but maybe not in the right place, right? So focus on the areas where you want to solve something. You have to understand the ROI of it, and then you go deep into it, right?
Don't just do it under the FOMO pressure, right? That, OK, that company is doing what that person is doing, so I should also jump in, right? Please clearly understand the use case that you want to solve, right?
And the next thing I would say: in certain use cases, you might feel that, okay, there is nothing straightforward you can get from AI yourself. But certain companies and third parties are offering tools and platforms that have pre-solved certain AI use cases, right?
So maybe you want to try out that. But again, the key is you have to be cautious while going there, right? No harm in going there. Definitely should try out a lot of things, whichever becomes fruitful for you. But be very cautious about the ROI. So the cost and the time that you put in versus the return that you will get.
So it has to be fruitful, right? Maybe not fruitful at the start, when you begin the journey, right? Like it was for us also. I mean, the start was slow, but eventually, it kicked off, right? Because as you go, you learn, you improve, and then you get better results.
But please be very cautious about which areas you are picking up and how you are doing it. And the last thing I would say, as it happened with us also: you pick up some initiative, you try out AI, and it might be that AI alone cannot solve that particular thing.
You might have to use a blended approach. You might have to do some customization yourself, in the process or, if it is around coding, in the code, right? So it can be a mix of AI plus your own work, right? So think of all those angles while you are going for AI initiatives.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Bhupesh. These are some fantastic takeaways for our listeners. And thank you for also breaking it down, what I heard were very actionable steps that you personally implemented within tiket.com, which can be implemented across different organizations and teams as well.
Really appreciate your time; what an insightful session it has been. As we wrap up today's discussion, I would like to thank Bhupesh for joining us today and shedding light on how AI can revolutionize quality engineering. We hope this session will inspire you to embrace AI as an enabler of growth and innovation in your QA practice.
But yes, as Bhupesh said, do remember that AI is not a silver bullet. I think that's a phrase, a takeaway, that I will also take with me. To our audience, thank you so much for joining us today. For those who are listening, please subscribe to the LambdaTest YouTube Channel for more episodes in our XP Podcast Series. Thank you, Bhupesh, once again for joining us today. Thanks, everyone.
Bhupesh Mittal (AVP Software Quality Assurance, tiket.com) - Thank you, Kavya, and thank you LambdaTest, again for hosting me and inviting me here. It was a nice experience. Thank you so much.
Guest
Bhupesh Mittal
AVP Software Quality Assurance, tiket.com
Seasoned Quality Leader with nearly 17 years of experience in the testing domain. I lead teams in delivering seamless, high-quality digital experiences and ensuring that QA acts as the "Voice of the Customer". Throughout my career, I’ve held various roles at renowned organizations such as MakeMyTrip, GrapeCity, and HT Digital, where I’ve honed my expertise in testing, process optimization, and aligning quality initiatives with business goals.
Host
Kavya
Director of Product Marketing, LambdaTest
With over 8 years of marketing experience, Kavya is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.