February 07th, 2025
38 Mins
Shailesh Gohel (Guest) - Head of Quality Engineering, ProductSquads
Kavya (Host) - Director of Product Marketing, LambdaTest
The Full Transcript
Kavya (Director of Product Marketing, LambdaTest) - Agile, DevOps, and AI-driven testing have revolutionized how quality engineering teams operate. But these advancements also bring unique hurdles. From managing complex tech stacks and shorter release cycles to integrating testing into CI/CD pipelines and leveraging AI/ML for better coverage and defect prediction, QE professionals face a dynamic environment that demands innovation and collaboration.
In this session, we will uncover strategies to tackle these challenges head-on, offering practical insights and real-world examples from expert leadership of ProductSquads. Hi, everyone. Welcome to another exciting session of the LambdaTest XP Podcast Series. Through XP Series, we explore a treasure trove of insights and innovations brought to you by renowned industry experts in the QA and testing ecosystem.
I'm your host, Kavya, Director of Product Marketing at LambdaTest, and I'm thrilled to have you with us today. Today's session is all about redefining quality engineering so that you can overcome modern challenges using Agile, DevOps and AI-driven testing methodologies. Let me introduce you to our esteemed guest speaker, Shailesh Gohel, Head of Quality Engineering at ProductSquads.
With over 18 years of diverse experience in IT, Shailesh is a trailblazer in quality engineering, project management, and digital transformation. He has successfully scaled QA teams, implemented cutting-edge practices like AI-driven QA processes, and even modernized legacy systems for cloud migration.
Beyond his professional milestones, Shailesh fosters collaboration through CSQA, a thriving QA community, and is a sought-after speaker at industry events. And when he's not revolutionizing QE, you will find him exploring new cultures and drawing creative inspiration from his travels.
It's a pleasure to have you here today, Shailesh. Today's session is packed with valuable insights for tackling pressing process and testing challenges in QE. Shailesh will guide us through integrating Agile and DevOps principles into testing workflows, scaling automation, and harnessing AI/ML.
So now, without further ado, let me hand over the stage to Shailesh. Shailesh, we're excited to have you here. The floor is yours. Please let us know a bit about your professional journey in QE.
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Hey, Kavya. Thank you very much; thanks a lot for having me over here. I have been following the XP Podcast for a long time, hearing from all the experts, and today I'm really glad to be part of this podcast as one of your guests. Thank you very much for that, and thanks a lot for a great introduction. You already covered a lot of things about myself.
Just to add something to that: I'm Shailesh Gohel, working as Head of Quality Engineering at ProductSquads Technolabs, based out of Ahmedabad. I have over 18 years of experience in the industry, mostly in quality engineering and project management, and for the last couple of years I have been in a leadership role.
I started my journey in 2006 as a test engineer, and since then I have worn many hats: Automation Engineer, Test Lead, Test Analyst, QA Practice Lead, Performance Engineer, Automation Architect, QA Manager, Senior QA Manager, and now Head of Quality Engineering at ProductSquads.
So far, I have worked across manual, automation, functional, and performance testing in various domains. Over the last couple of years, I have been more involved in digitization, the modernization of legacy products, both in this organization and in previous ones.
Most of our customers are product companies with legacy systems that they want to migrate from desktop to the cloud. I'm heavily involved in creating the testing strategy for such complex migrations and the automation roadmap, helping them fulfill their dream of modernizing their systems.
And when I'm not working, I like to travel. Thanks for mentioning CSQA, the QA community we started three years back, especially for people in Ahmedabad and Gujarat. We have done many events, online and offline, in the last three years: meetups, webinars, workshops, conferences, and more. So thank you for that.
Kavya (Director of Product Marketing, LambdaTest) - That sounds great. Thank you so much, Shailesh, for that introduction. I'm very excited to hear some actionable examples from ProductSquads' own journey that you have been leading, especially from a QA perspective. So let me jump to the first question: what are the biggest pain points your team faced before redefining your QE approach?
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Great. So as I mentioned, ProductSquads is involved in modernizing QA, moving from legacy to more modern applications. Before we redefined our processes and our QE strategy, there were a couple of challenges we were facing. A few of them were unstructured processes and insufficient test coverage: there was not a really good set of regression or integration test cases covering each area of the product we wanted to test.
There was a lack of test data management. Data is key for any product. We were testing with dummy data, but until and unless you test with something close to what your customer is going to use, there is a high chance of failures or production leakages, so there was not an adequate amount of data. Automation was quite limited.
There was limited focus on the latest tools and technologies because of a lack of bandwidth; all the bandwidth was going into manual effort. There was no real focus on non-functional testing areas, and with all of those things we ended up putting a lot of effort into manual testing: designing test cases and just executing them manually.
We didn't have anything like CI/CD, and we spent a lot of time on these things, which resulted in production leakages because there was no proper data and no proper test coverage. These are the kinds of challenges we were facing before we redefined our QE strategy.
Kavya (Director of Product Marketing, LambdaTest) - Thank you for shedding light on that, because these are real-world challenges. Every team faces them at some point and then has to step back and come up with innovative solutions. This brings me to the next question: how did Agile, DevOps, and AI/ML specifically impact your QE strategy, and what changes did you implement?
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Yeah! So we laid out the issues we were facing and how we wanted to tackle them. One of the issues was that we didn't have automation, and due to that we needed to do lots of manual testing. There was no good coverage because things were scattered here and there.
Another reason was that there was no effective collaboration between the teams; that was one of our pain areas. We didn't have a fast feedback loop. Because of that, we got feedback at a later stage and ended up with a short amount of time to replicate and address issues before production deployment, which is linked to the production leakages. To resolve all of this, we started moving deeper into Agile.
We moved more and more into Agile, and we started implementing DevOps and AI. With the implementation of Agile, we got the shift-left approach, which really helped us out. We started testing things early, and we achieved a great level of collaboration between the teams: QA became very actively involved in everything, like backlog grooming, sprint planning, and requirement discussions, which brought the teams much closer together.
Earlier, testing happened in bits and pieces; now testing became an integrated part of the whole development cycle. With the shift-left approach, we got early feedback from testing, we increased our test coverage, and we collaborated more actively as a team, which really helped us reach all the corner cases and edge cases.
We started doing retrospectives, which helped us a lot. With retrospectives, we stepped back and asked where we went wrong and what we needed to correct. That ensured really good alignment between all the teams and the feedback loops. We also started implementing BDD in most of our projects and products.
With BDD integration, we got very clear, testable requirements that we could execute directly. So this is how Agile helped us out; the main benefit was the shift-left approach.
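To ground the BDD point, here is a minimal sketch of how a Gherkin scenario maps to executable Java step definitions with Cucumber and Selenium. The feature wording, URL, and element IDs are illustrative assumptions, not ProductSquads' actual suite:

```java
// Feature file (src/test/resources/features/login.feature):
//   Feature: Login
//     Scenario: Valid user signs in
//       Given the user is on the login page
//       When the user signs in with valid credentials
//       Then the dashboard is displayed

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSteps {
    private WebDriver driver;

    @Given("the user is on the login page")
    public void userIsOnLoginPage() {
        driver = new ChromeDriver();
        driver.get("https://app.example.com/login"); // hypothetical URL
    }

    @When("the user signs in with valid credentials")
    public void userSignsIn() {
        driver.findElement(By.id("email")).sendKeys("qa.user@example.com");
        driver.findElement(By.id("password")).sendKeys("not-a-real-password");
        driver.findElement(By.id("submit")).click();
    }

    @Then("the dashboard is displayed")
    public void dashboardIsDisplayed() {
        boolean shown = driver.findElement(By.id("dashboard")).isDisplayed();
        driver.quit();
        if (!shown) throw new AssertionError("Dashboard was not displayed");
    }
}
```

The plain-language scenario doubles as the "clear, testable requirement" Shailesh mentions: product owners can read it, and the step definitions make it executable.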
Then we implemented DevOps. With Agile we had built lots of automation, and automation also brings DevOps into the picture. So we developed a really solid automation framework that reduced the manual effort we were putting into functional and regression testing. That freed the team to spend more time on quality: they could focus on the critical areas to test, reach more corner cases, and implement more edge-case tests, which helped us uncover hidden bugs. Automation let us run our regression and integration tests overnight, and along with that we implemented CI/CD with quality gates, which gave us regular automation execution with each build.
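The transcript doesn't detail the team's gate configuration, but the quality-gate idea can be sketched as a small check that fails the CI stage when the automation pass rate drops below a threshold. The results-file format and the 95% bar below are assumptions for illustration:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal quality-gate sketch: reads a two-line results summary
// (passed count, then failed count) produced by the test run and
// exits non-zero so the CI stage fails when the pass rate is too low.
public class QualityGate {
    public static void main(String[] args) throws Exception {
        var lines = Files.readAllLines(Path.of(args.length > 0 ? args[0] : "results.txt"));
        int passed = Integer.parseInt(lines.get(0).trim());
        int failed = Integer.parseInt(lines.get(1).trim());

        double passRate = 100.0 * passed / (passed + failed);
        System.out.printf("Pass rate: %.1f%%%n", passRate);

        if (passRate < 95.0) {
            System.err.println("Quality gate failed: pass rate below 95%");
            System.exit(1); // non-zero exit fails the pipeline step
        }
    }
}
```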
So we started getting early feedback on how things were going. Say we get two builds in a day and we execute our automation tests against both, and one build fails. Everyone, developers, product, and QA alike, gets to know what is failing and where the issue is; they fix it within the same day, it is deployed, the automation runs again, and we confirm everything is good and nothing is breaking before we move ahead.
DevOps also helped us a lot with environment parity. Earlier we had only one environment, a dev or QA environment, but nothing that looked like production. In a production environment you generally have high availability, load balancers, and many other things; if you don't have such an environment for your testing and you are using a very simple setup, there is a real possibility of configuration issues or environment-level issues slipping through. We overcame that with DevOps, using containerization and Kubernetes to build environments that look like production. Our automation runs on those environments, and we run it in parallel to get results faster.
So this is how DevOps helped us out. As I said, one of the challenges was the lack of non-functional testing, or rather, no focus on non-functional testing. Because of that, we were getting complaints from our customers about slowness and similar issues.
With the implementation of DevOps, we started performance monitoring and actively practicing performance test engineering. We started identifying the bottlenecks and the overall performance of the application, which helped us mitigate those performance and non-functional challenges. That has really benefited us. With Agile and DevOps we wanted to move really fast in this shift-left environment and implement automation everywhere, and this is where AI comes in.
The team started using AI for everything we do in testing. Say we want to write a test case: instead of writing it by hand, we started using AI tools and learning prompt engineering. We give the prompt to those tools and get BDD test cases readily available. As usage increased and the models were trained on more of our context, things kept improving.
Now the models are trained to the point where we get around 95-98% accuracy for the kind of test cases we want to write, which helped us achieve 2x-3x results. It typically took 8-9 hours to write around 30-35 test cases; now we can do that within half an hour. So we saved roughly seven hours per batch.
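As a rough illustration of that prompt-driven workflow, the sketch below sends a requirement to an LLM API and asks for Gherkin scenarios back. The endpoint, request shape, and class name are hypothetical; substitute your provider's actual API and a proper JSON library:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestCaseGenerator {

    // Hypothetical endpoint and key; not a real service.
    private static final String AI_ENDPOINT = "https://api.example-llm.com/v1/generate";
    private static final String API_KEY = System.getenv("AI_API_KEY");

    public static String generateBddScenarios(String requirement) throws Exception {
        // The prompt-engineering pattern described in the episode: feed the
        // requirement, ask for Gherkin covering positive, negative, edge cases.
        String prompt = """
                You are a QA engineer. Write Gherkin (Given/When/Then) scenarios
                for the following requirement, covering positive, negative,
                and edge cases:
                %s
                """.formatted(requirement);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(AI_ENDPOINT))
                .header("Authorization", "Bearer " + API_KEY)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"prompt\": " + toJsonString(prompt) + "}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // Gherkin text, to be reviewed before committing.
    }

    // Minimal JSON string escaping for the sketch; use a JSON library in practice.
    private static String toJsonString(String s) {
        return '"' + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + '"';
    }
}
```

The human review at the end matters: as Shailesh notes later, generated output is validated rather than trusted blindly.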
We put those saved hours into testing other areas, like exploratory testing. That is where AI really helped us. We also started using AI for test data generation. One of the main needs was production-like test data, and AI helped us a lot in generating synthetic test data. The data we wanted was not so straightforward that anybody could create it; you need good domain knowledge, which not everyone has. This is where AI helped: even without deep domain knowledge, we provide our inputs and it creates the data. With a really rich set of test data, we started uncovering the hidden issues that had been escaping as production leakages, and that helped us reduce them.
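ProductSquads used AI for this, but the underlying idea of synthetic, production-like records can also be sketched with an open-source library. A minimal example using Java Faker, with illustrative fields:

```java
import com.github.javafaker.Faker;

public class SyntheticDataSketch {
    public static void main(String[] args) {
        Faker faker = new Faker();

        // Generate production-like customer records without touching real PII.
        for (int i = 0; i < 5; i++) {
            String name = faker.name().fullName();
            String email = faker.internet().emailAddress();
            String address = faker.address().fullAddress();
            int orders = faker.number().numberBetween(0, 50);
            System.out.printf("%s | %s | %s | %d orders%n", name, email, address, orders);
        }
    }
}
```

Either way, the point is the same: realistic-looking data that exercises the paths real customers hit, without copying production data into test environments.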
AI also helps us generate scripts automatically, and it helps with reporting dashboards where we can do predictive analysis and generate trends. So this is how AI, DevOps, and Agile helped us overcome our challenges.
Kavya (Director of Product Marketing, LambdaTest) - That is a very comprehensive approach, Shailesh. I am sure our audience will gain a lot of insight into how you looked at the pain points and came up with solutions, and not just in one single area, right? You took Agile, DevOps, and AI/ML together and implemented a comprehensive solution.
I'm sure that every testing team and every dev team out there is looking to understand how they can ship faster. And, of course, I made a note of all the interesting points you mentioned: there have been multiple approaches that you implemented, right?
From ensuring effective collaboration between teams to create a faster feedback loop, all the way to identifying the pain points, adopting BDD, and using AI/ML for test data generation. And of course, the way you went about implementing the shift-left approach. All of these are very genuine and useful approaches that our listeners can implement. So thanks for sharing that.
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Thank you!
Kavya (Director of Product Marketing, LambdaTest) - Now, moving on to the next question, how does ProductSquads manage the challenges of shorter release cycles and the need for a faster feedback loop? You, of course, mentioned that briefly in the previous question. So how do you manage the challenges that arise with shorter release cycles?
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Sure. With a shorter release cycle, the main challenge is that you don't have sufficient time for testing. You have a huge amount of regression tests to cover and not enough time for them. There might be a lack of automation, plus frequent changes to adapt to within a short window, and you still need to ensure effective test coverage.
These are some of the challenges we have faced or seen with shorter release cycles, and you need a quick feedback loop to handle them. A couple of things I have already covered, so I might be repeating myself. Shift-left is something that helped us out: it gets you early feedback and better coverage.
Early automation really helped us and saved us a lot of time, and using AI helped us create that automation and reduce test-generation time. Quality gates and everything helped us focus more on risk-based testing and exploratory testing.
This is how we have addressed the challenges of shorter release cycles. Things look much better now than in the past, because a quick, continuous feedback loop is already in place.
Kavya (Director of Product Marketing, LambdaTest) - Thank you, Shailesh, because short cycles are definitely a challenge for most QE teams, so it's good to hear how you implemented the fast feedback loop for your team. That is definitely a practical approach. Moving on to the next question: how do you effectively scale your test automation efforts to keep pace with the growing number of features and functionalities?
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Okay, so one thing I've seen many times is that when it comes to automation, people start treating it like rocket science. To everyone doing automation, my one message is: keep things very simple. Don't think complex; don't try to implement something overly complicated.
One thing we had, and something I have also heard about or seen at many other companies, is multiple products using multiple frameworks and tools. Product A uses technology X, product B uses technology Y, and product C uses the same technology Y but with a different framework. Something similar was true for us, and it was stopping us from moving people between teams or collaborating, because every move required training on a new tool or framework.
So the first thing we did was standardize our frameworks and tools. We did a feasibility study and chose the tool we wanted to go ahead with: Selenium with Java. Why Java? Because we analyzed what skill sets we had on the floor, what languages people knew or could be mentored in quickly, what community help we would be able to get, and how well the framework and tool fit our requirements and our products.
We standardized our framework: instead of multiple frameworks, we went ahead with a single framework that is stronger and more scalable. And we built it with lots of reusable methods and utilities.
So when testers want to automate something, they can start building automation scripts with minimal effort, which accelerates our automation work. We are heavily using AI for automation, whether for writing the tests or generating comments and documentation, and we are also experimenting with auto-healing. The third thing was enabling the team: we trained them so they can write automation scripts efficiently.
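As an illustration of the kind of reusable methods such a framework exposes, here is a small Selenium helper class in Java; the class and method names are illustrative, not the team's actual framework:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;

// Reusable UI helpers: test scripts call these instead of
// re-implementing waits and clicks in every test.
public class UiActions {
    private final WebDriver driver;
    private final WebDriverWait wait;

    public UiActions(WebDriver driver) {
        this.driver = driver;
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    public WebElement waitForVisible(By locator) {
        return wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
    }

    public void click(By locator) {
        wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
    }

    public void type(By locator, String text) {
        WebElement field = waitForVisible(locator);
        field.clear();
        field.sendKeys(text);
    }
}
```

Centralizing waits like this is what lets a tester write a script "with minimal effort": the script calls click and type and never re-implements synchronization.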
That covers building the framework. After building, suppose we have 500 test cases taking X amount of time; that time also has to shrink. So we started looking at how to run things in parallel, using various tools and plugins for parallel execution.
We also started leveraging cloud platforms that let us run parallel executions, such as BrowserStack or LambdaTest with its HyperExecute capability. All of this is how we scale our automation efforts while minimizing automation execution time.
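A minimal sketch of what pointing a test at a remote cloud grid looks like with Selenium and TestNG; the hub URL is a placeholder for the authenticated endpoint a provider issues:

```java
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.Assert;
import org.testng.annotations.Test;
import java.net.URI;

// Runs one browser session against a remote grid. Pointing many such
// tests at the same hub and setting parallel="methods" with a
// thread-count in testng.xml is what compresses an overnight run.
public class RemoteGridTest {
    // Hypothetical hub URL; replace with your provider's endpoint.
    private static final String HUB_URL = "https://hub.example-grid.com/wd/hub";

    @Test
    public void homePageHasTitle() throws Exception {
        RemoteWebDriver driver =
                new RemoteWebDriver(URI.create(HUB_URL).toURL(), new ChromeOptions());
        try {
            driver.get("https://example.com");
            Assert.assertFalse(driver.getTitle().isEmpty());
        } finally {
            driver.quit();
        }
    }
}
```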
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Shailesh, for defining that. It definitely sounds like a great strategy because test automation, especially at scale, can be a bit difficult, but the approaches that you mentioned definitely could make it more efficient and sustainable in the long run. So thanks for sharing those approaches.
And what are the key considerations when selecting and implementing AI/ML tools for testing within a QA team? Because I'm sure that our audience would want to know more about those.
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Yeah, so that was again one of the challenging things for us: what to choose? There are a great many options in the market. There are plugins available; there are GPT-style assistants like ChatGPT, Gemini, and Copilot; there are models to pick between; and there are paid options available.
There are low-code platforms that apply AI very efficiently, and there are tools specifically for generating test cases. We analyzed them, and they are all good; I won't say one is bad and another isn't. What you should do, and what we did, is first check how a tool aligns with your problem statement: what problem do you want to solve?
Our problem was two things: we wanted to generate test cases using AI, and we wanted to write our automation scripts using AI. So we looked for plugins that would help with that. The second thing we considered, once we had candidate plugins, was how well they integrate with our current tech stack.
We are using Jira, and we can integrate with that, so a tool can read my requirement directly from Jira and generate the test case for me. We are doing CI/CD using Azure; can we integrate these AI tools and plugins with that? So integration capability is again a key criterion. The third thing was accuracy. We had four plugins, including the Continue plugin.
We also had some ChatGPT plugins, and we started checking their accuracy for the things we wanted to do. We learned that some of the tools were good if you were working in Python or JavaScript but not good with Java, and my tech stack meant I needed something that generated accurate Java code. We saw that the Continue plugin was good for that, so that is the one we use now.
We went with that plugin because it is the most accurate for our problem statement and tech stack; that is the third criterion. The fourth, as I said in my answer about automation: don't make it complex. Go with what is easy to use. We started with ChatGPT and Copilot; however, the team members were more comfortable using plugins integrated directly into VS Code.
So we went with those because of the ease of use. And after all of this, we did lots of piloting, and again the feedback loop helped us choose the correct AI tool.
Kavya (Director of Product Marketing, LambdaTest) - Thank you, Shailesh, because I am sure that the QE teams out there, a lot of times they'll also have to look at the tech stack that they are using, as you mentioned, right? And then figure out what works best for them based on the team's abilities or skills as well. Very valuable insights. Moving on to the next question. How do you address the potential biases and limitations of AI/ML models in your testing processes?
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Okay, yeah, so bias, that's a very good question: "I like this thing, I don't like that one, this is good, that is not." First of all, I'm a believer that nobody can replace humans. I'm an old-school guy; I started my journey in 2006, when things were mostly desktop-based, and then the web came along.
Then automation came, and people said we wouldn't need manual testers anymore, but manual testers still exist; the same story repeated with each new wave. Similarly, when AI came, people started saying AI will replace us. I was thinking, no, AI will help us make our lives easier.
To do that, we need that mindset: instead of dwelling on the biases we've heard about, we need to start implementing and touch things practically. With AI or ML, the name itself is machine learning: it's a machine, and a machine learning program needs to be taught; once it learns, it performs. I have a 12-year-old son, and if I want him to do something,
I have to keep feeding him more information; only then can he do things better. Similarly, you need to train your model, and you need to check any AI model or plugin against very diverse test data. Selenium and Java are one thing, but if I test it only on that, I may or may not get reliable results.
That alone will not help us. So we check with different languages and different types of domains, which tells us how accurate the model really is. We use diverse data to train our models. One example: to train the model on test cases and have it learn our products, we took our huge set of help documents, started feeding them to our agents, and then started asking questions.
With different product documentation fed in, it started giving more accurate results. That is one way we overcome these biases. We also do regular validation. What tends to happen is that we are human: once we see a person, or an agent, doing well, we start to trust it blindly.
We don't do that. We validate regularly, and there is a good amount of human intervention; we don't do everything autonomously. We use AI to make things quicker and better, not to do everything automatically, and keeping a human in the loop is really helping us.
The third thing is that we make ourselves aware of the limitations of the tools we are using. We know a given tool can do only so much and not more, and that message goes up and beyond our level, so no wrong expectations get set. This is how we cope with the limitations.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much. It's a very crucial point, because AI/ML has so much potential, but at the same time, as you rightly said, initially there was a lot of fear and questioning among people about whether it would end up replacing testers or dev teams, for instance.
But as you rightly said, that's not the case. And of course, the approaches you shared reflect very thoughtful work. In fact, LambdaTest also came up with KaneAI, our testing assistant, which we call the world's first end-to-end software testing agent. As you said, it's again something to help testers, so that they can author and evolve their end-to-end test cases using natural language.
So AI/ML is not going to replace testers at the end of the day. Great, sounds great. Moving on to the next question: how does ProductSquads ensure its QE teams have the necessary skills to effectively leverage Agile, DevOps, and AI/ML technologies? I'm sure you will have a lot of insights, because the quality engineering space has been evolving for nearly a decade now, right?
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - To start with: though we started the company in 2023, we began with the mindset of creating a really good culture of mentorship and learning. Really good leaders joined the company first, and then we built the whole team around them.
That has helped everyone, because there's already a leader in place as we recruit and hire more people. Once they join, we have a really good onboarding program, which includes training in Agile, automation, and AI.
Training in automation is not only about how to do automation; before that, it is about why we are doing automation, how to write efficient automation, and what to automate and what not to automate. That is our key focus, rather than only the how. For the how, there are so many online courses and certifications people can take. But as leaders, what we focus on is the why and the what. It's a similar thing with Agile.
You may be using Agile, but how do you implement it efficiently? Being Agile is not a methodology or a process; it's a mindset, and that is what we have cultivated at ProductSquads: we are ready to adapt to any new change.
We have run good internship programs for the last two years. We hire interns, and for them we have a tailor-made two- to three-month internship program that covers all of these things, automation along with other industry-standard courses. That's something we do to keep our team up to date with the latest trends and aware of what the right things to do are, and what to avoid.
That covers learning. Apart from that, we have regular mentorship and knowledge-sharing sessions, as well as workshops and tech sessions that run weekly, bi-weekly, or monthly. These help people learn new things and share anything they have learned or implemented, any small or big innovation they have made in their work.
They can come up on the floor and share those things, which motivates others and shows them what's possible. So there are regular knowledge-sharing sessions happening. We are also very keen on hackathons: in a year we do at least two or three, some general tech-focused, some specialized in Agile, some in testing and automation.
And that helps us out, because we may learn something over a few days, but until we get hands-on, we cannot be truly confident. A hackathon gives you 24 or 48 hours where you just code and code, getting an immense amount of hands-on time without looking back, sometimes without even sleeping.
People really enjoy it, and we get really good hands-on practice from it. Even those of us in leadership roles, doing mostly strategic work, get a chance to go hands-on. So hackathons really help us; they create good, creative collaboration and help people learn new products, new technologies, and so on.
We also use online platforms like LinkedIn Learning and certification programs such as AWS, which help the team stay up to date. This is how we are scaling our team on all these technologies.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Shailesh. Very insightful approaches again, especially how you combine internship programs and hackathons and then also draw on the learning materials available online. And the end goal for every leader is to make sure the automation that happens is efficient.
So, of course, investing in that skill development definitely helps the team, but also, I think, helps drive the success of the organization at the end of the day. What an incredible session. Thank you so much, Shailesh, for sharing your deep expertise and strategies. Yeah, it has been a pleasure hosting you.
I'm sure that our listeners would definitely like to implement the strategies that you shared for overcoming challenges that they face in quality engineering wherever possible. And, of course, the real-world examples that you have shared again add a lot more value to them. To our audience, thank you so much for joining us and contributing to the engaging discussion. Stay tuned for more episodes of LambdaTest XP Series, where we continue to transform ideas and innovations in testing. Thanks once again, Shailesh, I really appreciate it.
Shailesh Gohel (Head of Quality Engineering, ProductSquads) - Thank you, Kavya. Thanks a lot once again for having me over here. Thank you, everyone. Have a nice day, and bye.
Guest
Shailesh Gohel
Head of Quality Engineering, ProductSquads
Shailesh Gohel, Head of Quality Engineering at ProductSquads Technolabs, brings over 18 years of expertise in Quality Engineering, Project Management, and Leadership. He has built and scaled QA teams, mentored professionals, and driven innovation in functional, automation, and performance testing. A digital transformation expert, Shailesh has modernized legacy products, migrated systems to the cloud, and implemented advanced practices like RPA and AI-driven QA processes. As the founder of CSQA, he fosters collaboration within the QA community. A sought-after speaker, Shailesh shares insights at tech events and enjoys traveling, exploring cultures, and finding creative inspiration beyond his professional pursuits.
Host
Kavya
Director of Product Marketing, LambdaTest
With over 8 years of marketing experience, Kavya is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.