In this XP Webinar, you'll discover how AI and Quality Engineering intersect to drive innovation. Learn about intelligence-driven approaches that enhance testing methodologies, boost productivity, and redefine user experiences.
Head of QE, Accenture
Antony Kaplan is a leader in Quality Engineering with a long background in the Retail space, focusing on point of sale, back office, payments & end-to-end integration. He has a wealth of experience in multiple industries, including Oil & Gas. His focus on excellence and prioritization of the needs of customers, businesses, delivery and operations is what sets him apart as a leader in Accenture.
VP of Alliances & Channels, LambdaTest
At LambdaTest, Sudhir heads Alliances & Channels. Prior to that, he worked as a Mentor & Advisor at upSAVE Analytics and as an Associate Director of Sustenance Engineering at Innovaccer. He holds a Master's in IT and several certifications, including Certified PRINCE2® Foundation and Practitioner from AXELOS Global Best Practice, Certified Scrum Master (CSM) from Scrum Alliance, ITIL V3 Foundation, and RHCE.
The full transcript
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Hello, everyone. Welcome to another exciting session of the LambdaTest Experience (XP) Series. Through XP Series, we dive into the world of insights and innovation, featuring renowned industry experts and business leaders in the testing and QA ecosystem.
I am your host, Sudhir Joshi, heading Alliances and Channels at LambdaTest, and it's my pleasure to have you with us today. Today's webinar will explore the intersection of AI and quality engineering, uncovering how innovation is accelerating in the realm of software testing and quality assurance.
But before we dive in, let me introduce you to our guest on the show, Antony Kaplan. He's the Head of Quality Engineering for Accenture UKI. Antony is a seasoned leader in the field, spearheading transformative initiatives in quality engineering within the dynamic landscape of modern engineering.
With his extensive experience across multiple industries, primarily retail and oil & gas, Antony brings a wealth of insights and expertise to our discussion. In today's discussion, we will delve into the transformative potential of AI in quality engineering, exploring how intelligence-driven approaches are revolutionizing testing methodologies and reshaping the future of software quality.
From AI-powered automation to human-centric testing strategies, we'll uncover the latest trends and innovations shaping the quality engineering landscape. Let's hand over the mic to Antony to share his invaluable insights and expertise with us.
Antony, my first question would be: beyond automating repetitive tasks, how do you think AI can be leveraged to predict and prevent quality issues before they even arise in the SDLC?
Antony Kaplan (Head of QE, Accenture) - Good morning, Sudhir, and thank you for having me. It's very nice to be here. I guess one of my biggest challenges has always been how we can be more proactive in detecting issues before they happen. For example, finding the same defects over and over with every release.
So, things like being able to use AI to analyze historical data: AI can help predict potential quality issues before they occur. This allows teams to be more proactive in addressing these types of issues, also saving time and resources. Other areas include looking for patterns in code or behavior that might lead to defects.
By recognizing these patterns early on, developers and quality engineers can prevent these issues from escalating. Then there's adding AI to automated tests. We automate a lot, certainly in the projects that we work on: we shift left, we automate first.
Adding AI to the automated tools can help simulate real-world scenarios to identify bugs more efficiently than a manual tester would be able to, and this helps catch issues before they impact real users in production. And then the other area is that when issues occur, we always look at root cause analysis.
One thing AI is good at is using big data to analyze issues. Giving it huge amounts of data allows it to look for the root cause, similar issues, and underlying issues, to try and prevent them from occurring in the future.
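The historical-data idea Antony describes can be sketched as a simple risk-scoring step that flags modules likely to regress. Everything below (the two signals, the weights, the saturation points, the module names) is a hypothetical illustration, not a tool he references; a real system would learn these from a team's own defect history.

```python
# Illustrative sketch: flag modules likely to regress, based on historical
# defect counts and recent code churn. Feature names and weights are
# hypothetical; a real system would learn them from the team's own data.

def risk_score(defects_last_3_releases: int, lines_changed: int) -> float:
    """Combine two simple signals into a 0..1 risk score."""
    defect_signal = min(defects_last_3_releases / 5.0, 1.0)  # saturates at 5 defects
    churn_signal = min(lines_changed / 500.0, 1.0)           # saturates at 500 LOC
    return 0.7 * defect_signal + 0.3 * churn_signal          # weight repeat offenders higher

def prioritize(modules: dict, threshold: float = 0.5) -> list:
    """Return module names whose risk exceeds the threshold, riskiest first."""
    scored = {name: risk_score(*signals) for name, signals in modules.items()}
    return sorted((m for m, s in scored.items() if s >= threshold),
                  key=lambda m: -scored[m])

history = {
    "payments":    (6, 400),   # (defects in last 3 releases, lines changed)
    "checkout":    (2, 800),
    "back_office": (0, 30),
}
print(prioritize(history))   # the repeat-offender modules surface first
```

The point is not the particular formula but the feedback loop: feed defect history back into test planning so the riskiest areas get tested first, before the same defects reappear.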
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Great. And with the rise of infrastructure as code (IaC) and continuous integration and delivery, how is the role of the QA engineer evolving within, or around, the DevOps pipeline?
Antony Kaplan (Head of QE, Accenture) - Yeah, so quality engineers within DevOps are evolving to encompass a broader range of responsibilities. We're already doing shift-left testing, so QE engineers are increasingly involved earlier in the development lifecycle and collaborate with developers to ensure quality right from the start. With infrastructure as code, they review code changes in infrastructure alongside application code.
So it's working together, right? I don't think there's going to be a world where clients want to completely do away with automation testers or quality engineers. Modern engineering is looking at more engineering-led teams and capabilities.
And we're still seeing a very big need for quality engineers as part of that development in CI/CD pipeline squads. Quality engineers focus on integration testing, verifying that all the changes to infrastructure and application code work well together. They execute tests and validate the end-to-end functionality of the system in different environments.
You also need to look at the performance of these infrastructure environments. So overall, you can see how quality engineers work quite well in these types of engineering-led CI/CD squads.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Great. My next question is quite interesting, and we're trying to get specific: do you have any specific example from your experience, maybe in the retail space, where AI-powered testing played a critical role in improving the overall quality of the end-user experience?
Antony Kaplan (Head of QE, Accenture) - So one of the things we've been looking at in Accenture is accelerators to help. One of the questions we had at the beginning of the whole GenAI journey was: how do you test GenAI, right? We can develop code, and we can add AI to automation tools, but actually, what is the role of quality engineers in testing that it's working properly?
That it's giving you the right results, that it's giving you the ROI that you're looking for. And so, as I say, we've been working on a number of accelerators. We've been doing a number of proofs of concept to reduce the amount of time. One of the things we've been looking at is API testing.
That's real low-hanging fruit: how you can enhance your automation with AI to quickly produce real-world API scenarios. This is something that manual testers can also do, where you let the manual testers write the steps in plain language.
And then the AI can go off and create the automated scripts for you. Another area we've been looking at is how we can use GenAI to take user stories and acceptance criteria and create test cases for us. We can compare what manual testers or automation engineers used to do around historical test scenarios and then see how GenAI can create even more scenarios around those user stories and acceptance criteria.
So these are all the types of things we're doing in a retail environment. It's extremely important for these industries to go live quickly. Retail environments are constantly looking at market share and at ways to improve customer interaction, keep customers in-store longer, encourage them to buy goods, and deliver a good user experience when they get to a self-checkout or to the tills.
And so all of these things we're doing to reduce the time our testing takes will help the retailers get things out very quickly.
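The user-story-to-test-case flow Antony outlines can be sketched as a prompt-construction step: gather the story and its acceptance criteria into a structured request for a GenAI model. The prompt wording, function name, and sample story below are assumptions for illustration, not a specific Accenture accelerator.

```python
# Hypothetical sketch of the test-generation step: build a prompt from a
# user story and its acceptance criteria, ready to send to a GenAI model.

def build_test_generation_prompt(user_story: str, acceptance_criteria: list) -> str:
    """Assemble a structured test-generation request for an LLM."""
    criteria = "\n".join(f"- {c}" for c in acceptance_criteria)
    return (
        "You are a quality engineer. Generate test cases, including edge "
        "cases beyond the criteria listed, for the following user story.\n\n"
        f"User story: {user_story}\n\n"
        f"Acceptance criteria:\n{criteria}\n\n"
        "Return each test case as: ID, title, steps, expected result."
    )

prompt = build_test_generation_prompt(
    "As a shopper, I can pay at self-checkout with a gift card.",
    ["Balance is validated before payment", "Partial payment falls back to card"],
)
print(prompt)
```

The resulting string would be sent to whatever LLM endpoint the team uses; the model's generated scenarios can then be compared against the historical, manually written ones, which is exactly the comparison Antony describes.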
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Super. Antony, you brought up two interesting points. One is: who's going to test the GenAI-driven code? I think the huge responsibility of our fellow quality leaders and engineers is that they need to make sure this is very well ring-fenced and compliant, because a lot of BFSIs are not very comfortable adopting something that hasn't been tested.
So this is a great new opportunity, but it also needs responsible hands to take good care of it. The other thing you mentioned: I've been with a customer-science company for almost eight years, right? And from retail, what you said about how much effort they put into retaining their loyal customers, down to the level of "what's my store layout", which could be very different from region to region, right?
So there's a lot of science going on behind all that design. I'm sure it's an exciting area to work on, and I'm sure offline, we would love to talk about that.
Antony Kaplan (Head of QE, Accenture) - Yeah, exactly. It's store layouts, it's pricing models, it's promotion types. It's all of these types of things that draw customers in, and they generally, you know, could be different across different areas. Using AI, you can use the data to determine the best promotions for a specific area, the best loyalty schemes, etc. So yeah, it's all very exciting, all very new.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Yeah, exactly. So you've already touched on this: what are some of the biggest challenges you see organizations facing as they adapt to AI-driven quality engineering practices? And, in fact, what would be your advice to organizations that are looking to leverage AI and other similar technologies?
Antony Kaplan (Head of QE, Accenture) - So I've already mentioned data, big data. Data is really important: the availability of data, the type of data. AI relies heavily on good-quality data, and organizations might struggle to access clean, relevant, diverse data. What you put in is what you get out.
There are also ethical and regulatory considerations. We talk a lot about this in Accenture: AI-driven quality engineering raises ethical concerns around bias and privacy, and organizations need to be careful and navigate these types of requirements.
So, you know, to address this, you could start small and scale gradually: begin with a pilot, look at some proofs of concept to demonstrate the value that AI-driven tests in specific areas of the QE process are giving you. Invest in data quality and governance around that.
Make sure that you are putting effort into improving your data quality and accessibility, and putting governance around your AI algorithms so that they give you the right output for the data that you're giving them. And then address your ethical and regulatory concerns.
So make sure that you put guidelines and policies in place so that any concerns associated with AI are being addressed.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Yeah. In continuation of this: given the restrictions and apprehension around GenAI adoption, what do you suggest as a mentor to young quality engineers who are very excited about GenAI? What's your suggestion as an industry leader and practitioner who's seen this world evolve from testing 1.0 to now testing, you know, 4.0 or something?
Antony Kaplan (Head of QE, Accenture) - I think it's important to keep the restrictions in mind, but you need to put your work out there, you need to practice. Young engineers coming into big organizations need to be able to express themselves, and I don't think the restrictions I talk about should stop you.
Whether it's AI or not, you generally need to be sensible and right in what you're doing, however you're developing code or getting things to work. So it's really just: be sensible, think about the bias that sometimes we forget about, think about the ethical side of things.
But I do think my suggestion would be to continue to collaborate with teams and continue to develop.

Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Be exploratory, right?

Antony Kaplan (Head of QE, Accenture) - Yeah, exactly. So my suggestion to young engineers would be to think about the ethical side, think about the bias, but don't let it restrict you from what you're doing. There's a lot we can do, a lot we can gain from AI; just bear those considerations in mind. I don't think they should restrict you.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Yeah, I mean, act responsibly. We've always had tons of referenceable material on the internet anyway, and I strongly agree with you: teams have access to YouTube and pretty much all the websites, right? But they've always been very ethical in their approach and done everything in the right jurisdiction. So I completely agree with you.
Antony, with all the investment that organizations like Accenture are making, and you spoke about a few things, how can quality leaders effectively measure the ROI of such AI-powered tools and strategy implementations?
Antony Kaplan (Head of QE, Accenture) - So it's not dissimilar from any POC you do, whether it's AI or not. I think the key thing to remember is time saving. Measure the reduction in testing time achieved through automation with AI-powered tools compared to historical testing; that should be quite easy. Through that, look at the cost reduction.
So evaluate how much time you're saving versus the cost reduction you're getting. The key thing for me, and this is really something I look at, is how effective it is in reducing or detecting defects early on. One of the most frustrating things that can happen is finding a defect just before the pilot, right? We've all seen it. We've all been there.
And looking at AI-driven tools, we really need to see how we can detect defects much earlier on through the use of the data we're giving them, including the historical data, so that they can start being a bit more proactive. And then test coverage, I guess, is the other thing, right?
So like I said before, compare the increase in your test coverage achieved through your AI-powered tools versus the test cases you had previously. Through the use of user stories and ACs you can then do actual comparisons, and hopefully get even more coverage while completing the testing quicker through time savings and cost reductions. These are the types of areas I would see as ROI.
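The four ROI measures Antony lists (testing-time reduction, cost reduction, early-defect detection rate, coverage increase) can be sketched as a small before/after calculation. All figures and field names below are hypothetical placeholders, not real project data.

```python
# Hypothetical before/after ROI comparison for an AI-assisted testing pilot.
# Every number here is an invented placeholder for illustration only.

def pct_reduction(before: float, after: float) -> float:
    """Percentage reduction from before to after (positive = improvement)."""
    return round(100.0 * (before - after) / before, 1)

baseline = {"test_hours": 400, "cost": 20000, "defects_found_pre_uat": 30,
            "defects_total": 60, "requirements_covered": 120}
with_ai  = {"test_hours": 250, "cost": 15000, "defects_found_pre_uat": 48,
            "defects_total": 60, "requirements_covered": 150}

time_saving    = pct_reduction(baseline["test_hours"], with_ai["test_hours"])
cost_reduction = pct_reduction(baseline["cost"], with_ai["cost"])
early_before   = baseline["defects_found_pre_uat"] / baseline["defects_total"]
early_after    = with_ai["defects_found_pre_uat"] / with_ai["defects_total"]
coverage_gain  = with_ai["requirements_covered"] - baseline["requirements_covered"]

print(f"time saving: {time_saving}%, cost reduction: {cost_reduction}%")
print(f"early detection: {early_before:.0%} -> {early_after:.0%}")
print(f"requirements covered: +{coverage_gain}")
```

The value of framing it this way is that each metric has an explicit baseline, so "the AI tool helped" becomes a measurable claim rather than an impression.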
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Yeah, very interesting. And I really like what you said earlier in the conversation: some of this is direct cost, but some part of it is all about the user experience of whatever you're delivering, and what you said towards the end, right?
You don't want to have a defect in the production because, okay, the cost of fixing it will be high, but the impact of that will be huge in terms of end-user experience. As an organization, all they are striving for is an excellent customer experience.
Antony Kaplan (Head of QE, Accenture) - Exactly. And we've been striving for many years to shift left. We talk a lot about automating early, even as part of the design stage. Finding defects early on using AI-powered tools should give us even more opportunity to do that, so I'm really excited about it.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Great. So AI, well, GenAI, is definitely one of the emerging ones, but what else, especially in the QE world? What kind of technology do you think is emerging in the next five to ten years that leaders and practitioners should watch out for?
Antony Kaplan (Head of QE, Accenture) - So I know I've talked a bit about reducing testing time, saving costs, reducing more of the manual testing, but actually, I do see a world where when AI matures, and it's still early now, right? And we're looking at maybe, as you say, sort of five to 10 years ahead.
When AI matures, we may see a resurgence of manual testers. And we've actually been talking about this within my team over the last few weeks, months. With the use of no code, low code powered by AI, you don't need to be an automation engineer or a data analyst to write code. And I see this progressing and maturing in the years to come.
So I don't think we'll ever see manual testers disappearing. I think we'll probably see a bit of a resurgence of that. With AI, shift-right testing will focus on testing production environments. One of the things we've struggled with a little in the past is how to gather real-time feedback quickly, and how to use production data effectively to gather enough feedback.
That lets you address issues quickly, learn from what you've done in order to shift left later on, and gives you the right feedback loops. So this is one area that I think AI can certainly help with. Cybersecurity threats have increased and are increasing; we see it all the time in the news.
So DevSecOps is now where we integrate security practices into the DevOps pipeline. This helps with security considerations, and we can prioritize them throughout the software development lifecycle. And then, as I mentioned before, as organizations continue to mature their use of AI, we'll probably start to see more conversations about its ethical use.
It's really important to understand the output of your AI-based software and ensure it's producing results that are ethical and right for your organization. So I think, for me, those are the key things that we'll be looking at.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - I mean, it's great to hear your view on manual testing. I can add one flavor, or rather an example, right? We've always had these IVR systems, and whenever you had an urgent issue, you would always rush to talk to someone. So transactional things can be taken care of, raising a complaint, etc., but you always need this human touch.
And I'm sure this may change five years down the line, but there's no easy way to really test the user experience. That's where you really need humans doing that kind of interaction with your application, with your systems. What is my user journey to achieve this outcome, right? Ordering a product or adding my card, etc., and maybe a few more complex examples.
But essentially I completely agree with you that this is not making people redundant. This is about empowering them so they can focus on the right things where we need that kind of human intervention and experience.
Antony Kaplan (Head of QE, Accenture) - Sorry, but yes, it's absolutely not making people redundant, and we don't like to use that word. It's about changing the role, right? It's about giving people the ability to do something more interesting than sitting and writing manual test scripts for eight hours a day.
It's about utilizing the power of AI to do that work, and then training and upskilling people on how to analyze the results and continue in other areas.
Sudhir Joshi (VP of Alliances & Channels, LambdaTest) - Super. I've done multiple conversations, Antony, but this has been by far the most productive and insightful. As we wrap up today's show, I would really like to thank you, Antony, for joining us. I really understand how busy your schedule is, so thank you so much for taking the time out.
I hope our audience has gained quality insights from Antony's experience and the kind of experiments he's doing within his organization. Also, thanks to everyone for tuning in and being part of this discussion around AI and quality engineering.
Stay tuned for more insightful episodes and engaging conversations as part of our XP series. Until the next time, keep innovating and exploring new horizons in testing and QA. Thank you and goodbye.
Antony Kaplan (Head of QE, Accenture) - Thanks for having me.