In this XP Series Episode, you'll embark on a journey into the future of testing, leveraging AI for impactful test generation. Stay ahead with innovative strategies and redefine your QA approach.
Adam Sandman, CEO & Founder, Inflectra
Adam Sandman is the founder and CEO of Inflectra. He is responsible for product strategy, technology innovation, and business development. He lives in Washington, D.C., with his family. Prior to founding Inflectra, Sandman worked as a director for Sapient Government Services, where he was in charge of development with the U.S. Marine Corps and other government agencies, and was responsible for leading many capture teams and writing whitepapers and position statements to build Sapient's reputation as a leader in the defense space. He studied physics at Oxford University. When he is not working, he can often be found giving talks at events such as the NDIA Agile in Government Summit, STAR East, the Software Testing Professionals Conference, and Swiss Testing Day. He is passionate about economic empowerment and helping to bridge technology opportunity gaps in the developing world.
Dr. Sriram Rajagopalan, Agile Evangelist, Inflectra
Dr. Sriram Rajagopalan is a distinguished Agile Evangelist and leader at Inflectra, spearheading our training and implementation services. He plays a pivotal role in assisting clients with product adoption and transformative process improvement. Dr. Rajagopalan has successfully established a Level 5 PMO, integrating institutionalized processes that underscore operational excellence. His academic prowess is marked by a Ph.D. in Organizational Leadership, complemented by a plethora of certifications, including PfMP, PgMP, PMP, PMI-ACP, and many others. As an active scholar, author, and thought leader, he has enriched numerous scholarly journals and practitioner columns. Beyond his professional achievements, Dr. Rajagopalan is deeply committed to giving back to the community as a mentor, trainer, and coach.
The full transcript
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Hello, everyone! Welcome to another exciting session of the LambdaTest Experience (XP) Series podcast. Through XP Series, we bring to you the latest innovations and best practices in the field of quality engineering, product development, and software business in general. We connect with industry experts and business leaders in this QA ecosystem and get a chance to learn from them.
I'm your host, Mudit Singh, Head of Growth & Marketing here at LambdaTest, and it's a pleasure to have you all with us today. Joining me today as guests of the show are Adam Sandman, CEO and Founder at Inflectra, and Dr. Sriram Rajagopalan, who is an Agile Evangelist at Inflectra.
So to introduce Adam: he drives product, technology, and business development initiatives at Inflectra. In the past, he was a director at Sapient Government Services, where he was in charge of development with the US forces and other government agencies.
Dr. Sriram has three decades of experience in the software industry, and he has a Ph.D. I was going through his LinkedIn profile; he has more than 30 certifications. I don't know how he found the time to do other things. He has certifications in program management and product management, and on top of that, he's also a contributor to Project Management Institute guides. Apart from all of this, he is also a community mentor, trainer, and coach.
Both of them are a very, very important part of the community, and they have spoken at a lot of conferences. Adam and Sriram, first, welcome to the XP Series, and thank you for joining us on the show today.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Thanks for having us.
Adam Sandman (CEO & Founder, Inflectra) - Thank you for having us. It's a pleasure to be here.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome!! So in today's session, we are going to do a little bit of a deep dive into the hottest topic right now, which is Artificial Intelligence, and how artificial intelligence intersects with one of the first and most important steps in testing, that is, test case design and generation.
When we start any project, the first step people begin with is understanding the requirements and then creating test cases out of them. One of the most effective uses of AI we see is in exactly this scenario: how AI intersects with the whole process of software testing, specifically the process of test creation.
So this is, I think, a very interesting topic. We recently did a survey on the Future of Quality Assurance, and there as well, we see that currently nearly 46% of testers are effectively using AI tools, and the highest usage of AI tools is in either test data generation or test creation.
So I'm starting with you, Adam. What are your thoughts on this subject?
Adam Sandman (CEO & Founder, Inflectra) - Well, I mean, there are the thoughts, and there are the practices. So it's interesting that you mention that. So we recently rolled out a plug-in for our test management tool. And it was an experimental plugin to get user community feedback and see what they thought. And what it does is it takes a requirement that you write and it will generate test cases.
It also identifies risks, and it can do BDD scenario generation. So when you talk about a requirement, it's not just generating test cases; it's also all the ancillary information around a requirement that often needs to be thought through. So, you know, if you think about it pre-AI, what would we do?
We come up with a user story or requirements or an epic, whatever combination you're putting together. And then you go talk about it, think about all the possible things that requirement implies, like what work will need to be done, what development needs to get done.
What are the risks we should think about mitigating? Testing is a mitigation for many risks. So when you think about that whole process, that would be people on conference calls, Zoom meetings, or in person in the old days with whiteboards. A lot of that was just brainstorming and cross-pattern analysis.
So how did we do that session? What was happening was people were bringing all their experiences from previous projects and thinking, well, I did this project that was an ERP upgrade for SAP. This requirement reminds me of that. Therefore, I think we should think about these tests, these risks, these other aspects.
So it's basically pattern matching. We humans are good at that intuitively. But we often have biases. So if I've done an SAP project and now I'm doing an Oracle project, just to pick an example, well, I'm going to bring all my SAP biases to this new requirement, even though this project is not SAP, just because that's my experience.
The real benefit of AI is, A, of course, we have this plugin. You can push a button and generate a bunch of test cases and a bunch of risks. And that's a great piece of functionality. It saves time. But more than saving time, it mitigates the bias effect, which means the AI doesn't know that you were previously an SAP engineer.
It's going to look out and say this is a new requirement to create an order management screen for Oracle financials. It's going to go out and do its research from its large language model. It's going to come up with all the test cases and the risks, and all the information.
So it's not going to be biased by previous experiences. Then the team takes what the AI generates and can apply their human experiences and human biases, which is a good thing. Because often the thing that, quote, smells wrong to a business analyst or a test designer is important.
But you're getting the best of both worlds. You're getting that objective view from AI and the subjective view from humanity. You put it together, and this human-AI team is stronger than the sum of the parts. That would be sort of my thought to start with, anyway. Sriram, what do you think?
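To make that concrete, here is a minimal sketch of the requirement-to-test-cases call a plugin like the one Adam describes might make. The complete() helper is a hypothetical stand-in for whatever LLM client you use (public or private model); this is an illustration, not Inflectra's actual implementation.

```python
# Illustrative sketch only: prompting an LLM to turn a requirement into test
# cases. complete() is a hypothetical stand-in for a real LLM client call;
# the prompt wording and JSON schema here are assumptions.
import json

def complete(prompt: str) -> str:
    """Stand-in for a real chat-completions call to your LLM provider."""
    raise NotImplementedError("wire this to your LLM provider")

def generate_test_cases(requirement: str, count: int = 10) -> list[dict]:
    prompt = (
        "You are a test designer. For the requirement below, propose "
        f"{count} test cases as a JSON array of objects with keys "
        "'title', 'steps', and 'expected_result'. Include negative and "
        "boundary cases, not just the happy path.\n\n"
        f"Requirement: {requirement}"
    )
    return json.loads(complete(prompt))  # humans still review the output

# Example (runs once complete() is wired to a provider):
# cases = generate_test_cases("A user can reserve a hotel room for 1-28 nights.")
# for case in cases:
#     print(case["title"])
```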
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - I completely agree. One of the important things about any project, for that matter, is that it's not just the scope alone, the product owner or project manager coming and telling you, this is what the requirements are.
What AI does is put a spotlight on some of the blind spots. There are known knowns, known unknowns, and unknown unknowns, right? So it is now putting that spotlight on: okay, what are my blind spots in my requirements?
So conceptually, technically, in schedule, scope, and all the other elements that come with that, we are able to see and say: all right, now that I know a little bit about the unknowns that I have not factored in, what should I do? And this comes from both the management angle as well as from the technical delivery team angle.
So what we are trying to do at this point is surface that information: what will impact my project, and ultimately the targeted audience of my, you know, my product, and what should I do about it? It's focusing on that good-quality information.
One of the things that Adam also mentioned is the human-AI team. Think about the artificial intelligence as an additional team member, because it is providing some input, right? Unpaid or paid, however you want to call it, it is like an additional team member who is providing some intel.
And a team member has to be able to explain himself or herself. So it's very important that the AI is also able to explain the reasoning and the rationale: why I said this, why I am telling you this, and stuff like that. So 1 plus 1 becomes greater than 2. That's the idea behind artificial intelligence coming in and adding value for requirements scenarios and test case scenarios.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - So that's a very interesting point. We were talking about artificial intelligence as an added toolset, let's say a value-add accelerator of sorts. And one of the biggest advantages is that it removes a few of the blind spots.
So that also intersects with one of the key metrics that organizations are going after, that is, test coverage, or let's say overall coverage of the whole code itself. So what do you think: can organizations rely on AI for this concept of test coverage? And if they can, how can they effectively use AI components to drive that test coverage?
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - So if I can take the lead on this particular question, then I'll pass it back to you, Adam. One of the important things we have to keep in mind is that the answer, in a nutshell, is going to be yes, but it's a qualified yes.
Yes, we can use AI to generate test coverage, but AI relies on the training data that it uses to create all this information. So even if you take ChatGPT, for instance, it's drawing on the language model it was trained on, and that determines where the data is coming from.
So garbage in, garbage out. If the training data is not very good, or is stale, then the outcome coming from test case generation is not necessarily going to be effective. So one of the things we have to constantly think about is, I'm sure you are probably aware of this, the four V's of Big Data: Volume, Veracity, Velocity, and Variety.
So if you look at that, you have to constantly be in a mode where you are engaging and training the data model itself, so that the outcome you are getting from the AI model is effective.
So it can be used for super edge cases, boundary scanning, and making sure that you come up with not just happy-path scenarios but also unhappy-path scenarios, exception flows, and the alternative thinking that comes with use case generation, and stuff like that. So I think it can be used for a number of different things.
So long as you constantly keep the training model effective. And that's one of the reasons why, you know, risk-based testing, for instance, can come in and add value. Even in Spira, we have risk-based modules where we are able to say, based upon the risk score associated with the requirements, these requirements need to be, you know, tested more thoroughly than some others.
So if you have a limited amount of time, which requirements should I really test? It comes up with all that scenario thinking and then creates the test cases as well. What do you think, Adam?
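For illustration, here is a minimal sketch of the risk-based selection idea Sriram describes, not Spira's actual module: score each requirement by probability times impact, then spend a limited testing budget on the riskiest items first. All names and numbers below are hypothetical.

```python
# Minimal sketch of risk-based test selection: rank requirements by
# probability x impact, then fill a limited time budget greedily.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    probability: int   # 1-5: how likely a failure is
    impact: int        # 1-5: how bad a failure would be
    test_minutes: int  # estimated effort to test it thoroughly

    @property
    def risk_score(self) -> int:
        return self.probability * self.impact

def plan(requirements: list[Requirement], budget_minutes: int) -> list[Requirement]:
    chosen = []
    for req in sorted(requirements, key=lambda r: r.risk_score, reverse=True):
        if req.test_minutes <= budget_minutes:
            chosen.append(req)
            budget_minutes -= req.test_minutes
    return chosen

reqs = [
    Requirement("payment processing", 4, 5, 90),
    Requirement("order history page", 2, 2, 30),
    Requirement("login and sessions", 3, 5, 60),
]
for req in plan(reqs, budget_minutes=120):
    print(f"test {req.name} (risk {req.risk_score})")
```

A real implementation would pull the scores from a risk register rather than hard-coding them, but the prioritization logic is the same.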
Adam Sandman (CEO & Founder, Inflectra) - Yeah, I think a qualified yes is exactly the right answer. So in terms of generating test coverage, in our experimentation, and we released this plug-in, right? And we thought people would play around with it, but many of our customers are in these sensitive, complex industries like insurance and banking.
And we were thinking that although this might be useful for a commerce website or something very simple, is it going to be useful in a more complex industry? Well, it turns out these people have been very excited about the plugin. They really want to use generative AI. And when we've been doing our testing, and we do demos and webinars with the tool, it's amazing.
You put in a single requirement like, I want to be able to reserve a hotel room. And it's able to generate, generally, 10 to 20 test cases, which are very useful. And our head of QA looked at it and said, of the 20 test cases it generated, 10 were somewhat obvious things that I would have thought of, but it generated them in 10 seconds; to create them all by hand or upload them from an Excel sheet would take, I don't know, half an hour to an hour.
So even the ones that they knew of were a time saver. There's a productivity benefit for test coverage generation. The second thing: of the remaining 10, 5 were things that didn't make sense, which we deleted. That obviously takes up time, but they are thought-provoking.
And the last 5 were ones that we hadn't thought of initially. There were things like booking a hotel reservation for four years. Hotels don't like you doing that; that's a different kind of business model. And we wouldn't often think to do that as testers. Or we did a flight reservation and it said, try and book a flight to the same city.
So there were things that, as a human, you know, they're negative test cases, but you often overlook them. It was very good at generating a combination of happy-path, as Sriram mentioned, and boundary-condition test cases. But here's where we get into the qualified yes. These are also use cases that the general public is aware of.
So there's a lot of training data about booking flights. I mean, how many flight reservation systems are there? Expedia, Travelocity, Booking.com, Hotels.com. There's a lot of public data out there, so an LLM is very good at that.
But let's say we were developing a brand new IT system or a brand new, you know, wearable device that no one's seen before. It's heavily patented, it's got very restricted IP. First of all, you probably don't want that going out into the public LLM, but even a private LLM won't necessarily have enough training data. There won't be enough variety of data.
So as you get into more specific requirements that are more niche use cases, there may not be as much benefit, as much lift, from AI, just because there's not as much data. And that's where you need human researchers. Other kinds of AI may help there, though: non-generative AI.

Things that do data analysis of your industry, crunching large data that's very industry-specific, could be a more useful approach than generative AI, which is more text-based completion. So I think AI will be useful, but maybe other flavors of AI may come into play more.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - So, like cognitive AI, the kind of things like IBM Watson, like the original application of AI.
Adam Sandman (CEO & Founder, Inflectra) - Right. Yeah, deep learning, machine learning AI, not necessarily generative AI, which is just one type of AI use case, one type of model.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - So people are very happy about generative AI today, but AI has been around longer. Since 2016 I've been hearing about IBM Watson, and Google, for example, has been using deep learning in its searches for, I think, a decade.
Adam Sandman (CEO & Founder, Inflectra) - Absolutely. One of the AI use cases we're looking at is completely different from generative AI. What it is, is they're building a model where they're looking at all the code commits in a Git repository. They're looking at all the test failures and correlating those to see, if we make these kinds of code commits, which kinds of tests should we run?
Predictive maintenance, we would call it in a manufacturing environment. And that's a different type of use case. We have a partner that's working on that right now. But that's flying beneath the spotlight. As you say, everyone's looking at generative AI and saying it's going to do all this stuff. And it is, but that's just one track of AI.
And there are so many other tracks that are ongoing. I think the risk is that all the money flows to just one use case. Everyone follows the hot money, and that's the big danger. All these other branches of AI could get defunded or deprioritized, and that would be a big loss to the industry. I don't know, Sriram, what do you think?
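As a rough sketch of the commit-to-test-failure correlation Adam describes: count how often each test failed in builds that touched a given file, then use those counts to rank tests for a new commit. Real systems mine the Git log and CI history; the data below is purely hypothetical.

```python
# Sketch of predictive test selection: tests that historically failed when
# similar files changed are ranked first for the incoming commit.
from collections import Counter

# (files changed in a past build, tests that failed in that build)
history = [
    ({"billing/invoice.py"}, {"test_invoice_totals", "test_tax_rounding"}),
    ({"billing/invoice.py", "api/routes.py"}, {"test_invoice_totals"}),
    ({"ui/cart.js"}, {"test_cart_badge"}),
]

def rank_tests(changed_files: set[str]) -> list[str]:
    scores: Counter[str] = Counter()
    for files, failed_tests in history:
        if files & changed_files:  # the past build touched overlapping files
            scores.update(failed_tests)
    return [test for test, _ in scores.most_common()]

print(rank_tests({"billing/invoice.py"}))
# ['test_invoice_totals', 'test_tax_rounding']
```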
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yep, that makes sense.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Yeah, absolutely, absolutely. So extending some of the concepts that we were just discussing over here, one of the things that we also have to think about in terms of the test case generation or test case coverage and stuff like that is a user persona.
So I've been speaking about user personas for a long time, because a lot of times we are creating unhappy- and happy-path scenarios based upon the personas that we already know. I mean, today's products are getting a lot more sophisticated.
Like, you don't even have a user interface. Say the software is a medical device, like an implant on your heart, and there is no, you know, press one to do this, or go to a menu bar and do this, because there's no such thing, right?
Adam Sandman (CEO & Founder, Inflectra) - Having a heart attack? Press 1! Hahaha!!
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - So yeah, all of the updates are happening at the same time. When you are looking into these kinds of sophisticated product developments, you can also think in terms of hackers. Like, how can a hacker actually abuse this interface?
So think about those user personas and come up with test cases for them. As your product is maturing, you also want to start thinking in terms of, okay, it's not just the requirement and the test cases. It's, what are the other types of people who may have malicious intent and hack the system in a completely different way?
And it may even end up in loss of life at that point. So there is a lot more to think about in terms of where test cases can come into play: not just creating a test case like "as a user, do this," but who is that user? It can elevate to additional levels.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Right, so I think that opens up another set of questions. We talked about how organizations should look at data specifically. We also talked about personas. And we all agree on the fact that, yeah, AI, all flavors included, is going to be disruptive, specifically in the field of quality engineering.
But as a large-scale organization that is just stepping into this new world, what do you think are the most important factors to consider when moving into this ecosystem of AI-assisted technologies in testing, and how should they prepare to stay up to date and go with the flow in this new world?
Adam Sandman (CEO & Founder, Inflectra) - Yeah, I mean, organizations need to understand their clients' legal and privacy contexts first and foremost. Because people will be using AI before you understand all of the applications using it.
And like, you know, for example, law firms are right now struggling with people outside of software development using it to do legal preparation, and many consulting firms have the same thing. And I know a lot of times they've had to now send down mandates that, you know, nothing should be done through AI right now until we understand the legal, regulatory frameworks that go with this.
So I think it's fine to have experimentation. Companies should first of all have teams that are explicitly experimenting with it. That's a great thing. Understand what it can do. Try it out on sanitized projects. Maybe try a previous project and compare the output with what was actually generated by humans, to do some A/B testing.
But I think having some rules around this experimentation, around whether it is actually being used for real projects, is important. Because the danger is people start to use it for real without understanding the implications of what they're doing, and it evolves into being used in production until it's too late, and then you end up with major problems.
So I think being very intentional in an organization, around what is experimentation, what is a trial project, and what is in production, is critical. What elements of AI are we going to use in each of these phases? And where are we on the maturity curve, which may be different for each industry or company?
So I think that kind of top-down guidance and planning is critical. And then teams should feel free to experiment within that context. Sriram, what do you think?
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Absolutely, I couldn't agree more with you. So one of the important things we have to keep in mind is where experimentation ends and where production comes into play. That needs to be really clearly differentiated.
And within each one of those boundaries, we have what we call, in project management, enterprise environmental factors and organizational process assets. So people, process, technology, and organization: we have to think from all four of those dimensions in this environmental screening.
So I already mentioned data. You know, the efficacy of the AI model or machine learning model is primarily dependent upon how effectively we are training this. And we talk about continuous learning, continuous improvement, continuous delivery, continuous deployment, you know, continuous integration.
We talk about continuous so often that we should apply the same concept to continuously training our organizational resources, which include our people, our processes, and our AI. You know, AI is an additional team member that we have hired who is not going to be paid.
So we have to constantly train from the data angle. And also the product has to be evaluated. How is our product actually getting used in the production setting? Because in reality, you may work in a very nice environment within your organization and see that this product is really working fine.
But as soon as you take the same product and put it into a real-life environment, it's completely different. What does the product tell you about where it is being deployed, where it is functioning, and stuff like that?
From that angle, when you look at this, the environment in which a product is being used, are we actually simulating that enough? You know, for instance, you have facial recognition. And is the facial recognition software working fine? Absolutely fine. But when there is not enough light, when there is a lot of, you know, changes in your face, will that facial recognition actually work correctly?
And this is what Adam was referring to before; we have been talking about that intentional or unintentional bias. How are we actually bringing those kinds of concepts into our data?
So there needs to be both an inward-looking and an outward-looking thought process on our people, process, technology, and the organization as a whole. Otherwise, we are not going to be training our organization effectively for continuously using these AI models.
Adam Sandman (CEO & Founder, Inflectra) - And I suppose what you'll probably find is that the companies that shortchange training their human employees are going to be the same companies that shortchange training their models. Because a company that doesn't value investing in the intellectual well-being of its workforce, that doesn't believe in developing its workforce, whether that's a human workforce or a machine workforce, is going to behave the same way.
Because an investment is an investment. Companies that have a history of long-term investment decisions will have, I think, a greater success trajectory with AI, as they have had with human resources. And companies that believe in just taking the off-the-shelf model and deploying it, the same way they would outsource their development to the lowest common denominator without qualifying the company or the people, will have the same kind of results. It's just a predictor of failure.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Right, absolutely.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, so that also intersects with what we were talking about, the training of AI models. Now, this is a question I received when I was having an interaction with a financial company.
So they were trying to use AI tooling in their overall process, but the challenge was that, because AI is learning and also evolving step by step, the results they were getting pre-production and the results they were getting post-production were different, because during that time the AI had changed. Right?
Adam Sandman (CEO & Founder, Inflectra) - It's non-deterministic, right, it's non-deterministic.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - So how can companies adapt to this rapidly changing AI environment, specifically in agile development, where we want to be fast, we want AI to give results fast and learn fast, but at the same time we want predictable results?
Adam Sandman (CEO & Founder, Inflectra) - Well, that's maybe the issue. I don't know. From my perspective, the issue may be a misunderstanding of AI. Because in the same way, if you put humans into a situation, we wouldn't expect predictability at that level.
So if you put a team in place, and you start asking them to write stories for a new feature, and you start asking them to write test cases and do testing or write automated code, they would do it in one particular way. Now, that very same team, six months later, a month later, might do things slightly differently, depending on how they felt that day.
Depending on who the team lead is. So we don't demand that level of predictability from humans. And I think people are used to procedural software that has 100% accuracy; that's always the good and the bad with computers. It can be 100% right or 100% wrong; it'll follow the instructions blindly. But AI models are new, like a neural network.
They're not learning the same way; they're learning more like a human. So I think people are, in some ways, expecting the wrong answer from AI. It's learning and getting better, but it's changing. Companies have to switch their mindset from "I'm going to get predictable results every sprint" to "I'm going to get potentially improving results every sprint," or maybe deteriorating ones, depending on the quality of the inputs.
And I think you have to, well, how do you solve that is a difficult question. But you have to be prepared for that. It's not a simple procedural program anymore. It is a complex, interacting system of data elements that's evolving. If you make two queries in ChatGPT today, you get two different sets of outputs.
If you use our plugin to generate test cases from requirements, then for the same requirement, if you run it now and run it again this afternoon, the results will be slightly different. I think that's the nature of AI. Sriram, what do you think?
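A minimal sketch of how teams commonly tame that variance, assuming a hypothetical chat() helper since parameter names differ across providers: pin a model snapshot and lower the sampling temperature, which reduces, but does not eliminate, run-to-run differences.

```python
# Sketch of reducing LLM non-determinism in a test-generation pipeline.
# chat() is a hypothetical stand-in for your provider's API; the model name,
# temperature, and seed parameters shown here are assumptions.
def chat(prompt: str, model: str, temperature: float, seed: int | None = None) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def generate_for_release(requirement: str) -> str:
    return chat(
        prompt=f"Generate test cases for: {requirement}",
        model="llm-2024-06-snapshot",  # pin a snapshot; don't track "latest"
        temperature=0.0,               # greedy sampling: less run-to-run variance
        seed=42,                       # some providers honor a best-effort seed
    )
```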
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - I was just thinking about what one of my professors told me about 30 years back. I was studying and working at the University of Berlin, and I had an idea: how do I transduce the contents of a human brain onto a microchip, so that you codify the input and the problem, and then the output comes out the way Einstein would have thought, or Sriram would have thought, or Adam would have thought, even when none of us exists in that particular scenario?
And my professor at that point said the human mind lacks enough intelligence to understand itself. We are 300 years away from even understanding what our mind does, let alone quantifying how the mind understands things. The reason I'm saying that is, we don't understand our own intelligence.
And we humans have been in this world for so many years at this point. AI does not have that much evolution behind it. Yes, it has been present for several decades, but it's not going to understand itself. So it's constantly learning, when you look at the different kinds of algorithms used for problem-solving, neural networks for deep learning, and stuff like that.
When you take all these things into consideration, they are constantly learning by themselves, which means there is going to be limited accuracy, not enough precision, and not enough recall. These are the key metrics AI uses, recall, precision, and accuracy, in terms of how well the model itself is doing.
So if we don't understand that, and we immediately demand 100% predictable results in pre-production and post-production, as Adam very nicely started this discussion with, that's a misunderstanding of AI. Which means we have to go back to our previous question and try to learn and train ourselves again.
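For reference, those three metrics have standard definitions. The sketch below applies them to a hypothetical review of AI-generated test cases, where "positive" means the model proposed a test and "relevant" means a human reviewer would have wanted it; the counts are made up for illustration.

```python
# Standard precision/recall/accuracy, computed from a confusion matrix of
# true/false positives and negatives.
def metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    return {
        "precision": tp / (tp + fp),  # of what it proposed, how much was useful
        "recall":    tp / (tp + fn),  # of what was needed, how much it found
        "accuracy":  (tp + tn) / (tp + fp + fn + tn),
    }

# e.g., 15 useful suggestions, 5 useless ones, 10 needed tests it missed:
print(metrics(tp=15, fp=5, fn=10, tn=70))
# {'precision': 0.75, 'recall': 0.6, 'accuracy': 0.85}
```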
Adam Sandman (CEO & Founder, Inflectra) - I think the models are evolving over time. And, of course, people are now talking about the idea of detachable AI, where you could take a model, you could take it from the live training model and you could detach it onto a smaller powered device like a phone, run the model locally, in which case it isn't necessarily retraining. It is mostly a snapshot in time.
So I suppose one mitigation could be that a team takes a snapshot of a model, effectively frozen at a certain learning stage, uses that for a series of sprints or up to one release, and then refreshes the model. In some ways, it's like taking a training course: at the end of the course, you're now a different person.
So you would have, I think, predictability over a certain number of cycles, and a little bit more control over how frequently it's being improved. That might give you the benefit of predictability, maybe. Again, it depends a lot, as you said, on how good the model is in terms of its precision. And I think demanding precision from an AI system by itself is probably a bad idea. Particularly generative AI: it's very good at generating lots and lots of ideas.
And then you need either AI or humans or some combination to give feedback on those test cases. So what you could even have is a situation where generative AI comes up with a thousand possible test cases; a non-generative, deep learning AI takes those test cases, goes out to videos of user experience, goes out to historical data, does an analysis, and then suggests that of these test cases, 60 are good, 20 are useless, and some of the others might be potentially useful, and grades them.
So you have different AI models working together to provide, not necessarily predictability, but scoring, accuracy, and dynamic feedback on one another.
In the same way, on a human team you might have someone with the personality of a visionary on a Myers-Briggs scale, a very forward-thinking kind of person, talking to the person on the team who's the risk person, the worrywart, the "oh my God, the world's gonna end" person. When you build teams, you always want to have those different personalities.
As a project manager back at Sapient, I remember I would always be asked: who's the most worried person on the team? Talk to him about all the risks on the project, and combine his ideas with the ideas of the visionary on the team, who's saying we should use all these crazy new tech frameworks. Put them together and you get something that is usable, useful, realistic, and will probably be delivered on time.
And in the same way, different AI models have different benefits. Generative AI is very good at generation, as its name suggests. Other forms of machine learning are very good at feedback, scoring, quantifying data, and providing analysis. Put those together, and now you're getting that integrated, dynamic, cross-functional AI team working together with the human team.
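A minimal sketch of that generate-then-score pattern, with both model calls as hypothetical stand-ins rather than any specific product's API:

```python
# Sketch of the "AI team" pattern: one generative model proposes many test
# cases, a second (non-generative) scoring model grades them, and only the
# top-rated survive to human review. Both helpers are hypothetical.
def generate_candidates(requirement: str, n: int) -> list[str]:
    raise NotImplementedError("generative model: propose n candidate test cases")

def score(test_case: str) -> float:
    raise NotImplementedError("scoring model: 0.0 (useless) to 1.0 (valuable)")

def curated_test_cases(requirement: str, n: int = 1000, keep: int = 60) -> list[str]:
    candidates = generate_candidates(requirement, n)
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[:keep]  # humans still review the survivors
```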
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Right. Yeah, absolutely. I cannot, you know, put this any better than what Adam said over here. It is very important for us to understand that generative AI is the one taking the world by storm at this point, possibly because of ChatGPT.
Nevertheless, it is a multifunctional team. When we look at a team, we have data scientists, business analysts, product owners, developers, designers, UX artists, so many different thought processes coming together. One person is not doing everything.
Right! In the same way, we have to use different AI models for different kinds of scenarios and different kinds of problem understanding, and then come back and say: based upon this pool of ideas from the machine workforce, let's go ahead and make this a little better, and stuff like that.
So that's the better way. And you also mentioned Agile, so I want to pick that up too. Even within an Agile construct, we have the MoSCoW principles. I'm not sure if you are familiar with that: the must-haves, should-haves, could-haves, and won't-haves. You stop doing something so that you release capacity, and within that, you try to innovate.
So the concept is: I did this, and is it working in a pre-production environment, not necessarily as completely released functionality but as beta functionality? Is it working? Is it giving the desired effect? Are we missing something? And then taking it back and iteratively improving it. That's the whole idea of Agile, right?
So it's not fail fast; it's fail forward. That's an important thing to keep in mind. The whole idea should be: I am trying to do a qualified experiment, getting the feedback, and then trying to constantly improve it. So don't aim for 100% perfection, but aim to be better with every increment.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome. So we have been looking at what organizations should do, what organizations should take care of.
I want to take a step back, or maybe a step sideways, and understand what your advice would be to the test manager, team leader, or team manager who is going to use this AI tooling. What preparation should they, as individuals, do? What should they learn more about? Where should they invest their time to get prepared for all of this?
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - So, if I can take the first cut at this one: understand the business goals. We keep talking about MVPs, minimum viable products, and what the business strategy is, and value stream mapping, and things like that. These are not just words. There is a lot of understanding that needs to go behind them.
So understand the why. Before you start using ChatGPT to write your paper or come up with scenarios and stuff like that, think about the why. Who is our target? I used to do agile consulting, and I would go back to some of these teams and ask them: who is our business customer?
And they will tell me the names of the personas. No, I'm not interested in the persona. I'm interested in who our target customer base is. They cannot articulate that. What's the purpose of developing a product when we don't even understand our customers?
So the why is extremely important, and what field we are operating in. And then come the what and the who and the when. And AI is nothing more than another "who" contributing here. So please focus on understanding the business goals and business objectives, and then how your product is aligned with that business strategy.
So if your business goal is to launch something to the moon and you are building something that will go under the water, it's completely not going to be aligned, right? So make sure you align your product strategy with your business strategy.
And this is not going to be easy, because people have to move out of their lanes: the developer has to think like a tester, the tester has to think like an operations person, and all of them have to think like a business analyst. In the project management world, we have been talking about the ways of working, and one of the things we have been saying is that we need to move beyond the T-shaped skill set.
T-shaped skills are a minimum requirement, not the maximum. T-shaped skills, horizontal thinking plus vertical domain knowledge, are minimalistic thinking. Since then, we have moved to shapes with more knowledge, and then even more knowledge, across lots of these domains. We've mentioned before cross-functional training, repeated training, ongoing training of employees and the data systems, and stuff like that.
So: make sure you understand the business objectives, make sure your product strategy is properly aligned with your business strategy, and make sure you are continuously training yourself to develop that cross-functional expertise. Those are the three main things I would suggest to anybody in a product development workforce that uses AI at this point.
Adam Sandman (CEO & Founder, Inflectra) - Right. And I think that's 100% right; that makes sense. I remember back in my consulting days, the first thing we always asked any client was: what is the business context? What are your challenges? What is driving this?
What was driving you to change? We need to change because there was this pain. What are the future uplift opportunities we want to get to? And then, what's going to motivate us to make that change? Usually it's a combination of future benefits and current pain, taken together.
And then you would take that and decide: how does the product move me from my current-state pain to my future opportunities? What are my competitors doing, and how does it move us ahead of our competitors, for example? In doing so, that helps you come up with product-market fit and understand the total addressable market of your product, the total number of people, whether there is a reliable business model.
So all these things get done before we even get to the test manager. But to answer the question about the test manager, the test lead, that's a little bit further downstream. Let's assume at this point that the team does know what the client's drivers are and understands why we're building this piece of software. The question then is: as a test manager, what do I do about AI? How do I use it sensibly? That's an interesting question, because the danger is what I always think of as the tyranny of getting something done.
So you give a test writer some user stories or some ideas. They're like, well, I've got to get this done very soon, so the easiest thing for me would be to just click this button and generate some test cases. Quickly look over them: oh, I reviewed them; as I said, generally it's 20, with 10 that are good, 5 that are bad, 5 that are interesting. I'm done. If the AI didn't exist, that test writer would be like, well, let me go and think about the problem a bit.
I think that's the problem. Test managers and test leadership need to hammer this into their team, literally: don't disengage your brain. AI is a tool. As an analogy, let's think about writing. Before we had spell checkers, we would all write emails, and we would have spelling mistakes. We knew they would be there, so we'd review the content of our email very carefully and make sure there were no spelling mistakes.
But some would get through. So then we got a spell checker. The spell checker gives us this nice red underline, so we right-click and change the words, and there are no misspellings. But now there's been an increase in grammar issues, because people are choosing the wrong word; they're so used to the checking being done for them that they miss the grammar mistakes it doesn't flag.
Now, of course, the tools are getting smarter; they're checking for grammar too. But even there, oftentimes there are false positives, or they miss things, or they misunderstand what you're writing. So the danger is that the crutch of the AI, in some ways, means you disengage your brain.
So I think the role of a test leader is to emphasize to the team what the AI is going to do. As you said with the MoSCoW principle, it's going to release some of the time you spent manually creating a bunch of maybe repetitive test scenarios. That's going to give you time. But that doesn't mean you should take on more user stories with that time.
Otherwise, instead of 10 user stories that you've got good test cases for, you've now got 100 user stories with bad test cases. That's not a good use of AI. The key is: can we use the additional time to make the test scenarios better, so that instead of 10 test cases with good test scenarios, you maybe have 15 or 20 with excellent test scenarios?
So you're improving quantity and quality. The danger otherwise is that AI improves quantity and deteriorates quality, which is the opposite of the effect you want to have. So I think that's the goal for test leaders, test management, quality engineering leaders: using AI to improve quality and efficiency, balancing those two imperatives.
Because we want speed, and we want quality. The danger is that AI will be used to improve speed at the expense of quality, which it doesn't have to be. Make that an intentional choice; if you don't make an intentional choice, it will be the unintentional outcome.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, so AI is another tool in your toolbox, in your toolkit, and it has to be used, let's say, with a human brain. It can give you results, but you should use your own intelligence to judge whether it is a good result or a bad result. So that is pretty important.
But again, it can help you accelerate things; it is just another toolset. So how you use that toolset will define the overall productivity of the AI tool as well. If you're generating negative test cases, you should also consider whether those are the only ones; there's a possibility AI can miss negative test cases as well, which you would know better, because you are the business owner, the business leader, in this context.
So you understand the business better than the AI model can. That's pretty important. With that said, I want to ask what will probably be our last question. What are the tools that you think organizations can use right now?
We know there is ChatGPT on the Gen-AI side, but what AI-based tools do you see in the industry right now that would be pretty helpful, that not just big organizations but everybody in the software industry can start using?
Adam Sandman (CEO & Founder, Inflectra) - Do you want me to go, Sriram?
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Yeah, go ahead, yeah.
Adam Sandman (CEO & Founder, Inflectra) - Yeah, so we talked to lots of different companies. I'm not going to mention the names. Some are competitors, some are customers. Sorry, some are partners. So, there are AI tools out there right now which can do requirements scoring. And what they do is they'll look at the requirements and they can now analyze them.
And this is completely different from generative AI; it existed three or four years ago. They will look at the requirements, and they will score them and come back with feedback on how well the requirement is written. A simple example might be: it uses lots of words like "it" or "they", rather than saying "the customer" or "the approver".
And so the nouns are indeterminate, or it may use words that are unclear. Basically, it can rate how well that requirement is written and how easily it could be understood by someone other than the person who wrote it. Does it cover all the edge cases? That scoring is a very useful set of functionality.
And that was originally used for the purpose of helping improve human-written user stories. As we use generative AI to write user stories, which I'm sure people are doing, these other types of AI can be very useful. And this already exists on the market.
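As a toy illustration of that scoring idea, nothing like the NLP models commercial tools actually use: one could flag vague pronouns and weasel words and derive a clarity score. The word list below is purely an assumption.

```python
# Toy requirements-quality lint: flag vague words and score clarity as the
# fraction of words that are not on the flag list. Illustrative only.
import re

VAGUE = {"it", "they", "them", "this", "that", "fast", "easy", "etc", "some"}

def score_requirement(text: str) -> tuple[float, list[str]]:
    words = re.findall(r"[a-z']+", text.lower())
    flagged = [w for w in words if w in VAGUE]
    clarity = 1.0 - len(flagged) / max(len(words), 1)
    return round(clarity, 2), flagged

print(score_requirement("When they approve it, the system should update fast."))
# (0.67, ['they', 'it', 'fast'])
```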
So requirements scoring tools are good. Other things we've seen are tools that will look at the code commits, and I mentioned code quality. They can identify where there are likely weaknesses, either because they look at known patterns, like null pointers, or just at history: when you committed this kind of code, we had lots of tests that failed; when you committed that kind, we didn't.
And maybe it will be able to look back and see which types of change have the most impact on negative outcomes. Other ones we're seeing in the automation space are interesting. Some customers of ours have been playing around with Copilot, and this is from Microsoft actually, so I can mention that. Some of the Copilot tools are pretty cool to play around with.
They can actually explain what the code does. They can write automation scripts, even in languages and tools we didn't know they knew. Amazingly, we asked it to write an automation script in one of our tools, and it did. We're like, how does it know our script?
Well, it's in GitHub. It scans all the GitHub repos, and there are tons of samples out there. So we were amazed. Clients were even using it, and they were showing us how they use Copilot to write automation scripts in our own tool. And that was like, wow, that's mind-blowing.
Another interesting one our R&D team has been playing around with is some of the visual tools, like image recognition and image capture: the ability to take a picture of a page and interpret what's on it. A lot of times when you're doing test automation, you're dealing with a complex DOM structure if it's browser-based, or maybe it's Citrix.
So there are lots of challenging automation tests which are very hard to automate using traditional test automation tools. And the AI is able to look at a page and say, oh, this is a list of books, this is a shopping cart. I understand that. I can test it.
So it's going to start to take on some of the roles of exploratory testing. In our initial R&D, we can use AI to do exploratory testing, which is very scary and very interesting, and probably another podcast all by itself. But I think there are tools out there to help various facets.
And as Sriram mentioned, cybersecurity: everyone's already using it to do threat detection, threat analysis, and hacking. So on both the red-hat and the white-hat sides of cyberspace, AI is already being used heavily. Those are some use cases I see in terms of different tools that people can look at.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Right. All those use cases are completely 100% valid, and just to extend those thought processes: we've been talking about test-driven development, right? But many times people are not writing the test cases first, before they actually write a piece of code. So, you know, AI can come in and help right over here.
So as a developer is beginning to write a piece of code, as soon as he or she writes "if x is greater than 5": oh, OK, immediately write a unit test case for this and develop that. And if you are writing "x is greater than 5", was x initialized before? Otherwise it could lead to memory problems, an uninitialized variable being used, and stuff like that.
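To make that concrete, here is a sketch of the kind of unit tests an AI assistant might suggest the moment a developer writes that condition. The classify() function is a hypothetical stand-in for the code under development, shown in pytest style.

```python
# Boundary, above, below, and the uninitialized-input case for "if x > 5".
# classify() is hypothetical; the tests illustrate the suggestion pattern.
import pytest

def classify(x):
    if x > 5:
        return "high"
    return "low"

def test_above_threshold():
    assert classify(6) == "high"

def test_at_boundary():
    assert classify(5) == "low"    # 5 itself is not "greater than 5"

def test_below_threshold():
    assert classify(0) == "low"

def test_uninitialized_input_rejected():
    with pytest.raises(TypeError):
        classify(None)             # None > 5 raises TypeError in Python
```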
So think about not just test case generation, but also code quality, and compliance with the coding protocols that we have, and stuff like that. That's one element I'm already thinking of here. The other one I'm thinking about is computing resources: how much CPU power will be required, to what extent heat may be generated in the battery because of this, how much storage may be required.
And, you know, what is the throughput, the memory being transferred over the network, and stuff like that. These are some of the additional things that can come in. In the software world, especially in the requirements management world, we call these the "ilities": security, portability, reliability, and all those other things beyond the functional test case requirements that you write.
Then you can also start thinking: as part of a HIPAA or SOX requirement, what are some of the test cases I may have to create as soon as I write a module that handles very protected information? What roles should actually have access to this? Immediately create use cases or test cases around that and evaluate them.
So these are thoughts that I'm thinking through, and these tools will emerge. I'm not saying tools already exist to do all of that, but these things will emerge, and they will come through.
Adam Sandman (CEO & Founder, Inflectra) - Actually, on that note, one of our partners has built a tool particularly for government contracts, the US government side of things, where there are a lot of requirements to develop a project based on the regulatory environment.
Brazilian banking regulations; pick a country, pick a regulation. It can start to generate at least a lot of the compliance requirements without a human having to do that, because that stuff is well known. It's codified in law. It's also very difficult for humans to process, because it's a lot of text written by lawyers. So it's actually a really good use case for AI: tell me all the requirements for building a new system that supports accessibility, like Section 508, or the new European law on accessibility. What do we have to do to comply with that?
Particularly for some of these non-functional requirements, which are well understood and well documented, AI is a really good candidate, because they're quite stable, relatively speaking.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Yes.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome, awesome guys. So we have hit the time limit. So, it was a really great discussion. Got to learn a lot. And thank you for sharing your insights on all of this, Adam and Sriram. Thank you for being part of the XP Series.
Adam Sandman (CEO & Founder, Inflectra) - Thank you very much. Thank you for having us. Check us out online on LinkedIn. We're always happy to have a conversation.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Thank you very much.
Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome. So we'll include the LinkedIn details in the description below. I know this is a recorded webinar, but if you have any questions you want to ask Adam or Sriram, feel free to include them in the comments below, and we'll be happy to get them answered.
You can also DM them directly on LinkedIn and get to know more. Thanks to everyone who has tuned in so far. Thanks for checking out the XP Series. Subscribe so that you can check out the rest of the episodes and get updates on the new episodes launching in the coming weeks. Thank you.
Dr. Sriram Rajagopalan (Agile Evangelist, Inflectra) - Thanks for having us.