XP Series Webinar

The Impact and Potential of GenAI for IT Engineers

In this XP Webinar, you'll discover how GenAI empowers IT engineers, revolutionizing their approach to software development with unparalleled efficiency and potential. Explore the transformative impact of GenAI on testing in the digital landscape of tomorrow.

Matthias Zax

Agile Engineering Coach, Raiffeisenbank International

Matthias Zax is an accomplished Agile Engineering Coach at Raiffeisen Bank International AG (RBI), where he drives successful digital transformations through agile methodologies. With a deep-rooted passion for software development, Matthias is a #developerByHeart who has honed his skills in software testing and test automation in the DevOps environment since 2018.

Kavya

Director of Product Marketing, LambdaTest

At LambdaTest, Kavya leads various aspects, including Product Marketing, DevRel Marketing, Partnerships, Field Marketing, and Branding. Previously, Kavya has worked with Internshala, where she managed PR & Media, Social Media, Content, and marketing initiatives. Passionate about startups and software technology, Kavya excels in creating and executing marketing strategies.

The full transcript

Kavya (Director of Product Marketing, LambdaTest) - Hi, everyone. Welcome to another exciting session of the LambdaTest Experience (XP) Series. Through the XP Series, we dive into a world of insights and innovation featuring renowned industry experts and business leaders in the testing and QA ecosystem.

I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's a pleasure to have you all with us today. Today's webinar will explore the impact and potential of GenAI in the realm of IT engineering.

To shed light on today's discussion, let me introduce you to our guest on the show, Matthias ZAX. Matthias is an accomplished Agile Engineering Coach at Raiffeisenbank International AG, where he drives successful digital transformations through agile methodologies.

With a deep-rooted passion for software development, Matthias is a developer by heart who has honed his skills in software testing and test automation in the DevOps environment. In today's show, Matthias will delve into the transformative potential of GenAI for development and test automation.

And, of course, we'll discuss how GenAI can revolutionize code review processes, provide guidance for setting up CI/CD pipelines, and even translate behavior-driven development, i.e., BDD, into runnable source code. You'll also get to learn about generating test automation locators and receive recommendations on how to leverage GenAI effectively.

So get ready to explore real world success stories and best practices in utilizing GenAI. Matthias, over to you. Would love to know more about your journey, and, of course, you can jump on from there on.

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - Thank you very much for this very kind introduction. And yeah, I'm very happy to be here to also contribute to this XP Series, kind of bring here my thoughts.

I know each and everyone is talking about AI, but still, I'm quite sure there is something here you can take away, even though everyone is talking about it. Yes, my journey, as you heard, went from developer to test automation engineer to coach, and maybe in the future, prompt engineer — I don't know, let's see.

Let me share the slides. Today I want to talk about the impact of AI on all of us. I hope you can see the slides now. Just want to remove this one. Yes, you should see it. So: the impact of GenAI on us IT engineers — yeah, that's the important thing.

And I want to tell you a story because I like stories. And in my opinion, it makes the thing more entertaining. A very good colleague and friend of mine, Rudolf Gertz, maybe you know him, we sat together some weeks ago, and we had a coffee, and he was like - hey, Matthias, maybe I should switch my job back.

You need to know that Rudi is a trained carpenter, and 30 years ago he switched from carpentry to IT because he thought that robots would replace carpentry. And nowadays he thought, oh, maybe he should switch back to carpentry because GenAI is replacing the engineer.

And that was a pretty interesting discussion we had; it made me think: hey, I should talk about this, and I should challenge the AI on what it can really do. So this is what I want to do today. I will also give you the answer that I gave to Rudi about whether he really should switch.

But first, let me introduce myself a little bit more so you know some details; let me enable the bullet points. So I'm Matthias Zax, as you already heard. I live in Austria and work there as an engineering coach at Raiffeisenbank International, which is a leading corporate and investment bank in Central and Eastern Europe.

Test automation is my profession and passion. I'm organizing the community of practice for test automation. So this is what I really love. Privately, I'm happily married and have an adorable daughter. I really like barbecue, and if you want to have a good barbecue, you need to do it on your own, because you cannot really buy it or pay for it.

So I'm a two-time certified barbecue master — funny side story. If you want to get in contact with me about any topic, please reach out on my social media channels; I'm always happy to discuss. Enough about myself. I mention the slides because I thought it would be fun to also show the power of GenAI for slides.

So everything you see — not the text, that comes from my brain, but the pictures, graphics, and diagrams — was generated with OpenAI, Midjourney, or Google Gemini. And everything is also on GitHub afterward, so don't do any snipping or copy-pasting; just go to GitHub and you will find it there.

Good, let's start with the story, because I like time travel. It's a very good way to show how the past went, and maybe you can predict the future, I don't know. At the time when Rudi switched to IT, in the 1960s and 70s, the mainframe era, our IT industry was born. The IT engineer was born; we needed people who were able to produce very basic applications at that time.

So the first engineer was hired, so to say. Maybe at that time the engineer was called something different, but it was an IT engineer. Ten years later, Windows came into the game, and with it personal computing.

At the beginning only wealthy people could afford private PCs, but anyhow, a lot of applications were needed to run on those devices, and completely different approaches appeared. The IT engineer evolved, and the market grew.

Ten years later came the internet boom — standard nowadays, but the way we developed applications changed: web engineering, HTML, et cetera, came up. So again, engineers were needed even more, because each and every company wanted to host their own website and so on.

Then came those beautiful small devices you're using — maybe you're using one right now to watch this video. This revolution is well established in the market: a lot of applications are nowadays built for or optimized for mobile devices, because that's what we use daily, almost hourly.

So a lot of new technologies appeared, and a lot of new jobs were posted to the market, because we needed more and more developers. In the 2010s, for the first time, there was a change in this history, because cloud computing appeared.

And you know, our awesome system administrators, who cared for the PCs and servers under their desks with huge passion, became kind of obsolete, because we described our servers in a YAML file — infrastructure as code, please — and moved them to the cloud.

Yeah, of course, with new approaches to cloud computing and cloud engineering. Not all of them, but a lot of those system engineers became obsolete or needed to upskill or reskill.

So the first time, it became disruptive a little bit, but on the other hand, you know, DevOps, cloud engineering, it's nowadays a very huge area. So a lot of engineers were needed. What is now different in this decade?

GenAI appeared. AI, to be honest, has been around us for a long time — I guess the first AI was born in the 1970s already — but GenAI, which can really produce something new, is new. And it has an impact on all these areas, because it can write source code: it can write HTML, it can write Java, it can produce YAML files, and it can do an engineering job.

And that was also the time when I really thought, hey, maybe I should change. Damn it, it can do the stuff that I can do. And today, I want to check this. I want to first tackle the question if Rudi should switch his job.

And I want to say: hey, AI, come here, come here, buddy. Show me your tricks. Like I would with a dog: show me what you can really do, because what it shows me can tell me a lot, obviously. And I thought about how to do that, and showcasing it across a full SDLC would make sense, in my opinion.

So kind of from the prototyping to the running thing. So let me show you the stuff for prototyping. Also, from a testing perspective, prototyping is very important to test at the earliest stage. And maybe you know this guy. This guy is Greg Brockman. He is one of the founders of OpenAI.

And this was the developer pitch during the developer live stream for GPT-4 — so that was a new model then. And maybe you remember it or not, it doesn't matter. He had this scene where he showed his sketchbook. And that was kind of funny, because, maybe to make it a little bit bigger, he sketched there, let's call it, a very basic website.

He took a picture of it and showed: hey guys, look, it generated the source code for me, and it runs now. It can produce websites. I mean, I immediately tried it after the pitch. It did not work at that time, but that's another story. Meanwhile, it works. So I did that. I thought, hey, a talk rating page would be a thing.

So I very roughly sketched a design of it, a kind of flow diagram: we log in with some checks, and then there should be a list of talks to rate, etc. Let's see how this works.

So I took a picture of it and put this prompt next to it: can you explain what you see in this picture and generate the source code in HTML, JavaScript, and CSS? It should look like a modern app.

So what did the AI interpret here? First, this beautiful picture, I liked it. So kind of the flow was good. So the main idea was understood. It even got the conditions so that there should be some empty field checks. And for the talks, it realized this is a placeholder.

So I got the info here: the data should come from a database, etc. That was pretty impressive, but of course, you want to see how it works. So I have prepared a video here. These are all the files that were delivered; here is the HTML. You can see it looks not so bad. There is an empty-field validation.

You can see the check here. Then it lists the talks. You can create new talks — that's interesting, I had not designed this feature. Also, the rating is here with descriptions. If you click on it, it's, I don't know, not really a rating — I don't know if these are hallucinations — but overall it was really impressive.

I mean, the AI can read my handwriting; it can read my sketching. Just think about the potential. And you know, I generated all these files, and then, next to those, I asked the AI to please give me a shell script that creates the folder structure. And it did all of this. It took me five minutes, and it's runnable.

So you can already use that for customer discussions: you can show something, get very fast feedback, and test it. In my opinion, that's really cool. Good, but it's prototyping, nothing that goes to production. So let's use it for design and development.

And I just took out this part of the service and asked the AI: hey, can you create a user story for the password service that I have drawn in the picture? And it did that — quite good syntax, a very good structure.

It was a valid user story — you know, "as a system admin, I want to have a service that validates the password," et cetera. It's compliant. It has seven acceptance criteria. Good. I guess if we had designed it ourselves, we would not have done so much. It added some notes and risks.

You know, that it should be fast and scalable, and it even had some test cases derived from the acceptance criteria, which are cut off here, but anyhow, very good. So for use cases — user stories, sorry — if you're in refinement sessions, this could be kind of a buddy you take with you, a copilot, as most vendors are now calling their AI systems anyway.

So the user story is good, but of course, let's develop it. And here we see on one side OpenAI and on the other side Google, so Gemini. OpenAI went for the straightforward approach: each condition, each acceptance criterion, gets developed, which is good, and it worked.

Google took another approach: there is a class called PasswordValidator where you can add those rules and use it. That's, in my opinion, a better approach — much more structured, reuse is in place. So I was able to learn, because the AI had an approach I was not aware of, and I liked it more.
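To make the difference concrete, here is a minimal sketch of what such a reusable validator class could look like. The class name, rules, and messages are my own illustration of the approach described above, not the exact output the model produced.

```typescript
// Hypothetical sketch of a reusable password validator, similar in spirit
// to the class-based approach described above (names and rules are illustrative).
export class PasswordValidator {
  private rules: Array<{ check: (pw: string) => boolean; message: string }> = [];

  // Each acceptance criterion becomes one rule that can be added and reused.
  addRule(check: (pw: string) => boolean, message: string): this {
    this.rules.push({ check, message });
    return this;
  }

  // Returns all violated rules; an empty array means the password is valid.
  validate(password: string): string[] {
    return this.rules.filter((r) => !r.check(password)).map((r) => r.message);
  }
}

// Example usage: wire the acceptance criteria into rules once, reuse everywhere.
export const defaultValidator = new PasswordValidator()
  .addRule((pw) => pw.length >= 12, 'Password must be at least 12 characters long')
  .addRule((pw) => /[A-Z]/.test(pw), 'Password must contain an uppercase letter')
  .addRule((pw) => /[0-9]/.test(pw), 'Password must contain a digit');
```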

And it fits our needs. So really good. Of course, you need to have the skills to validate it. And without development skills, yeah, I guess you would be lost about which one you should choose. But you know what was really surprising?

You know, GenAI is now kind of the most intelligent thing we have, but it didn't write unit tests. Come on, it's a basic development procedure; you should not skip this part, but it didn't do it. Yeah, it didn't do it because I had not asked for it. It was not part of my prompt.

And here comes a very important point, which you should please take with you: when I did ask for it — "provide unit tests for each function, with 100% code coverage" — it did so, based on that very basic implementation. A better prompt will end up in better output.
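As an illustration of what asking explicitly for unit tests can yield, here is a hedged sketch of tests against the hypothetical validator above, written with Playwright's test runner. The file name and the individual cases are my own assumptions, not the model's actual output.

```typescript
import { test, expect } from '@playwright/test';
// Assumes the hypothetical PasswordValidator sketch shown earlier.
import { defaultValidator } from './password-validator';

// Unit-style tests covering each rule (the "100% coverage" ask from the prompt).
test('accepts a password that satisfies every rule', () => {
  expect(defaultValidator.validate('Str0ngPassword!')).toEqual([]);
});

test('rejects a password that is too short', () => {
  expect(defaultValidator.validate('Ab1')).toContain(
    'Password must be at least 12 characters long'
  );
});

test('rejects a password without a digit', () => {
  expect(defaultValidator.validate('NoDigitsHereAtAll')).toContain(
    'Password must contain a digit'
  );
});
```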

Also, sometimes you need to chain prompts, you know, you need to work with the context, but this is important. You need to know what you want so that the AI can help you produce it. If you don't know it, this will be really difficult. You will get something that is maybe not right or maybe not unit-tested.

So it's not complete; you need to care about the prompts. That was development — of course, come on, let's test it now. So I took this one here: it's a demo application from Applitools. I use it sometimes for writing demos and test scripts.

I want to do something now. I want to extract the locators based on the DOM. I want to generate Playwright scripts, I want to use the page object pattern and I want to use a data-driven approach so that I can parameterize the tests. So let me open here the browser.

Oops, where's my mouse? So if you go here and show the whole text, I copy the full HTML, and this is what I did. I just put it here, because otherwise we would wait for the AI to respond. So: I write test automation with Playwright, and I have the following DOM.

Can you extract the form fields as locators? So this is the DOM, which you've seen before, and it did that: "I found the following..." So for username, there is an ID locator, a CSS selector, or an XPath.
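For illustration, here is a minimal sketch of the kind of locator options and usage snippet that can come back for the username field. The selectors shown are assumptions about the Applitools demo page, not the literal AI response.

```typescript
import { test } from '@playwright/test';

// Hypothetical locator options for the username field, as the AI typically lists them:
//   by id:    #username
//   by CSS:   input#username
//   by XPath: //input[@id='username']
test('fill the username field using the extracted locator', async ({ page }) => {
  await page.goto('https://demo.applitools.com/'); // demo app mentioned in the talk
  await page.locator('#username').fill('standard_user');
});
```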

So you get all the options and even a small script in Playwright showing how to use it. Pretty impressive; it works really well. Then: write a login test and use the page object pattern. That's again from my side — this is the prompt. So it's chained, it's in the context.

And yeah, it does this. Step one: it creates the login page object. So there's a class LoginPage; you can see here the constructor, the locators, and then the dedicated functions, and even the login function, which we can use. And here you have a test case for that.

So we use this: we have our test case; in the beforeAll we launch the browser, and we close it here at the end. In the beforeEach we initialize the login page and navigate to it, and here we do the login in the test case. And it tells us: hey, please add some assertions, otherwise it makes no sense.
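Here is a hedged sketch of what such a generated page object and test might look like. The URL, locators, and method names are my assumptions about the Applitools demo page, and it is written in Playwright's fixture style rather than the explicit browser-launch hooks described above; treat it as an illustration, not a verbatim copy of the generated code.

```typescript
import { test, Page } from '@playwright/test';

// Hypothetical page object for the demo login page (locators are assumptions).
class LoginPage {
  constructor(private readonly page: Page) {}

  async goto() {
    await this.page.goto('https://demo.applitools.com/');
  }

  // One dedicated action per field, plus a combined login helper.
  async login(username: string, password: string) {
    await this.page.locator('#username').fill(username);
    await this.page.locator('#password').fill(password);
    await this.page.locator('#log-in').click();
  }
}

test('login via the page object', async ({ page }) => {
  const loginPage = new LoginPage(page);
  await loginPage.goto();
  await loginPage.login('standard_user', 'secret');
  // As noted in the talk, the assertions are left to you, e.g.:
  // await expect(page).toHaveURL(/app\.html/);
});
```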

Of course, it could not know, because I didn't tell it what to assert. This we need to do ourselves: either you put it into the prompt, or afterward you develop this part of the code. Cool. It always adds some explanations, but I guess you already know that. Next task: run the tests with multiple credentials.

They are stored in a test data CSV file. So first, it reads the CSV file — I will not go into the details; it works. And then it modifies the test cases. Here you can see that in the beforeAll it reads all the test data into the credentials as an object.

And then it iterates over the tests — or over one test here — where it does the logins. And again, the assertions you need to do on your own. So overall, pretty impressive how fast this was. And it works, it fits my needs. I just put the DOM in there.
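For illustration, here is a minimal sketch of such a data-driven variant, assuming a test-data.csv with username,password columns. The CSV parsing uses a simple split rather than any specific library, and one test is generated per row at collection time instead of a loop inside the test; selectors and the file name are assumptions, not the exact generated script.

```typescript
import { test } from '@playwright/test';
import { readFileSync } from 'node:fs';

// Read the credentials once; assumes a header row "username,password".
const rows = readFileSync('test-data.csv', 'utf-8').trim().split('\n').slice(1);
const credentials = rows.map((line) => {
  const [username, password] = line.split(',');
  return { username, password };
});

// One login test per credential pair.
for (const { username, password } of credentials) {
  test(`login as ${username}`, async ({ page }) => {
    await page.goto('https://demo.applitools.com/');
    await page.locator('#username').fill(username);
    await page.locator('#password').fill(password);
    await page.locator('#log-in').click();
    // Assertions still need to be added by hand, as discussed above.
  });
}
```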

And I also tried it out with much bigger DOMs, to be honest, or HTML, and extracted the locators. Of course, if these are very complex locators, maybe this part you need to do on your own. It's a copilot; it will not always work 100%. But it's a good start.

It can help you design the page object pattern. It can help you parameterize your tests. You just need to tell it how you're doing it, right? Or how you want to do it. That's pretty impressive — seriously, that was really good. Meanwhile, I use it in my daily work; I use AI to help myself.

One last point: obviously, we also need to install our application, we need to deploy it. And here, I mean, I have basic knowledge about AWS and cloud engineering, but I would not be able to deploy it in a professional way. And the cool thing here was that I was able to discuss with ChatGPT in natural language.

So in English, I was able to discuss what kind of application I have and what my plans are. So what performance do I want to have, which kind of monitoring capabilities, and how much is my pricing? So we had kind of discussions about ECS, EKS or the Fargate service.

In the end, I was able to transform this into a YAML file, which, after setting up security groups and so on, I was able to deploy to AWS. That was really interesting: I was able to do something that I could not do at all before.

Of course, there is upskilling and reskilling, definitely, but now I was able to do that very fast. On the other hand, you need to be careful, because in this case I was not able to validate it. It could be something that is super expensive, or complete overkill for my application, or it doesn't work at all, or it has some bugs inside, because I don't have the deep experience and knowledge here.

Of course, you can do something new, but if you cannot validate it, you would need some additional tests, maybe a test-driven approach. I would be really careful with doing something you don't have the skills for.

Let me tell you a few more use cases. Pipeline building: continuous integration, continuous delivery, like GitHub Actions. Especially if it's part of the repo, the AI can interpret it right from the context and can then build it properly.

So that's a very good use case. Test strategy: here we also use it quite often in the company as a discussion partner, because it knows common patterns. You know, if you have a retail banking app, there are millions — or thousands — of them on the market.

So there is some data available that we can ask, hey, what should we consider? What are common risks if you have a retail banking app on Android and iOS and so on? So that's very good.

For monitoring, AIOps is the prominent term here: based on data from your production environment, you do predictive alerting or optimization. Documentation and code comments — and here, please, I talk mainly about README files and how-to guides, not user handbooks or other kinds of documentation — because we are already partly in the phase where AI can be fed with our documents, where we train the model with company data.

And you know, if you produce documentation with AI and in half a year you feed it back into the AI to ask questions of it, this is a loop. You should be careful: document the stuff you want to document; just don't produce documentation because you now can.

And of course, a lot more. Good, let me conclude. In my opinion, generative AI — and you saw a lot of, or at least some, use cases — is super impressive, and you can apply it almost everywhere. It's not locked down to IT.

Of course, GenAI for the IT engineer is super — for user stories, for test strategy, for designing applications — but also beyond the IT industry. It is something that has come to stay; definitely, I guess there is no doubt about it.

If you think back to one of my first slides, we had the time when cloud computing came and replaced the system architects, the infrastructure guys. And in my opinion, prompt engineering is the cloud computing of tomorrow. So this is the new thing we need to jump on and train ourselves in.

You saw that, based on the prompt, you get different output. And if I had the discussion with Rudi again and he asked for an answer, I would tell him: carpentry is a very good business — if you want to do that, do so, why not? But not for this reason, not because of GenAI. I would recommend educating and upskilling yourself in prompt engineering, leveraging yourself to the next level to be faster, better, higher quality, and so on.

It's a very good 24/7 buddy, so to speak. And will GenAI replace all of us engineers? Let me quote Dr. Fei-Fei Li, former Vice President at Google and Professor at Stanford University. She said a really cool sentence, so let me quote her here: "AI won't replace humans" — so let's give her that — "but humans using AI will." I really like this quote, and I share this opinion, so it's time to use it.

And yeah, thank you very much for your attention. I hope you enjoyed this session. Now we will have some questions. You will see here the links to the GitHub repositories; just search my name, and you will find them. If you want to know more, you can ping me on LinkedIn. Thank you.

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Matthias. That has been a very insightful presentation on the potential of GenAI — and the closing quote that you shared, that humans using AI could, of course, replace humans. I think that's a great quote.

Now let's jump into the questions. The first question that I have is, in today's rapidly evolving tech landscape, do time constraints and skills shortages still justify avoiding test automation? And how can teams overcome these challenges?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - Okay, let me also stop the screen share so that I can see you better. The thing is — I mean, the problem is — that a lot of companies now go for AI and think it will replace headcount. Let's say it like this: you know, I now have eight engineers, and afterwards I need six.

But actually, that's not true. We engineers, our work has a lot to do with creativity. And now we have something that is supporting us here. I don't believe that we can reduce head counts.

This is a new thing with which we can leverage ourselves to the next level, and finally we can work as we should work, because each and every team has technical debt — each and every team. And maybe with this we can get rid of it, though not to the full extent. So this is how I see it.

Kavya (Director of Product Marketing, LambdaTest) - Interesting point, and it is, of course, very interesting to see how AI is, at the end of the day, reshaping traditional notions as well as workflows. Thank you for sharing that insight. Moving on, what strategies can teams adopt to integrate automation seamlessly into their workflows and mitigate technical debt?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - So, to loop AI in here as well — because the other things, I guess, most people do already, like including, I don't know, SAST scans in pull requests to your repo and so on. In our company, we have now implemented something.

We have the AI, a GPT, available even with APIs, so we also have it integrated into some flows. Which means, just to quote one team, they're using it as a mandatory pull request reviewer.

It will check the changed source code for whether it introduces new technical debt. This is kind of interesting — sometimes good, sometimes not so good. But you know, the thing is in an evolving state; everything is in a very experimental state.

Of course, the thing works, but I'm sure next year they will come with new models — GPT-5, 6, 7, I don't know. Or others: Gemini, other competitors, whatever. Now is the time to explore fast and to loop it in as a copilot.

Of course, you can use GitHub Copilot or other coding assistants, where you get even faster feedback — it's really sneaked right in while you code. But also, for example, this team has it in the pull request, checking whether you have followed all the standards you want to have here.

Kavya (Director of Product Marketing, LambdaTest) - Yeah. No, of course, thank you so much for that. I'm sure these strategies would be very useful for our audience as well. Moving on to the next question, how can AI empower developers and testers to achieve faster code feedback and, of course, kickstart CI/CD pipelines?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - I mean, the fastest code feedback that you can get is through coding assistants — as I mentioned, Copilot or Codeo or whatever; I don't want to do any marketing, just name the prominent one here. This is also what we have, and we make it successful.

Overall, we did some surveys with the developers, and they liked it. In the beginning, they were not really happy about it — the first versions of it, to be honest — also because the quality was not that good. We had to rework the suggested code very often.

Meanwhile, it's good. I guess people will forget regular expressions very soon, because the AI does them; they just write a comment, which is good. Anyhow, I'm not the biggest fan of that part. So this is what I recommend.

So get the fastest feedback possible. CI is, of course, good, but it takes time to build the application and run the pipeline. It makes a difference whether you get feedback in five minutes or five seconds. And if you can get it in five seconds, right where you need it in the line of code, of course, do it.

Kavya (Director of Product Marketing, LambdaTest) - Well, thank you so much again for shedding light on it — because, at the end of the day, developers and testers are all trying to achieve faster code feedback, and the same goes for enterprises across the world.

Awesome, moving on to the next question. How does GenAI streamline code review processes and assist in generating test automation locators?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - I mean, I kind of showed in my example how to do that. This works quite well — seriously, it doesn't matter whether it's Playwright, to be honest; that is just how I use it. You can also use Selenium or whatever. This is how I would recommend using it.

You can also put in your locators and ask whether this is a good one or a bad one. Is this a flaky one? How do I optimize it? Especially if you use XPath: if you have put an absolute XPath there, ask it to make it relative, to stabilize it, and so on.

This is really about getting into a conversation with the tool, so to speak. And this will make it better, definitely. You can also generate stuff; you then need to check whether you like what it produces, because very often there are no IDs in the HTML. And this is how you get faster and better, in my opinion.
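As a small illustration of that conversation, here is the kind of before/after you might ask for; the concrete selectors are invented for the example and are not from the talk.

```typescript
import { Page } from '@playwright/test';

// Brittle absolute XPath (breaks as soon as the page structure shifts):
export const absolute = (page: Page) =>
  page.locator('xpath=/html/body/div[2]/div/form/div[1]/input');

// A relative, attribute-based locator you might ask the AI to rewrite it into:
export const relative = (page: Page) =>
  page.locator("xpath=//input[@id='username']");

// Or, even better, Playwright's built-in semantic locator:
export const builtIn = (page: Page) => page.getByLabel('Username');
```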

Kavya (Director of Product Marketing, LambdaTest) - And just to follow up on that, what are the key considerations that developers and testers should keep in mind when leveraging AI for these tasks?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - Keep in mind, especially for testing: I know a lot of developers who say, awesome, finally there is something that can generate the unit tests for me, so I don't need to write them anymore. I'm not a big fan of that, to be honest. I know it works.

I know it produces something, but if you let it generate tests based on your source code, it will produce unit tests that go through. If you have an error there, it will produce a unit test that lets it pass.

If you produce unit tests based on specifications, that's different — though if you have an error in the specification, then again, the same applies. So please be careful, especially as testers, because here we develop something special that we want to build to serve our customers.

So only we know it; the AI cannot know it, that's impossible. So you need to define it: either you write it in the user story, in BDD syntax, or whatever, or you put it into the prompt. So be careful here. Yes, you can generate tests with it.
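To make the pitfall concrete, here is a hedged, invented example: a function with an off-by-one bug, a test that mirrors the code (and therefore passes), and a test written from the specification (which catches the bug). Nothing here is from the talk itself.

```typescript
import { test, expect } from '@playwright/test';

// Invented example: the spec says "at least 12 characters", but the code has a bug.
function isLongEnough(password: string): boolean {
  return password.length > 12; // bug: should be >= 12
}

// A test generated from the source code tends to encode the buggy behaviour...
test('code-derived test: 12 characters rejected (passes, but hides the bug)', () => {
  expect(isLongEnough('exactly12chr')).toBe(false);
});

// ...while a test written from the specification exposes it.
test('spec-derived test: 12 characters must be accepted', () => {
  expect(isLongEnough('exactly12chr')).toBe(true); // fails until the bug is fixed
});
```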

We have seen that with the login test I showed you with the Applitools application. It also works for much more complex applications. We use it regularly, but we are very careful here: we define the tests ourselves and let it generate the source code, or part of the source code.

Kavya (Director of Product Marketing, LambdaTest) - Thank you. Yeah, of course, at the end of the day, leveraging AI requires a lot of careful consideration and calibration from the users as well. And moving on to the next question, can you talk about how GenAI is facilitating the transformation of BDD feature files into executable source code?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - I mean, I like BDD. I just have not seen so many success stories of it. Very often, it ends up that only the testers are doing that, which is not the intention of BDD because this should be a whole team approach.

You know, it should guide you in how to write and structure your tests so that business analysts, testers, developers, customers, whoever, know what you want to do. So you force yourself into a structure with given, when, and then, so to say.

But somehow the glue code was developed by testers, and then it ended up that the specification feature files were also written by the testers. Nowadays it's different, I guess, because with BDD you have a formalized, structured way to design the test cases.

And with GenAI, you can write code. And why is this now good? Because the BDD syntax is a good prompt: it defines things already, sometimes even with test data, with expectations, with assertions, with given, when, and then. And this is good.

So this is something we also try out, and we have a good experience with it. It's not perfect, obviously, but I would recommend you try it out: define test cases in BDD syntax, play with the prompt, and see what you can achieve.
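For illustration, here is a hedged sketch of how a Gherkin scenario can serve as the prompt and what a generated Playwright test might look like. The scenario text, URL, and selectors are my own assumptions, not taken from the talk.

```typescript
import { test, expect } from '@playwright/test';

// The BDD scenario used as the prompt (shown here as a comment):
//   Scenario: Successful login
//     Given I am on the login page
//     When I log in with username "standard_user" and password "secret"
//     Then I should see the account overview
test('Successful login (generated from the BDD scenario)', async ({ page }) => {
  // Given I am on the login page
  await page.goto('https://demo.applitools.com/');
  // When I log in with valid credentials
  await page.locator('#username').fill('standard_user');
  await page.locator('#password').fill('secret');
  await page.locator('#log-in').click();
  // Then I should see the account overview (assertion derived from the "Then" step)
  await expect(page).toHaveURL(/app\.html/);
});
```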

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much for sharing that. Moving on to the next question, how can teams overcome internal resistance or skepticism towards automation adoption, especially in the context of AI-driven solutions like GenAI? Of course, you did mention that there's always some early resistance that happens, right? So, yeah.

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - Yes, you know, there is always resistance to change, so, to be honest, it's not that easy. Very often it's easier for engineers, because they are innovative and want to try out new stuff — you know, there's a new tool, and they want to try it.

More often, it's difficult on the management and business side to understand that they need to give us some additional budget, because it's not for free. The important thing is — and that's also where we are in the role to, I don't know, make a sales pitch for it —

to convince the team that we need this now. It will help us. It will not replace anyone, but it will help us produce the best system — tested and sustainable.

We can also operate and run it in two years, not having to sit together and redevelop it because technical debt is so high that we cannot change it anymore after two people left. That is not what we want to have; we want sustainable solutions.

And with this, we can achieve it. These can be the selling points: you say, hey, we need this now; it costs X, Y, Z, whatever, but we want it, it helps us, and we will increase our performance.

Kavya (Director of Product Marketing, LambdaTest) - Great, thank you so much. Yeah, no, of course, and hearing firsthand success stories or seeing changes in key metrics, for instance, should definitely help change the perspective and lead to better adoption of AI at the end of the day for the organization or the team.

Coming to the last question of the day: for teams looking to integrate GenAI into their development workflow, what are the initial steps and best practices they should follow for a smooth adoption process?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - It depends on where you are: whether you are greenfield, already a long-running project, or you even have a huge legacy system. I used it once to let it explain a class with 3,000 lines of code — just put it in and ask the AI, can you please explain to me what this class is doing?

So it depends. The use cases are, as far as I can tell, kind of endless — of course, they're not endless, but it's a really broad range. So use it where you currently have some deficits; it can help you get up to pace faster. And also use it where you are already skilled.

There you have the experience, you have the skills needed to validate it, and this is important. Try things out, be very experiment-friendly now. So try out everything, and where it works well, go a little bit deeper and include it in the workflow.

I mean, I'm not a big fan of processes, so try to include it with automation, either with a pipeline or with some coding assistants. If you really need a process, okay, that depends on the company culture. So try to find the use cases for you. There are also really cool articles appearing pretty much daily.

So, whatever channel you use for reading news, pin the AI topic so that you read at least one article about AI once a week. When you get into this topic, you'll see other people sharing experiences about prompt engineering, maybe about your language — Java and AI, GitHub Actions, whatever.

That way you can scout things out and try them. This is how you will get better with AI, how you can use it better, and how you can also bring more benefit to the company — because, at the end of the day, this is what makes the company happy.

Kavya (Director of Product Marketing, LambdaTest) - Awesome. Thank you so much for sharing those insightful, practical approaches. That really helps. It has been a great session, and as we wrap up, I would like to thank you for sharing your expertise with us.

I'm sure our viewers will have found each and every one of them very interesting. Thank you for joining us in this episode of the XP Series. And to our audience, stay tuned for more episodes of the XP Series, where we continue to explore the latest trends and innovations in testing and QA.

Matthias, before we go, is there anything that you might want to share with the audience as a parting note?

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - Of course. First, thank you very much. It was a really nice session, very well moderated; I enjoyed it very much. Hopefully, a lot of people will get something out of it. And yes, happy testing and stay tuned.

Kavya (Director of Product Marketing, LambdaTest) - Awesome! Thank you so much, everyone. Subscribe to LambdaTest YouTube Channel for more updates on XP episodes. Have a great day, everyone. Bye!

Matthias ZAX (Agile Engineering Coach, Raiffeisenbank International) - Bye-bye, thank you!
