XP Series Webinar

Simulating Real-World Scenarios: Balancing Precision and Practicality in Testing

January 31st, 2025

25 Mins


Yam Shal-Bar (Guest)

CTO, RadView


Kavya (Host)

Director of Product Marketing, LambdaTest


The Full Transcript

Kavya (Director of Product Marketing, LambdaTest) - In today's digital world, ensuring software reliability and performance under real-world conditions is non-negotiable. Yet, simulating real-world scenarios presents a delicate balance between precision and practicality.

Testers must decide when to prioritize high-fidelity simulations and when to lean towards simplicity for efficiency. This session explores how testing teams can navigate these challenges while achieving meaningful results.

Hi, everyone. Welcome to another exciting session of the LambdaTest XP Podcast Series. Through XP Series, we dive into a world of insights and innovation featuring renowned industry experts and business leaders in the testing and QA ecosystem.

I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's an absolute pleasure to have you join us today. In today's session, we are unpacking a critical aspect of modern software testing, which is simulating real-world scenarios and how you could balance precision with practicality.

But before we get started, let me introduce you to our guest on the show, Yam Shal-Bar, Chief Technology Officer at RadView. Yam is a seasoned technology leader with a wealth of expertise in load and performance testing.

His journey spans roles at British Telecom in London and Amdocs, where he spearheaded large-scale development tools and teams. Now at RadView, Yam leads the innovation behind WebLOAD, a robust performance testing tool, and partners with enterprise clients across industries to solve complex testing challenges.

With over a decade of experience, Yam is not only a thought leader, but also a passionate educator in the testing community. In today's session, Yam will guide us through the planning and execution stages of creating tests that reflect genuine user behaviors.

So, let's jump straight into the conversation. Yam, can you please share a bit about yourself and your professional journey with us?

Yam Shal-Bar (CTO, RadView) - Sure, so first of all, thank you, Kavya, for having me. It's a pleasure to be here. So my name is Yam. I'm Israeli; I studied and trained in Israel, and spent my first years there working on projects.

I did various projects for different companies, and then I started doing projects abroad. I was in India, in London, as you mentioned, for several years, and in Paris, which was a lot of fun. Then I came back to Israel and started working for RadView, and I've been with RadView for almost 20 years now.

It's been a long journey, and it's a very exciting field to be in. Things are constantly changing in the internet world, so it's very exciting to keep up with all of those changes. So that's about me.

Kavya (Director of Product Marketing, LambdaTest) - Awesome. Thank you so much, Yam. So before we jump into the questions, I also wanted you to share a bit more about what RadView does, and anything interesting about your work at RadView.

Yam Shal-Bar (CTO, RadView) - So, RadView, we're in the load testing business. That's what we do. We're concentrated on that. This is kind of our sole focus. We've been around for, again, many, many years.

We were one of the first tools out there to do that. Initially, the tools were looking at all kinds of protocols, but we saw early on that this web thing might catch on. So we became, and still are, very focused on web protocols; that's what we excel at.

So, WebLOAD is our main product, the load-testing tool, and we try to make it as simple and as easy as possible to generate scripts that are accurate and useful for load-testing purposes.

Kavya (Director of Product Marketing, LambdaTest) - Awesome. Thank you so much for sharing that. So moving on to the very first question, what are the biggest roadblocks to accurately simulating real-world user behaviors and testing?

Yam Shal-Bar (CTO, RadView) - Yeah, that's a very good question. It gets asked a lot; people are trying to figure out how best to simulate real-world scenarios. That's the goal. So there are various things that you need to understand.

You need to think about what "real world" means: what are my real users actually doing? You need to think about all the aspects that differ between what you're simulating, because it is just a simulation, and what the real users are doing. A lot of it comes down to planning.

What are the different things that people do? For this, you need to go back and either look at your analytics data or talk to the people who run the systems, and try to figure out what people are doing.

Are they always buying things and adding to the cart, or maybe just browsing, or maybe doing something else? Usually you can group those into the common scenarios that people follow.

So you need to identify those, and that's kind of the first step. And then you need to think: okay, once you have this grouping, what is the difference between what each of them might be doing?

That's what we call parameterization: yes, they're all buying products, but not the same product. You don't want everybody buying the same product or logging in as the same user; that's not a real-world situation.

So you start adding diversity. Now you have scripts, and the second step is to add parameterization to those scripts. Those are the big things. The other thing you need to think about is the difference between what I'm doing here in the testing world and the actual deployment.

It could be big things or even small things, like extra monitoring in the production environment, or third-party things happening in that environment. These are all considerations you need to weigh: how is that going to affect my testing, or not? So yeah, those are the big ones to think about.
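The group-then-parameterize flow Yam describes can be sketched in code. The snippet below is a minimal illustration in plain Python, not WebLOAD script syntax; the scenario names, user pool, and product SKUs are hypothetical stand-ins for data you would pull from analytics or a prepared data file.

```python
import random

# Hypothetical pools of test data; in a real load test these would come
# from a prepared data file rather than being hard-coded.
USERS = [f"user{i:04d}" for i in range(1000)]
PRODUCTS = ["SKU-100", "SKU-200", "SKU-300", "SKU-400"]

def buy_scenario():
    """One virtual user buying a product. Parameterized so that each
    iteration logs in as a different user and buys a different product."""
    user = random.choice(USERS)
    product = random.choice(PRODUCTS)
    # In a real script, this is where the HTTP calls to the system
    # under test would go.
    return {"scenario": "buy", "user": user, "product": product}

def browse_scenario():
    """One virtual user just browsing, without purchasing."""
    return {"scenario": "browse", "user": random.choice(USERS)}
```

The point of the parameterization is simply that repeated iterations stop replaying an identical request stream: each simulated session uses a different account and a different product, which is closer to real traffic than a thousand copies of the same purchase.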

Kavya (Director of Product Marketing, LambdaTest) - Thank you for sharing that. It's an interesting perspective, Yam, and it's great to hear about these challenges from someone with your depth of experience, who's been dealing with these scenarios for the last 20 years at RadView.

So thanks for breaking that down so clearly. Moving on to the next question, how can we effectively balance the need for comprehensive testing with the time and resource constraints of real-world projects?

As far as I understand, this would be a problem at scale, one that enterprise organizations especially might have to solve. So what are your thoughts on that?

Yam Shal-Bar (CTO, RadView) - So, first of all, I think it's an excellent question. And the first thing to know is that if you're asking the question, you're already on a good path. It's a very good thing just to know that there is a balance here.

So I think that maybe even asking the question is already a big step towards the solution. Conversely, if you don't do that, you just say: okay, I have to make the test as close as possible to production no matter what, so I'm going to look at every aspect and make sure it's exactly the same.

And I think what you're alluding to is right: there is a balance. There's always extra effort you can put in; you can do it endlessly. So, you know, we talked about the scripts.

Maybe most of your traffic is coming from three different scenarios, but there's also going to be a very long tail: a very tiny percentage of people doing something else, and an even tinier group doing something else again.

So how many scripts do you need? You can have three, but you could also have 150 to try to simulate every single thing that people are doing. So I think asking that question is the right thing to do.

You need to think: what is my purpose? I want to make sure that my system behaves correctly under load, and that my scenarios don't miss the really big things. At the same time, you don't want to be oversimplistic, like you said; if you oversimplify, you can miss things. I'll give you a real-life example.

We were dealing a lot with universities doing course registration. People register for courses, and some people then unregister: they did the registration and said, nah, that's not what I need, I want to unregister. Now, this is not the main scenario; it's a lesser scenario.

But we've seen many cases where this secondary scenario is the one causing the most pain, because the server is well tuned for adding courses, while deleting a course may be very bad for performance.

This is just an example of a scenario that you don't want to remove: it only happens in 10% of the cases, but it greatly affects performance. So the point is, don't oversimplify.

But on the other hand, it's not always useful to add everything. If less than 1% of the people are doing something, it's not going to have any impact on performance, and trying to get to 100% accuracy is not always the right thing to do.

Everybody has limited resources and time, and you cannot spend endless time on this. So asking the question is the key: how much effort am I adding, and how much value am I adding? Find the right balance and say, that's good enough.
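One way to act on that cost/value balance is to weight scenarios by their real traffic share and drop only the ones too rare to affect load. The sketch below is a hedged Python illustration with made-up percentages; note that a rare-but-expensive case like the "unregister" example sits above the cut and is deliberately kept.

```python
import random

# Hypothetical traffic breakdown from analytics: scenario -> share of sessions.
TRAFFIC_MIX = {
    "browse":      0.60,
    "add_to_cart": 0.25,
    "checkout":    0.10,
    "unregister":  0.05,  # small, but known to stress the server; keep it
}

# Drop only the scenarios too rare to move the load, e.g. below 1%.
THRESHOLD = 0.01
mix = {name: share for name, share in TRAFFIC_MIX.items() if share >= THRESHOLD}

def pick_scenario():
    """Pick the next virtual user's scenario, weighted by traffic share,
    so the simulated load matches the observed real-world proportions."""
    names = list(mix)
    weights = [mix[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]
```

With a threshold like this, the test stays at a handful of scripts instead of 150, while the weighted pick keeps the proportions honest.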

Kavya (Director of Product Marketing, LambdaTest) - Awesome. Thank you so much for sharing that. It is interesting to note that something as simple as oversimplifying the scenario could lead to challenges. So it's looking at the broader aspect of how you can balance quality. And of course, balancing quality in itself is always a tough act.

So I definitely appreciate these strategies. I think it gives a lot of food for thought for our listeners as well. Moving on to the next question, are there any best practices for determining the level of detail required in a test simulation to achieve reliable results?

Yam Shal-Bar (CTO, RadView) - Yes, so here I would say: know yourself. That's the most important thing. There's no one answer that's right for all organizations, and I think it's very important for you to know your product and your organization.

This includes what I was saying about the different things that people do in my system. That's part of it. But there are also technical sides to it. For example, do I need to test with a mobile phone, or is the browser enough? It can really make a difference.

If your application has a completely different native mobile app that uses a different endpoint, then obviously testing just the browser is not going to cover what the mobile app is doing, and that may be significant.

But if people on mobile are using the same browser and everything is exactly the same, then maybe you don't need to. So knowing yourself is my advice: get to know your company, your product, and even the technical side of how it works and what's going to make a difference.

That will help you figure out where to put extra emphasis and where you can say: yes, it's different, but it's unlikely to be important, so I'll test it a bit but not spend too much time on it.

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much. That seems to be a very actionable insight. Fine-tuning the level of detail can, I suppose, make or break the reliability of simulations.

Yeah. So moving on to the next question, how do the considerations for real-world scenario simulation differ between performance testing, QA testing and DevOps testing? I'm sure our listeners would definitely want to know more about this.

Yam Shal-Bar (CTO, RadView) - So, there is a definite difference. In regular QA testing, when you're doing functional testing, you want to cover a broader set of options, even if they're very extreme. If somebody clicks a button they're not supposed to, but they still click it and it causes an issue, that's something you would want to check.

So if we talk about scenarios, in the functional world you want lots and lots of scenarios, no matter how crazy they are, because they might happen and you still want to test them. In performance testing, it's more consolidated; you want to hit the ones that make the biggest impact.

So it's not always best to go too broad. You want to concentrate on the things that generate the most load, which will be the most common scenarios. So that's a slightly different perspective.

Kavya (Director of Product Marketing, LambdaTest) - Thank you for sharing that. And does it also differ from organization to organization, especially when it comes to consolidating scenarios for performance testing? Have you seen a difference between, say, mid-market or SMB organizations and enterprises?

Yam Shal-Bar (CTO, RadView) - I think it's not the size of the company; it's more about the nature of the website and the nature of the usage. You could have a very big organization that's testing a smaller application with not a lot of users, where everybody's doing mostly the same thing.

And you can have a smaller organization with a very big and popular website and very high demands. It can also be a very diverse website, with so many different functions and options that you have no choice but to have many different scripts for it. So it's more about the system you're testing than the organization you're coming from.

Kavya (Director of Product Marketing, LambdaTest) - Thank you, Yam, that definitely gives our listeners insight. Moving on to the next question, can you provide specific examples of when prioritizing precision in test simulations might be counterproductive?

Yam Shal-Bar (CTO, RadView) - Sure, yes. As we said at the beginning, just thinking about the balance is good. The obvious example is resources: if you try to be too accurate, it's a question of how much time you're spending and how many resources you're putting in.

The other aspect is that it can actually make it harder for you to analyze the data. Even if you do get to a point where you're testing many different things at the same time, because that's the most accurate, I would say don't start there. Start with a simple scenario, which is a lot easier to analyze.

Let's say, again, you have 20 different scenarios that people can do, but 90% of the time they're doing just one thing, booking something, say. Then I would say start with just that one thing and run the high load as if everybody's doing only that.

The reason is that when you have all these different scripts running together, it's harder to see the problem. Even if you do have a performance issue, with so many other scripts running it's harder to look at the data and spot that a graph is not where it's supposed to be, because on average things look okay while one script is the problem.

If instead you first run each of these scenarios separately, it's much easier to analyze the data. So that's one example: if you try to be too realistic at the beginning, it's harder to find the problems, while if you start from the simplistic point, you can say, this thing is obviously wrong, the graph is obviously going where it shouldn't.
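The masking effect Yam describes is easy to see with numbers. In this sketch the response times are entirely hypothetical: one scenario is badly degraded, yet the blended average across all scripts still looks plausible, while per-scenario averages expose the problem immediately.

```python
# Hypothetical response times in milliseconds per scenario.
# One scenario is badly degraded, but a blended average hides it.
results = {
    "browse":     [110, 120, 115, 125],
    "checkout":   [140, 150, 145, 155],
    "unregister": [900, 950, 1000, 980],  # the real problem
}

# Average over every sample from every script mixed together.
all_samples = [t for samples in results.values() for t in samples]
blended_avg = sum(all_samples) / len(all_samples)   # 407.5 ms: looks tolerable

# Average per scenario, as you get when running each script separately.
per_scenario_avg = {name: sum(s) / len(s) for name, s in results.items()}
# per_scenario_avg["unregister"] is 957.5 ms: obviously wrong
```

The blended figure sits in a plausible range, while the isolated "unregister" number is nearly an order of magnitude off; running scenarios separately first makes that jump impossible to miss.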

Kavya (Director of Product Marketing, LambdaTest) - Thank you, that's a great example. Sometimes, you know, less is truly more when it comes to testing; that's what stuck with me. Thanks again for shedding light on how to navigate those tradeoffs effectively.

Moving on to the next question. This is an interesting one. What are the ethical considerations that need to be taken into account, especially when simulating user behavior, particularly in areas like privacy and data security?

Yam Shal-Bar (CTO, RadView) - Yeah, well, that's a very interesting one, especially when we talk about real-life scenarios; that's another balance you have to strike. The data itself can also differ: in real-life scenarios, people have different structures and different data points. Whether we're talking about students or hospitals, where you have patients, they all have different problems.

That's the interesting part: you want to get as close to the real world as possible, so you want the data to be real, but sometimes you really don't want to use real patient data. This data can be sensitive. So my advice would be: if you can, don't use real data.

It's worth the effort of fabricating data, because then you know the data is okay; even if somebody steals it, no problem, it's not real people and not real problems. So if you can have fabricated data, it makes your life easier. The next-best thing is to anonymize: take real data but change it a little, change the names, change the dates. And if you have to use real data, then you have to.

But if you do use real-world data, then you need to take extra care and treat your environment like it is production. You might say, yeah, it's just for testing, but this is real people's data, and you don't want it to be leaked. So you need to take much, much more care when you deal with this data.
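The first two options Yam ranks, fabricating data and anonymizing real data, can be sketched with the Python standard library. The record fields and the salted-hash approach here are illustrative assumptions, not a complete anonymization scheme; real de-identification of sensitive data needs far more care than hashing a single field.

```python
import hashlib
import random

def fabricate_patient(i):
    """Best option: a wholly synthetic record. Nothing here maps to a
    real person, so a leak of this data is harmless."""
    return {
        "id": f"TEST-{i:05d}",
        "name": random.choice(["Alex", "Sam", "Jordan"]) + " Testcase",
        "age": random.randint(18, 90),
    }

def anonymize(record, salt="load-test-2025"):
    """Next-best option: keep the shape of a real record but replace the
    identifying field with an irreversible salted hash (illustrative only;
    a real scheme must handle every identifying field, not just one)."""
    digest = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:10]
    return {**record, "name": f"anon-{digest}"}
```

Fabricated records keep realistic structure and diversity for the load test; anonymization preserves real-world value distributions while removing the names. Neither removes the need to lock down the test environment when any real data remains.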

Kavya (Director of Product Marketing, LambdaTest) - Thank you, Yam. I think ethical considerations are always a crucial topic, especially for folks at a senior level, including CTOs such as yourself. There are so many moving parts, I would say, that make up the security and privacy related aspects.

Yam Shal-Bar (CTO, RadView) - Correct. Yeah.

Kavya (Director of Product Marketing, LambdaTest) - So thanks for emphasizing the importance of it. Yes, that really helps. Moving on to the last question that we have, what resources would you recommend for testers who want to further explore the topic of real world scenario simulation?

Yam Shal-Bar (CTO, RadView) - So yeah, there's lots of information out there. You can check our website at RadView.com; you'll find some resources about it there. The other thing I would say is: look outside for more advice, but also look inward, like I said before. Know yourself.

Getting to know your own company and your own system better is also a very important asset when you try to make these decisions. The better you understand your company, your clients, what they're doing, and how they're using your system, the better decisions you can make.

Kavya (Director of Product Marketing, LambdaTest) - Thank you, Yam. That's an excellent recommendation. It's also good to hear about the resources that can help testers gain more expertise or deepen their existing expertise. Again, the XP Series is something we created so that testers and developers can learn from leaders such as you.

Great, Yam. It has been an insightful session. Thank you so much for sharing your expertise and walking us through the complexities and strategies of simulating real-world scenarios and testing. I'm sure that our listeners would have loved watching you and listening to your practical tips and industry insights. And I'm sure they would, of course, be able to apply it in their workflows as well. Thank you once again.

Yam Shal-Bar (CTO, RadView) - Thank you so much for having me, it was a pleasure.

Kavya (Director of Product Marketing, LambdaTest) - Yeah, absolutely. And to everyone watching, thank you for joining us today. Subscribe to LambdaTest YouTube Channel to stay tuned with us for more such insightful episodes. And we will, of course, be tagging Yam. We will be sharing his handle with you. So you could, of course, get in touch with him as well, in case you have any questions about the session today. Thank you so much once again, everyone. Have a great day.

Guest

Yam Shal-Bar

CTO, RadView

Chief Technology Officer (CTO) at RadView, Yam Shal-Bar, is a seasoned technology leader with extensive expertise in load and performance testing. Prior to joining RadView, Yam gained valuable experience working at British Telecom in London and Amdocs, where he led and managed large-scale development tools and teams. With over a decade of industry experience, Yam has collaborated with enterprise clients across various industries, delivering innovative solutions for complex testing challenges. At RadView, he drives the development of WebLOAD, ensuring it remains a robust and reliable tool for performance engineers. Yam is also passionate about educating the testing community and sharing insights on advanced strategies and tools.


Host

Kavya

Director of Product Marketing, LambdaTest

With over 8 years of marketing experience, Kavya is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.

