In this webinar, you'll delve into the intricate psychology of web performance. Uncover the significance of prioritizing performance over design, understand why slow websites induce irritation, and examine the profound impact a 10-second response time can have on user satisfaction.
Senior Developer Advocate, k6 Grafana Labs
Marie Cruz works as Senior Developer Advocate at k6, Grafana Labs. With over a decade of experience, Marie wears multiple hats as a tech blogger, an accessibility advocate, and an online course instructor at the Ministry of Testing, as well as Test Automation University.
Director of Product Marketing, LambdaTest
Harshit Paul works as Director of Product Marketing at LambdaTest. His leadership in marketing ensures that LambdaTest remains at the forefront of the ever-evolving landscape of software testing, providing solutions that elevate the testing experience for the global tech community.
The full transcript
Harshit Paul (LambdaTest) - Hello everyone! Welcome to another exciting session of LambdaTest XP Series. Through XP Series, we deep dive into the world of insights and innovation, featuring renowned industry experts and business leaders in the testing and QA ecosystem.
I'm Harshit Paul, Director of Product Marketing at LambdaTest, and I'll be your host for this session on “Fast and Furious: The Psychology of Web Performance”.
Joining us today is Marie Cruz, Senior Developer Advocate at k6, Grafana Labs. With over a decade of experience, Marie wears multiple hats as a tech blogger, an accessibility advocate, and an online course instructor at the Ministry of Testing, as well as Test Automation University.
Hi, Marie; it's an absolute pleasure to host you. How about you let our viewers know more about yourself?
Marie Cruz (k6, Grafana Labs) - Hey, Harshit, thank you again for having me today. I guess just to add to that, I've been doing software testing for over 10 years, but last year I decided to do a career switch, so now I work as a developer advocate for Grafana Labs.
I'm currently based in London, but originally I'm from the Philippines. Outside of work, I enjoy photography, so I like to take pictures and read books as well. And yeah, I like trying out a bunch of different cuisines too, so I'm a big, massive foodie.
But yeah, I'm excited to give a talk about this whole psychology behind waiting and why it is that people get irritated whenever they're told to wait. There are a bunch of human factors involved, and we'll try to link them back to web performance.
Harshit Paul (LambdaTest) - Right, and speaking of web performance, from internal stakeholders to external ones, performance is everyone's concern. And being everybody's responsibility, there are all sorts of opinions which come into play.
And Marie will help us understand the psychological aspects of web performance from different angles. Why does performance matter over design? Why do slow websites trigger irritation? What's the impact of a 10-second response time on user satisfaction?
We'll also look into, as Marie said, tips for scenarios where speed optimization isn't feasible and comes across as a challenge. Having said that, Marie, the stage is all yours.
Marie Cruz (k6, Grafana Labs) - Nice, so let me just quickly share my screen. As mentioned, today I'll be talking about the psychology of performance. I'm not going to share any new performance tools, tips on how to integrate performance checks as early as possible, or any other technical guides.
While those are all important things that we should know, I think it's equally important for us to understand the why. Why do we want faster websites, why is it a known fact that slow websites irritate us, and why is waiting not an enjoyable activity? So what can you expect from this talk? First, I'll talk about the psychology of waiting lines, what we can learn from service businesses, and how we can relate that back to web performance.
And then next, I'll talk about the different factors that make waiting longer and unenjoyable. Then finally, I'll share some recommended guidelines to improve your web performance, both from the subjective and objective view. Click here to access the slides afterward.
But to start things off, I guess you might have heard this famous quote which goes — “The First Impression is the Last Impression”. And as we know by now, the first impression is really important because it influences our thought process about everything, even web performance.
David Maister actually wrote an article called “The Psychology of Waiting Lines”. As part of that article, he included this quote from Federal Express: “Waiting is frustrating, demoralizing, agonizing, aggravating, annoying, time-consuming, and incredibly expensive.”
As humans, we've all been exposed to long waiting times. At a very young age, we've always been taught to wait for our turn when let's say, we want to play with a toy being shared, and this even extends to our adult lives when we try to buy the latest iPhone or even when you're trying to get the COVID vaccine.
And this even extends to virtual spaces. For example, if you want to buy a ticket to see your favorite band, you may have to wait in a virtual queue for a long time to purchase that ticket. So waiting is an activity we are all exposed to, but it doesn't mean that we actually enjoy it.
And here's just a personal story. I remember last year we were eating at TGI Fridays with my daughter and my boyfriend's family, and we were waiting for such a long time. The kids didn't notice it since they were busy playing; my daughter was completing an activity sheet. So the atmosphere on the kids' side was very different.
And then you look over at the adults, and all the adults were frustrated because we were seated late, the drinks arrived late, the food arrived late, and even asking for the bill took such a long time that afterward, our perception of TGI Fridays had completely changed.
As part of that article, The Psychology of Waiting Lines, David explained that the waiting experience in a service facility significantly affects our overall perception of the quality of the service provided. So the food I ate at TGI Fridays was good, but the long wait for the service still influenced my experience negatively.
And this can be explained simply by the first law of service. According to the first law of service, Satisfaction equals Perception minus Expectation. What this means is that if you expect a certain level of service and you perceive the service you receive to be higher, then ultimately you become a satisfied client.
On the other hand, if you expect a certain level of service, but your actual experience is quite disappointing, then you become a dissatisfied client. With my TGI Fridays experience, I was expecting a certain level of service because I'd eaten there before without any waiting problems. But that one negative experience resulted in such overall dissatisfaction that, up until now, I've never set foot in a TGI Fridays restaurant again.
And this also ties in nicely with David's second law of service, which says that it's hard to play catch-up ball. What this means is that any impression created early in a service encounter will influence the rest of the interaction. So first impressions do count.
Similar to websites, these two laws apply, since most websites also offer a service online. Now, when we talk about speed or time, people mostly talk about the objective measure. But actually, there are two sides to it. There is the objective side, which is when we measure how fast a website loads as a specific number.
And then there's also the psychological side. So this is the way that your users perceive time. So even though time moves in a specific measure, the way your users perceive it might be different. So this is the psychological angle. Now from this psychological angle, we can think of waiting as divided into two different phases.
We have the active phase, and then you have the passive phase, or passive waiting. The active phase happens when you're engaged in an activity, so you're in a state of flow. You don't really notice time passing because you're so engaged and in that state of flow that nothing else really matters.
On the other hand, when you're in passive waiting, this is when you don't have control over the waiting time. So this is when you're aware that time is moving slowly because you're bored. And this explains why waiting in line, for example, feels like a boring activity.
But there are actually several different human factors that explain why waiting is not an enjoyable activity. So I'll explain some of these different factors, with the first one being that occupied time feels shorter than unoccupied time. This can be explained simply by the saying, a watched pot never boils.
Something appears to go more slowly if you are actively waiting for it rather than engaging in other activities. A perfect example of this is waiting for your food to be microwaved. The waiting time feels like forever, especially if you're not doing anything. So I know when I microwave my food, I try to occupy my time by doing something else so that time feels like it's moving faster.
Now if we relate this back to waiting on the web, if you visit a website and the actions are taking a long time without any feedback of what's happening, then waiting for something to load online feels longer and unenjoyable.
A common technique that different companies use is to try to occupy the user by showing something fast or relevant, by showing something that is useful to them immediately, and by providing useful feedback even though the information isn't presented yet.
So a perfect example is Slack, which we all know. So Slack provides this skeleton framework to show users that something is loading. So they use animations when messages are being loaded, and they have these different, like funny messages as well to make the users feel engaged so that the user feels occupied while they wait for, let's say, the background API calls to complete.
Another example is the k6 Cloud. This is the legacy application, but it's the same setup in Grafana Cloud k6. Basically, whenever you run a test, especially a load test with a high number of virtual users, the setup stage might take some time because it has to load all the resources needed for that particular test.
So to occupy the user's time, what we do is show animations and also provide them with progress on what's happening, so that at least they know they're not waiting for nothing.
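For illustration, here's a minimal sketch of the skeleton-screen technique described above (not taken from Slack or k6, just an example under assumptions): a placeholder is shown immediately and swapped for the real content once a hypothetical fetchMessages() background call completes.

```js
// A minimal sketch of a skeleton screen. fetchMessages() and the
// #skeleton / #messages elements are hypothetical placeholders.
async function showMessages() {
  const skeleton = document.querySelector('#skeleton');
  const list = document.querySelector('#messages');

  skeleton.hidden = false;                 // occupy the user right away
  list.hidden = true;

  const messages = await fetchMessages();  // slow background API call

  list.innerHTML = '';
  for (const msg of messages) {
    const item = document.createElement('li');
    item.textContent = msg.text;
    list.appendChild(item);
  }

  skeleton.hidden = true;                  // swap the skeleton for real content
  list.hidden = false;
}
```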
And then the second explanation: people just want to get started. When it comes to eating at restaurants, for example, you already feel valued when you get seated quickly, even though the actual food service hasn't started yet.
So to fill in the gap, waiters hand you the menu immediately to give you the feeling of, hey, we remember you. It's going to take a while before the main course arrives, but here's the menu if you want some starters or drinks to begin with, so you can get started.
Now, if I relate this back to performance on the web, what you can do is show something as quickly as possible so that users see something quickly and feel that their experience has started. However, when you show something quickly, you should also prioritize critical content to make the user experience meaningful.
Because whenever a user visits your website and sees the information they want immediately, even though the rest of the page hasn't loaded yet, they feel that their experience has already started. One way to do that is by tracking a very important Core Web Vital called Largest Contentful Paint.
Largest Contentful Paint, or LCP for short, is an important metric for measuring perceived load speed because it marks the point when the largest piece of content, most likely the main content, has loaded. So if you have a fast LCP, this helps reassure the user that the page is useful because they can see the content they want immediately.
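As a rough illustration, LCP can be observed directly in the browser with the standard PerformanceObserver API; this is a minimal sketch rather than a full measurement setup (real tooling also handles user interaction and page lifecycle edge cases).

```js
// A minimal sketch: logging Largest Contentful Paint candidates.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  // The most recent entry is the current LCP candidate; it can keep
  // changing until the user interacts with the page.
  const lastEntry = entries[entries.length - 1];
  console.log('LCP candidate (ms):', lastEntry.startTime, lastEntry.element);
});

lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```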
The third explanation: anxiety makes waits seem longer. A perfect example I can think of is taking your child to the doctor for a routine vaccination. If you're a first-time parent, this can create anxiety for both you and your child if you're just waiting without knowing when the appointment will start.
To overcome this, a common practice is for someone to let you know how long the wait will be, and it's also quite common to have activities for the kids so they can get distracted. Going back to web performance, a negative interaction can trigger someone's anxiety.
So, a slow website is actually one of the main reasons for causing stress online. If there are images that are not loading, or the page in itself hasn't provided any feedback after a certain action, this can quickly trigger an anxious user. So what can you do?
So going back to the web, you can try to make improvements such as optimizing your images so that the page can load quicker without having to sacrifice the quality of your image. You can also minify the CSS or JavaScript files so that it can reduce the load times and bandwidth usage on websites.
Another way to reduce anxiety is to improve visual stability. One metric you can use to improve visual stability is Cumulative Layout Shift, or CLS. This is one of the Core Web Vitals metrics as well, similar to Largest Contentful Paint, and it's an important user-centric metric for measuring the visual stability of your pages.
So the lower your Cumulative Layout Shift score, the less visual instability there is, which provides a better user experience because there are fewer unexpected layout shifts.
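For illustration, here's a minimal sketch of accumulating a rough CLS value in the browser with PerformanceObserver; dedicated tooling such as the web-vitals library also handles session windowing, which is omitted here.

```js
// A minimal sketch: summing layout shifts that happen without recent input.
let clsValue = 0;

const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {   // ignore shifts caused by user interaction
      clsValue += entry.value;
    }
  }
  console.log('Approximate CLS so far:', clsValue);
});

clsObserver.observe({ type: 'layout-shift', buffered: true });
```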
The next reason is that uncertain waits feel longer than known waits. If, for example, I told you that you have to wait but didn't give you any reason why, your expectations aren't being managed, and the waiting time feels longer, which again correlates with higher anxiety. So this is related to the previous reason.
As an example, when there are train delays but no indication of how long the delay will be, you get more irritated. However, if a time is displayed for when the next train is most likely to come, say five or ten minutes, you accept the delay better because at least your expectations have been managed.
Now, how can we relate this back to web performance? One way is to let users know what is happening; when their expectations are managed, their experience is likely to be better. You can also provide visual feedback in the form of a timeline or progress indicators to convey certainty to your users. Or, as you saw in the Slack example, even though the application hasn't fully loaded, there's a skeleton framework to show users that something is happening.
Another example is providing feedback immediately and explaining it in as human a way as possible. As an example, you can see here that there is a very clear reason why the waiting is happening, and there is an indication of how long the wait will be.
So again, your user feels engaged and valued, because at least they know that something is happening in the background. Imagine being on a page without any feedback, just an endless loading spinner, even though objectively the wait time is the same.
Users will feel that it's going to take longer, even though objectively it's the same, because it's going to make them bored, and it might increase their anxiety because they don't really know what is happening.
The next explanation is that unexplained waits feel longer than explained waits. Again, this is related to the previous explanation, but the way you can differentiate it is this: let's say you're buying some clothes and you go to the till to pay for the item.
The till isn't serving any customer, but they haven't called you over. Because they haven't given any explanation for why they're not serving you, you feel that you're not important, your anxiety goes up, and some people get annoyed as well.
From the perspective of the person waiting, the staff might well be doing something, but they haven't communicated that to you. Because of this lack of communication, you feel that you are waiting longer than you should be.
So again, if I use the k6 Cloud as an example, you can see that there are clear explanations of what's happening and why the test run could take a bit longer than you expected. Imagine if these messages weren't there: there's an animation, but no human explanation as to why the wait is happening.
You might feel that something is wrong with your connection, rather than realizing that the application is doing what it's supposed to and is just processing your request, because there's no actual feedback being given to you.
Another great example that I saw online is a travel booking site called Hipmunk. As you can see from the animation, they show a list of airlines being searched, which explains the long wait. There's also a progress bar, and they add a fun element to it: a friendly-looking mascot to make the wait enjoyable.
Now, the next explanation: unfair waits feel longer than equitable waits. A good example in the physical world is waiting in a queue to eat at a restaurant while people who arrived later are given priority seating; this can cause you to become agitated.
That feeling of being agitated makes the waiting experience horrible as well. To bring this back to performance on the web and address this feeling of unfairness, rules like first in, first out are widely adopted to enforce discipline while queuing.
If you've ever bought tickets to see your favorite artist, you've probably waited in one of these queues. For example, if tickets go on sale at 10 am, you log into your account, and normally whoever logs in first gets priority. But this can still have issues online, and in some cases fairness is quite difficult to manage because of ticket bots.
So what companies like Ticketmaster have done is introduce this concept of smart queues, which basically act as a virtual line to prevent ticket bots from buying tickets in a matter of seconds. From a web performance perspective, there's a recommended guideline here that is especially useful if you run an e-commerce website and you're trying to handle a large number of customers.
One way to do this is by implementing a queuing system in which you process customers' orders in the order they enter the queue, to avoid any feelings of unfairness. This doesn't necessarily improve the actual waiting times, especially if you have a high number of customers waiting.
However, it still improves their experience because their expectation is managed in terms of how long they have to wait. And this can also help manage your website's traffic. So there is less chance of your website crashing.
So the next explanation, the more valuable the service, the longer the customer will wait. So a perfect example is people can tolerate waiting a bit longer at high-end restaurants because they know that the service will be more valuable.
Let's compare that with waiting at a fast food chain. Imagine waiting 45 minutes to get a burger from a fast food chain; you'll definitely perceive that experience negatively, because you shouldn't be waiting that long at a fast food chain.
Back to the web: suppose you have a feature that is valuable, and you know that it takes some time to process. Take the airline example, or insurance comparison.
When you try to buy insurance, the first result that comes back isn't necessarily the one you trust, because you want to make sure you get the best insurance. You're willing to wait, because you know that if you wait a bit longer, you might see cheaper insurance. It's the same with other kinds of services, for example when you're applying for a mortgage.
If you instantly get a result saying, hey, you've been accepted for a mortgage, you don't really trust that result. So you're much more willing to wait if the service is more valuable to you. Other recommendations: you can explain the wait, you can indicate progress visually, and, as you saw with the Hipmunk travel booking site, you can also try to make it fun.
The next factor that explains why waiting feels longer is that solo waits feel longer than group waits. When you're standing in line alone, waiting feels longer than waiting in a group, because when we're in a group, we're more engaged and notice the waiting time less.
Now, because we do most online activities solo, companies need to come up with ideas to entertain us while we wait. A great example of this is the dino game that Google built into Chrome. Whenever a user attempts to browse while they're offline, the browser notifies the user that they're not connected to the internet.
And then this dinosaur game is also displayed on the page. So it keeps you distracted, and it also keeps you entertained. So again, in terms of recommendation, it's already quite similar. So hopefully, by now you can see that there is a common theme to it. So make it fun and also provide immediate feedback to your users.
Now, there are also other recommendations you can use to improve web performance, and I'll go over them one by one. The first one is that you can load above-the-fold content first. This is basically what your users first see when they load your website.
So what you can do is display the critical content above the fold, so that this is the content that gets shown immediately to your users when they first visit your website.
You can also lazy load content to reduce the initial page load, showing a placeholder image or a skeleton to indicate to users that there is content still to be loaded. When they finally scroll down to it, that placeholder or skeleton is replaced by the actual image.
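Here's a minimal sketch of that lazy-loading pattern using IntersectionObserver; the data-src attribute is a hypothetical convention for holding the real image URL while src points at a lightweight placeholder. For simple cases, the native loading="lazy" attribute on img tags achieves much the same without any JavaScript.

```js
// A minimal sketch of lazy loading images with IntersectionObserver.
const lazyImages = document.querySelectorAll('img[data-src]');

const io = new IntersectionObserver((entries, observer) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;     // swap the placeholder for the real image
      img.removeAttribute('data-src');
      observer.unobserve(img);       // stop watching once it has loaded
    }
  }
}, { rootMargin: '200px' });         // start loading shortly before it's visible

lazyImages.forEach((img) => io.observe(img));
```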
This is something I guess we all know, but adding loading states to your buttons also lets users know that something is indeed happening. This can improve perceived performance, because users see immediate feedback whenever they interact with a button.
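For illustration, a minimal sketch of such a button state; saveSettings() and the #save-button element are hypothetical.

```js
// A minimal sketch of a loading state on a button.
const button = document.querySelector('#save-button');

button.addEventListener('click', async () => {
  const originalLabel = button.textContent;
  button.disabled = true;            // prevent double submits
  button.textContent = 'Saving…';    // immediate feedback that work has started

  try {
    await saveSettings();            // hypothetical background request
    button.textContent = 'Saved!';
  } catch (err) {
    button.textContent = 'Something went wrong, please try again';
  } finally {
    setTimeout(() => {
      button.disabled = false;
      button.textContent = originalLabel;
    }, 2000);
  }
});
```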
Funnily enough, the animation you choose also matters. I've got two images here that look quite similar: the first one doesn't show the person below being picked up by the UFO, while the one on the right has that extra detail. It's a small change, but the image on the right is much more fun, so users are more likely to engage with that animation.
So again, animations really do matter. So I've mentioned already that if you just use a lot of endless loading spinners or animations that aren't really providing that useful feedback, then those animations won't really be useful. So make sure to choose your animations wisely.
And then, finally, even the font you choose matters. I read a study from KeyCDN.com; they have a blog post that outlines the impact of different fonts on the overall load times of your page. So if you're designing a website with custom fonts, be aware that those fonts might load slower depending on whether your user already has them installed.
It's much better to stick with universal fonts, because you know they won't have any impact on performance. Now, all the different factors I've discussed show that the waiting experience makes all the difference. What actually drives us crazy is not the waiting itself at all.
So you could see that we're actually willing to wait. But it's when we start to experience boredom and anxiety that makes waiting unenjoyable. So how we feel when we wait often matters a lot more than the duration of the wait. Now, when we talk about web performance, objective performance is still very important.
Page speed remains one of the key indicators you need to look out for regarding web performance. Jakob Nielsen wrote an article called “Response Times: The 3 Important Limits”. In this article, he summarized that there are three important limits when it comes to response time.
So the first is 0.1 second. So this is the limit for having the user feel that the system is reacting to it instantaneously. So, for example, if I'm typing on a keyboard, I should see the feedback immediately, and there shouldn't be any special feedback necessary.
And then the second limit is one second. This is the limit for the user's flow of thought to stay uninterrupted. Anything longer than that and users start to feel that something is happening and it's taking some time, but up to one second, the flow of thought is still uninterrupted. And then finally, the third limit is 10 seconds.
Although I would argue that this is probably much lower now, because we're very impatient nowadays. But basically, in this article, Jakob said that 10 seconds is the limit for keeping the user's attention. Anything longer than 10 seconds and you need to be providing feedback indicating when that operation is expected to be done.
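As a rough illustration of how those limits can inform UI feedback, here's a minimal sketch: no spinner for near-instant work, a spinner after about a second, and an explanatory message as the wait approaches ten seconds. fetchReport() and the #spinner / #status elements are hypothetical, and the exact cutoffs are assumptions rather than hard rules.

```js
// A minimal sketch of time-staged feedback around a slow operation.
async function loadReport() {
  const spinner = document.querySelector('#spinner');
  const status = document.querySelector('#status');

  const spinnerTimer = setTimeout(() => { spinner.hidden = false; }, 1000);
  const statusTimer = setTimeout(() => {
    status.textContent = 'Still working on it, this can take a little while…';
  }, 10000);

  try {
    return await fetchReport();      // hypothetical slow background call
  } finally {
    clearTimeout(spinnerTimer);
    clearTimeout(statusTimer);
    spinner.hidden = true;
    status.textContent = '';
  }
}
```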
Ideally, we know that websites should load immediately; I think Google recommends one to two seconds. However, we're not all Google, and in some cases that might not be achievable, especially if you've already exhausted all the improvements you can make to the objective measure.
So what you can do is you can take that a bit further and also make improvements from the psychological side, so from the perceived performance of the application. So you've made some improvements; how do you then know that these are successful?
So how do you measure perceived performance as well as objective performance? From an objective point of view, it's quite easy because you can look at the metrics. I spoke about the Core Web Vitals metrics, and there are a bunch of tools out there that can help you measure them, such as Google Lighthouse.
But keep in mind that Lighthouse uses lab data, which is collected from a controlled environment with predefined devices and network settings. So you should also complement that with tools that use field data, because that reflects what your users actually see.
Tools like PageSpeed Insights, the Chrome User Experience Report, and even Google Search Console can help you with this. From a Grafana perspective, we also have Grafana Faro, which provides a real user monitoring solution that can help you keep track of objective performance.
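A common way to collect that kind of field data yourself is Google's open-source web-vitals library; here's a minimal sketch (assuming web-vitals v3 or later) that beacons metric values to a hypothetical /analytics endpoint.

```js
// A minimal sketch of real-user monitoring with the web-vitals package.
import { onLCP, onCLS, onINP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,      // e.g. 'LCP', 'CLS', 'INP'
    value: metric.value,
    id: metric.id,
    page: location.pathname,
  });
  // sendBeacon survives page unloads; fall back to fetch if it's unavailable.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
```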
From a subjective or perceived performance point of view, using the same Web Vitals metrics can give you an indication. However, what I would highly recommend is asking your users directly how long it felt like your page took to load, for example.
So you can conduct some surveys, or if you have a team that conducts some user research sessions, then you can participate in those sessions and then observe how your user feels while they are using your application. So some final words before I end this session.
When we talk about web performance, it's very common for us to make improvements on the objective side, and those improvements are all valuable. However, we shouldn't stop there. It's equally important to understand the psychology behind waiting, the human reasons why people don't like to wait, and how perceived performance is not the same as actual performance.
So perceived performance then can be seen as the total of the expected performance, the user experience, and then the actual performance. So user experience is still very much an important factor. Now when you employ additional guidelines to make your website feel fast to your users, then you can ultimately improve the user experience.
And yeah, that is it. I hope that after this session, you can come up with a new set of heuristics and test ideas that can also help you cover the perceived performance of your application.
Harshit Paul (LambdaTest) - Right! Thank you so much, Marie, for that session. Tremendous slides, a powerful message, and pretty relatable to everyone as well, I believe. Core Web Vitals have always been a part of Google's priority list because, at the end of the day, Google also wants a better user experience.
And if your site is not catering to that, you have a lot to work on. And for the parts where we talked about performance not always being achievable, where there are challenges, those workarounds you presented, especially the Slack example and the others, show how we can keep user perception in mind and give people something to hold on to while they wait for the main content to come across.
So that was really handy. I do have some questions, and I'll just quickly put them to you. Speaking of performance, which technical factors significantly impact user perception of wait time, and how can they be identified and mitigated?
Marie Cruz (k6, Grafana Labs) - Yeah, so I think the really main thing is if there are any, let's say, background API calls that are taking a long time, especially if you're trying to process a high amount of requests. So from a user perspective, I think this can be mitigated if you try to provide some immediate feedback.
As long as you explain it to your users in a human way, they're actually much more forgiving than if there's no explanation at all. This also depends on the application you're testing, because of cases like the mortgage or insurance quote examples I mentioned.
So there are some cases where you actually want users to wait because you want to give them the best results possible. And again, I mentioned already a while ago that if, for example, you return the first result to your users, then they might feel that it wasn't actually the best result.
So one technical factor is background API calls that take some time to complete; again, that can be mitigated by providing feedback. I've also seen a lot of websites that have a lot of unexpected layout shifts, especially if you're trying to access them on a mobile device.
Websites with a high Cumulative Layout Shift have a lot of elements that move around, and that can cause high anxiety for your users. Let's say I've already navigated to the element I want to focus on, but then other elements load above it and suddenly the whole layout shifts.
So then I have to find the element I want to look at all over again. If you reduce your Cumulative Layout Shift score and are smart about how you employ lazy loading techniques, that can significantly improve user perception.
A common technique people have used in the past is lazy loading everything, deferring it all until it's needed. While that's useful because it can help your page load faster, from a user experience perspective, if things load at a later stage and the view suddenly shifts, that can hurt the experience.
A smarter way to do it is to show a low-resolution placeholder image and replace it with the actual image whenever the user needs it. So be smart with how you employ your lazy loading techniques.
Harshit Paul (LambdaTest) - That makes sense. And speaking of this, how do we optimize our CI/CD pipeline for consistent and reliable performance across varying user traffic conditions, particularly during peak usage times?
Marie Cruz (k6, Grafana Labs) - So here at k6, we've actually written an automated performance testing guide. Basically, as part of that guide, we've added tips, for example, on how you can write different types of performance tests. One misconception that people might have is performance tests are all about load testing.
But actually, when we think about load testing itself, you can do smoke testing, stress testing, soak testing, and spike testing. So depending on the performance test you want to execute, you can be smart about it; for example, on a develop or pull request environment, I might want to run my smoke test on each commit.
So the smoke test can just validate the experience of one user. The other important thing is to use tools that are compatible with, or can integrate well into, the workflow of your teams. From a front-end performance perspective, there are tons of libraries out there that can help measure the Web Vitals metrics and integrate into your testing framework of choice.
You can have a look at Lighthouse, or, depending on what testing framework you're using: if you're using Cypress, there's a plugin that can help measure the Web Vitals metrics; if you're using Playwright, it also has some features for that; or if you're using k6, we have a k6 browser module that you can use.
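For illustration, here's a minimal sketch of a k6 browser test, assuming a recent k6 release where the browser module is imported from k6/browser (older releases used k6/experimental/browser); the target URL is a placeholder. The browser module reports Web Vitals such as LCP and CLS alongside the usual k6 metrics.

```js
// A minimal sketch: drive a real browser from k6 and collect Web Vitals.
import { browser } from 'k6/browser';

export const options = {
  scenarios: {
    ui: {
      executor: 'shared-iterations',
      iterations: 1,
      options: {
        browser: { type: 'chromium' },
      },
    },
  },
};

export default async function () {
  const page = await browser.newPage();
  try {
    // Replace with one of your own high-traffic pages.
    await page.goto('https://test.k6.io/');
  } finally {
    await page.close();
  }
}
```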
So again, you have to choose a tool that can integrate well into the workflow of your teams. And depending on other environments that you have, so let's say you have a QA environment, you can then run your average load test.
And if you have a staging environment, then at least before you push something to production you can have that increased confidence: depending on the user traffic, you can have a job that runs average load tests, or even a job you execute manually if you want to perform some spike testing, smoke testing, or soak testing.
The important thing, if you're integrating all these performance tests into your CI/CD pipeline, is to track the performance metrics continuously, because you need a trend of data you can observe and understand, so you can make improvements and check whether the improvements you've made have had an impact on your actual product.
You can look at whether the trend goes downwards, because the goal is to make the metrics as fast as possible, and check whether or not the improvements you've made have actually contributed to the speed of your application.
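To make that CI/CD angle concrete, here's a minimal sketch of a protocol-level k6 smoke test with thresholds that fail the test run, and therefore the pipeline job, when latency or error rates regress. The URL is a placeholder and the threshold values are illustrative assumptions, not recommendations.

```js
// A minimal k6 smoke test with pass/fail thresholds for CI.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 1,                // smoke test: a single virtual user
  duration: '1m',
  thresholds: {
    http_req_duration: ['p(95)<500'],   // 95% of requests under 500 ms
    http_req_failed: ['rate<0.01'],     // fewer than 1% failed requests
  },
};

export default function () {
  const res = http.get('https://test.k6.io/'); // placeholder URL
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```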
Harshit Paul (LambdaTest) - Right, and as Marie pointed out, we will be attaching the reference material to add more on top of this question on our YouTube channel, where this video will be uploaded. So you can find that reference material in the YouTube description.
So, feel free to head out and check it over there. Heading on to the next question, how can teams integrate performance testing into DevOps? You know, ensuring continuous optimization aligned with rapid deployment cycles.
Marie Cruz (k6, Grafana Labs) - So with this, the first tip I would share is to start with small improvements. If you can start early, that's always the goal, but if you're working on a project that's already midway, already live in production, you can still start with small improvements. You can focus on the pages that are highest in traffic.
Look at the analytics you have. As I shared, you can use Google Analytics or Google Search Console to check which of your pages are the most visited, check what the Web Vitals metrics are for those particular pages, and make some small improvements; I think that can make the experience a bit better.
As long as you're making small improvements continuously, monitoring their impact on your application, and maintaining a trend of data that you can then use, I think that's a good approach. The other thing is to discuss all these performance requirements earlier. I know in our industry, for example, we have distinct functional and non-functional requirements.
People might perceive functional requirements as important and non-functional requirements as merely nice to have, but we actually need to rephrase that and just call everything a requirement, because, at the end of the day, performance, accessibility, and security should also be discussed as early as possible; they're just as important as the functionality of your application.
So if you discuss this as early as possible and bring your concerns to the team even before the application is fully deployed, that's much better than waiting until the very end, when you've already deployed something. So make sure to talk about it early.
Harshit Paul (LambdaTest) - Yeah, I couldn't agree more. You gave a very interesting example about anxiety as a parent, right? I guess another very relatable anxiety example is when you deploy a hotfix and you're waiting for that regression suite to come back clean.
So as you said, better to keep these things lined up in the early phases rather than dumping everything towards the end, of course. You know, speaking of the modern web, third-party services cannot be left out, right?
So when we integrate third-party services into web apps, what technical measures should be taken to prevent performance bottlenecks and ensure that there is a seamless user experience?
Marie Cruz (k6, Grafana Labs) - Yeah, this is quite tricky, because third-party scripts are essentially out of your control; they're maintained by other services or companies. But what you can do from your side is audit all the third-party scripts you have and really identify: do I actually need this particular script?
Because if that script is making your page look nice visually but also making it super slow, then you need to decide as a team which is more important. And as you've seen in this presentation, your application may look good, but if it's slow and sluggish, the user experience is still not good.
If you can remove any scripts that aren't really essential, then remove those. One other thing I can think of, since I mentioned the lazy loading technique, is that you can also employ lazy loading for third-party scripts.
So one example that I can think of is whenever your page has loaded, you can use lazy loading to load the third-party scripts only when they are needed. So at least it's not slowing the page down all the time. You can also try to load them asynchronously, like in the background, so it doesn't really, I guess, interfere with the rendering of your entire page.
By employing these additional techniques, you can lazy load them, for example, only when your users actually need them, so at least you're only showing what's critical to them when they first visit the application. And if you're using a content delivery network, or CDN, I believe you can cache the third-party scripts.
Although I'm not exactly sure of the technicalities, I think you can use a CDN to cache the script so that it doesn't load everything from the source server every time. At least it's cached, and that can reduce the latency and bandwidth of the requests.
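As a sketch of the on-demand approach Marie describes, here's a minimal example that injects a hypothetical chat widget script only when the user actually asks for it; the URL and element IDs are placeholders.

```js
// A minimal sketch: load a third-party script only on demand.
function loadThirdPartyScript(src) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;             // don't block rendering while it downloads
    script.onload = resolve;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

document.querySelector('#open-chat').addEventListener('click', async () => {
  await loadThirdPartyScript('https://example.com/chat-widget.js'); // placeholder
  // The widget's own initialization call would go here once it has loaded.
}, { once: true });
```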
Harshit Paul (LambdaTest) - Wow, so yeah, those are all the questions from my side, and even though my video isn't on, I've definitely been making notes of what you've suggested throughout. So there's a lot for me to experiment with from this session, and I feel our viewers will be feeling the same way.
So thanks a lot, Marie, for this entire session. Fascinating slides, a powerful message conveyed to everyone, and most importantly, thank you so much for taking time out of your busy schedule and joining us.
Marie Cruz (k6, Grafana Labs) - Thank you so much.
Harshit Paul (LambdaTest) - Thanks to everyone who tuned into the session as well. Stay tuned for more exciting episodes of the LambdaTest XP Series. Until then, take care and happy testing. Bye-bye.
Marie Cruz (k6, Grafana Labs) - Bye, thank you.