December 19, 2024
36 Mins
Heemeng Foo (Guest)
Director of Quality Assurance
Kavya (Host)
Director of Product Marketing
The Full Transcript
Kavya (Director of Product Marketing, LambdaTest) - Bugs are an inevitable part of software development, yet they remain one of the most challenging issues to address effectively. To achieve bug-free code, it is critical to understand why bugs occur in the first place. Factors such as inadequate requirements definition and a lack of software craftsmanship are significant contributors.
However, for many QA professionals, these factors are often outside their direct control. That's where a filter approach to the software pipeline can make a difference. It could help address bugs at every stage of the process to minimize their occurrence. In today's session, we will explore how this methodology can redefine software quality practices.
Hi, everyone. Welcome to another exciting session of the LambdaTest XP Podcast Series. Through the XP Series, we dive into a world of insights and innovation featuring renowned industry experts and business leaders in the QA and testing ecosystem. I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's a pleasure to have you with us today.
Today's topic resonates deeply with all of us in the QA and testing domain. Why do we have bugs, and why do they happen? I think this is a question everyone keeps pondering, isn't it? We are here to uncover the root causes of bugs and to explore actionable strategies to mitigate them.
Before we start today's discussion, let me introduce you to our guest on the show, Director of Quality Assurance at Rocket Lawyer, Heemeng Foo, or as some of you might know him, Chris. With over 20 years of experience in the software industry, Chris started as a developer and evolved into a leader of software quality teams.
His journey spans diverse industries like mobile, telco, payments, legal tech and digital culture. He's done it all. Currently, as the Director of Quality Assurance at Rocket Lawyer, Chris brings a holistic perspective to software quality. Beyond his professional achievements, he's an avid traveler, foodie and writer.
And of course, he shares his thoughts through engaging articles on Medium. So, we are pleased to have him with us today. In today's session, of course, Chris will delve into the reasons bugs manifest and why they continue to challenge developers and testers alike, and so much more.
So before we jump into the heart of the discussion and bring up the first question that we have for him, Chris, I just want to check with you: if I missed adding anything interesting about you, please go ahead and share your journey in the world of testing with the audience.
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Thanks, Kavya. Really appreciate you guys reaching out to me to organize this webinar. And if you're wondering about my accent, I'm originally from Singapore. I currently live in the US, on the West Coast, in California. But I'm originally from Singapore. So if I sound weird, it's the crazy rich Asian accent.
Just to clarify, I'm neither crazy nor rich, but I sure am Asian. So that's that for me. The other interesting thing is growing up, I've always had this very morbid fascination for systems failure.
I don't know if folks remember, and I'm probably dating myself by saying this, but I used to really love this Discovery Channel series called Air Crash Investigations, where every episode they would feature a particular crash, and they would have the narrative of how the National Transportation Safety Board goes through the whole investigation.
And what struck me was that it was always a series of unfortunate events. It's not that the system was not designed well or whatever. But sometimes, it's just because certain assumptions that were made upfront did not hold up for that particular instance. Maybe it's climate, maybe it's weather, maybe it's some way that the plane was being used, blah, blah, blah.
And we see that a lot in software as well. In fact, that was one of the things I was talking about in a recent talk based on my article, the Sieve of Eratosthenes approach to bug-free code. And why is it that we have bugs, right? And I hope you don't mind, Kavya, me going through it, because I think it's worth clarifying.
So, you know, why do we have bugs? I think for those of you who have been in the software quality space for a while, working with software development teams, you notice that it comes from, number one, the requirements: they were either incorrect in some way, or inadequate, or there was some mismatch in the requirements.
And then the other side of the picture is how the software was built. They would make certain assumptions about the environment, maybe, you know, certain aspects of the cloud and all that, right? And you wonder, is it that people are negligent, or are they just dumb?
No, it's not. We have very smart people on the product. We have very smart people in software development. It's just that the situation has become so complex, and companies want to move so fast that it compounds the problem, right? We still want to make sure that we get good requirements.
We want to make sure that software is built well, but the environment itself has become so complex. And so we, in software, have to understand the environment that we are operating in and figure out how to deal with that. In that article, I do talk about the pipeline approach, which, if you look at it carefully, is actually an inverted test pyramid with some added layers.
Now, the way to look at it is as a water filtration system, right? How do you filter water so it's drinkable? You start with large rocks, usually, and then smaller and smaller rocks and pebbles, and then eventually sand, and you filter it.
So different-sized particles get lodged at the different layers. Similarly, with this approach, at every layer we remove bugs, or we remove the probability of bugs. And so that is my approach to addressing the issue. Now, then the question is bug-free code: you want to minimize the number of bugs.
Whether you can achieve bug-free code is another question, right? Because you may be able to achieve that at that point in time, but as things change, your environment changes, bugs will surface. So it's how to deal with that. That's the key.
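For readers curious about the algorithm the article's title borrows from, here is a minimal sketch of the classic Sieve of Eratosthenes in Python. The analogy is that each pass strikes out a whole class of candidates, much as each layer of the filtering pipeline is meant to strike out a whole class of bugs; the code is purely illustrative and is not taken from the article.

```python
def sieve_of_eratosthenes(limit: int) -> list[int]:
    """Return all primes up to and including `limit`.

    Each pass strikes out every multiple of the current prime, the same
    way each layer of a filtering pipeline is meant to strike out a
    whole class of defects.
    """
    if limit < 2:
        return []
    is_candidate = [True] * (limit + 1)
    is_candidate[0] = is_candidate[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_candidate[p]:
            # One "layer" of the filter: remove every multiple of p.
            for multiple in range(p * p, limit + 1, p):
                is_candidate[multiple] = False
    return [n for n, keep in enumerate(is_candidate) if keep]


print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```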
Kavya (Director of Product Marketing, LambdaTest) - Such a nuanced story and analogy that you just shared, Chris. I think it's a great example, and you connected it back to the Air Crash Investigations documentaries you mentioned at the very beginning. Thank you so much for giving us that context before we jumped in.
So, moving on to the very first question that we have in place, right? In your experience across various industries, what's the most surprising source of bugs that you've encountered?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - So I'll give the audience a bonus. I'll relate two stories. So the first: I learned to code when I was 14. And for those of you who are familiar with what the O levels and A levels are, essentially the A levels are like your senior high school in the US. And I did computer science at A level.
So what happened was, you know, as part of the course, you had to do a project, basically to write a big piece of code. And so I did my project in QuickBasic on an XT. So again, I'm dating myself. And for those of you who are familiar with QuickBasic in the late 80s, you could run the program.
So essentially, it was a simulation, and then you could compile it into an exe file. My program worked perfectly when I ran it on the simulator. But then when I compiled it to an exe, it always crashed at a particular spot in the code. And I was like, why is it always crashing there?
And I was able to find the actual line of code that was causing the problem. The only way to solve it was to add a go-to statement so that if it hit that line, it would jump to the correct line of code. And I didn't understand the root cause of that bug until I was in college, doing a course on the IBM PC's 8086/8088 architecture.
And then I learned about, in the assembly code, a normal (near) jump and a far jump, because of the number of address bits at the time, and I may be wrong about the reason, but long story short, you had to do a far jump if the target was too far away in the code. And there was a bug in the compiler: it made a normal jump instead of a far jump.
So that's the first story. The second story is much more recent, I think 12 to 15 years ago. In the earlier days of Android, there was a particular Android device, I think it was Android 2.0 or something. The device was called the HTC ChaCha. It was an Android device with a physical keyboard, like a BlackBerry, and that device always had issues with apps.
And so I used it as a stress test for the apps that I was testing. And it was very effective, because you could flush out issues with the garbage collector and all that kind of fun stuff. And I didn't know why it was always causing issues until about a year later, when I met an ex-Qualcomm engineer.
And he said, that device. I remember that device. The company had rushed the chipset, and it had issues. They knew it had issues. And so that was one of the reasons why apps showed issues, which was not something that you would see on a more advanced device.
Yeah. So two stories. I hope this is interesting for you. But sometimes the lesson learned is, you know, we have to try to find a workaround. Sometimes you don't discover the root cause of the bug till much later. You try, but it may not necessarily be possible, because there's some information that is hidden from you and you wouldn't be able to isolate it, and you just have to find a workaround.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much for sharing these, Chris, because these are some fascinating stories, and it's really interesting how you eventually uncovered the root cause, especially in the first story that you shared, right? Years later, it sort of connected the dots for you.
Super insightful. Yeah. And moving on to the next question that we have, right? How can QA teams find the right balance between striving for bug-free code and the realities of development timelines?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Yeah. So that's actually a really great question. And the answer is also very simple. See, as QA, our job really is to discover the bugs, and we prioritize them to the best of our knowledge, right? And we can advocate for the user by petitioning on their behalf.
But the reality is that the business, and by business, usually I mean the product team, decides whether those are going to block the launch or not. Of course, there is a wide spectrum of this because in certain cases, say, for example, in a regulated industry, a bug could mean that the product doesn't pass certification.
So, it becomes a blocker straight away. But then on the opposite end of the spectrum, we have the fact that companies need to get user feedback and signals early in order to refine the product. So, some bugs are acceptable. They say that perfection is the enemy of good. And we sometimes have to accept the fact that there will be some bugs out there, because there's a bigger objective.
So it's up to us as leaders to make those assessments in collaboration with our product and business counterparts. Because, at the end of the day, quality is a team sport. We work with our stakeholders and our product and development counterparts to get the best level of quality that makes sense for the user and for our business. I hope that that clarifies.
Kavya (Director of Product Marketing, LambdaTest) - Absolutely. Thank you so much for sharing that, because, yeah, we have heard this time and again that quality isn't a one-person job. So it's reassuring to hear how collaborative decision-making is needed to make the product work at the end of the day.
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - So we are definitely, I would say, the facilitators, the coach, and we are the ones that ask the most difficult questions. And we should be the ones to ask the most difficult questions.
But at the same time, we have to understand that it's a running business, and the business is the one that helps us to succeed and pays for our bills and so on and so forth. And so nobody wants to have bugs, not the business leaders, not the product team or the development team, because it affects us all. So we all want to get the right level of quality. And that is where the collaboration comes in.
Kavya (Director of Product Marketing, LambdaTest) - Absolutely, and I think there are multiple pieces as well that need to fall into place. For instance, it's also taking me back to the story that you were sharing about the Android device with the physical keyboard, right?
Because the chipset had issues, you were able to detect some of the bugs with it. So, can you elaborate on specific practices for developers and product managers to minimize bugs before they reach QA?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Yeah. So, what I would say is, and I believe the audience is mostly in QA, work with your dev and product counterparts to get the test plan reviewed as early as possible.
Once you have some kind of a product spec, and usually at that time your dev counterpart would have started working on some initial, very high-level designs, start working on your test plan, and as soon as you can, have that sit-down conversation, that meeting of minds between your product counterpart and your dev counterpart on the test plan review.
Because what I found is that until things are specified as tests, sometimes the developer doesn't know what they actually have to build, because the product specs could be vague. And in order to come up with the test plan, you need to understand the requirements, and you need to be able to express the requirements as tests.
And when you do that, you start to realize, wait, hold on, this doesn't make sense or there's something missing here. And that forces the product manager to answer for that. And then that clarifies everything. And once the developer has something to work towards, it's the spirit of test-driven development.
What is test-driven development? Basically, you build to fulfill the test. Once you have the test specified, even at a high level, it forces the developer to say, hey, OK, I need to design to fulfill the test, or I need to build to fulfill the test. Then, it becomes a lot clearer.
And it's not something where the product manager downstream will say, that's not what I wanted you to build, right? Because you clearly agreed that the tests would fulfill your requirements. So, it should logically make sense.
There will be some gaps in that process, but I found this to be very, very helpful in clarifying a lot of things upfront. I hope that answers the question. I won't say it's a roundabout answer, but I hope it does answer the question.
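To make the "build to fulfill the test" idea concrete, here is a minimal, hypothetical sketch of that flow in Python with pytest: the tests are written and agreed on first, and the implementation exists only to make them pass. The module name, function, and discount rule below are invented for illustration and are not from the podcast or Rocket Lawyer.

```python
# test_discounts.py - written first and reviewed with product and dev,
# so everyone agrees on the behaviour before any implementation exists.
# `apply_discount` and the 10%-over-100 rule are hypothetical examples.

from discounts import apply_discount


def test_orders_over_100_get_ten_percent_off():
    assert apply_discount(order_total=200.0) == 180.0


def test_orders_at_or_below_100_are_unchanged():
    assert apply_discount(order_total=100.0) == 100.0
```

The implementation is written afterwards, with one job: make the agreed tests pass.

```python
# discounts.py - built to fulfill the tests above.

def apply_discount(order_total: float) -> float:
    if order_total > 100.0:
        return order_total * 0.9
    return order_total
```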
Kavya (Director of Product Marketing, LambdaTest) - No, that makes perfect sense. I think adopting test-driven development, along with early test planning that aligns the entire development cycle with test cases, can definitely prevent a lot of headaches for devs and testers alike down the road.
Thanks for sharing how that proactive approach also sets the tone for quality at the end of the day. Moving on to the next question that we have, which is: how can organizations leverage bug data to improve the development process and prevent similar issues in the future?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Well, I guess one way would just be to look for trends in the bug data. For example, things like: why is this particular team or pod always having issues like this? So, you can do some clustering or grouping to see where the trends are. Or a particular component always shows bugs; it's vulnerable to bugs.
Maybe this particular module needs to be bubble-wrapped with more tests before shipping, or you need to have more gatekeepers and maybe more code reviewers for any change in the code. But I personally don't do a lot of that kind of mining of the bug repository.
To me, what I've found to be really useful has been, number one, I try to attend as many incident retrospectives as I can. Like I said, I have this morbid fascination with systems failure. And then regular check-ins with your customer success or customer support teams, because they really know where the customer annoyances are.
So how many folks in the quality space have had this experience: you found a bug, you get very happy, and then the developers say, no user is going to hit that flow? But if it's coming from a user, if it's coming from customer support, you can't refute it.
They have actually stepped on the mine. So yeah, it's very important to really understand what your users are seeing, and the customer support team always has the best sense of what's going on.
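For teams that do want to mine their bug repository for the kind of trends mentioned above, a minimal sketch could look like the following. The CSV export and its column names are assumptions about how a tracker export might be shaped, not a reference to any particular tool.

```python
import pandas as pd

# Hypothetical CSV export from a bug tracker with columns:
# bug_id, component, severity, created_at
bugs = pd.read_csv("bug_export.csv", parse_dates=["created_at"])

# Which components accumulate the most bugs, and the most critical ones?
hotspots = (
    bugs.groupby("component")
        .agg(total=("bug_id", "count"),
             critical=("severity", lambda s: (s == "critical").sum()))
        .sort_values("total", ascending=False)
)
print(hotspots.head(10))

# Is any component trending upward quarter over quarter?
bugs["quarter"] = bugs["created_at"].dt.to_period("Q")
trend = bugs.groupby(["quarter", "component"]).size().unstack("component", fill_value=0)
print(trend.tail(4))
```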
Kavya (Director of Product Marketing, LambdaTest) - I think that's a great insight, because, at the end of the day, we are also building for the customer. So having their feedback and having those regular check-ins with the customer support teams definitely helps you understand the pulse of which issues or bugs are bothering the customers the most, not just in terms of number but also in terms of severity, I think.
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Right, right. And try your best, as part of the executive team, if any of you are at that level, to celebrate the fact that your customers are sending you feedback. Because, you know, most customers will not even bother; they'll just move on to your competitor or some other provider for that service. So do celebrate them and help them out to resolve whatever issues they are facing. It definitely pays down the road.
Kavya (Director of Product Marketing, LambdaTest) - Thank you, Chris, and even the bit that you were mentioning about, you know, sitting in on incident retrospectives, right? The value of it, for instance, and looking for trends, if I can put it that way, trends or patterns, right? That again seems to be an actionable strategy that I think QA teams can adopt.
Just out of curiosity wanted to know, you know, if there has been one of those incident retrospective sessions that sort of led to a very fascinating solution for you. Is there anything that sort of comes to your mind?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Unfortunately, I can't speak about them for non-disclosure reasons, but maybe one of these days I will write a book, because some of them are really, really amusing. But I think there's this meme that's out there. I don't know if you have seen the meme, but they show something like a trough, a feeding trough for cats.
Kavya (Director of Product Marketing, LambdaTest) - Yeah. Of course. That would be great.
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - So they have nicely put out three bowls for you to put cat food in. But then the way that the cats approach it is, one will go this way and one will go that way. So you can design a system to suit the user. You think it's logical, right?
But how the user will use your system could actually surprise you by a lot. So, actually understanding how the user is using your system is very important. So, for those of you who have done high school chemistry, I don't know if you remember there's quantitative analysis and qualitative analysis, right?
So, your quantitative analysis is all your metrics. So, like, you know, the user flow metrics that you collect are one data point. The other one is the qualitative analysis, which is where you get input from customer support or even direct feedback from the users.
Kavya (Director of Product Marketing, LambdaTest) - Great, thank you so much, Chris. I think that is pretty insightful, of course, without you disclosing anything. Moving on to the next question: given limited QA resources, how can teams prioritize bug detection efforts to ensure that the most impactful bugs are caught early?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - So this is from my experience. Try to automate the parts that are not going to be changed that often so that you can focus your most talented testers on the aspects that are changing frequently.
And the next point is actually something I learned after having a conversation with the head of QA at quite a prominent fintech company. And she said, insist on a minimum unit test code coverage from dev teams. She said that she would never assign any QA resource to any team that doesn't have at least 50% test coverage.
So I thought that was a very interesting insight. The next point, and this is something that I'm pushing for, is QA coming up with test cases and then having dev implement them, with QA doing code reviews of the test code. And then no feature should get deployed without the corresponding test. I know it's very difficult to implement because of the usual rush.
Then what you do is try to make sure that, as soon as possible, you have the automated tests, because if you do not do that, what's going to happen is your backlog of automated tests keeps growing, and you can't catch up. In manual testing, you know, it's not that people are not good or anything; the problem is sometimes you might miss something, especially in a rush. But automated tests will not miss, and they are extremely unforgiving.
There are some AI tools now that make it a bit more forgiving, but the reality is that they're not forgiving. That in itself is not a bad thing, because then you're forced to cover those tests even if they may not necessarily be needed. And then it's just a matter of managing the set of automated tests to be run.
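One way a team could operationalise the minimum-coverage rule relayed above is to fail the build whenever overall coverage drops below the agreed floor. The sketch below assumes a Cobertura-style coverage.xml report (as produced by common coverage tools) and uses the 50% figure from the conversation; many coverage tools can also enforce a floor directly through a fail-under style setting, in which case a script like this is unnecessary.

```python
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 50.0  # minimum unit-test coverage floor, as discussed above


def coverage_percent(path: str = "coverage.xml") -> float:
    """Read total line coverage from a Cobertura-style report."""
    root = ET.parse(path).getroot()
    # Cobertura reports expose overall line coverage as a 0..1 "line-rate".
    return float(root.attrib["line-rate"]) * 100.0


if __name__ == "__main__":
    pct = coverage_percent()
    print(f"Total line coverage: {pct:.1f}% (floor: {THRESHOLD}%)")
    if pct < THRESHOLD:
        sys.exit(1)  # non-zero exit fails the CI job
```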
Kavya (Director of Product Marketing, LambdaTest) - I think these are some excellent strategies, especially for teams that are resource-strapped, right? And it also further drives the point that there needs to be this collaboration between QA and dev for the test implementation that you just highlighted. Great.
And I can't believe that it's already the last question for the day, which is: for our QA listeners who might feel like they are constantly playing catch-up on bugs, what's one piece of key advice that you can offer to help them be more proactive?
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - So I would say, first of all, shift your perspective. Because the fact that the product has a lot of bugs means you have a job. And this was a very interesting conversation that I had with a senior quality engineer at a previous company. They were like, why is it that we are always hired when things are really bad?
But that's the reality, right? It's when things are really bad that they will hire people in the quality space to solve it for them. Otherwise, if everything is good, why would you need to hire quality engineers into your team, right? Because at the end of the day, those of us in quality, we do not build features. We do not fix bugs, right?
So, we are a cost center, and we have to remember that. So having bugs, having to constantly play catch-up on bugs, is not necessarily a bad thing. It's good that there is a problem for us to solve so that the company is able to succeed. Next is, and this is something that I've been saying a number of times over the course of this webinar:
Quality is a team sport. Nobody wants bugs: not the business, not the product team, not the development team. And for those of you who went to high school or college with software developers, you will know they absolutely hate to be wrong.
They were always the smart kids who, when the math teacher asks the class who can solve this problem, are the first ones to go up to the blackboard and start solving it, right? They hate to be wrong. So they don't like bugs. But the thing is, it's the reality of the complexity of the environment we are in that these things happen.
We can really help them by tapping them on the shoulder: hey, while you're working on this very challenging problem, don't forget these aspects as well. This could be in the form of a set of tests for them to run.
And they will be very appreciative of that, because they don't want bugs. And the product team doesn't want bugs, because they want to make sure that the business does well. That's how they are measured.
So they don't want to annoy the customers either. Nobody wants bugs. So if we look at it from that point of view, we actually are working towards the same goal; it's just that we are focusing on different things.
Kavya (Director of Product Marketing, LambdaTest) - Thanks once again, Chris; I think this is such a powerful perspective as well. It reminds all the QA professionals out there that quality is, of course, a shared responsibility, and that it's a valuable mindset shift that needs to happen within teams, right?
And yeah, not just QA teams but also the dev teams. Thank you so much for sharing that practical advice for the listeners as well; I'm pretty sure it will be super helpful. Chris, thank you so much for an enlightening session. As we wrap up today's discussion, I just want to thank you for joining us. I know it is super early for you.
So, once again, really appreciate you walking us through the complexities of why bugs occur and how to approach them with practical and strategic solutions. I'm sure your insights have not only shed light on the root causes but have also, hopefully, empowered QA teams with actionable strategies that they can take away and implement within their teams.
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - I hope so. And here's the thing, right? I know that this is a particularly challenging time for a lot of folks in the industry. But what I would say is, you know, be the best that you can be at something: one aspect or a few aspects of the space, your professional craftsmanship or craftspersonship.
And let people know that you are good at that, right? And then when things pick up, people will be looking to build teams. They will need people who have the ability and the talent in those areas. You know, if you have done your, I wouldn't say publicity, but if you have shared your knowledge, that will help make your career runway a lot longer.
Kavya (Director of Product Marketing, LambdaTest) - Absolutely. Thank you, Chris. And yeah, a big thank you to our audience for joining us today. We hope you found this session valuable. If you have any questions, please contact Chris on LinkedIn. We'll definitely be tagging him in the post, so you can just reach out to him.
And for those who are listening, subscribe to the LambdaTest YouTube Channel for more exciting episodes of the XP Podcast Series. Until then, keep exploring, keep innovating, and test smarter. Thank you everyone. Have a good day. Thanks once again, Chris.
Heemeng (Chris) Foo (Director of Quality Assurance, Rocket Lawyer) - Thanks, bye!
Guest
Heemeng (Chris) Foo
Director of Quality Assurance, Rocket Lawyer
Heemeng Foo has been in the software industry for more than 20 years, first as a developer and then building and leading software quality teams. His experience spans the Mobile/Telco space, Payments, Digital Agriculture, LegalTech, as well as Digital Media. As a software quality leader, he has built and led teams from pre-Series A, Series B and C, and also in an MNC. He is currently the Director of Quality Assurance at Rocket Lawyer and looks at software quality from a more holistic perspective. An avid traveler, foodie and writer, you can find his more recent articles on Medium.
Host
Kavya
Director of Product Marketing, LambdaTest
With over 8 years of marketing experience, Kavya is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.