Webinar: Man Vs. Machine: Finding Replicable Bugs Post-Release [Experience (XP) Series]
Yash Bansal
Posted On: November 17, 2023
In the fast-paced world of software development, the pursuit of flawless applications is an eternal quest. However, the journey is fraught with challenges, especially when identifying and resolving bugs. Imagine yourself in a dynamic software development setting, where teams are pressured to meet tighter release schedules and work within limited budgets. This often results in software shipping with undiscovered bugs, as there is insufficient time for thorough identification and resolution during development.
In the traditional bug identification landscape, teams face a cumbersome process. Bugs are tracked through various methods, creating lists that might be prioritized inadequately, leading to critical issues getting lost in a chaotic backlog. This backlog, a mishmash of bugs, ideas, and user requirements, becomes a bottleneck in the development pipeline. The communication breakdown between teams further compounds the problem, as the support and development teams engage in a back-and-forth cycle with users, attempting to replicate and resolve issues. As software evolves post-release, feedback scattered across diverse platforms and the struggle to prioritize bugs can hinder a seamless resolution process.
But now, a paradigm shift has occurred as non-technical users are empowered to contribute to bug identification. Organizations can bridge the gap between technical and non-technical members by streamlining the bug detection process and fostering closer team collaboration. With clear communication channels, the software development life cycle speeds up, reshaping the landscape of post-release bug identification and resolution.
So, without any further ado, let’s plunge into the depths of finding replicable bugs post-release and unravel the complexities of bug identification in the current landscape.
TABLE OF CONTENTS
- About LambdaTest XP Series: Webinar & Speaker
- Impact of Tighter Release Schedules and Cost Constraints
- Traditional Bug Identification Methods and Pre/Post-Release Tracking
- Scattered User Feedback Across Multiple Platforms
- User Feedback Integration Affecting Development Life Cycle
- Cultural Shifts and Mindset Changes for Bug Resolution
- Q&A Session
- Wrapping Up! I Hope You Enjoyed It!
About LambdaTest XP Series: Webinar & Speaker
LambdaTest Experience (XP) Series includes recorded webinars/podcasts and fireside chats featuring renowned industry experts and business leaders in the testing & QA ecosystem. In this episode of our XP series webinars, our esteemed speaker is Jonathan Tobin, Founder & CEO of Userback.
Jon is not a typical CEO, but he’s a forward-thinking individual who believes in the perfect blend of technology and human interaction. Beyond business, his interests span a diverse spectrum, from the founder’s journey to the art of slow BBQ. Yes, you heard it right, slow BBQ. Whether you seek insights into customer-centricity, building software, or the finer details of BBQ, he is a valued voice.
Impact of Tighter Release Schedules and Cost Constraints
Jonathan highlighted that tighter release schedules and cost constraints, along with the internal quality trade-offs they force, lead to greater reliance on automation testing tools. Sometimes, though not always, those same pressures mean hiring less experienced engineers. With less experience, more bugs are introduced into production, and there’s even less time for developers to do testing.
We then end up with users running into issues as they use the software, effectively catching the bugs for us. More of these bugs get labeled non-critical and placed into the backlog: while teams prioritize the critical bugs, the rest pile up. That changes how product development happens, because the team moves on to the next project while bugs that everyone knows need resolving are still sitting there.
So we look for tools to assist teams, and we end up relying more on third-party technology than on people genuinely trying to find and resolve issues thoroughly.
Traditional Bug Identification Methods and Pre/Post-Release Tracking
Jonathan mentioned that traditional bug identification methods are generally quite time-consuming, so people tend to take shortcuts. Most organizations want to do the right thing, identify issues, and follow the proper processes. Teams typically create lists of bugs and then prioritize them, but bugs can get lost depending on how severity is assessed, which means some bugs don’t get fixed in time.
When non-critical bugs are placed on the backlog, the backlog fills up, and the bugs get mixed with ideas and user requirements. Teams end up with a very messy backlog because it contains everything. Pre-release, the answer is a more focused effort on streamlining the bug detection and testing process for higher-priority bugs, together with a higher level of cooperation with the engineering teams.
Regarding the post-release process, having an internal SLA for bug resolution, maintaining clear communication with users, and keeping bugs out of the general backlog is interesting because it allows you to continue supporting customers and managing their expectations while the non-critical issues get resolved.
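To make the internal-SLA idea concrete, here is a minimal sketch of severity-based resolution deadlines. The severity labels and time windows are illustrative assumptions, not values from the webinar; real teams would set whatever windows they agree to internally.

```python
from datetime import datetime, timedelta

# Hypothetical severity-to-SLA mapping; the actual windows are
# whatever your team commits to internally.
SLA_WINDOWS = {
    "critical": timedelta(hours=24),
    "high": timedelta(days=3),
    "medium": timedelta(days=14),
    "low": timedelta(days=30),
}

def resolution_deadline(reported_at: datetime, severity: str) -> datetime:
    """Return the date by which a bug of the given severity should be fixed."""
    return reported_at + SLA_WINDOWS[severity]

reported = datetime(2023, 11, 1, 9, 0)
print(resolution_deadline(reported, "high"))  # 2023-11-04 09:00:00
```

A deadline like this gives support a concrete promise to communicate to users, which is exactly what keeps non-critical issues from silently sinking into the backlog.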
Scattered User Feedback Across Multiple Platforms
According to Jon, most organizations of any size have this problem: a lack of visibility across the different teams. They often don’t realize they are collecting feedback from their users in several different ways. Internally, for example, the development team collects feedback from QA testers as they test the product, and feedback also comes in through surveys such as NPS or customer experience studies. The product managers run their own surveys, and the marketing team runs theirs.
Everyone’s collecting feedback through different mediums, and generally there’s no centralized place for a team to go to find out what users are pointing out. The other challenge involves the non-technical teams collecting feedback through surveys: a customer might mention, almost in passing, that they hit an issue while using the product the other day. Among thousands of responses, an issue affecting other customers can easily get lost. It’s known or surfaced by only one team in the organization, and maybe not the team ultimately responsible for helping those users or fixing that issue.
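The fix Jon is pointing at is a single shared inbox that every team’s feedback flows into. As a hedged sketch, the record shape and source labels below are illustrative assumptions, not the schema of any particular tool:

```python
from dataclasses import dataclass

# Hypothetical normalized feedback record; field names and source
# labels are illustrative, not from any specific product.
@dataclass
class FeedbackItem:
    source: str   # e.g. "nps_survey", "support_ticket", "qa_session"
    team: str     # team that collected it
    user_id: str
    message: str

def centralize(*streams):
    """Merge per-team feedback streams into one shared list."""
    inbox = []
    for stream in streams:
        inbox.extend(stream)
    return inbox

marketing = [FeedbackItem("nps_survey", "marketing", "u42", "Export is slow")]
support = [FeedbackItem("support_ticket", "support", "u42", "Export button hangs")]
shared = centralize(marketing, support)
print(len(shared))  # 2
```

Once feedback is normalized into one place, the pattern above becomes visible: the same user (`u42`) reported the same export problem to two different teams, which is exactly the signal that gets lost when each team keeps its own silo.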
User Feedback Integration Affecting Development Life Cycle
Jonathan highlighted that, according to research, 38% of developers spend over 25% of their time fixing bugs, and 26% spend over 50% of their time on it. So the problem isn’t fixing the bug. It’s gathering the information required to replicate and resolve the issue. And we know that developers don’t genuinely enjoy replicating issues.
Typically, a QA tester or an internal support team member logs an issue as a bug or a task in a project management tool such as Jira, and a developer picks it up. The developer starts working on the issue and can’t replicate it, so it goes back and forth between the developer and the customer, and the fix takes longer because all that information still has to be gathered. With the right tool, it doesn’t matter who logs the bug: the report the development team receives is consistent.
So, you’re just trying to speed up the replication process as much as possible. The biggest resource cost often goes uncaptured. A stat from Rollbar’s State of Software Code report: 22% of developers feel overwhelmed by the manual processes surrounding bugs, and what’s more worrying, 31% say manually responding to bugs frustrates them. So, it is a simple fix with a huge potential impact.
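The “consistent bug report” idea boils down to a fixed report shape where environment context is captured automatically, so the reporter never has to remember it. A minimal sketch, with illustrative field names (not the schema of Userback, Jira, or any specific tool):

```python
from dataclasses import dataclass, field
import platform
import sys

# Hypothetical consistent bug-report shape: whoever files it, the
# environment context below is auto-captured, never hand-typed.
@dataclass
class BugReport:
    title: str
    steps_to_reproduce: list
    expected: str
    actual: str
    # Auto-captured context the reporter never has to type in.
    os: str = field(default_factory=platform.platform)
    python_version: str = field(default_factory=lambda: sys.version.split()[0])

report = BugReport(
    title="Export button hangs",
    steps_to_reproduce=["Open dashboard", "Click Export"],
    expected="CSV downloads",
    actual="Spinner never completes",
)
print(report.title, report.os)
```

Because the environment fields fill themselves in, a report from a non-technical user carries the same replication context as one from QA, which is what cuts the back-and-forth Jonathan describes.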
Cultural Shifts and Mindset Changes for Bug Resolution
According to Jon, it’s an easy mindset change because the technology already supports this process. You can effectively transform non-technical users into an army of QA testers; the key is providing consistency and a clear triage step. In reality, whenever anything is identified or logged by a non-technical user, someone still sits in the middle who can review each issue, report, or request, make adjustments, and collect additional information before passing it through to the development team. This gives teams a gatekeeper for anything that might get logged as a bug.
In software development, users will always log something as a bug and say it’s broken or doesn’t work as it should. Not all of these are bugs; some are feature requests. Having that gatekeeper in place lets you triage appropriately. It also reassures the development team that they won’t have to communicate directly with users: when an engineer receives a user-reported issue, they may assume they must follow up with that person, but with the gatekeeper in place, that’s not the case.
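As a rough sketch of that gatekeeper step, the function below classifies an incoming submission as a bug or a feature request and tags the follow-up owner before anything reaches a developer. The keyword heuristic and labels are illustrative assumptions; a real triage step would be a human-in-the-loop review, not a keyword match.

```python
# Hypothetical triage gatekeeper: every user submission passes through
# here before a developer ever sees it. Keywords and labels are
# illustrative assumptions, not from any specific tool.
FEATURE_HINTS = ("would be nice", "please add", "feature", "wish")

def triage(submission: dict) -> dict:
    text = submission["message"].lower()
    kind = "feature_request" if any(h in text for h in FEATURE_HINTS) else "bug"
    return {
        **submission,
        "kind": kind,
        # The gatekeeper, not the developer, owns follow-up with the user.
        "follow_up_owner": "support",
    }

print(triage({"message": "Please add dark mode"})["kind"])     # feature_request
print(triage({"message": "Export crashes on click"})["kind"])  # bug
```

The `follow_up_owner` field is the point Jon stresses: routing user communication through triage is what lets engineers focus on the fix rather than cross-examining reporters.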
Q&A Session
Q: How can development teams blend non-technical user insights effectively with internal technical expertise to resolve identified bugs or issues?
Jonathan: In my opinion, assume all things are equal in terms of the data provided with the bug reports, i.e., internal reporters and non-technical users supply the same context. The internal team will always be able to provide a little more information, so the key is allowing internal teams to add context to the user’s insight at the triage step, before it gets escalated to development. The support person doing the triage has a better understanding of the user, so they can add more context around what the user is saying.
Lastly, the user’s perspective matters because it is not always the same as that of the developer, the product manager, or other internal team members. Sometimes we think about our products in very specific ways: we develop the product a certain way because that’s how we want our users to use it. But in practice, users use the product the way they want or think they need to. Having that user insight available helps us make better decisions when resolving issues, because issues often relate to other areas of the product.
Q: Could you share any insights on how this approach not only aids in bug identification but also contributes to an agile development process or a DevOps culture?
Jonathan: I think that by automating the collection and delivery of issue information to internal teams, each issue or feedback submission from a non-technical user gives the more technical team members everything they need to identify, recreate, and resolve the issue. With reporting and feedback tools, that reduces the traditional investigation time by up to 70%, at least from what we’ve seen at Userback. And it means our teams can maintain the rapid iteration and release cadence that’s core to the DevOps philosophy.
Conversely, it slows the DevOps process when a business tries to turn a non-technical user into a technical feedback or issue submitter, and for any business, that should probably be avoided. Let the technology make it simple and frictionless for non-technical users to provide feedback, and let the technical devs do what they do best: they’re there to code, not to cross-examine users. Diving deep into your product’s flaws isn’t the user’s job, and if it’s too hard, they won’t provide any feedback at all. That’s detrimental to product insights and future builds.
Q: How do you foresee the future of using user feedback for bug identification and resolution, considering advancing technology and evolving user engagement behaviors?
Jonathan: So much data and so many tools are available, including cross-browser testing tools like LambdaTest. For bug identification, the future lies in building deeper relationships with our users so they feel comfortable reporting an issue. That might mean prompting them to provide feedback along their journey through the product, because our users run into issues more frequently than our internal teams do; after all, they’re the ones using the software to do the thing we built it for. And if we identify something in the data that may be causing an issue for our users, we can prompt them along the way and ask them directly.
And because we know who that customer is, we can find customers in our user database who look similar to that customer and prompt them for feedback along the same journey. That gives us better insight into how our customers use the product and what they like and don’t like; often, users will do one thing and say another. Overall, I guess it just provides a better customer experience.
Wrapping Up! I Hope You Enjoyed It!
As we conclude this riveting exploration of “Man Vs Machine: Finding Replicable Bugs Post-Release,” a heartfelt thank you goes out to Jonathan Tobin for sharing his invaluable insights. Jonathan’s expertise has illuminated the intricate path of post-release bug detection, offering a blueprint for testers to elevate their approaches.
Brace yourselves for the exciting episodes in the LambdaTest Experience (XP) Series webinars. Until we converge again for more revelations in the expansive universe of software testing, remember to stay curious and resilient and, above all, keep testing! Until next time, the journey persists, and we look forward to sharing more insights with you.
#LambdaTestYourApps❤️
Got Questions? Drop them on the LambdaTest Community.