XP Series Webinar

Shift-Left: Accelerating Quality Assurance in Agile Environments

In this XP Series webinar, you'll learn about 'Shift-Left: Accelerating Quality Assurance in Agile Environments.' Explore the significance of Quality Driven Development (QDD) and strategies for advancing QA processes and optimizing efficiency in agile software delivery.

Watch Now

Steve Caprara

Director of QA, Plexus Worldwide


As Director of QA at Plexus Worldwide, Steve Caprara brings nearly two decades of IT experience, seamlessly transitioning from hands-on testing to pioneering automation frameworks. Renowned for Quality Driven Development (QDD), Steve shapes software development by embedding quality. A transformative leader, he builds amazing QA teams, fostering a culture that values quality and automation. Beyond work, he indulges his tech enthusiasm, exploring innovative teaching methods, and passionately advocating for the wonders of QA in every encounter.

Mudit Singh

Head of Marketing, LambdaTest

Mudit Singh, Head of Growth and Marketing at LambdaTest, is a seasoned marketer and growth expert, boasting over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.

The full transcript

Mudit Singh (LambdaTest) - Hello, everyone. Welcome to another session of our XP Series webinars. Joining me today is Steve Caprara. To give you a highlight, today we're going to do a deep dive into shift left and what it means to do quality-driven development in the context of shift left. I will be your host today. I am Mudit, Head of Marketing here at LambdaTest.

And joining me today, as I highlighted, is Steve. Steve has been a practitioner in the field of information technology for nearly two decades. He has been leading teams and has been a hands-on tester, specifically crafting automation frameworks. If you have heard about quality-driven development, that is something pioneered by Steve. He's the man behind it. He has led large teams across various companies, and currently he's Director of QA and Release Management.

As we mentioned, today we're going to take a deep dive into shift left. What does it mean? How can you implement it? And we'll also cover some practical best practices that Steve can help us with. So, Steve, the stage is yours.

Steve Caprara (Plexus Worldwide) - Thanks, Mudit. Hi, everybody. I'm a lifelong technologist, and I love breaking stuff on purpose. That's how I got into QA and why I still love it today. I consider it one of the best jobs because I truly do get to break stuff on purpose. Today we're talking about shift left. What does that mean within a QA organization, a technology organization, or any organization? And how do we implement that, either on a large scale or on a small scale, right? We have fledgling companies that are just getting into this.

Or maybe they haven't gone through a transformation, and we have really experienced companies who maybe are struggling to get this idea implemented, right? So I always love this graphic. I show this quite often to peers and people in my organization. We all like to develop and put out great software, but how does that really happen, right? At least in my experience, a lot of times the business, above me or outside of IT, says they want something, and they just want the thing. They don't really care how it happens or why it happens. Just make it great and give us something that's awesome.

All those boxes are just decisions that are made somewhere along the way, and hopefully, it comes out great. But there's a lot of work to be done to make that happen, right? And I'm sure a lot of you out there have experienced some of the plight and hardships that come with trying to shift left. And what does that really mean? Because if it's not broken, don't fix it, right?

It's easy to be waterfall. It's easy to test at some point. Why change it? And then, a lot of times, QA is kind of the afterthought. And again, apologies if anyone's a developer or development director; it's not meant to hurt feelings. It's just that, in my experience, QA is often an afterthought and kind of a nuisance, right? We always find the problems. That's great.

People need to find those problems in advance of the customer, but it can sometimes slow things down. It becomes a bottleneck, right? We get almost all the way to the end, and now we're wondering, oh shoot, how are we going to fix all this? We need to get it out. We need to get it out, and then we fall into the date-driven development cycle. I'm sure nobody here (and I say that very much tongue in cheek) has ever experienced getting test cases or development work thrown over the wall.

Steve Caprara (Plexus Worldwide) - Right? In a lot of organizations I've seen, development work occurs, and then it's thrown over and forgotten about. QA picks it up, and we hope for the best. It's the cross-our-fingers style of development. And when I say development, that's generic, right? It's how we deliver across a software development group. And that becomes problematic, right? All of a sudden, QA is left holding the bag with all this code that has to be tested.

We hope to test it in time, and then unfortunately, we have to give it back sometimes with errors, bugs, defects, et cetera, and almost feel apologetic about it: hey, so sorry, you know, that thing that you wrote, it's broken. And that's, I think, a lot of where the shift-left mentality has to change, right? So the process already works; why are we gonna change it? Well, how do we change it? In traditional organizations, it's very siloed, right?

The image, I think, paints a very good picture. You have your business analysts, you have your quality assurance, you have your UI design, and you have your developers. And a lot of times in startups or newer organizations or even maturing organizations, they're still very siloed. A lot of development work is done on a Scrum team, and in some instances, I still see QA teams are separate entities.

They say, okay, we're gonna pitch it over that wall to the QA team and get it tested, right? And that's very waterfall. It's the very right side of the equation: you've done the work and handed it over. And that paradigm is sometimes difficult to change, right? To go to a cross-functional team. Some of you very well may be in cross-functional teams already, but have they been reevaluated? When was the last time?

Do you have enough skill across multiple areas? As a traditional Scrum team, do you have your product owner, your Scrum master, your BA, your QA, and your developers all on the same team? Are they in the same pod? Are they disparate? Sometimes some of you are across multiple locations worldwide, and it's difficult to be in that pod. But those things are what's going to help you shift left, and I'll get more into that in a little bit.

Steve Caprara (Plexus Worldwide) - But that comes at a cost. There's a lot of fear associated with going into cross-functional teams. There's change, and change is always very hard, right? We all think we're great at adopting change. That is not so much the case when you try to implement change. I'm sure a lot of you have tried to make a change, and all of a sudden, you were met with a wall or a lot of pushback.

Sometimes when you go into cross-functional teams and you tell someone, or if you don't explain the why, you're all of a sudden going to instill a little bit of fear because you're now looking to train people on different skill sets. Maybe developers need to learn something about automation and QA or how to manually test. Maybe QA needs to learn a little bit about development work.

And that's scary for some, right? I don't want to teach them how to do my job; what happens to me? I think the better paradigm, and this is where I think individual contributors sometimes get lost, is this: if the company does better and we succeed, and we're delivering more and faster, there's more opportunity for me, right? I can now learn more about development if I'm in QA, or I can go learn more about API testing, or some other venue within testing that I've yet to play with. And that's the way leaders really need to start to push and develop within teams when they're trying to make them cross-functional: lead from the front.

This is why we're doing this change and how it's going to benefit everybody. Instead of: hey, we're doing this, go do it. That becomes a little more difficult. That's always met with pushback. Another problem that I've found when trying to make cross-functional teams is, if you've ever seen it, the heroes of an organization, the people who like to hoard information.

They're the type of person who always likes to be the go-to. Oh, this is broken. We're going to go to, you know, Jim or Johnny, whoever it is, because they always fix it. They know what to do. And they love feeling that way when someone comes to them. Cross-functional teams start to break that away. And that's a hard change, because now you need the whole team to tackle something. You need the whole team to jump on something, and that shining star now becomes one of many who are able to tackle it.

So those are met with a lot of challenges. If you're a reader, one of the great books to read, or listen to on Audible, is The Phoenix Project. If you haven't read it, what a trip. It's an almost lifelike walkthrough of a maturing company and its DevOps practices, what cross-functional teams can bring to the table, and understanding where a limitation is. How to identify the constraint is really the premise of the book.

Mudit Singh (LambdaTest) - So, to give a highlight, we'll add the link to this book below. So whenever you are checking out the video, feel free to check out the link to the book below as well.

Steve Caprara (Plexus Worldwide) - Thank you for that. So QDD, we did mention that earlier. What is that? I've kind of coined the term QDD. We've heard of behavior-driven development, BDD, test-driven development, and TDD, but those are all based on actual development practices. I like the idea of quality-driven development, which starts with the mindset.

So, a lot of those are practices. This is a mindset and a shift in paradigm in how the organization or IT org approaches testing and quality, right? Cause quality isn't just the tester. And that's, I think, where a lot of shift-left struggles come in: well, it's testing's fault. It's not really about the blame, but how did this get missed? Why did it get missed? What's taking so long?

And then it's, well, quality is suffering because of it. If we take the approach that quality is something we have as a mindset from the onset, we can approach our development cycle very differently. So for shifting left, if we have quality first in mind, one of the advantages is we'll be able to release faster. We'll have less code churn, less hot-fixing in production, and fewer issues with releases.

All that starts to improve, and then you get more fidelity across your actual release quality. So I'm sure many of you who've been in maturing organizations have struggled with getting software out faster based on a driven date, or because whatever project is pushing your priorities: you have to get this out, you have to get this out.

And we never seem to make time to do it right the first time, but we always find time to do it a second time. Always very interesting to me. Okay, so if we take that time up front, we slow down a little bit to speed up in the long term, and you help the organization. This becomes a training exercise: how you train the organization and your leadership to understand that quality up front, as the mindset, will give you far more speed, leverage, agility, and velocity toward the end.

And ultimately, once that starts to happen, you change not just the perception of the business looking in saying, you know, why are these mistakes happening? But you're building the confidence and the sense of ownership in your own team. So now your team is prideful in what they're putting out. They know they're putting out quality code. They know they're putting out a great product, and they're more motivated to do so the next iteration and the next time.

Another good expression I like to quote is from Stephen Covey: begin with the end in mind. If we want to have high-quality deliverables, it has to start in the beginning. Well, okay, what does that mean, "in the beginning"? I'm being very high-level in some respects, but where does that start? It starts with training the business on what the expectations are. If the delivery teams always acquiesce to "it has to go out by this date" and we don't give a reason back as to why that can't happen,

We're always gonna be behind the eight ball. Instead, training and having those conversations, especially for those leaders listening in, is having those conversations with upper management. I understand that you want this delivered by this date, but here's the impact. Here's what it's gonna take.

You know, are you doing big room planning? Are you doing all of your cadences within Agile, Scrum, Kanban, whatever flavor you're doing, Scrumfall, you know, for those of you out there who haven't fully migrated over, those are the kind of conversations because otherwise you're gonna have siloed teams. You're gonna have those sprint plus one cadences that occur.

So what I suspect is that a lot of teams are still in that old mentality: we're trying to shift left, but we're moving too fast, the developers are putting out way too much code, and we can't test it fast enough. So we're going to manually test it right now, and we're going to throw it to our automation team to do later. And that becomes that sprint-plus-one cadence, and your automation is always a sprint behind. That's actually shifting right more and more, cause now you keep testing further and further away from the release.

Again, this theme of paradigm shifts is gonna keep coming up in this talk. I never see QA teams as disparate. I don't see a manual testing team or an automation team. It's all QA. It's all one team. And anytime in my organization I hear developers, or SDETs, or QA engineers say, oh, give it to the automation team: no, no, guys, we're all one team. Okay? It's not the automation team, it's QA. Give it to your peer, not to some person over a wall; it is not a separate team. We have to work together. And where does that happen?

A question for you to think over while you're watching this: are your cadences followed? Is your QA part of the refinement? And does QA have a voice in pointing stories? I ask that because every time I go into a new organization and look at their practices, it's very much the same. The team as a whole is in refinement, and QA is very quiet and doesn't really participate in those discussions. That needs to change. Okay, QA team, you are now a vocal part of this. If you don't sign off on something or you don't agree with the quality, you need to speak up. So at the onset of refinement,

are you discussing the acceptance criteria? Do you fully understand them, and do you push back on the product owner? "I'm not sure how to test that. What does that mean?" Really make sure the acceptance criteria are well-defined and written out so that everyone understands, whether it's Gherkin or however you decide to approach that. Secondary to that: are you pointing stories? Now, I'm not saying pointing bugs; I'm not a big fan of pointing bugs, personal opinion. But are you part of the pointing during the refinement of the stories?

Because, for those QA out there, how many times has a story been pointed, let's say we're using Fibonacci, at a one or a three? As a dev lift, it's very light and small. As a QA lift, it's gonna be a five or an eight, because it's new functionality, you have to write new utilities for it, and it's not fully understood. So it could be a seemingly small feature that's a quick lift on the dev side and a very large lift for QA that has to be accounted for.

So QA should be part of that pointing and at least be in the discussion; maybe you fall somewhere in the middle. Dev says it's a one, QA says it's a five, and you land on a three. Whatever the case may be, if QA isn't part of that upfront, you're not driven by quality; you're driven by devs, right?
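The dev-versus-QA compromise Steve describes can be sketched in a few lines of Python. The rounding rule and the point values here are purely illustrative, not a prescribed practice:

```python
# Toy sketch of reconciling a dev estimate and a QA estimate onto the
# Fibonacci scale. Dev sees a small lift, QA sees a large one, and the
# team lands somewhere near the middle instead of taking only the dev lift.

FIBONACCI = [1, 2, 3, 5, 8, 13]

def reconcile_points(dev_points, qa_points):
    """Return the Fibonacci value closest to the average of both lifts."""
    midpoint = (dev_points + qa_points) / 2
    return min(FIBONACCI, key=lambda f: abs(f - midpoint))
```

With the example from the talk, a dev one and a QA five reconcile to a three. The point is not the arithmetic; it's that QA's lift is an input at all.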

Mudit Singh (LambdaTest) - And to add to this, Steve, we recently did a survey on this as well. I'm not sure if you're aware of this, but we did a very big conference, the Testμ Conference, in August. And one part of that conference was a very big survey. I think we're gonna release the whole survey very soon. But one of the questions there was: how often are the QAs part of the sprint cycle?

And what we found, I'll say, is still good: around 75% of QAs are involved at the start of every sprint. But there's still that 25%, specifically in large enterprises, of sprints where the QAs are not involved at the planning stage. They're told, yeah, this is what we have already built; now you go and test it, and it's a black box for them.

And then the whole process, as you mentioned, isn't as optimal, right? So there are still those 25% of big enterprises who have to move fast and start to involve the QA teams as early as possible.

Steve Caprara (Plexus Worldwide) - Yeah, exactly. And the 75% is a great number, actually higher than I would have anticipated. But when you think about all of the companies out there, all of the tech, all of the development and QA engineers out there, 25% is still a large number of people who don't have any say. So, I mean, wow, that's pretty crazy. I appreciate you sharing these statistics with me. It's good to know. When does that come out?

Mudit Singh (LambdaTest) - It's going to come out, I think, within a week's time, if not sooner. We're making sure there's no bias in the survey and that all the responses are properly recorded, because it's a survey of the community, by the community, for the community. So we're just ensuring that all voices are there.

Steve Caprara (Plexus Worldwide) - Awesome. Something I didn't touch on... well, I guess I can do it on the next slide. Let me move ahead here. So, I like visuals, if you can't tell. Another expression I really like: if all you have is a hammer, everything you see is a nail. And sometimes that approach doesn't work. If you're trying to hammer a screw into a stud, you're gonna have problems, and that screw is gonna start to bend.

It's not the right tool for the job. So automated testing is another part of shift left, right? At the organization I'm currently at, when I got here two and a half years ago, testing was fully manual. There was an automation suite of sorts that ran 24x7, and it caught four bugs in a year. I wouldn't say there's a lot of ROI in that.

You have to keep the infrastructure running, it's expensive to maintain, and you're constantly reviewing test cases, all of that to catch four bugs that one person wrote some time ago. So it really wasn't a good setup, and it wasn't a good track for success. I'm probably getting into more detail than I need to, but what was funny was the organization used to deliver daily, right? A release every day. Something was going out to production, and while that sounds pretty cool, like, wow, really, CI/CD? Not so much.

So what was happening was, it was more like bragging rights. We did it because we could, not because we should. And even though there were three releases going out to production, the first one was the release; the next two were all hotfixes, because there was inevitably something broken. And there was a lot of context-switching, right? The developers are working on sprint work, then the release, then a hotfix, fit in some sprint work between hotfixes, another thing is broken, jump back. It was chaotic.

And then QA is doing the same thing: test and regress, go back to your sprint work, now another hotfix to regress. It was madness. So we had to slow down to speed up. We had to change our release structure and our release schedule to match the maturity of the organization. On the flip side, I've seen some organizations deliver quarterly and have a 16-week tail to certify a release.

On the other end of the spectrum, that's just madness too, right? Because now you're taking forever to get anything to production. And by the time you finish regressing, what if you find bugs over those 16 weeks? How much longer is it gonna take to fix them all? So that's one end of the spectrum. The other is what I was telling you earlier: releasing multiple times a day when you're not in an organization or at a maturity level that's ready for it. That all comes with time, and it comes with good practice.

So if you don't have the infrastructure in place, if you don't have the pipelines in place, you're not gonna be able to facilitate shifting left. You have to work with your infrastructure teams, you have to work with your development teams, and you have to have DevOps on board to get pipelines in place, right? Are you using ephemeral environments? If you're not, every environment's gonna become a bottleneck for your release.

So how are you testing the code that's delivered? Are you having the dev or the QA check it on local development? Because having QA do that is almost too far left in some ways, right? However, developers should be testing on their local machine, at least to satisfy the acceptance criteria. They're not going to know the boundary conditions; they don't necessarily think like a tester. The really good ones do. But they may not think of boundary conditions or negative testing the way QA does.

So those need to be put in place. Are you in a position to have ephemeral environments that spin up on the fly? Can you deploy that branch to an environment quickly to at least do a smoke run against it? Are you then able to promote it one environment higher to be fully connected and test your upstream and downstream systems for integrations? If not, that might be the place to start, right?
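One small piece of the ephemeral-environment setup Steve describes is making the smoke suite environment-agnostic, so the same tests can point at whatever environment the pipeline just spun up. A minimal Python sketch, where the environment-variable name and endpoint paths are hypothetical:

```python
import os

# Sketch: build the smoke-run target URLs from a base URL supplied by the
# pipeline (e.g. the ephemeral environment it just deployed), falling back
# to a local default for developer machines. Paths are illustrative.

SMOKE_PATHS = ["/health", "/login", "/api/v1/status"]

def smoke_urls(base_url=None):
    """Return the full URLs a smoke run would hit for a given environment."""
    base = base_url or os.environ.get("SMOKE_BASE_URL", "http://localhost:8080")
    return [base.rstrip("/") + path for path in SMOKE_PATHS]
```

The CI job would export the freshly created environment's URL and run the same suite unchanged, which is what keeps environments from becoming the bottleneck.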

Because we all hear there's not enough time to do it. Again, back to my statement earlier: we never seem to find time to do it right the first time, but we always find time to do it a second time. That's problematic. So work with your leadership to slow down and put the right processes in place. Okay.

Another thing that always gets in the way of automation is "we need to get the release out, we need to get the release out." The training here is that when we commit to building out our automation testing suite, we can deliver faster because we're not spending all those cycles testing manually. That is not to say that manual testing doesn't have a place. It always will, in my opinion. There are always going to be times when manual testers are far better at boundary condition testing and exploratory testing than automation engineers in many ways.

Again, personal opinion. I've seen automation engineers get very much into the engineering mindset: make it work. So they develop the thing and forget about the breaking part of it. So again, time comes into play. Reviewing it manually every time and just getting it out is a constant struggle that requires dedicated time to escape. So how do you break out of that? One way is to work with your Scrum Masters, your Agile coach, and your leadership on negotiation.

Okay, I know we have to get XYZ out. How do we set aside time each sprint for the development of X? Maybe it's taking on some contract labor to assist with the build-out of a framework. Maybe it's making use of a technology that's easily plug-and-play to at least start on your automation journey, right? To start building out a smoke test suite or a regression suite. Get there first and show them the wins, right? Otherwise, it's nebulous.

You have to be able to put pen to paper and show them what the benefits are, right? The sales technique is "what's in it for me." You've got to show them what's in it for them. Once you get a little bit of headway on that, you can start building it out further, right? Dedicate time not just for whatever labor is creating the automation suite, but for the testers in each sprint to be testing.

So what I've also found is that testers, typically in fledgling organizations, are kind of waiting around for development work to be done. Okay, we want to shift left; we don't want to do sprint plus one. So if you're automating in sprint, and that's going to be the cadence, then QA engineers, during refinement, aside from speaking up about how it works and fully understanding the acceptance criteria, also need to start wireframing their tests, right? Are they putting the skeleton of their test cases together?

They may not know the exact locator, but they're gonna be able to understand the logic changes and maybe start writing some of the utilities they'll need. They can have their page objects, again, wired out and then filled in with the locators later, so that once the dev work is done, they can plug and run. And then while the automation is running, they're doing manual validations and assertions and checking for some of those boundary conditions, all at the same time.
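The "wire the page object out now, fill the locators in later" idea might look like this minimal Python sketch. The locator tuples mimic Selenium's `(by, value)` convention, but the page, the `TBD` placeholder selectors, and the driver interface are illustrative assumptions:

```python
# Page object skeleton drafted during refinement, before the UI exists.
# The test flow is fully written; only the locator values are pending,
# to be swapped in once the dev work lands.

class LoginPage:
    """Sketch of a page object wired out ahead of development."""

    # Stubbed locators in Selenium's (by, value) shape; "TBD-*" values
    # are placeholders a QA engineer replaces with real selectors later.
    USERNAME_INPUT = ("css selector", "TBD-username")
    PASSWORD_INPUT = ("css selector", "TBD-password")
    SUBMIT_BUTTON = ("css selector", "TBD-submit")

    def __init__(self, driver):
        # Any object exposing find_element(by, value) works here.
        self.driver = driver

    def login(self, username, password):
        # The logic is reviewable and testable now; only locators change.
        self.driver.find_element(*self.USERNAME_INPUT).send_keys(username)
        self.driver.find_element(*self.PASSWORD_INPUT).send_keys(password)
        self.driver.find_element(*self.SUBMIT_BUTTON).click()
```

Once the feature branch is available, the placeholders become real selectors and the same test plugs in and runs.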

Steve Caprara (Plexus Worldwide) - And this goes back to those ephemeral environments. If you don't have pipelines and environments in place for that automation to run against, you're now taking up the individual QA's time to run it. That's expensive time, blocking their entire machine to execute that automation, versus leveraging pipelines and ephemeral environments to do it for you, right? So if you're not making use of, say, Selenium Grid to orchestrate your test runs across multiple browsers, or leveraging LambdaTest to do that... and there's the plug.

Mudit Singh (LambdaTest) - Yeah, definitely. I was just waiting for my time. As a marketer, I cannot just leave it and not plug LambdaTest over here when we are talking about an ephemeral environment, but you did for me. Thanks for that.

Steve Caprara (Plexus Worldwide) - I got you. Yeah, LambdaTest excels at that. That's why we leverage it. That's why we switched to it from a big competitor because of ease of use, the speed at which it operates, and the fact that it hosts basically our entire orchestration. And then the connection back, it's not a two-way connection; it's a single connection back to our environment for the application, which is actually a really cool setup.

And I'm not paid to say this. People like it because it is a great tool, and that's why we went to it. Now, I got off track here, but on the automation side of it, there are so many things that can be put in place, and we can deep dive into that in another conversation. This is really about the overall idea of how to shift left and the things that leadership, contributors, and managers need to start thinking about.

And the conversations they need to start having, and the sales pitch they need to put together for upper management, as to why this needs to happen. Now let's look at the benefits, right? So this is it right here in a nutshell: the further right you are, the later you find out about the problem, until it's too late. You've completed the sprint, or, well, that's a little too far; say the developer finishes the story and is waiting on QA. It's not marked done, but the developer is already onto story number two or three while QA is testing it.

By the time QA finds a bug and goes back, the developer has to take another look. They're now context-switching. That's not great for their frame of mind, right? They're focused on their current development task, and now they have to switch gears and go back to a story they thought was done. So, get testing earlier into that cycle, where, again, QA is part of that refinement. I'll keep going back to a lot of these same concepts and reiterating because it's true.

That's where the focus needs to be. How soon can we get it tested and get that feedback loop back to the developer immediately that there's an issue? Now, for those who are struggling for QA to make itself known, I know there's gamifying, where people will game the system and write bugs

just to have a high bug count. I hate that. It's not about the bug count; it's about the quality of the code that's delivered. So, less focus on how many bugs are generated by the QA team, and more on how many bugs are stopped, right? How did we get this fixed, delivered, and moved along to increase our velocity to deliver more next time? And again, while that tester is doing the initial test, that developer could have kicked off the automation suite.

Again, if you have your pipelines in place, just keep taking this one step further. First, it was pipelines and ephemeral environments just to be able to run automation. What if now, as you get to cross-functional teams, you've made your framework in a similar language, so you're in the same language as your development, and you actually provide tooling to the developers to test their own code?

Steve Caprara (Plexus Worldwide) - Okay, so now they can incorporate modules of testing into their own development work. Whether you wanna put it in the same project or repo, that's a whole other conversation we can get into another time. But they can now start adding to automation. They can start adding tests. And before you guys jump through the screen and freak out: did you just say that developers are gonna write tests? That's a QA function. Yeah, I get that.

But if we're truly being cross-functional, why wouldn't we? Why wouldn't we provide them with the tooling? It's still our framework as QA; they may not write all the utilities and the things we need to keep that framework alive, but they can add tests. They can be adding UI elements and locators all day long that'll assist in getting this done. Because a lot of times I've heard, well, we can't develop anymore because we're waiting on QA. Why? Why aren't you adding to the suite, right?
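One way to picture "QA owns the framework, developers add tests" is a shared utility module that developers write their own test functions against. Every name below is illustrative, a sketch of the division of labor rather than any specific framework:

```python
# --- QA-owned utilities (the framework QA maintains) -----------------
def normalize_sku(raw):
    """Shared helper: canonicalize a SKU string."""
    return raw.strip().upper()

def assert_valid_sku(sku):
    """Shared assertion utility: SKUs must be non-empty and uppercase."""
    assert sku and sku == sku.upper(), f"invalid SKU: {sku!r}"

# --- Developer-contributed test, shipped alongside the feature -------
def test_new_sku_import_feature():
    """A developer adds this when shipping the (hypothetical) SKU import
    feature, reusing QA's utilities instead of rolling their own."""
    sku = normalize_sku("  ab-123  ")
    assert_valid_sku(sku)
    assert sku == "AB-123"
```

Developers contribute tests and locators; QA keeps ownership of the utilities and assertions that keep the framework alive, which is the split Steve describes.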

So that feedback loop starts to get shorter and shorter, and now you're bringing more and more developers into the fold. The whole team is writing code, whether it's automation or application. To get even further left, now we're going to get crazy real quick: what about data ingestion? What about seed data, right? This is something that's often not thought of, but it's a really great way to shift left.

What I mean by that is a lot of times, we have automation frameworks that have hard-coded data values in the tests, right? You're doing the send keys, and you're sending the login information. Okay, static data. Take it a step further, and some people use these massive Excel sheets of data compiled for their test automation. That's a nightmare to maintain. It really is, right? I mean, who likes having to use Excel to manipulate test data and then feed it into the framework? That's kind of a nuisance.
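The first step away from hard-coded `send_keys` values is simply parameterizing the test over a table of cases. This is a minimal sketch; `attempt_login` is a stand-in for the real browser interaction, and all the usernames and expectations are invented for illustration.

```python
# Test data pulled out of the test body into one parameter table,
# instead of hard-coded values scattered through send_keys calls.
LOGIN_CASES = [
    {"user": "standard_user", "password": "secret", "expect_ok": True},
    {"user": "locked_user", "password": "secret", "expect_ok": False},
    {"user": "", "password": "", "expect_ok": False},
]

def attempt_login(user, password):
    """Stand-in for the UI step; a real test would drive the browser
    and read the resulting page state."""
    return bool(user) and password == "secret" and user != "locked_user"

def run_suite(cases):
    """Run every case and report whether each matched its expectation."""
    results = []
    for case in cases:
        ok = attempt_login(case["user"], case["password"])
        results.append(ok == case["expect_ok"])
    return results

print(run_suite(LOGIN_CASES))
```

In a real framework the same shape maps onto something like pytest's `parametrize`, and the table becomes the one place new cases get added.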

Well, then you take it a step further than that. You're like, okay, we'll connect to the database. We haven't done that, so let's do that. Then you set up your connections or whatever, and now you're connecting to the database for your data, so it's more dynamic. But that's better suited to historical data that's still somewhat static, right? Then you take it a step further as you're progressing into your shift left: are you leveraging APIs? If not, why?
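A sketch of that database step, using an in-memory SQLite database as a stand-in for the real test database; the table and column names are hypothetical.

```python
import sqlite3

# Stand-in for the real test database; in practice this would be a
# connection to a shared QA environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_users (username TEXT, role TEXT)")
conn.executemany(
    "INSERT INTO test_users VALUES (?, ?)",
    [("alice", "admin"), ("bob", "shopper")],
)

def users_by_role(conn, role):
    """Fetch usernames for a role; the framework feeds these into tests
    so the data stays in one queryable place instead of a spreadsheet."""
    rows = conn.execute(
        "SELECT username FROM test_users WHERE role = ?", (role,)
    ).fetchall()
    return [r[0] for r in rows]

print(users_by_role(conn, "admin"))
```

The win over Excel is that the data lives where the application's own tooling can maintain it, but as Steve notes, it still skews toward static, historical records.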

Why not generate your data on the fly using APIs? And now you're exercising that code. You're validating through the UI, and you have your database connection to validate that it went all the way through. So now you're hitting three tests in one, almost, and now you can go deeper. But again, shifting left means making your testing as dynamic as possible in small chunks.
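The "three tests in one" flow can be sketched like this: seed data through the API, validate through the UI, then confirm at the database layer. The `api_create_user`, `ui_profile_shows`, and `db_has_user` functions below are stubs standing in for real HTTP calls, browser steps, and DB queries; the shared `FAKE_DB` dict plays the part of the application's backing store.

```python
# Stand-in for the application's backing store, so the sketch runs
# without a live service.
FAKE_DB = {}

def api_create_user(username):
    """Stand-in for a POST to the application's user API."""
    FAKE_DB[username] = {"username": username}
    return {"status": 201, "username": username}

def ui_profile_shows(username):
    """Stand-in for driving the UI and reading the profile page."""
    return username in FAKE_DB

def db_has_user(username):
    """Stand-in for a direct database check."""
    return username in FAKE_DB

def end_to_end_user_test(username):
    resp = api_create_user(username)   # 1. seed data on the fly via API
    assert resp["status"] == 201       #    ...which also exercises that code
    assert ui_profile_shows(username)  # 2. validate through the UI
    assert db_has_user(username)       # 3. validate it went all the way through
    return True

print(end_to_end_user_test("qa_demo_user"))
```

Because the data is created fresh per run, the test never depends on pre-existing rows, which is what makes it "dynamic in small chunks."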

And I can go into a whole tangent on that, but those are the benefits of getting things shifted left more and more: being able to get your constant feedback loop, being able to test faster, and being able to deliver higher quality in a more consistent loop. And ultimately, that all leads to success. So hopefully, that covers a little bit of what people are looking for to shift left. If that was too high level, or people have questions, I'm happy to take Q&A or questions via email that I can answer later, whatever works.

Mudit Singh (LambdaTest) - So, I have a burning question right now, Steve. We have heard this a lot as well when we got a chance to connect with teams, specifically teams that are starting up in their journey of automation. So when is the right time to shift left? Is it something that is only for big enterprises that have a lot of automation in place? Or what is the right time? When should the team start thinking about, let's say, shifting left?

Steve Caprara (Plexus Worldwide) - YESTERDAY!!!

You're never going to. So it's a really good question because I think a lot of companies and teams and organizations struggle with this because they're trying to time it right. They want to be able to step in right at the right time, or they want to wait because, well, now is not the right time. But in two months, it will be. It's never going to be the right time. There will always be that new project, that new hurdle, that new roadblock management change or changes. All of that happens all the time.

To be honest, the best time is actually at inception. When you're starting out as a new organization, start with the end in mind, Stephen Covey, you know, my reference earlier. But say you're not starting fresh; that's where we were. We had an application that had been around for multiple years, and it was still manual testing. And, you know, it's hard. Again, there's going to be a lot of pushback. You're going to have a ton of pushback trying to get shifting left and getting automation in place.

But you have to do it, or you never will. And that takes some intestinal fortitude to be able to, you know, step through that and push back and start making the changes. Again, it can be gradual, but you know, 1% every day, a hundred days later, you're at a hundred percent improvement. If you never take that first step, you're never going to get there. So to all of you who are struggling with "when do I do that?": do it now. Start small.

If you're brand new to automation, pick up something like Selenium IDE, or connect to LambdaTest for some of your testing needs, like multi-browser testing. Start recording some of that. Start small and build it up. And then what you do is you get some success. So if you're a small team and you're not sure where to start, do a POC. Request from your leader the opportunity to do a POC.

Hey, I think I can speed this up if I build out some automated testing. Can you give me a sprint to do that? Can you give me two hours a day to work on that? Negotiate that time, and you'll be amazed at what you can accomplish when you take that initiative and start doing it.
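A POC for multi-browser testing can start as small as building one configuration per browser to hand to a Selenium RemoteWebDriver pointed at a cloud grid. The `LT:Options` key and values below follow LambdaTest's capability style, but the exact names are provider-specific, so treat this as a sketch and check your provider's capability docs.

```python
def grid_configs(browsers, build_name):
    """Build one capabilities dict per (browser, version) pair for a
    cloud-grid run. Platform and vendor options are illustrative."""
    return [
        {
            "browserName": name,
            "browserVersion": version,
            "LT:Options": {"build": build_name, "platformName": "Windows 11"},
        }
        for name, version in browsers
    ]

# A tiny two-browser smoke matrix for the POC:
matrix = grid_configs([("chrome", "latest"), ("firefox", "latest")], "smoke-poc")
print([c["browserName"] for c in matrix])

# In a real run, each entry would be attached to browser options and
# passed to webdriver.Remote(command_executor=HUB_URL, options=options).
```

Starting with a two-browser matrix keeps the POC scoped to something a single sprint (or two hours a day) can actually deliver.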

Mudit Singh (LambdaTest) - Another thing I wanted you to throw more light on, I think we have been discussing this, is that there is a role for manual testers as well. So whenever somebody talks about shift left, people automatically start thinking about automation testing, making everything automated as much as possible, which is there, I agree, and it is required to move fast. But there's a role for that manual tester.

So you mentioned that, yeah, there should be a guy who breaks stuff, right? Not just people who work on automation and make everything work; there should be people who break stuff. So what is their role? What exactly are the things that they should focus on?

Steve Caprara (Plexus Worldwide) - So manual testers, I think, should focus on actually building their automation skillset, number one. I mean, again, we're all looking for career growth, right? And what better way to enhance your career and abilities than by starting to pick some of that up? That being said, that is not the be-all and end-all. That is not the only avenue. Manual testers have a massive part in the testing lifecycle, even here, where I took a manual team and made it almost exclusively an automated team.

I still have manual testers within, again, our team. It's not separate, but manual testers still play a vital role, and they catch a ton of stuff because they think so differently. I had somebody come up to me yesterday with a potential release blocker because he found this boundary condition that nobody in the world would have thought of. And we're like, okay, not a release blocker, but awesome find.

And it's funny, because the better the manual tester... it's an ongoing joke here at my company, and it's just kind of fun, because every time this person comes up in a room, all the developers go, not again, what did he find now? But it's done in jest, because they value his input. They value how much he finds, and he always finds the nuanced items that other people haven't thought of, right? Even the people who are automating on his team don't always catch the same things, because they don't think that way; they'll find other stuff. They'll find more integration issues and more breaks in regression, but he finds the boundary conditions in the functionality.

So, to the manual testers out there who might be struggling with scaling up or are wondering where your place is gonna be in this new automated world: shine as manual testers. Knock the socks off of people with what you find, and show that you have the ability to think differently than the rest of the pack in how you approach things.

Mudit Singh (LambdaTest) - Awesome, awesome. I think we are strapped for time here. But again, Steve, I really want to thank you for all the knowledge that you have imparted to us. This deep dive into shift left was really eye-opening as well.

This is a topic that has been talked about a lot, but the practical examples, that way of doing a deep dive into how you can implement it and what are the things you should look out for when you are implementing it, that was great. Specifically, the points about manual testing were pretty great. And I think everybody who has listened to all of this got a chance to learn a lot. Again, Steve, thanks for your time.

Steve Caprara (Plexus Worldwide) - Thank you, this is great. I really appreciate the time and the opportunity to talk to you all.

Mudit Singh (LambdaTest) - Awesome. Thanks everyone for joining us today, and looking forward to seeing you in the next XP webinar. Bye Bye.
