Changing Role of Quality Engineering in the Age of AI [Testμ 2024]

LambdaTest

Posted On: August 23, 2024

917 Views

15 Min Read

In the age of AI, the role of Quality Engineering (QE) is evolving from traditional testing to a more strategic function that ensures overall software quality and performance. Quality Engineering is no longer just about finding bugs; it’s about integrating quality throughout the development lifecycle, leveraging AI for intelligent testing.

In this panel discussion, Ben Douglas (Senior Manager of QA Engineering, Tripadvisor), Gary Parker (Senior Test Architect, Betway Group), Jamie Paul Lees (Head of QA, UCLan), Manjula VK (Senior Engineering Manager, QA, The Estée Lauder Companies, Inc.), and Sravani Gurijala (Director of Mobile Applications, Align Technology, Inc.) explore how AI is transforming quality engineering. They dive into how AI is reshaping organizational trends, processes, and roles while also addressing the challenges and biases inherent in AI models.

If you couldn’t catch all the sessions live, don’t worry! You can access the recordings at your convenience by visiting the LambdaTest YouTube Channel.

How Will AI Reshape Development and Testing Roles?

Sravani explained that AI tools are already becoming integral to everyday tasks in software engineering. For instance, AI is present in test automation tools, such as LambdaTest, which features predictive analysis for test script failures. This technology helps automation engineers quickly identify the root causes of issues and offers insights into resolving them.
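As a rough illustration of the idea rather than LambdaTest's actual implementation, failure triage can start with something as simple as bucketing failed tests by their error messages before any deeper analysis. The categories, regex patterns, and sample data below are assumptions made purely for this sketch.

```python
import re

# Hypothetical failure categories and message patterns; a real predictive-analysis
# engine would learn these from historical runs rather than hard-code them.
FAILURE_PATTERNS = {
    "locator_issue": re.compile(r"NoSuchElementException|element not found", re.I),
    "timeout": re.compile(r"TimeoutException|timed out", re.I),
    "assertion": re.compile(r"AssertionError|expected .* but got", re.I),
}

def triage(failures: list[dict]) -> dict[str, list[str]]:
    """Group failed tests by the likely root-cause category of their error message."""
    buckets: dict[str, list[str]] = {"unknown": []}
    for failure in failures:
        for category, pattern in FAILURE_PATTERNS.items():
            if pattern.search(failure["message"]):
                buckets.setdefault(category, []).append(failure["test"])
                break
        else:
            buckets["unknown"].append(failure["test"])
    return buckets

if __name__ == "__main__":
    sample = [
        {"test": "test_login", "message": "TimeoutException: page did not load"},
        {"test": "test_cart", "message": "NoSuchElementException: #checkout-btn"},
    ]
    print(triage(sample))
```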

Looking forward, Sravani anticipated that AI could significantly impact the field. Tools are emerging that generate test cases based on acceptance criteria and create unit tests, as seen with Copilot. This shift might blur the lines between developers and QA engineers, leading to a convergence of roles.

As AI takes over routine tasks, manual QA roles could evolve to focus on strategic value, potentially combining coding and validation tasks into a single role. While this may seem ambitious, Sravani believed it was a plausible direction for how AI could reshape the industry.

AI is already having a significant impact on QE. To leverage these capabilities, you can use KaneAI by LambdaTest, which can automatically generate test cases and create unit tests from acceptance criteria. This further blurs the line between developers and QA engineers, potentially merging their roles. As tools like KaneAI become more deeply integrated, they streamline testing and redefine responsibilities across the software development lifecycle.
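As a purely hypothetical sketch of that workflow, not KaneAI's actual interface, the general pattern is: acceptance criteria are turned into a prompt for a code-generation model, and the drafted tests are reviewed by an engineer before they are committed. The function name, prompt wording, and criteria below are invented for illustration.

```python
# Hypothetical acceptance criteria for a login feature.
ACCEPTANCE_CRITERIA = [
    "Given a registered user, when they enter valid credentials, they land on the dashboard.",
    "Given an unregistered email, when they attempt to log in, an error message is shown.",
]

def build_test_generation_prompt(criteria: list[str], framework: str = "pytest") -> str:
    """Assemble a prompt asking a code-generation model to draft tests from criteria."""
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return (
        f"Write {framework} test functions covering each acceptance criterion below.\n"
        f"Use descriptive test names and one assertion block per criterion.\n\n"
        f"{bullet_list}\n"
    )

if __name__ == "__main__":
    print(build_test_generation_prompt(ACCEPTANCE_CRITERIA))
    # In a real pipeline, the prompt would be sent to the model and the generated
    # tests committed only as a draft, keeping a human reviewer in the loop.
```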

Manjula then highlighted how AI was already impacting daily tasks, sharing her personal experience with Teams Copilot, which took accurate meeting notes and saved her time. She pointed out that AI had accelerated development, resulting in more applications and, consequently, more testing.

This shift transformed QA roles, moving from oversight to evaluating AI’s reliability and accuracy. She also mentioned that QA would need to adapt by acquiring new skills, particularly in data modeling.

Ben agreed with the previous points and shared his reflections from attending various panels and presentations. He mentioned that there were many valuable insights about the future of QA, particularly regarding whether QA would need to test models instead of just functional features.

He recalled a demo of an AI product for QA that was not impressive, but the presenter made a memorable comment. The presenter highlighted the difference between an engineer and a coder, suggesting that AI’s evolution would make engineers think more critically and creatively, rather than just focusing on coding tasks. Ben explained that AI’s impact would push QA professionals to become more like engineers, designing solutions rather than just executing tasks. He mentioned that manual testers might shift to overseeing AI engineers, while automation engineers would focus on verifying and analyzing AI results.

He shared his own QA perspective, emphasizing his skepticism toward anything developers claim works until it has been verified. Ben anticipated that the role of QA would involve extensive verification of AI outputs to ensure accuracy and functionality, even if AI systems claimed that changes were acceptable.

Jamie appreciated how Sravani discussed shifting left and combining roles. Over the past decade, tech teams have become more development-focused, with QA becoming more consultative. Some organizations have started employing a single person for testing, making developers responsible for automation and testing their code.

He believed AI would push this trend further, making automation more accessible to developers and increasing opportunities for skilled automation engineers to engage in more engineering tasks. He anticipated a potential merge of roles, including more pair programming.

Looking ahead, Jamie saw AI enhancing the robustness of platforms, improving quality, and increasing speed. He cited examples where AI can create basic solutions, such as websites, out of the box, allowing teams to focus on refinement and additional features. He said that while there is still much work to be done, AI would likely contribute to building more stable platforms.

What AI Trends Are Emerging in Organizational Processes, Security, and Roles?

Gary shared that AI adoption in his organization has evolved from being a technology department-led initiative to a broader, organization-wide movement. Previously, AI efforts were limited to developers and QA teams experimenting with new tools, but now everyone is involved.

There is a lot more support, including learning materials and workshops, aimed at ensuring that all employees have a foundational understanding of AI and its potential benefits. He mentioned that while AI tools have expanded the capabilities of the team, daily job functions have not changed significantly.

Gary also emphasized the importance of not overusing AI. He advised against relying on AI for tasks like writing emails or creating documentation without proper proofreading, as unreviewed AI output can introduce errors.

Sravani then added that while it’s crucial not to overuse AI, it has become an important organizational and industry-wide tool. AI can enhance productivity for both developers and broader teams. She highlighted how tools like Copilot have made handling meetings easier by summarizing notes and articulating action items.

For managers, AI helps sift through emails and summarize key points, making communication more efficient. She mentioned that while AI boosts productivity, it’s not yet significantly altering job roles or responsibilities. AI is currently more of a productivity hack rather than a transformative force in role definition.

Additionally, Sravani emphasized the potential of AI in improving data visualization and communication. As teams become more global and asynchronous, better visualizations can help developers present ideas more effectively and reach a wider audience.

Jamie shared that AI adoption at his university has been slow. He mentioned feeling underwhelmed by current AI solutions in testing, noting that if AI tools only address minor issues like flaky locators, they may not significantly impact the field. He emphasized the need for patience as AI develops and integrates more deeply into everyday tools.

He also observed that AI is increasingly embedded in commonly used platforms, such as search engines, and is expected to become more integrated into existing tools. While he does not see a significant impact on roles and responsibilities yet, he anticipates that as AI solutions improve, they will be adopted for their potential to save time and increase quality.

Regarding recruitment, Jamie mentioned that chatbots are increasingly used to handle application forms. He advised candidates to avoid relying solely on chatbots and instead use their own language to stand out in applications.

Despite Shorter Test Automation Times, What Challenges Remain?

Manjula addressed the ongoing challenges in QA, noting that the issue of limited time for testing has been a persistent problem throughout her career. Despite efforts to emphasize the importance of thorough testing, deadlines often remain unchanged, leading to shortened testing cycles and less coverage.

With the rise of CI/CD, the challenge has intensified as development cycles become faster and more frequent. This rapid pace makes it difficult to keep up with testing demands, leading to compromises in quality due to inadequate testing time, changing requirements, and insufficient testing environments.

Manjula emphasized that AI, while smarter and quicker, adds pressure to keep up with testing more applications in less time. She sees these challenges as opportunities to advocate for increased automation coverage and a more prominent role for QA in the development process. She highlighted the need for QA to have a stronger voice in decision-making and to be involved earlier in the process rather than at the end.

Jamie then addressed the current challenges in testing, particularly the impact of shorter test and automation times. He identified two main types of problems: technical issues, such as the quality of the data captured for automation, and the time required to write test automation code.

He also believes that AI has the potential to significantly help with the latter. He pointed out that AI could improve the efficiency of writing test automation code, potentially serving as a tool to assist with code creation or even automatically fixing code. This could be a major area of growth as AI tools evolve, possibly becoming a go-to resource for addressing challenges in automation.
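One common pattern behind such "self-healing" tooling is falling back to alternative locators when the primary one breaks. The sketch below is a simplified, hypothetical illustration in Python with Selenium; the element and candidate locators are assumptions, and a real AI-assisted tool would suggest or re-rank candidates from historical runs rather than use a static list.

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Hypothetical ranked locators for the same element, from most to least preferred.
CHECKOUT_LOCATORS = [
    (By.ID, "checkout"),
    (By.CSS_SELECTOR, "[data-testid='checkout-button']"),
    (By.XPATH, "//button[contains(., 'Checkout')]"),
]

def find_with_fallback(driver, locators):
    """Try each candidate locator in order and report which one healed the lookup."""
    for index, (strategy, value) in enumerate(locators):
        try:
            element = driver.find_element(strategy, value)
            if index > 0:
                print(f"Healed: primary locator failed, matched via {strategy}='{value}'")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No candidate locator matched: {locators}")

# Usage inside an existing Selenium test:
#   checkout_button = find_with_fallback(driver, CHECKOUT_LOCATORS)
#   checkout_button.click()
```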

How Can Organizations Enhance Value Through Better Tools and Process Improvements?

Gary emphasized the importance of going “Back to Basics” in QA. With advancements allowing for faster generation of test cases and code, the focus should shift to improving the quality of these tests. He highlighted the need for tests to be atomic, efficient, and valuable, aiming for a quick feedback loop.

In his organization, Gary highlighted the significance of reviewing and optimizing existing test suites. He suggested that sometimes removing ineffective tests is more crucial than adding new ones. Additionally, he mentioned the importance of monitoring and visualizing test trends to ensure continuous improvement and maintain quality standards.

Gary also stressed the need for setting SLAs (Service Level Agreements) and monitoring quality metrics to avoid regression and identify any declines in test performance. Regular oversight helps in detecting issues like increasing test runtimes that may arise over time.
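As a minimal sketch of how such an SLA could be watched, assuming the test suite emits standard JUnit XML reports, a small script can flag tests whose runtimes drift past an agreed limit. The report path and the five-second threshold below are placeholder assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical SLA: flag any test whose runtime exceeds this many seconds.
RUNTIME_SLA_SECONDS = 5.0

def tests_breaching_sla(junit_xml_path: str, sla: float = RUNTIME_SLA_SECONDS):
    """Return (test name, runtime) pairs from a JUnit XML report that exceed the SLA."""
    root = ET.parse(junit_xml_path).getroot()
    breaches = []
    for case in root.iter("testcase"):
        runtime = float(case.get("time", 0.0))
        if runtime > sla:
            breaches.append((f"{case.get('classname')}.{case.get('name')}", runtime))
    return sorted(breaches, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for name, runtime in tests_breaching_sla("reports/junit.xml"):
        print(f"SLA breach: {name} took {runtime:.1f}s")
```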

Jamie pointed out that while AI solutions can help save time, the challenge often lies in resource constraints rather than a lack of work. The main issue is prioritizing and managing a large pipeline of tasks.

He suggested that saved time from AI could be redirected towards future enhancements, bug fixes, improved automation, and project work. The hope is that organizations might use some of this saved time to invest in staff development and upskilling. This would enable employees to contribute more effectively and provide additional value to the business.

Jamie acknowledged that the impact of AI on time management and quality improvement can vary depending on the specific company and its priorities.

How Can Biases in Current AI Models Be Identified and Addressed?

Jamie explained that bias in AI models often stems from the data used to train them. Understanding the data can help identify where biases might exist, but this can be challenging. He said that AI models, including chatbots, can sometimes make broad generalizations about topics such as gender and race, reflecting biases present in their training data.

He advised approaching AI with a critical mindset. You should question and analyze the outputs you receive, balancing your own opinions with those of colleagues and experts. Jamie emphasized that bias is not new and that we encounter it in everyday life. The key is to be aware of potential biases, question unusual or incorrect information, and continuously seek to learn and form informed opinions.

Ben highlighted that AI-generated content, such as the travel recommendations produced by Tripadvisor's AI tools, is based on historical data. This data often includes inherent biases and can be difficult to test. He pointed out that AI models may not always provide consistent answers due to the nature of the data and the underlying algorithms.

He agreed with Jamie that biases are inherent in AI and that AI’s ability to generate unexpected or incorrect information, known as “hallucinations,” adds to the complexity of testing. Ben emphasized that while AI can produce useful outputs, it’s essential for testers to remain skeptical and verify the results. This involves cross-checking answers and being aware of the limitations of the data used to train AI systems.

What Skills and Tips Are Key for Future Quality Engineering?

Manjula emphasized the importance of staying relevant in both technology and industry trends, particularly understanding AI and its impact. She highlighted that critical thinking is crucial, as it allows individuals to question and understand the reasoning behind processes and decisions. Effective communication is also vital, especially when conveying messages about product quality.

She encouraged looking inward and reflecting on personal and team practices to improve and adapt. Manjula also stressed the value of building and maintaining human connections and trust, noting that no technology can replace the credibility and relationships formed through direct interactions.

Jamie emphasized the importance of focusing on foundational skills rather than solely chasing AI advancements. He said that AI is not the only significant development and highlighted the need for technical testers who are proficient with SQL, JavaScript, automation principles, and strong test fundamentals. Jamie advised individuals to focus on learning areas they enjoy, as passion makes learning more effective.

He shared his personal experience learning Italian, emphasizing the value of practice. He encouraged setting personal goals based on where one sees their skills evolving in the next 5–10 years. He recommended leveraging resources like Test Automation University and the LambdaTest website for hands-on learning. Jamie also offered his support to those seeking guidance, inviting them to connect with him on LinkedIn for further assistance.

Ben highlighted that the core responsibility of maintaining quality remains unchanged, even with AI integration. He emphasized that while AI might handle repetitive tasks like regression testing, testers still need to ensure their applications work as intended, whether by reviewing AI results or manually testing to experience the application like a real user.

He reflected on the evolution of testing, mentioning that formal training for manual testing was not available in the past, and it often relied on having a mindset focused on breaking things to prove they were flawed. He acknowledged that AI might take over some repetitive tasks but stressed that testers still need to verify and ensure accuracy.

Ben also pointed out that AI is still in its early stages, and its role and impact on testing could rapidly change. He encouraged testers to maintain their critical thinking skills, think about quality holistically, and stay adaptable as new tools and approaches emerge.

Gary then shared his journey as a manual tester, emphasizing the importance of hands-on experience over formal courses when learning new skills like programming. He reflected on his early career, feeling pressured to learn specific programming languages like C, and how that mindset shifted over time.

Gary underscored the value of learning by doing, encouraging testers to dive into building and experimenting without waiting for permission or formal readiness. He highlighted that many resources are now available, unlike five to ten years ago. He encouraged testers to step outside their organizational bubble, seek insights from the broader community, and gain diverse perspectives.

He advised testers to form their own opinions through direct engagement and exploration rather than just following others’ advice, stressing that the “why” behind learning is just as crucial as the “how.”

In the end, Sravani emphasized the importance of mastering both traditional and new skills for QA professionals. She highlighted that understanding core software engineering principles and validation techniques remains crucial, even as AI tools become integrated into everyday processes. Sravani pointed out that AI can help speed up automation and manage environments, but it doesn’t solve all problems, like unclear requirements or environment availability.

She stressed that software engineers and QA must look at the entire software delivery cycle and understand how AI tools fit into it. For QAs, learning basic coding skills is essential as manual-only roles are becoming obsolete. Instead of performing repetitive tasks, which AI can handle, QAs should focus on adding value by acting as end users who validate the product from a usability perspective. This shift allows them to be more creative and redefine their roles in the evolving software landscape.

All in All

Wendy, the host of this session, expressed gratitude and appreciation to the panelists—Manjula, Jamie, Ben, Sravani, and Gary—for their insights and contributions to the discussion. This panel discussion emphasized staying relevant to industry and AI trends, developing critical thinking, and mastering both traditional and new technical skills.

Continuous learning by doing and engaging with the community were highlighted as essential. As AI handles repetitive tasks, QA roles should focus on higher-value, creative problem-solving.

If you have any questions, please feel free to drop them on the LambdaTest Community.


Author’s Profile

LambdaTest

LambdaTest is a continuous quality testing cloud platform that helps developers and testers ship code faster.




