XP Series Webinar

Building a Test Automation Framework for TV Apps & Scaling at FX Digital

In this XP Webinar, you'll discover how FX Digital built a robust, scalable test automation framework for TV apps, addressing challenges in Smart TVs and set-top boxes to ensure high-quality delivery.

Martin Tyler

Head of Quality Engineering, FX Digital

Martin leads the development of cutting-edge test automation tools for Set-Top Boxes and Smart TVs. With a background in automotive product development and aviation, Martin has honed his expertise in both manual and automated testing. At FX Digital, Martin initially built a robust test framework for Android and Apple devices, later expanding efforts to create a vision-based framework for TV devices. His leadership drives innovation in testing technologies and processes across the industry’s diverse landscape.

Mudit Singh

Head of Growth & Marketing, LambdaTest

Mudit Singh, Head of Growth and Marketing at LambdaTest, is a seasoned marketer and growth expert, boasting over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.

The full transcript

Mudit Singh (Head of Growth & Marketing, LambdaTest) - We’re live. Hello, everyone. Welcome to our LambdaTest Experience (XP) Series. Usually, in our Experience Series sessions, we have talked a lot about mobile app testing and web app testing, about automation testing and quality engineering in general.

But we have really not talked about one very important digital content platform. It's extremely important in today's world, and it's probably the one platform you use every day, yet we've never really talked about it on the XP Series.

So, I'm talking about smart TVs and the apps that go with them. TV apps like Netflix and Prime Video, you've already heard of them and have been using them; they are an integral part of our daily lives. But unlike web or mobile applications, these apps come with very unique challenges due to the diverse hardware and software ecosystem of these devices.

It's like the age of Internet Explorer all over again: a lot of incompatibilities in between. There are a lot of complexities that arise specifically in building a robust test automation framework, particularly for Apple TV, Roku, and the different types of Android TV devices as well. So, joining me in today's session is Martin Tyler.

Martin is the Head of Quality Engineering at FX Digital. FX Digital is a digital transformation agency that has focused mainly on building smart TV apps for platforms like Discovery and Eurosport. It is one of those companies tackling the challenge of transforming quality engineering for TV apps, specifically around the automation of TV apps.

And Martin has more than a decade of experience, including working for satellite companies before FX Digital. He's going to share with us how FX Digital is building a state-of-the-art vision-based test framework for smart TV devices.

So, first of all, Martin, thanks for joining us today for the session.

Martin Tyler (Head of Quality Engineering, FX Digital) - Thanks very much for having me. Yes, it's really great to be on the podcast.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - This is really exciting for us as well because, as I said, this is something we rarely talk about, even though pretty much everybody consumes a lot of content on smart TV devices, through apps like Netflix that are making billions of dollars. And we never really think about the engineering that goes on in the backend of these platforms.

So let's start with that specifically. First, let's talk about your experience beforehand. I know you have worked in industries like automotive and aviation, building products for them.

So, how have you adapted your experience from working in these very diverse industries to building and testing TV apps? What were the key factors from your past experience that you drew on when thinking about building a robust framework for TV app testing?

Martin Tyler (Head of Quality Engineering, FX Digital) - Well, I think one of the really big ones, the first one, was that I came from a QA background, not a dev background. That really helped me understand the challenges that the QAs are facing. So when we were building everything, we were building it with the QA in mind.

You mentioned that I was involved in product development in automotive, where we were developing car electronics, and then later in a company doing in-flight Wi-Fi.

And both of those things are quite difficult to automate. Having seen how those were automated, and having worked on some of that automation myself, I had that experience of how to automate the un-automatable, I guess. And, you know, people normally listen to my experience and go, aviation and automotive must have been really hard to automate.

But actually, TVs are much more difficult because they're super locked down. The companies don't want you to be able to get the information off them, not to stop you from automating, but because they're just worried you're going to, you know, steal the content, right? So, they're quite locked down.

So we had to really understand how to get the information off of those devices in the first place before we even thought about any of the automation. So yeah, that's one of the big challenges, and one of the things that I learned from my previous experience.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So, actually doing a very deep dive into the hardware of the TV. In fact, I would say that was the first lesson from your experience: it's not just the software side of things. You also have to look holistically at how the tooling can be built on top of the available hardware so that you get the best logs, results, and everything out of it, to actually understand the quality of the application.

So that's a pretty important learning, I would say. And of course, this would be a pretty big challenge considering the diverse ecosystem of TV devices as well, right? There's Apple TV; I'm quite sure those guys have pretty much locked down their hardware and anything that you can get out of it.

And then there are Android TV, Roku, and the Amazon Fire TV Stick; there are so many different devices. So, how do you account for the different operating systems, screen sizes, and manufacturers?

Martin Tyler (Head of Quality Engineering, FX Digital) - So yeah, there are lots of different devices, but I think the ones you've mentioned are actually relatively easy to automate. Anything with Android and with tvOS, they're essentially like mobile phones, really. So you can kind of treat them the same way: you can use Appium, and you can get the information from them quite easily.

When you start to look at Samsung TVs and LG TVs, we get some really weird and wonderful boxes from around the world, like Fetch, which is a set-top box in Australia, Comcast boxes out of the US, and Sky boxes in the UK.

Now, that's when the real challenge starts because the web drivers aren't necessarily available, and where the web drivers are available for those devices, they often can't get screenshots from the devices. And because we're doing everything vision-based, we need to be able to get those screenshots off of the devices.

So yeah, that's kind of the first challenge, really. In terms of the resolution, we test everything from the design. So, what we're doing is taking a screenshot of the application as it's on the real device. And then we're taking a screenshot from the designs of the specific element.

So, not the entire page; we're just looking for a specific element, much like you would traditionally look for an XPath of an element or an ID of an element. We use the image as the identifier.

Then, what we call our image comparison API will return the location of that element, along with the percentage match. We have a threshold that we work to. We don't work to a hundred percent because we don't want really flaky test cases, and humans can't tell; if it's a couple of pixels out, it's not really going to make any kind of difference to you.

So yeah, that's how that works. And we scale everything up or down to 1080p because that's what our designs are done in. So that's how we handle the different screen resolutions.
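To make the idea concrete, here is a minimal sketch of this kind of vision-based element lookup using OpenCV template matching. The function names, the 0.95 threshold, and the scaling details are illustrative assumptions, not FX Digital's actual image comparison API:

```python
# A minimal sketch of vision-based element lookup via template matching.
# Names and the 0.95 threshold are illustrative, not FX Digital's real API.
import cv2
import numpy as np

MATCH_THRESHOLD = 0.95  # below 100% so minor pixel noise doesn't flake tests
TARGET_HEIGHT = 1080    # designs are authored at 1080p, so normalise first

def scale_to_1080p(frame: np.ndarray) -> np.ndarray:
    """Scale a device screenshot up or down to 1080p height, keeping aspect ratio."""
    scale = TARGET_HEIGHT / frame.shape[0]
    new_size = (int(frame.shape[1] * scale), TARGET_HEIGHT)
    return cv2.resize(frame, new_size, interpolation=cv2.INTER_AREA)

def find_element(screenshot: np.ndarray, design_crop: np.ndarray):
    """Locate a design element in a screenshot; return (x, y, score) or None."""
    frame = scale_to_1080p(screenshot)
    result = cv2.matchTemplate(frame, design_crop, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(result)
    if max_score >= MATCH_THRESHOLD:
        return max_loc[0], max_loc[1], max_score
    return None  # element not rendered as designed on this device
```

The key design choice this mirrors is using the cropped design image itself as the locator, so the test never needs a DOM or an element ID.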

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So it's basically a robust set of frameworks around visual regression testing.

Martin Tyler (Head of Quality Engineering, FX Digital) - Yes, everything is done completely vision-based. And the reason that we do that is that FX Digital uses a framework called Lightning JS. We don't just use that, but we use it a lot. And that makes use of the GPU rather than the CPU.

So it uses WebGL under the hood, which is basically an API that renders images on your GPU, and you don't get the DOM. So you can't interrogate the DOM like you would traditionally.

So that's why we had to go down this visual route. But also, much like us, our dev team aren't scared of new technologies, so we wanted to build a framework where it doesn't matter what underlying technology is being used. We wanted it to be completely agnostic.

So that's where the visual stuff came in. We started off doing it for the WebGL reasons, but when we started to use this in practice, it just made so much more sense, because that's how a user is actually using the device.

They're not looking into the DOM. They are looking at the screen and what's on there. We've seen issues like, for example, a shader on a particular Sky box behaving slightly differently, because every one of these devices, every single model year, is generally pinned to a different version of the browser.

So you've got lots of different browsers, and people don't throw their TV devices away. They keep them for six or seven years, much longer than you would keep a mobile phone. So we're testing against many different versions of these browsers, and on this particular Sky device, the shader was behaving differently to all the other ones.

Now, if we'd been using traditional automation, we would have gone: that element is there, it exists. And it would have been absolutely true. It would have existed. But because the shader was behaving differently, we could see that it actually wasn't as per the designs, just on that particular box. All the other ones were behaving as expected.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Nice, so that's a completely different approach. We do a lot of browser testing, and we know people are doing a lot of browser testing, mobile device testing, and visual regression testing. Visual regression is also a part of the ecosystem, but usually it's a second approach: people start with DOM-based testing, do more of the browser testing, and then go, okay, let's also do a visual regression approach.

But you did it the other way around: the visual elements came first. That's pretty interesting. So let's talk about maintenance of such a system. You mentioned a few challenges, but when you have to really scale this up across different devices and different types of operating systems, how is that scale maintained? And specifically for the company you're working for, FX Digital, as their portfolio keeps growing, how is that scale maintained?

Martin Tyler (Head of Quality Engineering, FX Digital) - Yeah, it was a huge challenge. We went from doing completely manual testing to running around 300,000 automated test cases a month now, always on real devices. And yeah, there were lots of challenges to getting it to that point. We started off running everything on a Mac mini, which soon ran out of capacity.

We'd built this framework from the ground up, so, as with any piece of software, there were bugs in it in the first place. So we had this process that we went through with the QA team at first, where we would sit with them and train them, because they were all manual QAs, right?

A lot of them didn't necessarily have automation experience, so we sat with them, explained how things work, but also found bugs in the framework and fixed those over time. I'm happy to say that we're in a much, much better position now. We host a lot of our microservices in the cloud, and we're able to scale horizontally with that.

We're able to scale up those services. I checked just before I came on, actually: our image comparison API had around 200,000 API calls last night. So yeah, we've learned how to build those really robust microservices as well.

Our framework itself is completely containerized, so it all runs in Docker. We have one Docker container per device that's running in our lab. I think we're up to around 50 or 60 devices now, so it's quite a few. And we've invested heavily in high-end servers on premises, because the devices are on premises, and if you have a web driver running in the cloud, for example, you can imagine that's a bit of a recipe for disaster.

Your internet connection goes down for a few seconds, and everything stops. So yeah, we've really spent a long time making sure that it's all really well maintained. And of course, documentation: you can't really build a piece of software this big and then expect to be able to scale the team without really good documentation. And finally, we're always willing to try new tech and new ways of doing things.

We're not stuck in our ways. We're always looking for a better process of doing things, and we're happy to rewrite things. A few months ago, we rewrote our image comparison API, for example. We'd written everything in JavaScript, and that wasn't necessarily the best fit for that API, so we rewrote it in Python. We're always looking at ways of optimizing, basically.
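As a rough illustration of the one-container-per-device pattern described above, here is a sketch using the Docker SDK for Python (docker-py). The image name, device list, and environment variables are hypothetical placeholders, not FX Digital's actual lab setup:

```python
# A sketch of one runner container per lab device, using docker-py.
# The image name, env vars, and device list are hypothetical.
import docker

DEVICES = [
    {"name": "samsung-2019-lab01", "ip": "10.0.1.21"},
    {"name": "lg-2021-lab02", "ip": "10.0.1.22"},
]

client = docker.from_env()

for device in DEVICES:
    # Each device gets its own isolated runner, so one flaky device or
    # crashed driver can't take down the rest of the lab.
    client.containers.run(
        image="tv-test-runner:latest",          # hypothetical framework image
        name=f"runner-{device['name']}",
        environment={"DEVICE_IP": device["ip"]},
        detach=True,
        restart_policy={"Name": "on-failure"},
    )
```

The isolation is the point: scaling the lab then means adding a device entry and a container, rather than growing one shared runner process.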

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So how many devices do you guys have in your office? You must have hundreds of different TVs, different types of mobile devices, different types of set-top boxes.

Martin Tyler (Head of Quality Engineering, FX Digital) - I mean, across the company, I don't know the exact numbers. It was definitely more than 200, but that number was taken a couple of years ago.

So it's definitely way beyond that. And in terms of the lab, maybe we can share some photos with you and you can show them to the viewers. But yeah, we've got, I think, about 50 or 60 devices, and we're always adding more.

We've just taken on a really big new project, and I've just been handed a huge list of devices, lots of Android ones, lots of Samsung ones. So yeah, I'm looking forward to getting those into the lab. We actually moved offices a few months ago as well, because we'd run out of space.

We had kind of inherited this little room at the back of our office, and, yeah, one of the big reasons for the company moving in the end was that the lab just wasn't big enough. It wasn't the only reason, but it was a big one.

And now we've got a really huge space where we can have lots of devices. Again, and I'm sure you guys know this very well, having all those devices on premises is a real challenge. Heat, for example: TVs kick off a lot of heat, and air cons, as we've learned, are not necessarily 100% reliable. We have had problems with them going down.

So we've got failover air-con units installed now. We've invested heavily in our network infrastructure to have that many devices online and streaming at the same time. So yeah, the list is endless, really, all the things that we've done. And there are smaller things as well.

For example, if you want to quickly plug into a set-top box, it was getting a bit cumbersome dragging a TV over and plugging it in at the back. So we've got a TV mounted on the wall now with a wireless HDMI sender, and we're able to just plug that in anywhere in the lab.

Little things like that make the maintenance of such a system much better. It may sound small, but once you start adding lots of those little things up, you really reduce the amount of time that you have to spend maintaining each device. And it makes it much more scalable.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So now I really know who to call when I have to buy my next smart TV, right? You'd be the first guy. Yeah, so moving on, let's talk about performance. I'm quite sure that for such end-user-facing applications, performance is one of the key factors that you guys would be checking.

So across so many diverse devices, and specifically, as you mentioned, for platforms based out of Australia and elsewhere, how are you testing for performance on different devices and on different networks to ensure a seamless digital experience for all the end users?

Martin Tyler (Head of Quality Engineering, FX Digital) - So yeah, I think that's a really good question, because a lot of the devices that we work with at FX Digital are really low-performance. That seems to be our kind of specialty. We do other stuff too; we've built Apple TV apps and Android apps as well.

But yeah, I think customers come to us a lot of the time because of our expertise with those really low-end devices. And we've developed a few different methods of measuring performance. So, we measure a metric called TTMU, which is time to minimum usability.

And we actually built an algorithm that does that completely using vision, again; that's what we tend to lean towards. It works with set-top boxes, actually, not with TVs, because we can't get video feeds from TVs, and it will measure the time until the app has completely loaded, basically.

And we can measure that on a daily basis to see trends. We take memory consumption from devices where we can, and other metrics as well. It's important to note, though, that it's not possible to get that information from absolutely everything. You can't get it off every device.

If the API simply isn't there, it's impossible. But we know from experience that if you measure it on as many devices as you can, then for the devices you don't measure it on, you can infer it from those other measurements. We're really looking at trends over time on the devices that we can measure. That has served us really well in making some really performant apps for our customers.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - That's pretty interesting. And it's also important for device manufacturers to include those kinds of capabilities, the APIs and such you mentioned, to actually pull the data and logs off the device during the testing phase.

Martin Tyler (Head of Quality Engineering, FX Digital) - Exactly. Yeah, I mean, there is a new standard being released, pushed by a few of the big players, called Device Automation Bus, which hopefully will simplify getting information off of smart TVs.

But of course, like I said earlier, people aren't replacing those devices. So even if it was released tomorrow on every single device, we'd still have probably six, maybe even more, years of problems ahead of us.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Another thing that I found really interesting: you mentioned a metric called time to minimum usability, if I'm not wrong. So how are you measuring that visually? Are you taking, let's say, screenshots every few milliseconds and then comparing them to a minimal viable rendering or something like that?

Martin Tyler (Head of Quality Engineering, FX Digital) - Yeah, we take a recording of the application starting up, and then we analyze that recording. It goes through various stages. If you think about how an OTT application loads, you generally get a splash screen.

And then it loads onto the landing page: you get a page with lots of colors, and we can detect the fact that there are lots of colors on the page and then no movement. And from there, we know that the application is fully loaded.

And we've tested this across a variety of applications, ones that we develop and also ones that are just freely available, and it works really nicely. I think that's the other nice thing about the fact that we've built all this technology from the ground up: we can modify it for different customers as we see fit.
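A loose sketch of how such staged, vision-based load detection could work, assuming OpenCV for reading the recording. The color and motion thresholds here are made-up placeholders, not FX Digital's actual TTMU algorithm:

```python
# A sketch of TTMU detection: find the first color-rich, motionless frame
# in a boot recording. Thresholds are illustrative placeholders.
import cv2
import numpy as np

COLOR_RICH = 2000   # distinct colors suggesting a loaded landing page (assumed)
STILL_DIFF = 1.0    # mean per-pixel frame diff below this counts as "no movement"

def time_to_minimum_usability(video_path: str):
    """Return seconds until the app looks loaded, or None if it never settles."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    prev, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            return None  # recording ended before the app settled
        small = cv2.resize(frame, (320, 180))  # downsample for speed
        n_colors = len(np.unique(small.reshape(-1, 3), axis=0))
        # Splash screens tend to be color-poor; the landing page is color-rich.
        if prev is not None and n_colors > COLOR_RICH:
            if cv2.absdiff(small, prev).mean() < STILL_DIFF:
                return frame_idx / fps  # seconds from start of recording
        prev, frame_idx = small, frame_idx + 1
```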

Mudit Singh (Head of Growth & Marketing, LambdaTest) - All right, so this is another interesting application. Now, taking this further: an OTT app opens up, there's the "ta-dum" of Netflix, and then the application loads. So, how do you account for things like personalization in an OTT application?

Whenever the app opens, the top screen is usually very dependent on what you have watched, your location, and all of those things. There would be a lot of changes on the home screen because of that personalized information. How do you account for that?

Martin Tyler (Head of Quality Engineering, FX Digital) - So yeah, we know what to expect in terms of things like the username; that's all accounted for. In terms of the information coming from the CMS, that's always going to change, so certain things we don't automate. And I think that's a really good point to drive home: automation is never going to replace manual testing completely.

There's always going to be a place for manual testing as well. And if it takes longer to automate something than to manually test it, you're actually better off manually testing it. So yeah, a lot of the stuff that's changing on a super regular basis, we would just test manually.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - That definitely makes sense. So now let's switch gears a little bit. Let's talk about the thing that everybody's talking about right now: AI. So AI/ML, all these models, new technologies, ChatGPT, Generative AI, and even cognitive AI-based features; they are doing a lot, specifically in the field of mobile and web app testing.

So, how is that impacting the world of TV apps? How do you see the role of AI/ML in TV app testing? And what are the challenges and opportunities that you see in using these technologies in the future?

Martin Tyler (Head of Quality Engineering, FX Digital) - So our vision algorithm is all machine learning-based; we use machine learning quite extensively. AI, we are starting to use. One thing that we're quite passionate about at FX is not just automating the testing, but automating the phases of testing.

So writing the test cases, analyzing the test cases, and maintaining them as well. We've got quite a long roadmap set around how we're going to use AI for that. At the moment, we've started working on an AI algorithm that is able to write test cases.

So, our framework has a number of predefined steps that the QA team can use. They can also go out and make their own custom steps, but that's kind of another story. So we've given an AI model the steps that our framework can use, and then we're writing some code that will actually write those test cases.

The idea being that eventually we'll be able to hook that up to a web driver and allow the AI to crawl through all the different screens of the application and then write test cases based on the screenshots that it takes.

That will be very useful when we inherit an application, maybe one that hasn't been built from the ground up by us, where we want to automate an application that's been built already. And then we're looking at other ways of using AI as well, like analyzing those test cases. We'll always have that human element, that human touch, because I think it's really important to still have that.

But the dream really is for the QA to be able to look at the results and go, actually, no, that's in fact the correct behavior, and the test case just updates itself. So we've got that in our roadmap as well: being able to analyze and maintain test cases using AI.
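For illustration, here is a small sketch of how a model could be constrained to a framework's predefined steps when drafting test cases, as Martin describes. The step templates, the Gherkin output format, and the `ask_model` callable are hypothetical stand-ins, not FX Digital's actual implementation:

```python
# A sketch of constraining an LLM to predefined framework steps when
# generating test cases. Step names, format, and ask_model are hypothetical.
PREDEFINED_STEPS = [
    "I press '<key>'",
    "I should see '<element_image>'",
    "I wait <seconds> seconds",
]

def draft_test_case(screen_description: str, ask_model) -> str:
    """Ask a model (any text-in, text-out callable) for a constrained scenario."""
    prompt = (
        "You write TV-app test cases using ONLY these step templates:\n"
        + "\n".join(f"- {s}" for s in PREDEFINED_STEPS)
        + "\n\nScreen under test: " + screen_description
        + "\nWrite a short scenario covering the main user journey."
    )
    # Output should still be validated against PREDEFINED_STEPS before use,
    # keeping the human-review step Martin mentions in the loop.
    return ask_model(prompt)
```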

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So this is again a very visual first way to do the testing.

Martin Tyler (Head of Quality Engineering, FX Digital) - Yeah, exactly. And I think it's probably good to mention as well that we have started to offer our test automation services not just as part of our application building, but to other companies as well.

So yeah, if they come to us with an app they've made for lots of different devices, and we can very quickly write the test cases for it using AI, you can imagine that could become quite powerful for them.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, that's definitely a very unique approach. In fact, we recently came out with a product called KaneAI. It's again an AI-based test authoring agent, and we also focused on the visual element first: finding elements visually rather than through locators or DOM elements.

That has helped us really scale up the application. So, we have four or five more minutes; let's talk about the core learnings you've had in building this enormous, innovative framework, and the advice that you would give to other companies and team members who are building similar kinds of frameworks.

Martin Tyler (Head of Quality Engineering, FX Digital) - So, I think the biggest lesson that I learned is: if you don't already have automation in your organization, don't underestimate the difficulty of changing a process in a business. We really focused on the QA team and how we could help them.

And they loved it. They were on board with it from day one. But, understandably, other departments maybe weren't so up for it. For example, the project management team: you're asking the team to do more work, right? They're trying to get a project across the line, and you're essentially giving them more work to do.

So you really need to educate everyone in the business about the benefits of test automation. I think we underestimated that we needed to do that in the first place, and then we underestimated how difficult changing the process would be.

I remember we did lots of different activities like training people around the business, and we did lots of work, as I said earlier, scaling up our services. And then one day, walking into the office and seeing the QA team with the rest of the project team, with automation logs up in one of our meeting rooms, it was like, wow, we've actually done this, which is really great.

So yeah, for anyone thinking about going from manual to test automation: don't underestimate that. In terms of TV apps specifically, don't underestimate the difficulty of getting info from these devices. They're locked down. They're definitely not designed for automation.

You know, I don't think anyone had even necessarily thought about automating on these devices when they were designing them. So we had to build a lot of the web drivers from the ground up, and we've developed lots of ways and methods of doing that.

Having a lab with devices is super hard, as I'm sure you guys know firsthand as well, right? If you don't have to have the devices in your building, then just don't do it, because that on its own is a massive learning curve: how you store the devices, how you keep things cool, and, you know, all the networking around that as well is really super difficult.

So, I think those are some of the real key lessons that I've learned. I could probably keep talking to you all day about all the things we've learned. And the biggest lesson that we've learned as well is that people and processes are much more important than technology.

When we initially built this technology, we thought, "there you go, QA team, we'll just hand it over to you, and it'll be fine." That's not the case. You need the people to be trained, and you need the process built around the test automation before it really starts making an impact on the business.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Nice. And we've really hit the mark. Thanks, Martin, for taking the time out and walking us through your unique experience of building and testing TV apps. I'm quite sure it has been a pleasurable experience for you as well, both talking with me and building the whole framework at FX Digital. So again, thanks for taking the time out and joining us for the show today.

Martin Tyler (Head of Quality Engineering, FX Digital) - It has been great. Thank you very much for having me.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Awesome. So this session, as I said, will be available on YouTube. Feel free to subscribe to our channel to get more such sessions in your inbox. I will also leave Martin's social info below.

So, if you want to ask any questions, feel free to add them to the YouTube comments or reach out to Martin directly; maybe he will be able to help out. Again, thanks, everyone, for joining us today. Until next time and our next session: happy testing.
