...

Synchronization between Agile Embedded Software Development and Test Automation

Around 70% of companies today rely on agile software development1, and in the embedded domain most organizations also use agile methods in practice. As the name suggests, agility primarily means flexibility. However, this flexibility also increases the need for close communication and systematic synchronization between development and testing.

In this article, we focus on the synchronization between agile embedded software development and the development of a test automation system (TAS).

We first look at the agile software development lifecycle, examine for which types of projects a TAS makes sense and for which it does not, explain the basic architecture of such a system, and finally show how product development and TAS development can be meaningfully synchronized.

The Agile Software Development Lifecycle (SDLC)


Before we can understand how these two processes can be synchronized, we first need to understand the building blocks of an agile software development lifecycle.

  1. Requirements Gathering
    • In this phase, all relevant stakeholders collaborate closely to identify the goals, expectations, and priorities of the project.
  2. Requirements Refinement
    • The collected requirements are translated into concrete, actionable tasks. Based on these, test cases and acceptance criteria are later defined.
  3. Design & Development
    • The software architecture is designed and implemented in short, iterative development cycles (sprints). Each sprint delivers functional increments of the product.
  4. Testing
    • Testing is an integral part of every iteration. Unit tests verify individual software components and are written by developers. Integration and system tests ensure that individual components or the system as a whole meet the specified requirements. These tests are typically designed and executed by dedicated testers.
  5. Release
    • The developed software increments are regularly deployed to the production environment, often via over-the-air (OTA) updates.
  6. Feedback
    • Stakeholders and end users continuously provide feedback, which is often consolidated by the Product Owner (PO). The insights gained feed back into the further planning and development of the product.2
Figure 1: Agile Software Development Lifecycle (SDLC)

In the overall embedded development cycle, hardware development is added alongside software development. In practice, early prototypes are often built on manufacturer-provided development boards, while missing functionality is implemented using additional modules or supplementary boards. Hardware development therefore typically runs in parallel with software development.

With the introduction of a test automation system (TAS), a third gear is added to the process. We now have a development cycle with three moving parts.

Figure 2: Components of the Embedded Development Cycle

To work efficiently and effectively, it is essential that these three gears mesh smoothly and remain synchronized.
In this article, we focus specifically on the synchronization between two of these gears: embedded software development and the development of the test automation system (TAS).

Manual vs. Automated Testing in Agile Embedded Development


The traditional approach of developing software first and testing it only at the end can work in some cases, but it is not agile. In agile software projects, the key question is not whether testing is continuous, but how continuous testing is implemented. By definition, an agile development process tests software continuously in order to obtain feedback as quickly and as often as possible. The goal of this rapid test feedback is to detect defects early and thereby accelerate the overall development process.

In general, there are two approaches to testing: manual and automated testing.
In practice, a combination of both is ideal. Even in very small development projects, unit tests should be implemented, and these are typically executed in an automated way.

The central question arises at the integration and system test level: Should the majority of these tests be performed manually or automatically? The term "majority" is used deliberately, because automated tests cannot replace all manual tests, and manual tests cannot replace all automated tests. Exploratory testing, for example, relies on human intuition and experience and therefore cannot be meaningfully automated. On the other hand, test types such as performance or stress testing are practically impossible to carry out without automation.

As a result, manual and automated testing play complementary roles in the test process. An effective test strategy combines both approaches in a targeted way to reliably assess both functional correctness and system behavior under realistic and extreme conditions.

In the embedded domain, automated, hardware-near integration and system tests are particularly important. Hardware-in-the-loop (HiL) test environments are a common form of test automation system (TAS) in this context, although their setup can be very complex depending on the system.

Therefore, a careful evaluation is required to determine whether the use of a HiL test environment is economically justified.
Due to the high relevance of HiL testing in embedded systems, this topic will be addressed in more detail in the remainder of this article.

Is a HiL Test Environment Worth It?


To determine whether a HiL test environment is financially worthwhile, two key questions need to be answered:
How large are the expected savings, and how high is the required investment?

The potential savings mainly depend on factors such as:

  • The time required to execute a manual test case, including test setup
  • The number of test cases
  • The number of test executions

These savings must be compared against the investment required to build and operate the HiL test environment, which depends on factors such as:

  • The effort required to set up the HiL environment
  • The average maintenance effort for the HiL environment
  • The number of non-automated test cases
  • The average effort required to develop (automate) test cases
  • The number of automated test cases
  • The average maintenance effort per automated test case3

This model is intentionally simplified and considers only the direct comparison between manual and automated testing. Indirect economic effects, such as shorter development cycles, reduced defect costs due to earlier fault detection (enabled by faster execution and higher test frequency), or faster time to market through earlier feedback, are not included in this calculation. In practice, however, these effects further increase the overall economic benefit, even though they are difficult to quantify.

If the expected savings already exceed the investment costs based on the factors above, it is very likely that a HiL test environment will pay off economically.
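To make this comparison concrete, the following Python sketch evaluates the simplified model sprint by sprint and reports the first sprint in which the cumulative savings exceed the cumulative investment. All numbers (hourly rate, effort per test case, number of test cases, run frequency) are hypothetical placeholders and must be replaced with project-specific estimates.

```python
# Minimal sketch of the simplified break-even model described above.
# All numbers are hypothetical placeholders, not recommendations.

HOURLY_RATE = 80.0            # internal cost per engineering hour (EUR)

# --- Savings per sprint (manual effort avoided) ---
MANUAL_HOURS_PER_TEST = 0.5   # execution of one manual test case incl. setup
AUTOMATED_TESTS = 120         # number of automated test cases
RUNS_PER_SPRINT = 4           # how often the automated suite runs per sprint

# --- Investment ---
HIL_SETUP_HOURS = 400                          # one-off effort to build the HiL environment
HIL_MAINTENANCE_HOURS_PER_SPRINT = 16          # average maintenance of the environment
AUTOMATION_HOURS_PER_TEST = 2.0                # effort to automate one test case
TESTCASE_MAINTENANCE_HOURS_PER_SPRINT = 0.05   # per automated test case


def cumulative_cost_and_savings(sprints: int) -> tuple[float, float]:
    """Return (cumulative investment, cumulative savings) after `sprints` sprints."""
    savings = sprints * AUTOMATED_TESTS * RUNS_PER_SPRINT * MANUAL_HOURS_PER_TEST * HOURLY_RATE
    investment = (
        HIL_SETUP_HOURS
        + AUTOMATED_TESTS * AUTOMATION_HOURS_PER_TEST
        + sprints * (HIL_MAINTENANCE_HOURS_PER_SPRINT
                     + AUTOMATED_TESTS * TESTCASE_MAINTENANCE_HOURS_PER_SPRINT)
    ) * HOURLY_RATE
    return investment, savings


# Find the first sprint in which cumulative savings exceed cumulative investment.
for sprint in range(1, 101):
    investment, savings = cumulative_cost_and_savings(sprint)
    if savings >= investment:
        print(f"Break-even reached in sprint {sprint}: "
              f"savings {savings:.0f} EUR vs. investment {investment:.0f} EUR")
        break
else:
    print("No break-even within 100 sprints with these assumptions")
```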

The following figure illustrates the development of cumulative investment and cumulative savings over several sprints. The point at which the two curves intersect represents the break-even point of the HiL test environment, that is, the moment when it starts to generate a positive economic return. This shows that HiL test environments are particularly beneficial for projects with long runtimes or high test frequency.

The figure is intended purely for illustration. Depending on the size of the initial investment and the level of achieved savings, the break-even point may shift earlier or later accordingly.

Figure 3: Return on Investment of a HiL Test Environment

A HiL test environment is therefore not a “magic” solution that automatically saves time and cost or that is mandatory for every project. First, the HiL test environment itself has to be developed and set up, and even after commissioning it requires continuous adaptation, extension, and maintenance. This is especially true when the product or system under test (SuT) changes significantly.

In general, the following applies: for very small development projects, a HiL test environment is usually not worthwhile. For larger projects or products with longer lifetimes, however, a HiL test environment contributes directly to cost savings through reduced manual test effort, and indirectly through faster and more frequent feedback to the development teams.

Architecture of a HiL Test Environment


To understand the synchronization between the development of a HiL test environment and product development, it is necessary to look at the fundamental architecture of a HiL test environment. This includes in particular the role of the Test Automation Framework (TAF), which is a central component of every self-developed HiL test environment, as well as its interaction with other key elements of the development process. Together with its interfaces to these elements and with the laboratory and network infrastructure, the TAF forms the HiL system. The HiL system, in combination with the System under Test (SuT) and the associated software and process components such as configuration management, test management, and test case repositories, is referred to in the following as the HiL test environment.

The ISTQB defines the General Test Automation Architecture (gTAA) as a generic reference model for test automation systems.4 As the name suggests, it represents a general standard architecture that is primarily intended for structural classification and illustration. The following figure shows a gTAA-based architecture adapted for self-developed HiL test environments.

Figure 4: gTAA of a HiL Test Environment
  1. Test Automation Framework (TAF)
    • The Test Automation Framework (TAF) forms the technical core of the HiL system. It controls the execution and evaluation of test cases by orchestrating communication with the System under Test (SuT) and the laboratory and network infrastructure. The TAF itself can be divided into several functional blocks (a code sketch of this layering follows after the list):
      • Core Libraries and Logic: This layer contains the generic, system-independent functions of the TAF. These include fundamental mechanisms for test control, logging, error handling, and general utility functions that can be reused across projects. In addition, this layer includes generic algorithms and evaluation logic used to process, analyze, and interpret test data, regardless of which specific SuT the data originates from.
      • Control and Observation of SuT-Independent Infrastructure:
        This layer covers the connection and control of laboratory equipment and basic communication mechanisms, such as power supplies, measurement instruments, or relay cards. These components are not specific to a single SuT and can be reused across multiple projects.
      • Control and Observation of SuT-Specific Infrastructure: This layer refers to specialized hardware and interfaces that are required exclusively for a specific SuT. Examples include programming devices for flashing firmware, special measurement or analysis hardware, and proprietary protocols that are used only within the context of that system.
      • Control and Observation of the System under Test (SuT): This layer includes all protocols and interfaces through which the TAF communicates directly with the SuT. It covers both dedicated test interfaces and standardized access mechanisms.
  2. System under Test (SuT)
    • The SuT is the embedded system or product being tested and consists of the real hardware on which the software runs. Typically, this is the same PCB that will later be used in the production system; in early development phases, evaluation boards or prototypes may also be used.
    • The SuT is controlled and monitored by the TAF via one or more interfaces. These interfaces are essential because they enable test execution and the collection of data required for test evaluation.
  3. Network and Laboratory Infrastructure
    • The laboratory infrastructure supplies the SuT with power, allowing controlled power cycling and reboots, which are especially important after flashing or updating firmware. Other components of the infrastructure are used to observe the SuT and to drive it into defined states.
    • Measurement devices include oscilloscopes, digital multimeters, and logic analyzers, as well as equipment used to actively stimulate the system, such as waveform generators or relays.
    • In addition, network connections and services such as MQTT brokers are often integral parts of a HiL system.
  4. Test Cases
    • Test cases define which features and properties of the SuT are to be verified. They are executed by the TAF, and the resulting system behavior is evaluated. In practice, test cases should be kept separate from the TAF, as this significantly increases the reusability of the TAF across multiple HiL test environments.
  5. Test Management
    • The test management system is used to manage, track, and evaluate test results. Via dedicated interfaces, often implemented as Python or Bash scripts or as part of CI/CD pipelines, test results are transferred to external systems where they are stored and analyzed (a minimal example is sketched after the list).
  6. Configuration Management
    • Configuration management determines in which HiL test environment a test is executed. It specifies, for example, which hardware is used, which firmware version is flashed onto the SuT, which interfaces are active, and which parameters apply. In small HiL setups with only a single SuT, formal configuration management may not be necessary; however, as project size and complexity grow, it becomes essential for a stable and scalable HiL infrastructure.
    • Based on this information, the TAF loads the appropriate test environment and ensures that tests are executed in a reproducible and consistent manner.
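To make the layering from Figure 4 more tangible, here is a minimal Python sketch of how a self-developed TAF might separate generic core logic, SuT-independent laboratory equipment, SuT-specific adapters, and the test cases themselves. All class and interface names (TestLogger, PowerSupply, SutAdapter, Taf) are illustrative assumptions, not an existing framework.

```python
# Illustrative sketch of the gTAA-style layering of a self-developed TAF.
# All class and interface names are hypothetical; a real TAF would wrap
# concrete lab equipment drivers (SCPI, serial, MQTT, ...) behind these layers.

from abc import ABC, abstractmethod


# --- Core libraries and logic (generic, reusable across projects) -----------
class TestLogger:
    def log(self, message: str) -> None:
        print(f"[TAF] {message}")


# --- SuT-independent infrastructure (lab equipment, reusable) ---------------
class PowerSupply(ABC):
    @abstractmethod
    def power_cycle(self) -> None: ...


class SimulatedPowerSupply(PowerSupply):
    """Stand-in for a real bench supply driver (e.g. controlled via SCPI)."""
    def power_cycle(self) -> None:
        print("power supply: off -> on")


# --- SuT-specific infrastructure and SuT access ------------------------------
class SutAdapter:
    """SuT-specific layer: flashing, test interface, proprietary protocol."""
    def flash_firmware(self, image: str) -> None:
        print(f"flashing {image}")

    def read_state(self, name: str) -> int:
        # In a real setup this would query the SuT's test interface.
        return 1 if name == "boot_ok" else 0


# --- TAF core: orchestrates infrastructure and SuT ---------------------------
class Taf:
    def __init__(self, supply: PowerSupply, sut: SutAdapter, log: TestLogger):
        self.supply, self.sut, self.log = supply, sut, log

    def prepare(self, firmware: str) -> None:
        self.sut.flash_firmware(firmware)
        self.supply.power_cycle()
        self.log.log("SuT prepared")


# --- Test case: kept separate from the TAF -----------------------------------
def test_boot(taf: Taf) -> None:
    taf.prepare("app_v1.2.3.bin")
    assert taf.sut.read_state("boot_ok") == 1


if __name__ == "__main__":
    taf = Taf(SimulatedPowerSupply(), SutAdapter(), TestLogger())
    test_boot(taf)
    print("test_boot passed")
```

Keeping the test case (test_boot) outside the Taf class is what allows the core and infrastructure layers to be reused in other HiL test environments.

As a sketch of the test management interface mentioned above, the following script uploads a JUnit-style report to a hypothetical REST endpoint. The URL, token, endpoint path, and payload format are placeholders and depend entirely on the test management tool in use.

```python
# Hypothetical sketch of a test-management interface: push a JUnit XML report
# produced by the TAF to an external test management system via its REST API.
# URL, token, and payload format are placeholders and tool-specific.

import json
import urllib.request
from pathlib import Path

TEST_MANAGEMENT_URL = "https://testmanagement.example.com/api/import"  # placeholder
API_TOKEN = "REPLACE_ME"                                               # placeholder


def upload_results(report: Path, test_run_id: str) -> int:
    """Send a JUnit XML report to the (hypothetical) test management endpoint."""
    payload = json.dumps({
        "run_id": test_run_id,
        "format": "junit-xml",
        "report": report.read_text(encoding="utf-8"),
    }).encode("utf-8")

    request = urllib.request.Request(
        TEST_MANAGEMENT_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status


if __name__ == "__main__":
    status = upload_results(Path("results/junit.xml"), test_run_id="sprint-14-nightly")
    print(f"upload finished with HTTP status {status}")
```

In a CI/CD pipeline, a script like this is typically invoked as a post-test step, after the TAF has written its report.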

Interim Conclusion


Figure 4 shows that significant parts of a HiL test environment, especially the interfaces to configuration management and test management, are largely independent of the System under Test (SuT). These components can therefore be designed and implemented at an early stage of the project, even before the actual product development has started.

Within the Test Automation Framework (TAF) itself, there are also components that are only weakly coupled, or not coupled at all, to the specific SuT. These elements are particularly well suited to be designed and implemented early, providing a stable foundation for later, system-specific extensions.

Figure 5: gTAA of a HiL Test Environment from a Reusability Perspective

This modular structure of the HiL test environment also enables the reuse of core components across projects, as illustrated in Figure 5, without having to redevelop large parts of the system or introduce unnecessary complexity.
In contrast, SuT-specific adapters and test cases form the variable project layer that must be created individually for each product.

Practical Approach – Synchronizing TAS and SuT


Once the fundamental decision for or against a HiL test environment, a concrete implementation of a Test Automation System (TAS), has been made, the next key question arises: when and how should its development begin?
This decision is critical and requires careful consideration.

In general, the development of a TAS should never be ad hoc or unplanned. A TAS is an embedded system in its own right, even though it is tightly coupled to the System under Test (SuT).

Testability as a Product Requirement


Testers should already be involved during the definition of SuT requirements. They can assess whether requirements are actually testable and contribute to the product architecture by promoting observability and controllability, for example through additional test interfaces that enable later automation.

  • Observability describes the ability to access and monitor internal system states through suitable interfaces and measurement points.
  • Controllability refers to the ability to drive the system into defined states and control it externally.

Considering these aspects early leads to significant time savings during testing and substantially reduces the occurrence of so-called flaky tests, i.e., tests that yield inconsistent results under seemingly identical conditions.
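To illustrate what such a dedicated test interface might look like from the TAS side, here is a minimal sketch based on pyserial. The command strings (TEST:STATE, TEST:GET) and serial settings are assumptions, and the SuT firmware would have to implement a matching test command handler.

```python
# Sketch of using a dedicated SuT test interface for controllability and
# observability. Command strings and serial settings are hypothetical;
# the SuT firmware must implement a matching test command handler.

import serial  # pyserial


class SutTestInterface:
    def __init__(self, port: str = "/dev/ttyUSB0", baudrate: int = 115200):
        self.link = serial.Serial(port, baudrate, timeout=1.0)

    def set_state(self, state: str) -> None:
        """Controllability: drive the SuT into a defined state."""
        self.link.write(f"TEST:STATE {state}\n".encode())

    def read_internal(self, name: str) -> str:
        """Observability: read an internal value that is otherwise invisible."""
        self.link.write(f"TEST:GET {name}\n".encode())
        return self.link.readline().decode().strip()


if __name__ == "__main__":
    sut = SutTestInterface()
    sut.set_state("IDLE")                       # defined, reproducible starting point
    print("battery voltage [mV]:", sut.read_internal("vbat_mv"))
```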

When to Start Developing the TAS


From a technical perspective, TAS development should start as soon as the first product architecture has been defined. In agile projects, this architecture is often extended or refined over time. While extensions are usually uncritical for the TAS, major architectural changes can be problematic: if components are removed or fundamentally redesigned, corresponding adaptations are also required in the TAS.
However, as described in the next section, this risk can be significantly reduced through appropriate measures.

The first step is therefore to understand the product architecture and derive requirements for the TAS from it, followed by designing the TAS architecture itself. Close collaboration between developers and test automation engineers is essential here: TAS and SuT must be designed to fit each other.
The TAS must be tailored to the SuT, and the SuT must be actively optimized for testability. Without coordination and compromises, such as providing dedicated test interfaces, the overall project success is at risk.

The following figure illustrates the synchronization between manual testing, TAS development, and SuT development in an agile context.

Figure 6: Synchronization of the development cycles

Note: The illustrated manual iteration cycle reflects the tester’s perspective5 on the operational test case lifecycle. Cross-process activities from a test management perspective, such as test planning, test monitoring, and test closure, are intentionally omitted, as they operate on a strategic level and across projects.

In addition to the System under Test (SuT), the Test Automation System (TAS) itself must also be tested. In most cases, complex integration or system tests are not required for the TAS. However, at least unit tests should exist for the libraries, helper functions, and algorithms used within the TAS to ensure stability and maintainability.
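As a hedged example, a unit test of this kind might look as follows; the helper function and its scaling are hypothetical stand-ins for the kind of conversion and evaluation logic found in TAF core libraries.

```python
# Unit test for a TAF helper function (pytest style). The helper and its
# scaling are hypothetical examples of the kind of logic worth unit testing.

import pytest


def adc_to_millivolts(raw: int, vref_mv: int = 3300, resolution_bits: int = 12) -> float:
    """Convert a raw ADC value into millivolts (helper used by the TAF)."""
    if not 0 <= raw < 2 ** resolution_bits:
        raise ValueError("raw value out of range")
    return raw * vref_mv / (2 ** resolution_bits - 1)


def test_adc_to_millivolts_full_scale():
    assert adc_to_millivolts(4095) == pytest.approx(3300.0)


def test_adc_to_millivolts_rejects_out_of_range():
    with pytest.raises(ValueError):
        adc_to_millivolts(4096)
```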

Risk Minimization – Development Order within the TAS


To avoid unnecessary rework, the SuT-independent components of the TAS should be developed first in the early iterations, before implementing SuT-specific components. Automated test cases should only be created once the corresponding manual tests have already been defined and executed. These manual tests also serve as a valuable source for deriving additional TAS requirements, since they reveal which capabilities the TAS must provide for test execution. However, they are less critical for the very first TAS iteration.

In early project phases, both architecture and requirements of the SuT tend to change frequently. This approach therefore significantly reduces the risk of rework.

What Should (Initially) Not Be Automated


Special care is required when selecting which tests to automate first. Highly complex test scenarios should be automated only in later phases and preferably executed in separate test suites. Due to their many dependencies, such tests are typically more fragile and require significant maintenance effort in early iterations.

In practice, early iterations are dominated by manual testing. As the TAS becomes more stable, an increasing number of tests are automated. As already stated, it is neither feasible nor desirable to automate all manual tests. The goal of test automation is not to replace manual testing, but to automate repetitive, well-specified test cases and allow testers to focus on strategic testing, exploratory testing, and tests that are difficult or uneconomical to automate.

Conclusion


Agile embedded development only reaches its full potential when development and testing operate as tightly integrated processes. In embedded systems, where software, hardware, and test systems evolve in parallel, clean synchronization between the SuT and the TAS is crucial for quality, speed, and economic efficiency.

HiL test environments represent a particularly common form of TAS in embedded development, as they enable hardware-near testing under realistic conditions. They are especially valuable for products with many iterations, high test frequency, or long lifecycles. What matters is not whether automation is used, but what is automated, when, and to what extent.

A modular TAS architecture, the separation of stable core components from SuT-specific elements, and the early integration of testability into product design form the foundation of a sustainable and scalable HiL test environment.


Sources


  1. Global Six Sigma USA, LP, "The Guide to Agile SDLC: A Modern Approach to Software Development", published January 7, 2025, accessed January 6, 2026, https://www.6sigma.us/six-sigma-in-focus/agile-sdlc-software-development-life-cycle/
  2. cf. GeeksforGeeks, "Agile SDLC (Software Development Life Cycle)", published July 23, 2025, accessed January 6, 2026, https://www.geeksforgeeks.org/software-engineering/agile-sdlc-software-development-life-cycle/
  3. cf. ISTQB® Certified Tester Test Automation Strategy Syllabus, Version 1.0, p. 38 (bottom)
  4. cf. ISTQB® Certified Tester Advanced Level Test Automation Engineering Syllabus, Version 2.0, p. 23
  5. cf. ISTQB® Certified Tester Foundation Level Syllabus, Version 4.0.1a, p. 24 (bottom)
Bertran Ziyadov
Holds a B.Eng. in Electrical Engineering with a focus on Embedded Systems from Berlin and is an ISTQB-certified Test Automation Engineer. He advises companies of all sizes on designing test automation strategies for embedded systems, with a particular focus on developing automated, Hardware-in-the-Loop (HiL) test systems.