Model-Based Verification

Verification of a function can often be as time-consuming as its development. It is therefore important to have appropriate tool support that makes it possible to automate and reuse as much as possible throughout the process.

A commonly accepted theory is that the cost of fixing a bug rises exponentially as a project progresses, which is why performing verification as early as possible is imperative to lower the risk of bugs surviving until the final release. It is also easier to troubleshoot issues when functions are tested at each step of the model-based development process.

Within model-based development and testing, the model of the system is the central part. The system consists of both the model of the function being developed and, more importantly, the model of the plant to be controlled by that function. This plant model can be simple or very complex, depending on what is needed. In some cases, a set of parameters may be enough; in others, it may be necessary to use advanced dynamics and logic models to create correct feedback behavior.
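
To make this concrete, a plant model can be as simple as a first-order discrete-time system. The following Python sketch, with a hypothetical time constant and gain, closes the loop with a trivial proportional controller:

```python
# Minimal sketch of a simple plant model: a first-order discrete-time
# thermal system whose output feeds back to the controller input.
# The time constant and gain are hypothetical values.

def plant_step(temperature, heater_power, dt=0.1, tau=5.0, gain=2.0):
    """One simulation step of a first-order thermal plant."""
    dT = (gain * heater_power - temperature) / tau
    return temperature + dt * dT

# Closed-loop simulation with a trivial proportional controller:
temperature = 20.0
for _ in range(100):
    heater_power = 0.5 * (50.0 - temperature)  # P-controller toward 50 degrees
    temperature = plant_step(temperature, heater_power)
print(round(temperature, 1))  # settles near 25 due to P-controller offset
```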

With a well-designed model-based process, it is possible to use the same plant model throughout the entire toolchain, with only small adaptations to match subfunctions.

When a first model of the algorithm or control system has been created, it must be verified that it operates as intended and that it meets the requirements.

Simulation possibilities usually come with the modeling environment, but to start a simulation, some type of stimulus is required. In some cases, entering a set of parameters may be enough, but in many cases a feedback system is required, where what is to be controlled, and any auxiliary functions, are simulated as well. The model that recreates the behavior of the environment is called a plant model.

When simulating the model, it is appropriate to keep the implementation in floating-point numbers to introduce as few sources of error as possible. If the target system is an integer unit, limitations such as scaling and data-type ranges affect the result, and if they are not configured correctly, the behavior will differ significantly from what is intended.
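
The effect is easy to demonstrate. Below is a minimal Python sketch, with hypothetical scale factors, of how a coarse scale or a saturating data type distorts a value:

```python
# Minimal sketch (hypothetical scale factors) of how fixed-point scaling
# distorts a result when the scale or the data-type range is wrong.

import numpy as np

def to_fixed(x, scale, bits=16):
    """Quantize a float to a signed fixed-point integer with saturation."""
    lo, hi = -2**(bits - 1), 2**(bits - 1) - 1
    return int(np.clip(round(x / scale), lo, hi))

def from_fixed(q, scale):
    return q * scale

signal = 3.14159
good = from_fixed(to_fixed(signal, scale=0.001), scale=0.001)    # ~3.142
coarse = from_fixed(to_fixed(signal, scale=0.1), scale=0.1)      # ~3.1, visible quantization
saturated = from_fixed(to_fixed(signal * 1e5, scale=0.001), scale=0.001)  # clipped at the type limit

print(good, coarse, saturated)
```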

Since modeling tools have different strengths, it is also advantageous to be able to include models from different tools, e.g. through the Functional Mock-up Interface (FMI) standard, and connect them in a common simulation.
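
As an illustration, an FMU exported from another tool can be simulated with the open-source FMPy package. In this sketch, plant.fmu and its variable names are hypothetical placeholders:

```python
# Minimal sketch using the open-source FMPy package to simulate an FMU
# exported from another modeling tool. 'plant.fmu' and the variable
# names are hypothetical placeholders.

from fmpy import simulate_fmu

result = simulate_fmu(
    'plant.fmu',
    stop_time=10.0,
    start_values={'ambient_temperature': 25.0},  # hypothetical parameter
    output=['actuator_position'],                # hypothetical output variable
)

for row in result[:5]:
    print(row['time'], row['actuator_position'])
```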

Simulation where a model represents the algorithm you develop is called Model-In-the-Loop (MIL) Simulation.

When code is auto-generated, it is intended to represent the model's behavior, but this does not guarantee that the model and the code will behave identically. For this reason, simulations should also be performed with the finished code; this is called Software-In-the-Loop (SIL) Simulation. It is done with the same external plant model as in MIL. It is also possible that you do not have a model for all or parts of the code base.

Even if the target system has a floating-point processor, there are often large numbers of calibration parameters and look-up tables where integers are used to reduce memory usage. The code may also use different calculation libraries than the model does.

To see the consequences of, for example, an integer implementation, it is appropriate to simulate the code (SIL) and compare the result with the simulated model (MIL); such tests are usually referred to as “back-to-back” tests. It is a great strength here to have the model in floating point, which makes it easy to identify potential problems and optimization opportunities.
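
A back-to-back comparison can be as simple as checking the two output traces sample by sample against a tolerance. In this sketch, run_mil and run_sil are hypothetical hooks into your simulation environment:

```python
# Minimal back-to-back sketch: compare MIL and SIL output traces sample
# by sample against an absolute tolerance. run_mil and run_sil are
# hypothetical hooks into your simulation environment.

import numpy as np

def back_to_back(run_mil, run_sil, stimulus, atol=1e-3):
    """Return the worst deviation and whether the traces match within atol."""
    y_mil = np.asarray(run_mil(stimulus))  # floating-point model output
    y_sil = np.asarray(run_sil(stimulus))  # generated-code output
    error = np.abs(y_mil - y_sil)
    return error.max(), bool(np.all(error <= atol))

# Example with stand-in functions that mimic a small quantization error:
stimulus = np.linspace(0.0, 1.0, 100)
worst, passed = back_to_back(
    run_mil=lambda u: 2.0 * u,
    run_sil=lambda u: np.round(2.0 * u / 0.01) * 0.01,  # 0.01 quantization step
    stimulus=stimulus,
)
print(f"worst deviation: {worst:.4f}, pass: {passed}")
```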

To verify that the compiler and processor do not introduce any errors, it is possible to connect an evaluation board to the simulation.

The code is then executed on the target processor while the external model (the same as in MIL/SIL) runs on the computer, with results exchanged at each time step. This is called Processor-In-the-Loop (PIL) Simulation.

By doing this, you can also get valuable information such as memory usage, execution time and stack usage for the function.
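
The loop structure can be sketched as follows; TargetConnection here is a hypothetical wrapper around a board-specific debug or serial transport, not a real API:

```python
# Sketch of the PIL loop structure: the plant model steps on the PC while
# each control step executes on the target processor. TargetConnection is
# a hypothetical wrapper around a board-specific debug or serial transport.

class TargetConnection:
    def step(self, sensor_values):
        """Send inputs to the target, run one control step, read back outputs."""
        raise NotImplementedError  # board-specific transport goes here

def run_pil(target, plant_step, initial_state, n_steps, dt=0.01):
    """Co-simulate: controller on target hardware, plant model on the host."""
    state = initial_state
    trace = []
    for k in range(n_steps):
        outputs = target.step(sensor_values=state)  # executes on the processor
        state = plant_step(state, outputs, dt)      # plant model runs on the PC
        trace.append((k * dt, state, outputs))
    return trace
```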

If several functions are developed independently of each other, their integration must be verified. Instead of doing this in connection with implementation on the target system, the integration can initially be done on a PC.

By performing a simulation of the integrated code on a PC, you have achieved virtual validation of your control unit. This is a key component for testing the impact of an updated function.

By using simulation environments and test-execution tools that support standardized interfaces such as the ASAM XIL API, the tests created can be reused on different platforms throughout the development process.
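
The idea can be sketched as follows; SimPort is a hypothetical abstraction in the spirit of the XIL API's port concept, not the standardized API itself. The point is that the test body stays the same regardless of platform:

```python
# Illustrative sketch of platform-independent test design in the spirit of
# the ASAM XIL API's port concept. SimPort is a hypothetical abstraction,
# not the standardized API itself; only the idea of reading and writing
# model variables through a common interface is taken from the standard.

from abc import ABC, abstractmethod

class SimPort(ABC):
    """One implementation per platform: SIL on a PC, HIL simulator, etc."""

    @abstractmethod
    def write(self, variable: str, value: float) -> None: ...

    @abstractmethod
    def read(self, variable: str) -> float: ...

def test_overspeed_limit(port: SimPort) -> bool:
    """Same test body regardless of which platform the port talks to."""
    port.write('engine.speed_request', 9000.0)  # hypothetical variable path
    return port.read('engine.speed_actual') <= 6500.0
```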

Some advantages of performing the integration tests in a virtual environment:

  • Integration tests before the target hardware is available
  • Decoupling from hardware implementations such as drivers and electrical interfaces
  • Faster-than-real-time execution
  • Excellent debugging possibilities

Once the integration-tested code is implemented on the target hardware, it is time for the final verification. Parts of this verification are performed connected to the actual system to be controlled, but there are also many advantages to performing tests in a lab environment where other parts of the machine are simulated. This is called Hardware-In-the-Loop (HIL) Simulation.

HIL Simulation opens up the possibility of creating automated tests that can be evaluated deterministically and reused across different software and hardware versions. Here it is possible to test both functionality and the electrical interface. A large part of the functional tests can be reused from earlier MIL and SIL testing.

A large part of the software usually consists of diagnostics, and to verify it you have to inject faults: things like short circuits, cable breaks, or completely out-of-range values.
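
An automated diagnostic test can then look something like this sketch; the hil object, its methods, and the trouble code shown are hypothetical stand-ins for whatever fault-insertion and diagnostics API your simulator provides:

```python
# Sketch of an automated fault-injection test for diagnostics on a HIL rig.
# The hil object and its methods are hypothetical stand-ins for whatever
# fault-insertion and diagnostics API your simulator provides; the DTC
# number is likewise illustrative.

import time

def test_wheel_speed_sensor_open_circuit(hil):
    """Break a sensor wire and check that the expected trouble code is set."""
    hil.fault.open_circuit('wheel_speed_sensor_fl')  # simulate a cable break
    time.sleep(2.0)                                  # give the ECU debounce time
    dtcs = hil.diagnostics.read_dtcs()
    assert 'C0035' in dtcs, f"expected wheel-speed DTC, got {dtcs}"
    hil.fault.clear_all()                            # restore normal wiring
```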

In this scenario, the plant model used in previous steps must be supplemented with an I/O model that maps signals to physical outputs on the simulator that correspond to real sensors, actuators, and network interfaces.
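
Such a mapping can be sketched as a simple table; all signal names, board identifiers, and channels below are hypothetical:

```python
# Sketch of a signal-to-channel mapping for the I/O model: each plant-model
# signal is tied to a physical simulator channel. All signal names, board
# identifiers, and channels are hypothetical.

IO_MAP = {
    # plant-model signal    (simulator board, channel, unit)
    'wheel_speed_fl':       ('AOUT1', 3,     'V'),     # analog output emulating a sensor
    'injector_command':     ('DIN2',  0,     'bool'),  # digital input capturing an actuator drive
    'vehicle_speed_frame':  ('CAN1',  0x1F0, 'raw'),   # network interface, CAN identifier
}
```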

HIL simulation can be used both for testing a single control unit and for connecting several units to test distributed functionality and networks; common practice is to do both.

In the verification process, repeatability is key. To achieve repeatability for so-called regression tests, automation is needed.

Most test tasks can be automated with test scripts, but scripts are often difficult for a tester to understand, and it is hard to get an overview of what is intended for reuse across test cases. When automating tests, the framework for developing and executing test cases is important. A good framework provides basic functionality for, for example, reading from and writing to a real-time application, and recording and reproducing data in the form of graphs, tables, or plain numbers in a report.
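
Built on such a framework, a regression test can stay short and readable. In this pytest-based sketch, the rt_app fixture and its methods are hypothetical framework functionality of the kind described above:

```python
# Sketch of a regression test built on a framework that exposes read/write
# access to a real-time application and records data for the report.
# pytest is real; the rt_app fixture and its methods are hypothetical.

import pytest

@pytest.mark.parametrize("setpoint, max_overshoot", [
    (100.0, 5.0),
    (500.0, 15.0),
])
def test_step_response(rt_app, setpoint, max_overshoot):
    rt_app.write('controller.setpoint', setpoint)
    trace = rt_app.record('plant.output', duration_s=5.0)  # logged for the report
    overshoot = max(trace) - setpoint
    assert overshoot <= max_overshoot
```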

In order to maintain and develop an automated test environment, it is advantageous to have well-defined roles and areas of responsibility with clear interfaces for information transfer. An automated test process is built up of a number of sub-processes. Responsibility for the test process is owned by the “test manager” role. It is the test manager who has the overall picture and coordinates the underlying processes for the test developers, framework developers, and simulation-environment developers.

With an automation environment that supports standardized interfaces, the tests can be reused throughout the development process.

Testing activities are a significant part of ECU development projects. Today, requirements-based testing is common, where the purpose of the test cases is to demonstrate that the target system meets the stated requirements.

In an iterative development environment, the requirements are continuously updated, which means that with each iteration one must evaluate whether a test case is still valid, whether it must be updated, or whether a completely new test case is required. This leads to dependencies between versions of requirements and test cases.

For the test activities, the requirement is static in each iteration, while the implementation and generated results of the test case can be updated. These dependencies need to be addressed in order to maintain traceability. A test case is defined by a set of attributes such as name, execution environment, implementation, results, etc.

A major challenge is managing all of these dependencies and data. A database is needed, but also a meta-model for how things are connected, so that there is traceability between the requirement version and the test-case version and it is possible to recreate the test environment that was relevant for a specific version.
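
A minimal sketch of such a meta-model, with illustrative field names rather than a standard schema:

```python
# Sketch of a minimal traceability meta-model: a test case carries the
# attributes named above and pins the exact requirement version it
# verifies. All field names are illustrative, not a standard schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class RequirementVersion:
    req_id: str   # e.g. "REQ-042"
    version: int  # requirement text is static within an iteration

@dataclass
class TestCase:
    name: str
    environment: str              # "MIL", "SIL", "PIL", "HIL", ...
    implementation_rev: str       # version of the test script itself
    verifies: RequirementVersion  # pinned link enables recreating the setup
    result: str | None = None     # filled in after execution

tc = TestCase(
    name="step_response_overshoot",
    environment="HIL",
    implementation_rev="v3",
    verifies=RequirementVersion("REQ-042", version=7),
)
```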

A test management tool must therefore be able to keep all relevant information, but also use it through connections to the test environments. Test runs are then scheduled, and a report is produced on the outcome. Through version-dependent links, it is possible to follow how different software versions meet different versions of the requirements.
