The following pages provide an example of what model-based development of a function may look like, from specification to verified product. Working in a model-based manner enables short iterations, early verification, and a high degree of reusability throughout the process.
Development begins with simulation on a computer and moves on towards verification of the algorithms on a prototype system connected to the system to be controlled. When, after several iterations, this has resulted in a well-functioning system, the code is adapted for the target system. In order to verify that the requirements are met with controlled and repeatable tests, the target system is then connected to a simulator that simulates the system to be controlled.
A first step is to create a model of the functionality to be implemented.
To assess whether the function behaves as desired, it is beneficial to simulate the model. This requires stimuli for the model. If it is, for example, a logic function, it may be sufficient to connect a few signal generators in the model; but if it is, for example, a controller, a feedback system is needed. A model of what is to be controlled is then required, a so-called plant model.
The plant model should be detailed enough to recreate the relevant behavior, and the required level of detail differs greatly depending on what is being implemented.
A plant model allows the creation of a feedback system which can simulate the behavior of both the developed function and the environment and thereby verify that the model fulfills the desired functionality already during the design phase.
When the functionality is verified in a simulation environment, it is suitable to verify that the developed algorithm works in practice with the sensors and actuators that are available.
The test is preferably performed on a high-performance system where limited computational capacity does not constrain the behavior and where there is room to run non-optimized code. Running non-optimized code allows for inspection of each individual signal in the model in order to analyze the behavior.
Before building new production hardware, it is also advantageous to first verify the required electrical interfaces, thereby eliminating the need to create several hardware versions.
Generally, not everything works as intended on the first attempt; however, with a good toolchain that makes it possible to quickly change the model, compile it, and download it to the real-time platform, it is possible to swiftly move towards a working system exhibiting the intended behavior.
When the application runs on a real-time system with a physical connection to the controlled system, the next step is to optimize its performance. In order to do this, it is necessary to tune parameters, but also to be able to view and log signals.
With a good measurement and calibration tool connected to the real-time system, you can quickly make an intuitive layout with visualization of signals. It is then possible to, for example, compare step responses in order to tune a controller or simulate input from a user interface and see that the system reacts in the way it is intended.
When the model, after several iterations, recreates the desired behavior, it is time to implement it on the target system. The target system generally, for cost-related reasons, has limited computing capacity, and thus the code created for it must be optimized. If, for example, only integer arithmetic is supported, input and output signals must be scaled by a certain factor, and these scalings must be propagated and adjusted through the calculations.
Model-based development allows a production code generator to create the C code based on what is specified in the model. The code generator keeps track of, for example, how the different scalings are to be interpreted, and the user only needs to verify the behavior. The production code generator also optimizes the code so that nothing is calculated unless it is used, and unnecessary intermediate storage in variables is avoided.
To verify that the optimized production code generated from the model really corresponds to the desired behavior, it is desirable to simulate the code (Software-In-the-Loop) and compare it with the simulated model (Model-In-the-Loop). The code generator does not in itself introduce any errors, but since the user has the possibility of influencing the code, using e.g. a fixed-point implementation, it needs to be verified that the scaling does not lead to quantization effects.
By having the model as a basis for the code, it is simple to switch from fixed-point to floating-point implementations or from standard C code to AUTOSAR.
Usually, the entire code base is not implemented in a model-based manner; drivers and other low-level calls are implemented in a classic coding environment. It is then simple to export the generated C files and integrate them into the rest of the project.
In larger projects, the code must be integrated into a software architecture, such as AUTOSAR. By running integration simulations against virtual hardware, it is possible to identify any problems and verify the behavior at an early stage. Read more on this interaction in the software architecture process example.
Once the final control unit is ready, it must be verified. This can be done in the specific machine where it will finally be installed, but there are many advantages to performing the tests in a laboratory environment where the other parts of the machine are simulated. Here, the plant model created during model development is reused. This makes it possible to create automated tests that can be repeated across different software and hardware versions. It is possible to test not only functionality and the electrical interface but also diagnostics, by injecting electrical faults.
The connection of the finalized control unit to a simulator is called Hardware-In-the-Loop (HIL) and there is more to read about this in the process example of verification.