dSPACE offers you a comprehensive high-performance solution for in-vehicle data acquisition, advanced data insight, and integrated data ingestion.
The robust, modular, and flexible in-vehicle data logger from the AUTERA product family offers you a solution for all your data logging challenges. AUTERA features a unique combination of high computation power and a wide data bandwidth (50 Gbit/s) in a compact form factor, allowing you to connect more sensors and interfaces via a single box.
AUTERA AutoBox lets you design your data logging system faster and specifically to your requirements. The broad range of supported buses and networks, including raw data interfaces for camera sensors, simplifies the integration and therefore minimizes your effort.
As the complementary software for onboard data acquisition, you can use either RTMaps from Intempora or a software product of your choice via the provided API. With RTMaps, data from multiple heterogeneous sensors (camera, lidar, radar, automotive buses, networking, communication) is captured and recorded in a synchronized manner with high-precision time stamping. Each data sample is time-stamped on arrival, at its own pace, which ensures that all data is time-correlated. RTMaps is component-based software that offers you a comprehensive and easy-to-use graphical user interface while also providing an API for process automation.
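The following minimal Python sketch illustrates this per-sample time-stamping concept. It is not the RTMaps API, whose components are typically configured graphically; the sample structure and the shared monotonic clock are assumptions for illustration only.

```python
import time
from dataclasses import dataclass
from typing import Any

@dataclass
class TimestampedSample:
    stream: str        # e.g. "front_camera", "lidar_top", "can0"
    timestamp_ns: int  # stamp from a shared monotonic clock
    payload: Any       # raw sensor data

def capture(stream: str, payload: Any) -> TimestampedSample:
    # Each sample is stamped on arrival, at the sensor's own rate;
    # the common clock is what makes the streams time-correlated.
    return TimestampedSample(stream, time.monotonic_ns(), payload)

def time_correlate(samples: list[TimestampedSample]) -> list[TimestampedSample]:
    # Merging heterogeneous streams by timestamp yields one consistent
    # timeline across camera, lidar, radar, and bus data.
    return sorted(samples, key=lambda s: s.timestamp_ns)
```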
For easy control of the data recording and the manual data tagging during the test drive, you can use the driver application RTag from Intempora, which can be installed on a laptop or a mobile device such as a tablet PC. The tablet can be attached to the dashboard for easy access.
RTag is a mobile application for controlling data logging and for annotation use cases, complementing the data logging software (RTMaps) that runs on AUTERA. RTag allows you to control the recording (start/stop) and lets you monitor the status of the vehicle sensors. Moreover, RTag can be used to annotate data while driving. Annotations of relevant situations can be added by tapping a user-defined button on the screen, for example, to add tags such as 'roadworks', 'temporary speed limit', or any other user-defined tag. These tags are useful for filtering the data and allow for easier navigation of the recorded data in future searches. If the tags are synchronized with the recordings, relevant situations can also be found more quickly among the large data volumes using the Intempora Validation Suite (IVS).
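A tag is essentially a label plus a time stamp on the same clock as the recording. The sketch below is a simplified assumption rather than the actual RTag data format; it shows how synchronized tags can be turned into time windows for locating relevant situations:

```python
from dataclasses import dataclass

@dataclass
class Tag:
    label: str         # user-defined, e.g. "roadworks"
    timestamp_ns: int   # same clock as the recording

def find_tagged_segments(tags, label, pre_ns, post_ns):
    # Return (start, end) windows around each matching tag so that
    # relevant situations can be located quickly in large recordings.
    return [(t.timestamp_ns - pre_ns, t.timestamp_ns + post_ns)
            for t in tags if t.label == label]
```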
For efficient data ingestion from the vehicle to your on-premise or cloud-based data center, dSPACE has developed the AUTERA Upload Station.
The AUTERA Upload Station is an open platform with high-bandwidth interfaces that allows for easy integration into your data pipeline and speedy data ingestion.
The hot-swappable AUTERA SSDs ensure a very simple process for the data storage exchange and thus minimize idle times in the data garage or in the field.
If you have or plan on having a larger fleet of test vehicles, dSPACE provides you with a control and management application that helps you make full use of your fleet. The application is a web-based solution that lets you continuously monitor the status of the recording in each individual fleet vehicle.
In addition, system and configuration updates can be efficiently executed over the air in multiple vehicles at the same time.
You can use tagging at the source and advanced algorithms (supported by a high-performance GPU) to filter and reduce recorded data volumes to a relevant subset. You do not want to pay for data that does not add value to development and validation. Filtering not only saves costs during data ingestion, storage, and transfer, it also reduces the time spent on data ingestion and thus shortens the data cycle. Particularly interesting situations detected in the recording can be prioritized and transmitted wirelessly during the drive, shortcutting the path to developers. Defining the situations that you want to record can be supported by artificial intelligence (AI) algorithms, so developers and test engineers do not have to explicitly list specific aspects.
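Conceptually, this data reduction boils down to keeping only the samples inside relevant time windows, whether those come from tags set at the source or from a detection algorithm. A minimal sketch, building on the hypothetical TimestampedSample structure above:

```python
def reduce_to_relevant(samples, windows):
    # Keep only the samples that fall inside a relevant time window,
    # e.g. one produced by tagging at the source or by a
    # GPU-accelerated detection algorithm.
    def is_relevant(ts):
        return any(start <= ts <= end for start, end in windows)
    return [s for s in samples if is_relevant(s.timestamp_ns)]
```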
Every solution is as individual as the requirements behind it, and every model-based algorithm is only as good as its training data. Thanks to our experience in the bus and AI domains, an established data pipeline, computing infrastructure, and synergies with annotation, dSPACE can help you integrate the solution and quickly adapt it to your requirements.
For data logging, dSPACE offers specific engineering services in:
- Raw sensor interface adaptation (TAP mode)
- Sensor configuration
- Integration of data ingestion
- Data filtering and redundancy reduction
In accordance with data protection and privacy regulations such as GDPR, APPI, and CCPA, critical personal information must be automatically removed from street scene data collected during real drives before processing it further.
The Identity Protection Anonymizer depersonalizes recorded image data (street scenes) by detecting and blurring faces and license plates in a wide range of situations. The image anonymizer, powered by multiple deep learning algorithms, supports single- and multi-GPU platforms to enable high image throughput and detection rates. The solution is flexible in its use, customizable to cover edge cases, and fully integrated into the UAI data enrichment pipeline.
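The blurring step itself can be pictured as follows. This is a simplified OpenCV sketch that assumes the deep learning detectors have already delivered pixel bounding boxes; the actual Anonymizer pipeline is more involved:

```python
import cv2
import numpy as np

def blur_regions(image: np.ndarray, boxes) -> np.ndarray:
    # boxes: (x, y, w, h) pixel rectangles for detected faces and
    # license plates, e.g. delivered by a deep learning detector.
    out = image.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        # Scale the kernel with the region so that small plates and
        # large close-up faces are both rendered unrecognizable.
        k = max(w, h) // 2 * 2 + 1  # Gaussian kernel must be odd
        out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (k, k), 0)
    return out
```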
3D Annotation Tooling
The UAI Annotator covers all your needs for fast, high-quality, and reliable labeling of 3D bounding boxes or cuboids for lidar through a highly automated tool designed to give your labelers superpowers. Thanks to superior 3D point cloud handling, millions of points are processed smoothly, navigated with ease, and integrated seamlessly into the workflow.
2D Annotation Tooling – Bounding Boxes
The most commonly used annotation type is 2D bounding boxes. They are easy to use with machine learning models and faster to annotate than other annotation types.
Unlike segmentation, bounding boxes may also contain invisible parts of the classified object by approximating occlusions or truncations. Due to the inherent instance awareness of bounding boxes, your algorithms will get a better understanding of the concept of specific objects and, if you want, can track certain objects throughout the sequence. Bounding boxes are most often used for testing and validation of new sensors or for tracking objects in sequential data.
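A 2D bounding box annotation typically carries little more than a rectangle, a class label, and instance/visibility metadata. The following dataclass is a hypothetical sketch of such a record, not a UAI export format:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox2D:
    x: float        # top-left corner, pixels
    y: float
    width: float
    height: float
    label: str      # object class, e.g. "car"
    track_id: int   # stays constant across frames, enabling tracking
    occluded: bool  # box may extend over hidden parts of the object
    truncated: bool # object extends beyond the image border
```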
2D & 3D Semantic Segmentations
Since the world is not made of boxes, we also offer a more precise method to annotate your data – semantic segmentation.
Depending on the raw data, bounding boxes can contain noise in the form of background and occlusions. This is tackled with semantic segmentation, where every pixel belonging to one of your selected object classes is annotated. In terms of class assignments, it is therefore the closest to a true representation of reality in 2D & 3D space. It is also more versatile, since it makes it easy to distinguish between objects, e.g., road, lanes, and curbs, and to track instances of them throughout the sequence. It is also possible to annotate classes that are not instantiable, e.g., groups of pedestrians. Pixelwise & voxelwise semantic segmentation is often used for training neural networks on annotated images, videos, or scans.
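In data terms, a semantic segmentation is simply a class ID per pixel (or per voxel in 3D). The sketch below uses a hypothetical label set to make this concrete:

```python
import numpy as np

# Hypothetical class map; real projects define their own label sets.
CLASSES = {0: "void", 1: "road", 2: "lane_marking", 3: "curb",
           4: "pedestrian_group"}  # not instantiable as single objects

def class_mask(height: int, width: int) -> np.ndarray:
    # A semantic segmentation is a class ID per pixel (per voxel in
    # 3-D), here initialized to "void".
    return np.zeros((height, width), dtype=np.uint8)

def pixels_of(mask: np.ndarray, class_id: int) -> np.ndarray:
    # All pixels annotated with a given class, e.g. for computing the
    # drivable area or training a segmentation network.
    return np.argwhere(mask == class_id)
```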
2D & 3D Annotation Tooling – Polylines
Polylines are used to annotate road lanes and other open-ended or closed objects so that they are recognizable to perception algorithms. Polyline annotation enables the accurate detection of the path ahead of an autonomous vehicle.
They are also used for self-localization of vehicles in High-Definition (HD) Maps. Polylines are an essential part of training data sets for reliable and safe self-driving AI models.
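A polyline annotation is an ordered list of vertices plus a label. The following is a hypothetical minimal representation, with a small helper that computes the annotated length:

```python
from dataclasses import dataclass

@dataclass
class Polyline:
    label: str                          # e.g. "lane_boundary"
    points: list[tuple[float, float]]   # ordered vertices (x, y)
    closed: bool                        # True for closed contours

def length(line: Polyline) -> float:
    # Accumulated segment lengths; a useful sanity check when
    # annotating long lane markings for HD maps.
    pts = line.points + ([line.points[0]] if line.closed else [])
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```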
- Realistic simulation based on real-world data
- Testing critical situations and edge cases
- Ready-to-use scenarios for the dSPACE simulation tool chain
- One-stop solution for scenario generation and large-scale simulation
Validation and safety assessment of autonomous driving systems require large-scale simulation in which thousands of realistic and relevant scenarios are executed to ensure maximum test coverage. It is unrealistic for developers to handle this huge effort manually, because they would have to imagine critical scenarios and create them on their own in limited editors. In contrast, a large number of interesting scenarios is already available as measurement data collected by customers during numerous real test drives, ready to be extracted to bring real-world complexity into simulation.
The Scenario Generation Solution provided by dSPACE and UAI enables the creation of realistic scenarios based on recorded real-world data for large-scale testing of functions for autonomous driving. The creation process consists of scenario extraction and scenario generation. The scenario extraction is based on pure 2-D front camera data and powered by sophisticated algorithms from UAI. For even higher accuracy, further sensor data such as lidar point clouds, radar detections, and HD map data can additionally be used. The extraction results in high-accuracy object and trajectory descriptions, which are taken as input data for further processing. During scenario generation, the input data is transformed into replay and logical scenarios as well as road networks with 3-D scenery – all ready to use in the dSPACE simulation tool chain. Furthermore, logical scenarios can be parameterized to allow for a high number of variations and cover a large part of the required test space. Optionally, object lists can also be used as input data for scenario generation.
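The variation idea behind logical scenarios can be sketched as follows: parameters are left open as value ranges, and sampling them yields many concrete scenarios. The parameters below are hypothetical (a cut-in maneuver); real tool chains typically describe this in formats such as OpenSCENARIO:

```python
import itertools

# A logical scenario leaves parameters open as ranges; sampling the
# ranges yields many concrete scenario variations.
logical_scenario = {
    "ego_speed_kmh": [80, 100, 120],
    "cutin_distance_m": [10, 20, 30],
    "cutin_speed_delta_kmh": [-20, -10, 0],
}

def concrete_scenarios(logical):
    keys = list(logical)
    for values in itertools.product(*logical.values()):
        yield dict(zip(keys, values))

# 3 * 3 * 3 = 27 concrete variations from one logical scenario
print(sum(1 for _ in concrete_scenarios(logical_scenario)))
```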
Validating ADAS/AD components using recorded real-world data
Validation of sensor, sensor fusion, and perception components is a challenging task. Due to their complexity and criticality in the ADAS/AD field, particularly high requirements are imposed on their safety and accuracy. To master this task, recorded real-world data is used as an essential part of the homologation and release strategy. Unlike synthetic simulation data, recorded real-world data offers an unmatched level of realism and fidelity. Fengco and dSPACE offer multiple solutions for the validation and testing of ADAS/AD perception and sensor fusion components by replaying recorded data, also referred to as data reprocessing, data resimulation, or data playback.
Achieving high precision and realism is a demanding task. The system under test (SUT) must be fed with the recorded data exactly as it would be during a real test drive. Heterogeneous data streams, including sensor and network/bus data, must be synchronized during the data replay process although their package/message size might differ. Moreover, the sheer amount of streamed data is steadily increasing. Complex real-time adaptation of the recorded data to pass the end-to-end encryption and security checks in the SUT is also challenging. In addition, efficiently scaling the test to feed millions of driven kilometers continuously and to orchestrate multiple test systems is crucial.
The dSPACE toolchain offers dedicated solutions for each data replay use case, whether the SUT is physically available (HIL) or still in the early software development phase (SIL), and whether it is a simple sensor component or a complex central computation platform for autonomous driving.
All dSPACE data replay solutions ensure synchronous sensor and bus data ingestion with minimal jitter. Moreover, thanks to the modular approach, upgrading a system to meet new bandwidth requirements is an easy task. Seamless transition and interoperability between a pure software environment (SIL) and a physical SUT (HIL), and between closed-loop and data replay tests, is also ensured. This interoperability simplifies ramping up the SUT with closed-loop synthetic data to the initial state of the recorded data, where the data replay process starts.
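At its core, data replay means delivering time-stamped samples to the SUT with their original relative timing. The conceptual Python sketch below illustrates the pacing loop; the actual dSPACE systems perform this in hard real time on dedicated hardware, and the send() hook is an assumption for illustration:

```python
import time

def replay(samples, send):
    # Replay time-stamped samples with their original relative timing
    # so that the SUT sees the data exactly as during the real drive.
    t0_rec = samples[0].timestamp_ns
    t0_wall = time.monotonic_ns()
    for s in sorted(samples, key=lambda s: s.timestamp_ns):
        due = t0_wall + (s.timestamp_ns - t0_rec)
        delay = due - time.monotonic_ns()
        if delay > 0:
            time.sleep(delay / 1e9)
        send(s)  # inject into the sensor/bus interface of the SUT
```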
dSPACE data replay solutions support a wide range of standard and customer-specific bus and sensor interfaces. Moreover, using the dSPACE real-time systems ensures real-time adaptation of the recorded data with security and time information as well as live monitoring and manipulation of the replayed data.
The following use cases show some typical data replay tasks and the suitable dSPACE solutions for each of them:
- Sensor Components: Use recorded real-world data to validate the behavior of camera, lidar, and radar sensors in a hardware (HIL) environment.
- ADAS/AD Platforms (ECUs): Test perception and sensor fusion functions on ADAS/AD computing platforms with recorded real-world data in a hardware (HIL) environment.
- Virtual ECUs and Software Components: Test perception and sensor fusion functions in a full software environment (SIL) prior to target deployment.
Closed-Loop HIL Testing of Multi-Sensor Systems
- GPU-based sensor models in MotionDesk to generate raw data for multiple sensors and stream all data to the Environment Sensor Interface Unit
- Environment Sensor Interface Unit to separate the incoming data according to the individual sensors and to insert time-correlated data behind the respective sensor front ends
Autonomous driving systems will use multiple camera, radar, and lidar sensors. Integrating all these sensors into a HIL setup for release testing by using several camera boxes and over-the-air radar test benches, for example, might go beyond the scope of the project. In addition, it is essential to accurately synchronize the stimulation of the individual sensors. The solution is the Environment Sensor Interface Unit, which supports the time-correlated feeding of sensor raw data to one or more camera or lidar ECUs, for example.