
Proceedings

Torsten Bertram (Ed.)

Automatisiertes
Fahren 2021
Vom assistierten zum autonomen
Fahren
7. Internationale Fachtagung
Proceedings

Today, a steadily growing store of information is called for in order to understand the increasingly complex technologies used in modern automobiles. Functions, modes of operation, components and systems are rapidly evolving, while at the same time the latest expertise is disseminated directly from conferences, congresses and symposia to the professional world in ever-faster cycles. This series of proceedings offers rapid access to this information, gathering the specific knowledge needed to keep up with cutting-edge advances in automotive technologies, employing the same systematic approach used at conferences and congresses and presenting it in print (available at Springer.com) and electronic (at Springer Link and Springer Professional) formats. The series addresses the needs of automotive engineers, motor design engineers and students looking for the latest expertise in connection with key questions in their field, while professors and instructors working in the areas of automotive and motor design engineering will also find summaries of industry events they weren't able to attend. The proceedings also offer valuable answers to the topical questions that concern assessors, researchers and developmental engineers in the automotive and supplier industry, as well as service providers.

Further volumes in this series: http://www.springer.com/series/13360


Torsten Bertram (Ed.)

Automatisiertes Fahren 2021
Vom assistierten zum autonomen Fahren
7. Internationale ATZ-Fachtagung

Ed.
Torsten Bertram
Technische Universität Dortmund
Dortmund, Germany

ISSN 2198-7432 ISSN 2198-7440 (electronic)


Proceedings
ISBN 978-3-658-34753-6 ISBN 978-3-658-34754-3 (eBook)
https://doi.org/10.1007/978-3-658-34754-3

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2021
This work, including all of its parts, is protected by copyright. Any use not expressly permitted by copyright law requires the prior consent of the publisher. This applies in particular to reproduction, adaptation, translation, microfilming, and storage and processing in electronic systems.
The use of general descriptive names, trademarks, company names etc. in this publication does not imply that these may be freely used by anyone. The right to use them is governed by trademark law, even in the absence of a specific statement to this effect. The rights of the respective trademark owner must be observed.
The publisher, the authors, and the editors assume that the advice and information in this book are complete and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Responsible at the publisher: Markus Braun

Springer Vieweg is an imprint of the registered company Springer Fachmedien Wiesbaden GmbH and is part of Springer Nature.
The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany
Foreword

Artificial intelligence and machine learning to continuously improve the control of traffic situations will remain exciting challenges for the implementation of automated driving and the path towards SAE levels beyond 2+. Perhaps the key is that, by iteratively adding more and more "+" to the 2, we will reach Level 3 one day without the users even noticing. From this perspective, evolution instead of revolution would have a double meaning: realistic development objectives and at the same time a user experience in the sense of a real driving experience.
As an annual trend barometer, the 7th International ATZ Conference "Automated Driving" takes up this theme. In addition to aspects relating to the further development of the current state of technology, we will include interesting new developments in the program to raise the topic to a new level. Only in this way can we create the constructive dialogue that makes our conference better every year and gives the industry the right impetus. I look forward to welcoming you to the conference.
On behalf of the Scientific Advisory Board
Dr. Alexander Heintzel
Editor-in-Chief ATZ | MTZ Group
Springer Nature

Contents

HEAT as an example for efficient environmental perception for autonomous shuttle systems
Philipp Materne, Christoph Hartwig and Ulrike Peters

Digital Twin to design and support the usage of alternative drives in municipal vehicle fleets
Sven Spieckermann, Josef Becker, Markus Henrich and Thomas Schulte

Paving the way for trust in autonomous vehicles – How OEMs, authorities and certifiers can foster customers' trust in AVs
Nils Köster and Torsten-Oliver Salge

Building the bridge between automotive SW engineering and DevOps approaches for automated driving SW development
Dr. Detlef Zerfowski, Dr. Sergey Antonov and Christof Hammel

A Safety-Certified Vehicle OS to Enable Software-Defined Vehicles
Jan Becker, Mehul Sagar and Dejan Pangercic

Theoretical Substitution Model for Teleoperation
Elisabeth Shi and Alexander T. Frey

Physics-Based, Real-Time MIMO Radar Simulation for Autonomous Driving
Jeffrey Decker, Dr. Kmeid Saad, Dan Rey, Stefano M. Canta and Robert A. Kipp

Validation concept for scenario-based connected test benches of a highly automated vehicle
Moritz Wäschle, Kai Wolter, Chenlei Han, Urs Pecha, Katharina Bause and Matthias Behrendt

Automatic emergency steering interventions with hands on and off the steering wheel
Alexander K. Böhm, Luis Kalb, Yuki Nakahara, Tsutomu Tamura and Dr. Robert Fuchs

Hand Over, Move Over, Take Over – What Automotive Developers Have to Consider Furthermore for Driver's Take-Over
Miriam Schäffer, Philipp Pomiersky and Wolfram Remlinger

Navigation with Uncertain Map Data for Automated Vehicles
Christopher Diehl, Niklas Stannartz and Univ.-Prof. Dr.-Ing. Prof. h.c. Dr. h.c. Torsten Bertram

Scenario Generation for Virtual Test Driving on the Basis of Accident Databases
Alexander Frings

A modular test strategy for Highly Autonomous Driving based on Autonomous Parking Pilot and Highway Pilot
Andreas Bossert, Stephan Ingerl and Stefan Eisenhauer

Mastering the Data Pipeline for Autonomous Driving
Patrik Moravek and Bassam Abdelghani

Smart Fleet Analysis with Focus on Target Fulfillment and Test Coverage
Erich Ramschak, Rainer Voegl, Philipp Quinz, Michael Erich Hammer and Rudolf Freidekind
List of Authors

Bassam Abdelghani dSPACE GmbH, Paderborn, Germany

Dr. Sergey Antonov ETAS Ltd., Hospital Fields Rd, York, UK
Katharina Bause Institute of Product Engineering, Karlsruhe Institute of Technology, Karlsruhe, Germany
Jan Becker Apex.AI, Palo Alto, USA
Josef Becker Frankfurt University of Applied Sciences, Frankfurt, Germany
Matthias Behrendt Institute of Product Engineering, Karlsruhe Institute of Technology, Karlsruhe, Germany
Univ.-Prof. Dr.-Ing. Prof. h.c. Dr. h.c. Torsten Bertram Institute of Control Theory and Systems Engineering, TU Dortmund, Dortmund, Germany
Alexander K. Böhm Chair of Ergonomics, Technical University of Munich, Garching, Germany
Andreas Bossert ITK engineering GmbH, Rülzheim, Germany
Stefano M. Canta Ansys Inc., Canonsburg, USA
Jeffrey Decker Ansys Inc., Canonsburg, USA
Christopher Diehl Institute of Control Theory and Systems Engineering, TU Dortmund, Dortmund, Germany
Stefan Eisenhauer ITK engineering GmbH, Rülzheim, Germany
Rudolf Freidekind AVL List GmbH, Graz, Austria
Alexander T. Frey Federal Highway Research Institute (BASt), Bergisch Gladbach, Germany
Alexander Frings IPG Automotive GmbH, Karlsruhe, Germany
Dr. Robert Fuchs System Innovation R&D Department, JTEKT Corporation, Kashihara, Japan
Christof Hammel Robert Bosch GmbH, Abstatt, Germany
Michael Erich Hammer AVL List GmbH, Graz, Austria
Chenlei Han Institute for Vehicle System Technology, Karlsruhe Institute of Technology, Karlsruhe, Germany
Christoph Hartwig IAV GmbH, Chemnitz, Germany
Markus Henrich Hanau Infrastruktur Service, Hanau, Germany
Stephan Ingerl ITK engineering GmbH, Rülzheim, Germany
Luis Kalb Chair of Ergonomics, Technical University of Munich, Garching, Germany
Robert A. Kipp Ansys Inc., Canonsburg, USA
Nils Köster Institute for Technology and Innovation Management (TIM), RWTH Aachen University, Aachen, Germany
Philipp Materne IAV GmbH, Dresden, Germany
Patrik Moravek dSPACE GmbH, Paderborn, Germany
Yuki Nakahara System Innovation R&D Department, JTEKT Corporation, Kashihara, Japan
Dejan Pangercic Apex.AI, Inc., CA, USA
Urs Pecha Institute of Electrical Energy Conversion, University of Stuttgart, Stuttgart, Germany
Ulrike Peters IAV GmbH, Gifhorn, Germany
Philipp Pomiersky Institute for Engineering Design and Industrial Design, University of Stuttgart, Stuttgart, Germany
Gerd Prillwitz Ansys Germany GmbH, Otterfing, Germany
Philipp Quinz AVL List GmbH, Graz, Austria
Erich Ramschak AVL List GmbH, Graz, Austria
Wolfram Remlinger Institute for Engineering Design and Industrial Design, University of Stuttgart, Stuttgart, Germany
Dan Rey Ansys Inc., Canonsburg, USA
Kmeid Saad Ansys Germany GmbH, Otterfing, Germany
Mehul Sagar Apex.AI, Inc., Palo Alto, USA
Torsten-Oliver Salge Institute for Technology and Innovation Management (TIM), RWTH Aachen University, Aachen, Germany
Miriam Schäffer Institute for Engineering Design and Industrial Design, University of Stuttgart, Stuttgart, Germany
Thomas Schulte Hanauer Straßenbahn GmbH, Hanau, Germany
Elisabeth Shi Technical University of Munich, Munich, Germany
Sven Spieckermann SimPlan AG, Hanau, Germany
Niklas Stannartz Institute of Control Theory and Systems Engineering, TU Dortmund, Dortmund, Germany
Tsutomu Tamura System Innovation R&D Department, JTEKT Corporation, Kashihara, Japan
Rainer Voegl AVL List GmbH, Graz, Austria
Moritz Wäschle Institute of Product Engineering, Karlsruhe Institute of Technology, Karlsruhe, Germany
Kai Wolter Institute of Product Engineering, Karlsruhe Institute of Technology, Karlsruhe, Germany
Dr. Detlef Zerfowski ETAS GmbH, Stuttgart, Germany
HEAT as an example for efficient environmental perception for autonomous shuttle systems

Philipp Materne¹ [0000-0003-4805-4094], Christoph Hartwig², and Ulrike Peters³

¹ IAV GmbH, Manfred-von-Ardenne-Ring 20, 01069 Dresden, Germany
² IAV GmbH, Kauffahrtei 25, 09120 Chemnitz, Germany
³ IAV GmbH, Rockwellstraße 3, 38518 Gifhorn, Germany

Abstract. IAV developed a novel data fusion approach to ensure comprehensive environmental perception for autonomous driving in urban environments. The approach is highly scalable, easily adaptable, and sensor independent while processing heterogeneous data. It is based on a dynamic occupancy grid in conjunction with a sophisticated combination of inverse sensor models that take the sensor properties into account. The approach has been successfully used within the HEAT project and is discussed in this context.

Keywords: grid, sensor, data, fusion, scalable

© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_1

1 Introduction

Fig. 1. The HEAT shuttle driving in Hamburg's HafenCity.

The Hamburg Electric Autonomous Transportation (HEAT) research and development project explores the integration of an automated shuttle into the public transport of an urban environment, namely Hamburg's HafenCity in the immediate vicinity of the Elbphilharmonie [1]. Upon completion, the HafenCity will provide residences, workplaces, hotels, educational institutions, and entertainment establishments, resulting in an estimated 120,000 people per day within a tiny area [2]. This circumstance, paired with ongoing construction work, creates a fast and dynamically evolving environment with many different traffic participants, resulting in a multitude of unpredictable traffic situations. To safely transport up to nine passengers and navigate this dynamic waterfront, a comprehensive perception of the shuttle environment is crucial.
The project started in 2018. The trial operation of the HEAT shuttle is divided into three distinct phases on different sub-routes of increasing length and difficulty within the HafenCity. The routes consist of two-lane main roads, side roads, crossings with a concealed field of view, dedicated bike and bus lanes, and lay-bys, and are thus representative of the conditions and difficulties of urban environments. Phase 2 was successfully completed at the end of 2020 with eight weeks of validation and a four-week operation with passengers.
Taking the conditions of the HafenCity into account, IAV developed the data fusion algorithm for the environmental perception of the HEAT shuttle from scratch. Starting from an initial sensor setup, it was reasonable to expect this setup to change, both due to the nature of development projects and due to the different conditions of the different sub-routes. Consequently, IAV designed the algorithm to be independent of the specific sensor sources and highly scalable while being easily configurable at the same time. The latter was realized by using a modular design principle.

The sensor setup is introduced and the necessity of a highly efficient data fusion algorithm is motivated in section 2. Section 3 is devoted to the data fusion algorithm. Its performance is discussed in section 4, followed by a summary and conclusion in section 5.

2 How to observe the environment?

To safely navigate an urban environment, a comprehensive perception of the environment is necessary; this is particularly important for people movers such as the HEAT shuttle. To ensure this, a SOTIF (safety of the intended functionality) analysis was performed. The analysis includes, among other things, the reduction of potential blind spots as well as a safety-through-redundancy concept. The resulting sensor setup for phase 3 is:

- 1 front camera and 4 area view cameras
- 8 mid-range 3D Radar sensors
- 2 mid- and long-range (64-layer) LiDAR sensors
- 8 mid-range (16-layer) LiDAR sensors
- objects and traffic lights delivered by infrastructure sensors

The arrangement of the sensor setup is shown schematically in Fig. 2. The two long-range LiDAR sensors are mounted on the roof of the HEAT shuttle. Four mid-range LiDAR sensors are mounted on the bottom corners of the HEAT shuttle. Together they observe the mid- to far-distance area around the shuttle and are utilized for localization. The other four mid-range LiDAR sensors are mounted on the edges of the roof and are tilted downwards. They are used to observe the surroundings of the HEAT shuttle as well as for localization and lane detection; observing the surroundings is particularly important for detecting pedestrians and lanes in the immediate vicinity of the shuttle.
The Radar sensors are used to observe the mid-range distance. They work in a different wavelength region than the LiDAR sensors. Therefore, they have different detection capabilities and are influenced differently by adverse weather conditions, thus providing additional safety through redundancy. In addition, Radar sensors can provide some dynamic information about objects, in contrast to LiDAR sensors.
The front camera points in the driving direction and provides object and lane information in the near-to-mid-range region. The four area view cameras observe the direct vicinity of the HEAT shuttle and extract object information in this area.
All sensors presented so far are mounted on the HEAT shuttle itself. As a consequence, all of them are subject to occlusion, e.g. by parked trucks or corners of buildings. The latter is a particularly problematic situation at crossings where the HEAT shuttle turns. To mitigate this, the crossing infrastructures on the routes are equipped with Radar and LiDAR sensors that provide object data and communicate with the HEAT shuttle.

Fig. 2. Sensor setup of the HEAT shuttle in top and side view. Blue: cameras, green: Radar sensors, orange: LiDAR sensors. Far right: infrastructure unit.

This extensive sensor setup provides a comprehensive perception of the HEAT shuttle environment but generates an enormous amount of data, necessitating efficient data processing. To handle this data volume, IAV developed a novel and highly scalable algorithm that uses a minimal amount of computational resources, i.e. no graphics processing unit is necessary, and is able to process heterogeneous data.

3 Data processing

3.1 Existing sensor data fusion algorithms

The shuttle sensor setup generates raw point cloud data (LiDAR, Radar) as well as preprocessed objects (camera, infrastructure). Several fusion algorithms are present in the literature and used in the automotive sector [3]; they are shown in Fig. 3. The first approach is the low-level fusion of raw data. An example is the fusion of a LiDAR with short-range Radars for precrash applications [4]. The advantage is that the data is classified early on, most likely yielding valid results. Its disadvantages are the high data bandwidth demands as well as its poor scalability.
The second approach is data fusion on a feature level. A typical example is LiDAR-camera fusion [5, 6]. The idea is to extract features from the raw sensor data and fuse them. The advantage is a reduced data bandwidth compared to raw sensor fusion while retaining its classification and preprocessing capabilities [3].
The third approach is high-level data fusion. A typical example is the classical object fusion for an ACC [7] or for safety features [8]. Within this approach, each sensor provides preprocessed objects, which are then fused. The advantages are its modularity and simplicity. However, each sensor has less information available to extract the objects compared to the feature- and low-level fusion architectures. As a consequence, the classification is more challenging.
IAV provides a hybrid multilevel fusion approach combining the fusion of raw data with features and objects.

Fig. 3. Structure of different data fusion schemes, adapted from [3]. The hybrid scheme was
developed by IAV.

3.2 Basic algorithm structure


As shown in Fig. 3, IAV developed a data fusion algorithm processing heterogeneous data, e.g. features, raw sensor data, and preprocessed objects. The algorithm was designed using a modular principle. Discussing all possible options is beyond the scope of this paper; consequently, the specific implementation for the HEAT shuttle is presented.
The sensor setup of the HEAT shuttle provides raw sensor data and preprocessed objects, resulting in the basic structure of the algorithm shown in Fig. 4. It fuses raw sensor data with preprocessed objects. The algorithm is independent of the specific sensor sources and highly scalable while being easily adaptable and configurable at the same time.

Fig. 4. Flow chart of the HEAT shuttle data fusion algorithm.

The data processing uses an extended occupancy grid approach, which is introduced in 3.3. In a first step, the data of all point cloud sensors is fused with the occupancy grid using inverse sensor models that take the dynamic and meta information of each feature into account, followed by the calculation of the grid cell occupancies (see 3.4). Subsequently, the data of all object sensors is projected onto the occupancy grid (see 3.5). Afterwards, the occupancy grid of the current time step is fused with the occupancy grid of the last time step by performing a Bayesian update (see 3.6). In the next step, morphological operations can be applied to the occupancy grid, if necessary, and the grid cells are clustered (see 3.7). The final step is devoted to the tracking and creation of dynamic objects (see 3.8).

3.3 Occupancy grid representation of the environment


An occupancy grid is a well-known discrete representation of the continuous environment. It is an arrangement of single grid cells, each containing a measure to describe the environment. Various shapes are possible, e.g. polar or Cartesian grids, statically or dynamically sized grid cells.
In the context of the HEAT shuttle, a Cartesian grid consisting of rectangular grid cells is used. For the sake of computational simplicity, it was assumed that the grid cells are independent of each other. Each grid cell has a measure consisting of the occupancy as well as a dynamic layer. An occupancy occ of 0.5 means unknown, while occ → 1.0 means fully occupied and occ → 0.0 means free. The dynamic layer can be used to store e.g. object classification data, a drivability value, velocity, or road information.

During the operation of the HEAT shuttle, a Cartesian grid of 100 m × 100 m with grid cells of size 0.2 m × 0.2 m was used, resulting in ~250k grid cells to describe the environment.
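As a rough illustration of this representation, the following Python sketch allocates such a grid. It is a minimal assumed layout (plain NumPy arrays, one velocity layer standing in for the dynamic layer), not IAV's implementation:

```python
import numpy as np

# Grid extents as used during HEAT operation: 100 m x 100 m at 0.2 m resolution.
GRID_SIZE_M = 100.0
CELL_SIZE_M = 0.2
N = int(GRID_SIZE_M / CELL_SIZE_M)   # 500 cells per axis

occupancy   = np.full((N, N), 0.5)   # 0.5 = unknown, ->1.0 occupied, ->0.0 free
point_count = np.zeros((N, N))       # temporary measurement layer (section 3.4)
velocity    = np.zeros((N, N, 2))    # example dynamic layer: 2D velocity per cell

def world_to_cell(x, y, ego_x, ego_y):
    """Map a world coordinate to an index of the ego-centered grid."""
    ix = int((x - ego_x + GRID_SIZE_M / 2) / CELL_SIZE_M)
    iy = int((y - ego_y + GRID_SIZE_M / 2) / CELL_SIZE_M)
    if 0 <= ix < N and 0 <= iy < N:
        return ix, iy
    return None  # point lies outside the grid

print(occupancy.size)  # 250000 cells, matching the figure quoted in the text
```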

3.4 Point cloud sensor model


In this step, the point cloud sensor data, as provided by LiDAR and Radar sensors, is fused with the occupancy grid to obtain the occupancies of the grid cells. In short, the point cloud data is projected onto the occupancy grid using an inverse sensor model that takes the properties of the sensors, such as range accuracy or angular resolution, as well as the calibration quality and other sources of uncertainty into account. Additional projected information is e.g. the radial velocity provided by Radar sensors. In the HEAT shuttle, the inverse sensor model in use can be chosen and easily adapted by the operator.

In detail, each grid cell contains its occupancy and the dynamic layer as well as a temporary quantity describing the measurement. The latter was chosen to be a point count, which is proportional to the number of data points associated with the respective grid cell, in order to fuse the point cloud data of different sensors. To achieve this, an inverse sensor model is defined for each unique sensor, taking the properties of the sensor into account. An example of an inverse sensor model for a single data point with no dynamic information is shown in Fig. 5. However, the projection of dynamic information can be done in a similar fashion.

Fig. 5. Examples of an inverse sensor model of a point cloud source (left) and of an object projection model (right).

The exemplary sensor has a specific resolution of both the distance measurement and the angle determination. As a consequence, the measurement uncertainty can be visualized as an arc around the data point.
For each sensor, two quantities must be defined: the added point count value and how it is distributed inside the uncertainty area. The weighting of the different sensors is done by carefully choosing the absolute values of the added point count. The distribution function describes the properties of the point cloud sensor. The simplest case would be a uniform distribution, while more complex models can use more sophisticated functions. The example inverse sensor model in Fig. 5 shows a linear decrease of the point count towards the edges of the arc.
The occupancy of a grid cell with a non-zero point count is calculated via

$occ_m = 0.5 + 0.5 \cdot \tanh\!\left(\frac{\text{point count}}{N_c}\right)$  (1)

where $N_c$ denotes a normalization constant. The value of $N_c$ is determined by the chosen added point count values of the used point cloud sensors. The tanh function ensures that the occupancy will always be < 1 for reasonable point count and $N_c$ ratios.
For unoccupied grid cells, and thus with a point count of zero, the occupancy calculation should take into account the growing uncertainty with increasing distance. As a consequence, it is given by

$occ_m = \min\!\left(0.5,\; \lambda_{md} + (0.5 - \lambda_{md}) \cdot \frac{d}{D_c}\right)$  (2)

where $\lambda_{md}$ denotes the probability of a missed detection, $d$ denotes the distance to the ego vehicle, and $D_c$ denotes a normalization constant modelling the increase of the occupancy, and thus of the uncertainty, of an unoccupied grid cell with increasing distance. The minimum function ensures that the occupancy of an unoccupied grid cell will never exceed 0.5, i.e. the cell will never be considered occupied.
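Eqs. (1) and (2) translate directly into code. In the following sketch, the constants $N_c$, $\lambda_{md}$, and $D_c$ are the free tuning parameters of the model; the numerical values are invented for illustration:

```python
import numpy as np

N_C = 10.0       # normalization for the point count (example value)
LAMBDA_MD = 0.1  # probability of a missed detection (example value)
D_C = 80.0       # distance normalization in meters (example value)

def occ_from_point_count(point_count):
    """Eq. (1): occupancy of a cell with a non-zero point count."""
    return 0.5 + 0.5 * np.tanh(point_count / N_C)

def occ_empty_cell(distance):
    """Eq. (2): occupancy of an empty cell grows towards 0.5 with distance."""
    return np.minimum(0.5, LAMBDA_MD + (0.5 - LAMBDA_MD) * distance / D_C)

# A cell with many hits is almost certainly occupied ...
print(occ_from_point_count(30.0))   # ~0.9975
# ... while an empty cell far away tends back towards "unknown" (0.5).
print(occ_empty_cell(5.0), occ_empty_cell(200.0))  # ~0.125, 0.5
```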

3.5 Object sensor model


In this step, the object data is fused with the occupancy grid. In short, each object is projected onto the occupancy grid using an occupancy probability representation of the object.

In detail, IAV developed object sensor models which take into account the specifics of the object-providing sensor, such as field of view, accuracy, or classification capability. To project an object onto the grid, three quantities must be known: its dimension, its confidence, and how the confidence is distributed within the object. The dimension determines onto which grid cells the object is projected; the confidence determines how much trust is placed in the measurement. The confidence is transformed into an occupancy representation using

$occ = 0.5 + 0.5 \cdot f(\text{confidence})$  (3)

where $f(\text{confidence})$ is a function with a codomain of [0, 1) describing how the confidence is distributed within the object. This function is chosen to model the object sensor characteristics. The trivial case is the uniform distribution, used if nothing is known about the sensor.
A non-trivial example is shown in Fig. 5. A car is inside the field of view of an area view camera located on the left side of the HEAT shuttle and is thus observed by it. The camera provides the car as a preprocessed object enriched with meta information. The front left corner of the car is in the direct line of sight of the camera; as a consequence, the confidence has its maximal value there. Additionally, the camera can observe parts of the front bumper as well as of the left side of the car. In contrast, the camera cannot observe the far side of the car, resulting in a decreased confidence there.
To update the occupancy of the associated grid cells, the confidence distribution function is used to calculate an occupancy at each grid cell position. The latter is then fused with the occupancy of the grid cell using a Bayesian update rule similar to the one discussed in section 3.6.
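Eq. (3) is equally compact in code. The linear falloff in the example below is an assumed confidence distribution for illustration, not the one used in the project:

```python
def occ_from_confidence(confidence, distribution=lambda c: c):
    """Eq. (3): map an object confidence in [0, 1) to an occupancy in [0.5, 1).

    `distribution` models how the confidence is spread over the object's
    footprint; the identity default corresponds to the trivial uniform case
    described in the text.
    """
    return 0.5 + 0.5 * distribution(confidence)

# Example: falloff from the corner facing the camera (factor 1.0) to the
# occluded far side of the car (factor 0.4) -- the factors are invented.
for visibility in (1.0, 0.7, 0.4):
    print(occ_from_confidence(0.9 * visibility))  # 0.95, 0.815, 0.68
```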

3.6 Bayesian occupancy update and ego motion compensation


To fuse the occupancy grid of the current time step $t$ with the one from the last time step $t-1$, a compensation of the ego motion of the shuttle as well as of the observed dynamic objects and of grid cells with non-zero dynamic information is necessary.

In detail, a one-to-one mapping between the grid cells of both occupancy grids is performed. In a first step, all grid cells associated with a dynamic object at $t-1$ are motion compensated. Let $C_o$ be a grid cell with center point $\vec{r}$ associated with a dynamic object $O$ which has a velocity $\vec{v}$. Then the properties of $C_o$ are shifted to the grid cell at the position $\vec{r}^{\,\prime}$ via

$\vec{r}^{\,\prime} = \vec{r} + \vec{v}\,\Delta t$  (4)

where $\Delta t$ is the time difference between the two time steps. $C_o$ is then treated as unoccupied and its occupancy is calculated using Eq. (2).

Subsequently, each grid cell $C_t^i$ at time $t$ and position $\vec{r}^{\,\prime}$ is fused with a grid cell $C_{t-1}^j$ at time $t-1$ and position $\vec{r}$ if Eq. (4) is satisfied with $\vec{v}$ being the velocity of the shuttle. The grid cells are fused using a Bayesian update rule, taking Eqs. (1) and (2) into account [9]:

$occ(C_t^i) = occ_m(C_t^i) \cdot \left((1-\varepsilon) \cdot occ(C_{t-1}^j) + \varepsilon \cdot \left(1 - occ(C_{t-1}^j)\right)\right)$,  (5)

$free(C_t^i) = \left(1 - occ_m(C_t^i)\right) \cdot \left(\varepsilon \cdot occ(C_{t-1}^j) + (1-\varepsilon) \cdot \left(1 - occ(C_{t-1}^j)\right)\right)$.  (6)

$\varepsilon$ is a transition probability accounting for approximation errors. $free(C_t^i)$ describes how free, and therefore unoccupied, the grid cell is. To obtain the occupancy of the grid cell $C_t^i$, the results of Eqs. (5) and (6) must be normalized to ensure $occ(C_t^i) + free(C_t^i) = 1$.
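A minimal sketch of Eqs. (5) and (6) including the final normalization, with an invented value for the transition probability $\varepsilon$, shows how repeated occupied measurements accumulate evidence:

```python
EPSILON = 0.02  # transition probability for approximation errors (example value)

def bayesian_update(occ_meas, occ_prev):
    """Eqs. (5) and (6) followed by the normalization occ + free = 1."""
    occ  = occ_meas * ((1 - EPSILON) * occ_prev + EPSILON * (1 - occ_prev))
    free = (1 - occ_meas) * (EPSILON * occ_prev + (1 - EPSILON) * (1 - occ_prev))
    return occ / (occ + free)

# An occupied measurement (0.9) pushes an unknown cell (0.5) towards occupied,
# and repeated measurements accumulate evidence over the time steps.
occ = 0.5
for _ in range(3):
    occ = bayesian_update(0.9, occ)
    print(round(occ, 3))  # 0.9, 0.986, 0.998
```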

3.7 Morphological operations and clustering of grid cells


After the occupancy grid has been created, morphological operations such as opening or closing can be applied. They can be used to improve the quality of the obtained data, if necessary.
To extract objects and other data from the dynamic grid, the grid cells are first clustered using a connected-component analysis-based approach. The approach was chosen for its simplicity combined with high performance.
The basic idea is the following: start with a grid cell that is sufficiently occupied and use it as the condensation nucleus of a new cluster. Then check the Moore neighborhood of this grid cell for other sufficiently occupied grid cells and add them to the cluster. Afterwards, the Moore neighborhood of these added grid cells is checked. These steps are repeated until no further sufficiently occupied grid cell is found. However, the occupancy of a grid cell is not the only possible criterion. The dynamic layer of the grid cells can provide further clustering criteria, e.g. clustering with respect to classification, dynamic state, or velocity information is also possible.
The advantage of this grid cell-based clustering approach is that the typical number of grid cells per cluster is much lower than the typical number of points in a LiDAR point cloud-based clustering. It is therefore more efficient from a computational perspective.
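The clustering step can be sketched as a breadth-first connected-component search over sufficiently occupied cells. The occupancy threshold below is an assumed value, and the sketch uses the occupancy criterion only, ignoring the dynamic-layer criteria mentioned above:

```python
from collections import deque
import numpy as np

OCC_THRESHOLD = 0.7  # "sufficiently occupied" (example value)

def cluster_grid(occ):
    """Connected-component clustering with a Moore (8-cell) neighborhood."""
    visited = np.zeros(occ.shape, dtype=bool)
    clusters = []
    for seed in zip(*np.where(occ >= OCC_THRESHOLD)):
        if visited[seed]:
            continue
        queue, cluster = deque([seed]), []
        visited[seed] = True
        while queue:                      # breadth-first growth from the seed
            i, j = queue.popleft()
            cluster.append((i, j))
            for di in (-1, 0, 1):         # Moore neighborhood of the cell
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < occ.shape[0] and 0 <= nj < occ.shape[1]
                            and not visited[ni, nj]
                            and occ[ni, nj] >= OCC_THRESHOLD):
                        visited[ni, nj] = True
                        queue.append((ni, nj))
        clusters.append(cluster)
    return clusters

occ = np.full((6, 6), 0.5)
occ[1:3, 1:3] = 0.9   # one object ...
occ[4:6, 4:6] = 0.8   # ... and a second, disconnected one
print([len(c) for c in cluster_grid(occ)])  # [4, 4]
```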

3.8 Object tracking


To estimate the dynamic state of objects, IAV employs an interacting multiple model (IMM) approach. In the following, a short introduction to the IMM is given. A more comprehensive introduction as well as its derivation are beyond the scope of this manuscript; the interested reader is referred to e.g. [10].

Fig. 6. Flow chart of the interacting multiple model approach with two Kalman filters.

In general, the IMM algorithm combines an arbitrary number of filter models, which allows for a more precise tracking and prediction of the dynamic state of objects compared to a single filter. This improved precision is achieved at the cost of a higher computational demand.
However, following a minimal-resources approach, IAV opted for two filters in the context of the HEAT shuttle. To be more precise, the IMM used combines two extended Kalman filters using the constant velocity (CV) and the constant turn rate and velocity (CTRV) motion models. The CV model is well suited to describe the straight movement of objects but is less effective at describing a track with a high curvature, while the opposite holds for the CTRV model.

The flow diagram is visualized in Fig. 6. $\vec{x}$ denotes the state vector, $\vec{\mu}$ the conditional model probability vector, and $\lambda$ the model likelihood. For the CV model the state vector is defined as $\vec{x}_{CV} = (x, y, \varphi, v)^T$, where $x$ and $y$ are the position, $\varphi$ is the driving direction, and $v$ is the absolute velocity. The state vector of the CTRV model is similarly defined as $\vec{x}_{CTRV} = (x, y, \varphi, v, \omega)^T$, where $\omega$ denotes the angular velocity. The CV model state transition is given by (with $\Delta t$ being the time difference between the two time steps):

$x(t) = x(t-1) + v\cos(\varphi)\,\Delta t$
$y(t) = y(t-1) + v\sin(\varphi)\,\Delta t$.

The CV model assumes zero acceleration; as a consequence, both $\varphi$ and $v$ do not change. For the CTRV model the state transition is given by:
$x(t) = x(t-1) + \frac{v}{\omega}\left(\sin(\varphi + \omega\,\Delta t) - \sin(\varphi)\right)$
$y(t) = y(t-1) + \frac{v}{\omega}\left(\cos(\varphi) - \cos(\varphi + \omega\,\Delta t)\right)$
$\varphi(t) = \varphi(t-1) + \omega\,\Delta t$.


Similar to the CV model, the CTRV model assumes zero acceleration, and therefore $v$ and $\omega$ do not change. However, in both models the acceleration is implicitly taken into account via the process noise matrix.
The initial step involves mixing the states using the conditional model probabilities $\vec{\mu}$. The resulting mixed states $\vec{x}_i^m(t-1)$ with $i$ = CV, CTRV are used as inputs for the respective Kalman filters. These filters are then updated using the measurement $\vec{z}(t)$. Based on the measurement, a likelihood $\lambda_i$ is assigned to each filter. The conditional model probabilities are then updated using these likelihoods. In a final step, an estimate of the overall state is computed.
Following the modular design principle, different parameter sets are available for the Kalman filters representing the objects, e.g. different sets based on the object class or size.
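The two state transitions translate into the following sketch; it covers only the deterministic motion part and omits the covariance propagation, the process noise, and the IMM mixing and likelihood steps:

```python
import math

def predict_cv(x, y, phi, v, dt):
    """CV motion model: straight movement at constant speed and heading."""
    return x + v * math.cos(phi) * dt, y + v * math.sin(phi) * dt, phi

def predict_ctrv(x, y, phi, v, omega, dt):
    """CTRV motion model: constant speed on a circular arc (omega != 0)."""
    if abs(omega) < 1e-9:                 # degenerates to the CV model
        return predict_cv(x, y, phi, v, dt)
    x_new = x + v / omega * (math.sin(phi + omega * dt) - math.sin(phi))
    y_new = y + v / omega * (math.cos(phi) - math.cos(phi + omega * dt))
    return x_new, y_new, phi + omega * dt

# A vehicle at 10 m/s: CV keeps it on a straight line, CTRV bends the track.
print(predict_cv(0.0, 0.0, 0.0, 10.0, 1.0))          # (10.0, 0.0, 0.0)
print(predict_ctrv(0.0, 0.0, 0.0, 10.0, 0.5, 1.0))   # (~9.59, ~2.45, 0.5)
```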

4 Performance evaluation

To assess the performance of the data fusion algorithm, the average run time was measured as a function of the number of input dynamic objects and point cloud data points.
The computation time for the projection of an input object is directly proportional to its size. Therefore, a truck or bus will produce a higher computational load than a car or a pedestrian. However, in a typical urban scene, the number of trucks and buses will be much smaller than the number of cars and pedestrians. Consequently, and for the sake of simplicity, the input dynamic objects were chosen to be moving cars with a size of 4.5 m × 2.5 m and standard deviations of the length and width of 1.0 m and 0.75 m, respectively.
The point cloud data points were a mixture of Radar and LiDAR data points. To resemble the shuttle sensor setup, the ratio of LiDAR to Radar data points was set to 200:1. The dynamic grid had a size of 100 m × 100 m with a grid cell size of 0.2 m × 0.2 m. The evaluation of the data fusion algorithm was performed with the HEAT shuttle implementation, in a single thread on an i7-8850H processor.

Fig. 7. Average run time as a function of the number of input objects and point cloud data points (in 1000 counts). The semi-transparent plane is the fit to the data, given in Eq. (7).

A fit to the generated data yields

$\text{run time} = 28\,\text{ms} + \frac{3\,\text{ms}}{10000} \cdot \#\text{counts} + 0.25\,\text{ms} \cdot \#\text{objects}$  (7)

and is shown in Fig. 7. It shows a linear increase of the average run time with an increasing number of point cloud data points for a constant number of input dynamic objects, and vice versa. It should be noted that the absolute numbers depend on the hardware used.
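Eq. (7) can be used directly as a quick estimator of the cycle budget on comparable hardware; the input numbers below are arbitrary examples:

```python
def expected_run_time_ms(n_points, n_objects):
    """Eq. (7): fitted average run time on the i7-8850H test setup."""
    return 28.0 + 3.0 * n_points / 10_000 + 0.25 * n_objects

# E.g. 100,000 point cloud points and 40 dynamic objects:
print(expected_run_time_ms(100_000, 40))  # 68.0 ms per fusion cycle
```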
As described in sections 3.4 and 3.5, each sensor source is treated independently, but all sources are fused with the same occupancy grid. Because of this independence, it is straightforward to add sensors to or remove sensors from the data fusion algorithm. As a consequence, the algorithm is highly adaptable to a changing number and type of sensor sources.
All input dynamic objects are directly fused with the occupancy grid. As a consequence, it is irrelevant whether $N$ input objects are provided by 1, 2, or $N$ different sensors. The situation is similar for the point cloud sensors. They are all directly fused with the occupancy grid via the point count values of the occupancy grid cells, which are directly related to their occupancies. Therefore, the performance of the data fusion algorithm is independent of the number of sensors providing the input data. It depends only on the size of the input data, i.e. the number of input dynamic objects and point cloud data points. This demonstrates the high scalability of the data fusion.
The performance of the algorithm, and thus the average run time, could be improved further by utilizing the multithreading capabilities of the CPU or by employing a GPU.

5 Summary and conclusion

In summary, IAV presented a novel approach to efficiently fuse the data of a large number of different sensors. The approach was motivated by the development of the HEAT shuttle and successfully applied during its operation in Hamburg's HafenCity.
The data fusion algorithm was presented specifically for the sensor setup of the HEAT shuttle, which provides raw point cloud sensor data as well as preprocessed objects. However, the algorithm is not bound to these two types of sensor data and can be applied to other sources of information as well, such as road markings, map information, or light signals.
The data fusion algorithm is highly scalable and easily adaptable while running with minimal computational resources, i.e. without the support of a graphics processing unit. Its application is not restricted to shuttles; it can also be used in e.g. automated valet parking or infrastructure sensor data fusion.

References
1. HOCHBAHN HEAT project page, https://www.hochbahn.de/hochbahn/hamburg/de/Home/Naechster_Halt/Ausbau_und_Projekte/projekt_heat
2. Hamburg HafenCity Homepage, https://www.hafencity.com/de/ueberblick/daten-fakten-zur-hafencity-hamburg.html, last accessed 2021/02/08
3. Aeberhard, M., Kaempchen, N.: High-level sensor data fusion architecture for vehicle surround environment perception, Proc. 8th Int. Workshop Intell. Transp., 665 (2011)
4. Pietzsch, S., Vu, T. D., Burlet, J., Aycard, O., Hackbarth, T., Appenrodt, N., Dickmann, J., Radig, B.: Results of a precrash application based on laser scanner and short-range Radars, IEEE Transactions on Intelligent Transportation Systems 10(4), 584-593 (2009)
5. Kaempchen, N., Buehler, M., Dietmayer, K.: Feature-level fusion for free-form object tracking using laserscanner and video, IEEE Intelligent Vehicles Symposium, 453-458 (2005)
6. Mählisch, M., Schweiger, R., Ritter, W., Dietmayer, K.: Sensorfusion using spatio-temporal aligned video and LiDAR for improved vehicle detection, IEEE Intelligent Vehicles Symposium, 424-429 (2006)
7. Takizawa, H., Yamada, K., Ito, T.: Vehicles detection using sensor fusion, IEEE Intelligent Vehicles Symposium, 238-243 (2004)
8. Floudas, N., Polychronopoulos, A., Aycard, O., Burlet, J., Ahrholdt, M.: High level sensor data fusion approaches for object recognition in road environment, IEEE Intelligent Vehicles Symposium, 136-141 (2007)
9. Nègre, A., Rummelhard, L., Laugier, C.: Hybrid sampling Bayesian occupancy filter, IEEE Intelligent Vehicles Symposium Proceedings, 1307-1312 (2014)
10. Genovese, A.: The interacting multiple model algorithm for accurate state estimation of maneuvering targets, Johns Hopkins APL Technical Digest 22(4), 614-623 (2001)
Digital Twin to design and support the usage of
alternative drives in municipal vehicle fleets

Sven Spieckermann¹, Josef Becker², Markus Henrich³, and Thomas Schulte⁴

¹ SimPlan AG, Sophie-Scholl-Platz 6, D-63452 Hanau, Germany
² Frankfurt University of Applied Sciences, Nibelungenplatz 1 - HoST, D-60318 Frankfurt, Germany
³ Hanau Infrastruktur Service, Hessen-Homburg-Platz 5, D-63452 Hanau, Germany
⁴ Hanauer Straßenbahn GmbH, Daimlerstraße 5, D-63450 Hanau, Germany

Abstract. This paper describes a research project conducted in conjunction with Hanauer Straßenbahn GmbH (HSB), the urban operator of public bus transportation in the city of Hanau, and with Hanau Infrastruktur Service (HIS), which operates the garbage collection trucks in the same city. The integration of vehicles powered by alternative forms of energy such as batteries or fuel cells instead of combustion engines into bus and truck fleets imposes new challenges on regional or urban fleet operators like HSB and HIS. Questions arise about an appropriate mix of fuel types in the fleet, about applicable refueling concepts, and about consequences for the infrastructure of (existing) depots. Furthermore, vehicle scheduling and tour planning might be impacted by the varying ranges of vehicle types using different fuels. This challenge becomes even harder in heterogeneous fleets. The objective of the project was to develop digital twins for the vehicle operation of the HSB and the HIS and to use these twins to assess future scenarios of fleets composed of differently powered vehicle types. The paper describes the as-is situation at HSB and HIS and the development of the digital models, and provides some results.

Keywords: Digital Twin, Public Vehicle Fleets, Fuel Cells, Batteries.

© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_2

1 Introduction

For several years now, reducing the environmental impact of mobility has clearly been a political objective at the national and regional levels in Germany ([1], [2]). For municipal governments and local operators of public vehicle fleets, this translates into the task of integrating battery electric or fuel cell electric vehicles into their operations.
However, substituting a substantial part of a fleet comprising only fossil-fueled vehicles with battery electric or fuel cell vehicles imposes several challenges on the fleet operators. These challenges comprise e.g. refueling strategies (charging strategies in the case of battery electric vehicles) and questions of how to organize and potentially refurbish depots to cope with these strategies. Furthermore, routes or schedules served by combustion engine vehicles may have to be adjusted due to the different (typically smaller) ranges of battery electric vehicles. In addition, the investment for battery electric and fuel cell vehicles is currently comparatively high, which rules out simply replacing a whole existing fleet of combustion engine vehicles at once. Instead, a sequence of steps towards a zero-emission operation over a period of several years will be quite common and needs to be thoroughly planned. The stepwise conversion of a vehicle fleet in turn results in the necessity of operating mixed vehicle fleets, possibly comprising vehicles with fossil fuel, batteries, and fuel cells.
All these constraints lead to a setting where it is sensible and consistent to evaluate future scenarios utilizing available digital technologies such as digital twinning, simulation, and optimization. Making use of digital scenarios of their operation was exactly the objective of the two fleet operators Hanauer Straßenbahn GmbH (HSB) and Hanau Infrastruktur Service (HIS), responsible for public buses (HSB) and garbage collection trucks (HIS) in the city of Hanau in the federal state of Hesse, Germany, when they joined a research project together with the technology provider SimPlan AG (SimPlan) and the Frankfurt University of Applied Sciences (FRA UAS). Coming from the current state with only diesel-powered vehicles, the main subject of the project was the development of digital twins for the processes of both fleet operators, allowing future scenarios with different fuel types in the fleet to be assessed. The perfect support for this joint undertaking was provided by a grant from the Hessian Ministry of Economics, Transport, Urban and Regional Development (HMWVL) as part of its initiative to promote electric mobility ([5]).
This paper presents some of the work conducted by the partners in the project named SimCityNet. The remainder of the paper is organized as follows. In section 2 the as-is situations of the two fleet operators HSB and HIS are explained in more detail. Section 3 introduces the models developed to support decision making with respect to different fleet mix scenarios, and section 4 presents some of these scenarios, including initial and preliminary results. The paper concludes with a summary and an outlook on the still ongoing project activities and beyond.

2 Fleet operation at HSB and HIS

HSB and HIS are both striving to integrate battery electric and fuel cell vehicles in the best possible way into their future operations. As a starting point, the as-is situation of both operators is described in the following two subsections.

2.1 HSB - Operating buses in Hanau

The HSB runs a fleet of about 65 diesel buses in the city of Hanau. 40% of the vehicles are articulated buses, the remaining 60% are standard ones. A timetable with twelve bus lines is offered to the public, with operation starting on some lines before 5 a.m. and continuing until past midnight, with cycles of 15, 20, or 30 minutes depending on the line, the time of day, and the weekday. In addition, buses are dispatched for school traffic around 7 a.m. and around midday. HSB today operates a single depot, located near the main train station of the city. The depot is currently neither equipped to charge many battery electric buses simultaneously nor to refuel fuel cell vehicles, i.e. it would have to be significantly refurbished if it were to host the respective vehicles. Until recently, the HSB had gained some experience in testing the handling and operation of single electric buses, but not yet with the large-scale integration of battery electric or fuel cell buses into the fleet.
As is quite common for timetable-driven traffic systems, the HSB faces the so-called vehicle scheduling problem (VSP) [3]. Trips, defined by the lines and the timetable, need to be assigned to vehicles and combined into so-called schedules. This needs to be done in such a way that each trip is covered exactly once and that the resulting sequence of trips per vehicle is feasible for the respective vehicle (i.e. each vehicle must have enough time to get from the endpoint of one trip to the starting point of the next trip). The HSB uses a well-proven and established software suite (init MOBILE PLAN, software vendor init SE) to support daily operation and to compute the vehicle schedules. However, the limited range of battery electric buses imposes additional restrictions on the VSP which until recently have not been considered in many solution approaches ([6], [10]). Quite typically, most of the schedules computed for diesel buses are not feasible for battery electric buses. This is also true for the schedules the HSB uses and thus needs to be considered in the scenarios discussed in section 4; a toy illustration is given below.
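The following toy check illustrates the range restriction. The trip distances and vehicle ranges are invented numbers, and the check is far simpler than the actual VSP heuristic, which must also respect timetable feasibility and recharging opportunities:

```python
# Toy example: a schedule as a list of (trip_id, trip_distance_km).
schedule = [("T1", 18.0), ("T2", 22.5), ("T3", 19.0), ("T4", 24.0), ("T5", 21.0)]

def feasible(schedule, vehicle_range_km):
    """Check whether the total distance of a schedule stays within range."""
    return sum(dist for _, dist in schedule) <= vehicle_range_km

print(feasible(schedule, 600.0))  # True: a diesel bus covers this easily
print(feasible(schedule, 90.0))   # False: a battery bus must be recharged
                                  # or the schedule must be split
```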

2.2 HIS – Operating Garbage Collection Trucks in Hanau

The HIS operates a fleet of sixteen garbage collection vehicles with a type-dependent capacity between 8.7 and 11.7 tons. Four different types of waste, so-called fractions, need to be collected: general (RM), paper and cardboard (PPK), plastic (LVP), and organic (BIO). Nine of the vehicles are equipped to collect general waste, four can load paper and cardboard, three can collect organic waste, and again four can pick up plastic. This implies that not all vehicles can carry every fraction and that some vehicles are equipped to collect more than one fraction. However, fractions must not be mixed, i.e. on every given collection tour one vehicle is assigned to the collection of one fraction. Whenever the capacity of the collection vehicle is exhausted, and in any case at the end of a tour, it needs to head for one of three disposal sites outside of Hanau.
The number of customers (companies, households, etc.) the HIS serves depends on the fraction and varies between roughly 13,000 and 17,000; the number of collected bins ranges from 14,000 to 20,000. Again depending on the fraction, and to a certain extent also on customer preferences, the bins are collected in cycles of 7, 14, or 28 days. Service times are between 6 a.m. and 2:30 p.m. or 2 p.m., depending on the weekday. A typical tour of a collection vehicle comprises around 500 customers and 600 bins; the sketch below shows what this implies for disposal trips.
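A back-of-the-envelope calculation indicates how often a tour may be interrupted by disposal trips. The average waste weight per bin is an assumed value for illustration, not project data:

```python
BINS_PER_TOUR = 600
AVG_BIN_WEIGHT_T = 0.03   # assumed 30 kg of waste per bin (illustrative)
TRUCK_CAPACITY_T = 10.0   # within the stated 8.7-11.7 t capacity range

collected = BINS_PER_TOUR * AVG_BIN_WEIGHT_T          # 18 t over the tour
disposal_trips = -(-collected // TRUCK_CAPACITY_T)    # ceiling division
print(collected, int(disposal_trips))                 # 18.0 t -> 2 trips
```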
It is quite common to decompose the planning process for garbage collection operators into several steps ([14]). For each fraction, the customers are divided into districts so that all customers in each district can be served on a single workday. The breakdown into districts needs to consider the length of the resulting tours, the number of containers per tour, and the estimated weight of the collected waste in order to achieve a reasonably balanced workload per district. The decomposition into districts is a mid-term planning task, since customers expect a collection schedule to remain unchanged for, e.g., the next year or at least for a couple of months. The computation of tours within given districts has to be done on a day-to-day basis. It is a short-term problem because the conditions within a district are constantly changing: customers are added or removed, road works may have to be considered, and the amount of waste differs from cycle to cycle, causing variations in the number of visits to the respective disposal site.
In parallel to evaluating the impact of battery electric or fuel cell vehicles in the fleet, the HIS had a project running on re-shaping the collection districts in Hanau. Since the results of this project were not yet available for the modelling activities in the SimCityNet project, it was decided to work with today's districts and to assess the assignment of alternative vehicles to today's tours.

3 Simulation and Optimization Models for HSB and HIS

The modelling and digitization activities in the project were twofold in a double sense: models had to be designed and implemented both for the bus operation of the HSB and for the garbage collection tours of the HIS; and it took simulation models to assess future fleet operations as well as optimization models to support vehicle scheduling and tour planning for future scenarios.

3.1 Models for bus operation at HSB

As a means to assess the impact of changes in fleet mixes and of changed schedules, a simulation model of the bus operations in Hanau was implemented using the simulation software package AnyLogic. Reasons for the choice of this tool were its high customizability with Java code and libraries, its support for agent-based modelling as well as conventional discrete event simulation, and its rich geographic information system module.
Fig. 1 presents a screenshot of the visualization of the simulation model. It depicts the (virtual) bus operation of the HSB with around 50 buses on track on a Monday morning according to the timetable. This visualization makes it easier to follow the processes in the model, e.g. by indicating the power type of each vehicle simply by using different colors or by displaying the remaining range of the vehicles (highly relevant for battery electric buses).
The simulation model allows an easy modification of several parameters. It is e.g. possible to set up different bus models (manufacturer, name, range, battery or fuel cell capacity, standard vs. articulated, additional heating) and to define, based on this model specification, how the bus fleet is composed for a dedicated scenario. The impact of the temperature on the range can be specified, as can the recharging speed of batteries depending on the state of charge. The available infrastructure for charging and hydrogen refueling at the depot, with respect to the number of charging or refueling points or the usable electrical power, may be set up. And finally, the schedules have to be connected with the simulation model. This can either be done by importing schedules generated with the init MOBILE PLAN software application, by importing manually modified schedules, or by invoking a heuristic specifically developed for the scheduling problem of the HSB.

Fig. 1. Visualization example of simulated bus traffic in Hanau

This heuristic and the underlying mathematical model adapt and combine vehicle scheduling approaches from the literature (e.g. [9], [10], [13]) for the situation at the HSB. For the fleet with diesel engines as it is today, the heuristic approach produces results almost identical to the schedules the HSB currently has in use. And for mixed fleet scenarios, it allows a fast calculation of alternative schedules adjusted to the (shorter) ranges of battery electric and fuel cell buses.

3.2 Models for garbage collection at HIS

From a technical perspective, the modelling approach for the HIS is quite similar. An AnyLogic simulation model including a visualization on the map of Hanau (Fig. 2) is available for an assessment of the vehicle operations, and a mathematical model plus a heuristic solution procedure are available for automatic tour calculation. The visualization shows the garbage collection trucks operating in several districts for a waste fraction (in the case of Fig. 2 it is general waste), and it displays the remaining range of each vehicle as well as the fuel type.
However, as a closer look at Fig. 2 reveals, there are some differences in the details. Firstly, the simulation model and the visualization are comprehensive in the sense that each location of a waste bin is marked on the map. And as briefly mentioned, the collection districts differ from day to day and from fraction to fraction with a cycle of at most four weeks. Hence, the application allows switching between weeks ("Woche") and weekdays ("Wochentag").

Fig. 2. Visualization example of simulated garbage collection in Hanau

The simulation model for the HIS also has some additional parameters on top of the ones described in section 3.1. For each vehicle, it needs to be set up which fractions may be collected. And for the collection process itself, settings on the amount of waste and the service time per bin need to be made. Here, constant values may be used, but random numbers are also offered, bringing some stochastic variation into the simulation model; a minimal example is sketched below.
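Such stochastic bin parameters can be modelled with simple random draws; the distributions and all numbers below are illustrative assumptions, not the values used in the project:

```python
import random

random.seed(42)

def sample_bin():
    """Draw the waste amount (kg) and service time (s) for one bin."""
    weight_kg = max(0.0, random.gauss(30.0, 10.0))   # assumed normal distribution
    service_s = random.triangular(15.0, 60.0, 25.0)  # assumed min, max, mode
    return weight_kg, service_s

print([tuple(round(v, 1) for v in sample_bin()) for _ in range(3)])
```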
During each simulation run, several performance indicators are collected, such as the utilization of each vehicle, the distance driven per vehicle, the amount of waste collected, tours to a loading station, and tours to disposal sites. Fig. 3 presents an example of the tour distances per vehicle, weekday, and fraction. The codes starting with "HU-" are the IDs (i.e. the number plates) of the vehicles.
The mathematical model for the tour planning of the garbage collection vehicles is designed closely along the lines of the model presented by Tirkolaee et al. [12]. The implemented solution procedures are based on heuristic approaches by Golden and Wong [4] (opening procedure) and Muyldermans and Pang [8] (improvement procedure). However, it became apparent that a major part of the mathematical modelling effort starts earlier and lies in adequately handling specificities such as u-turns (typically not allowed for garbage collection trucks and hence to be prevented in the models), one-sided or two-sided bin collection (with two-sided collection being applied, e.g., for one-way streets, but for other types of streets as well), or the processing of remote bins (i.e., bins not directly positioned at a passable street). The modelled network covers all these aspects.
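One common way to handle the u-turn restriction, sketched below under simplifying assumptions, is to plan on arc-to-arc transitions instead of node-to-node moves, so that the reverse of the arc just traversed is simply excluded from the allowed successors. The tiny street graph and all names are hypothetical.

# Forbid u-turns by enumerating allowed arc-to-arc transitions:
# arcs are directed street segments (u, v); the u-turn (u, v) -> (v, u)
# is excluded from the successors of each arc.

arcs = [("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"), ("C", "A"), ("A", "C")]

def allowed_transitions(arcs):
    outgoing = {}
    for u, v in arcs:
        outgoing.setdefault(u, []).append((u, v))
    return {(u, v): [a for a in outgoing.get(v, []) if a != (v, u)]
            for u, v in arcs}

for arc, successors in sorted(allowed_transitions(arcs).items()):
    print(arc, "->", successors)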
Overall, the implemented heuristic approach makes it possible to compute realistic garbage collection tours. In connection with the simulation model, it also provides an option for dynamically re-calculating tours that have been interrupted by unforeseen visits to a deposit site.

Fig. 3. Simulated tour distances per vehicle, fraction, and weekday

3.3 Digital Twinning

The preceding two sections presented simulation models and mathematical models.
These types of models are an essential part of so-called digital twins, as steps 6 and 7
in Fig. 4 indicate.
However, it takes many more steps to turn (simulation) models into digital twins. First of all, it takes a permanent connection and interaction between the digital "sibling" and the real world. Via this connection, data collected during operation in the real world is transferred into the digital world in an appropriate way (with the discussion of what appropriate means in that context being beyond the scope of this article). In the digital world, the data is used to execute model-based analyses, with the results of these analyses being transferred back to improve real-world operations.

Such an integrated, automated digital twin is not within the reach of the SimCityNet project. So, in the sense of the three categories "digital model" (manual data exchange between physical and digital objects), "digital shadow" (automatic data transfer from physical to digital objects and manual transfer back), and "digital twin" (automatic data exchange between physical and digital objects) presented in Kritzinger et al. [7], the HSB models may, in their current state, be classified as a digital shadow, whereas the HIS models are rather in the state of a digital model.
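The taxonomy of Kritzinger et al. [7] can be restated compactly as a function of the two data-flow directions, as the following snippet shows; the flag names are ours, the classification itself is theirs.

def twin_category(auto_to_digital, auto_to_physical):
    """Data-flow based classification following Kritzinger et al. [7]."""
    if auto_to_digital and auto_to_physical:
        return "digital twin"    # automatic exchange in both directions
    if auto_to_digital:
        return "digital shadow"  # automatic only towards the digital object
    return "digital model"       # manual exchange in both directions

print(twin_category(True, False))   # the HSB models, roughly: digital shadow
print(twin_category(False, False))  # the HIS models, rather: digital model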

Fig. 4. Real-world, simulation, and digital twin [11]

The reason is that data preprocessing and transfer for the HIS models is to a large extent manual. The data for the HSB models, however, is (semi-)automatically derived from the data in the HSB's IT systems. An automated transfer of analytical results from the digital models back into the real-world IT is currently not intended. However, it should be noted that the main purpose of the digital models in the HSB case and the HIS case is to provide mid-term decision support. It may very well be that the rather technology- and interface-motivated understanding of digital twins as discussed in this section is not fully appropriate for this kind of model application.

4 Scenarios and Initial Results

At the time of the preparation of this paper, the work on the project was still a couple of months away from being finished, and these last months are exactly the project phase dedicated to the numerical assessment of different scenarios. Hence, this section presents an overview of the planning scenarios the fleet operators want to have assessed and a few initial, still preliminary calculations.

Table 1 presents an overview of the fleet mix scenarios the two operators want to investigate further. Both have clear yet different preferences. The HSB does not expect battery electric buses alone to be a preferable technology for the future but sees bigger potential in fuel cells. The situation for the HIS with respect to battery electric vehicles is even more difficult. While the number of battery electric bus models on offer is increasing, the selection of battery electric garbage collection vehicles is still limited; the vehicles of potential vendors are still in a preliminary test phase. An additional challenge is imposed by the relatively high power consumption of the vehicles during the collection phase and – specifically in the HIS set-up – the one-way distances of 30 km to 40 km between the collection districts in Hanau and the disposal sites. All these constraints, in combination with the larger range of fuel cell vehicles, lead to the HIS preferences shown in Table 1. Nonetheless, the HIS wants to use the model to understand what a fleet with only battery electric garbage collection vehicles would mean for its processes.

Table 1. Fleet mix preferences of HSB and HIS

Fleet Mix                            HSB    HIS
100% Battery                          -      o
100% Fuel Cell                        o      +
Mix Diesel + Battery                  o      o
Mix Diesel + Fuel Cell                o      +
Mix Battery + Fuel Cell               +      -
Mix Diesel + Battery + Fuel Cell      +      -

Other planning parameters HSB and HIS want to consider in the scenarios are:

• Depot capacity for parallel charging / refueling of the fleet as a percentage of the fleet size. A value of e.g. 50% would mean that 50% of the battery electric vehicles may be charged at the same time.
• Battery capacity usage, defining a threshold that should not be exceeded when charging and not be fallen below when operating the vehicle. A value of e.g. 15% would mean that during a normal charging and operation cycle the state of charge of the battery should remain between 85% (= 100% - 15%) and 15% (see the sketch after this list).
• In the case of the collection vehicles, the assignment of fractions to vehicles is another relevant setting. In that respect, HIS foresees a scenario similar to the as-is situation: vehicles may be assigned to more than one fraction, in some cases to only one, but typically to all four fractions.
• For the buses, scenarios with and without a loss in range due to the use of battery energy for heating should be evaluated.
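The battery capacity usage parameter translates into a simple state-of-charge window. The following sketch, with hypothetical names, shows the check a simulation step might perform:

def soc_window(threshold_pct):
    """Battery capacity usage threshold -> allowed state-of-charge window in %."""
    return threshold_pct, 100.0 - threshold_pct

def soc_ok(soc_pct, threshold_pct):
    lower, upper = soc_window(threshold_pct)
    return lower <= soc_pct <= upper

print(soc_window(15.0))    # -> (15.0, 85.0), i.e. a usable window of 70 points
print(soc_ok(90.0, 15.0))  # -> False: charging above 85% would violate the setting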

The different fleet mixes combined with the other parameters lead to several dozen simulation configurations for the HSB and the HIS. As already indicated, at the time this paper was prepared, the project was still ongoing. Hence, only some initial calculations can be presented here.

These initial optimizations and simulations led to results like the one displayed in Fig. 5. It shows the number of required buses for the diesel fleet as it is today and the corresponding results for a pure fuel cell, a pure battery electric, and a mixed fleet. The ranges assumed were 500 km for diesel buses, 350 km for fuel cell buses, and 180 km for battery electric buses. Considered were solely the operations on a Monday, which is the peak day of the week for the HSB. The bus lines remained unchanged, but the schedules were of course re-calculated considering the ranges and the re-fueling times of the buses in the fleet.

Fig. 5. Impact of fleet mix on number of buses

One rather obvious explanation for the results is that using short-range instead of long-range buses leads to shorter schedules. These shorter schedules, in combination with the re-charging times, increase the number of required vehicles: a vehicle rotation of, say, 250 km can be covered by a single 350 km bus, but a 180 km bus has to hand over to a second vehicle or pause for recharging. Apparently, this effect sets in for ranges below 200 km, whereas a range of 350 km allows operating very much like today. This result is not necessarily an objection against a certain type of power but rather against vehicles with insufficient ranges. What came a bit unexpected for the project team is that a fleet mix with an (almost) equal share of diesel, fuel cell, and battery electric buses shows no increase in the needed number of vehicles. This indicates that intelligent combinations of vehicles and schedules may allow the integration of buses with lower ranges. And it demonstrates how useful these virtual scenarios are in avoiding surprises in real-world fleet operations.
However, as already pointed out, these computations have to be considered preliminary. The available results are still undergoing careful validation and in-depth analysis, which is also the reason for deliberately not showing concrete values on the y-axis of the chart in Fig. 5. More results are going to follow as the preferences and parameters introduced at the beginning of this section are investigated.

5 Summary and Outlook

This paper presented a comprehensive simulation and optimization approach for bus
and garbage collection truck operations of the HSB and the HIS in the city of Hanau.
The modeling approaches allow a detailed investigation of fleet mix scenarios and are
a step towards a digital twin of the fleet operations. Open issues and tasks with re-
spect to digital twinning were discussed in the paper. Finally, the presented prelimi-
nary results underline how challenging the integration of electric vehicles into exist-
ing fleets is.
This paper also demonstrates the extent of work that is still ahead on several levels in integrating fuel cell and battery electric vehicles into fleet operations. On the level of the presented project, several more results have to be generated and more analyses have to be conducted for HSB and HIS over the next couple of months to support decision making for the city of Hanau. On a technical and modeling level, there is also room for improvement with respect to optimization approaches and the corresponding solution procedures, with respect to the integration of simulation and optimization, and last but not least with respect to digital twinning. And on the level of zero-emission fleet operations, it will be interesting to follow the progress towards technically and economically improved vehicles. Models to virtually support and assess this progress are ready to be used.

References
1. Ammermann, H., Ruf, Y., Lange, S., Fundulea, D., Martin, A.: Fuel cell electric buses -
potential for sustainable public transport in Europe, Roland Berger GmbH, Munich, Sep-
tember 2015.
2. BMUB - Federal Ministry for the Environment, Nature Conservation, Building and Nucle-
ar Safety Homepage, https://www.bmu.de/fileadmin/Daten_BMU/Pools/Broschueren/
klimaschutzplan_2050_en_bf.pdf, last accessed 2021/04/06.
3. Bunte, S., Kliewer, N.: An overview on vehicle scheduling models. Public Transport 1(4), 299–317 (2009).
4. Golden, B., Wong, R.: Capacitated arc routing problems. Networks, 11(3), 305–315
(1981).
5. HMWVL - Hessian Ministry of Economics, Transport, Urban and Regional Development,
Homepage for Electric Mobility, https://wirtschaft.hessen.de/verkehr/elektromobilitaet/
foerderung-der-elektromobilitaet-hessen, last accessed 2021/04/06.
6. Jefferies, D., Göhlich, D.: A comprehensive TCO evaluation method for electric bus systems based on discrete-event simulation including bus scheduling and charging infrastructure optimisation. World Electric Vehicle Journal 11(3), 56 (2020).
7. Kritzinger, W., Karner, M., Traar, G., Henjes, J., Sihn, W.: Digital Twin in manufacturing:
A categorical literature review and classification, IFAC PapersOnLine 51(11), 1016–1022
(2018).
8. Muyldermans, L., Pang, G.: A guided local search procedure for the multi-compartment
capacitated arc routing problem, Computers & Operations Research 37(9), 1662-1673
(2010).

9. Rinaldi, M., Parisi, F., Laskaris, G., D'Ariano, A., Viti, F.: Optimal dispatching of electric
and hybrid buses subject to scheduling and charging constraints. In: 21st International
Conference on Intelligent Transportation Systems (ITSC), pp. 41-46, Maui, HI, USA,
(2018).
10. Rogge, M., van der Hurk, E., Larsen, A., Sauer D.: Electric bus fleet size and mix problem
with optimization of charging infrastructure. Applied Energy 211, 282-295 (2018).
11. Schele, M., Kühn, M.: Vorsprung durch Digitale Zwillinge. Smart Engineering 2017(5),
12-15 (2017).
12. Tirkolaee E., Goli A., Weber GW., Szwedzka K.: A novel formulation for the sustainable
periodic waste collection arc-routing problem: a hybrid multi-objective optimization algo-
rithm. In: Golinska-Dawson P. (ed.) Logistics Operations and Management for Recycling
and Reuse. EcoProduction (Environmental Issues in Logistics and Manufacturing).
Springer, Berlin, Heidelberg (2020).
13. Wen, M., Linde, E., Ropke, S., Mirchandani, P., Larsen, A.: An adaptive large neighbor-
hood search heuristic for the Electric Vehicle Scheduling Problem. Computers & Opera-
tions Research 76, 73-83 (2016).
14. Wøhlk, S., Laporte, G.: A districting-based heuristic for the coordinated capacitated arc
routing problem. Computers & Operations Research 111, 271-284 (2019).
Paving the way for trust in autonomous vehicles –
How OEMs, authorities and certifiers can foster
customers’ trust in AVs
Nils Köster and Torsten-Oliver Salge
RWTH Aachen University, Institute for Technology and Innovation Management (TIM),
Kackertstr. 7, 52072 Aachen, Germany

Abstract. Trust will be a decisive factor for customer acceptance of autonomous vehicles (AVs)—but how can it be built? Our research project provides initial
answers to this question. We identify mechanisms that AV providers, authorities
and certifiers could use to build trust and test them with German, Chinese and
U.S. car drivers. Our comparison shows that German and U.S. customers have
similar preferences for trust-building mechanisms, but the preferences of Chinese
users seem to differ significantly. For example, technical protection measures
seem to be far less important for the trust of Chinese users, but the protection
measures of the legislators and the ratings of other users play an even greater role
for trust-building. However, differences do not only emerge between the different
countries; different user segments also vary in their trust preferences within the
markets studied.
AV providers should be aware of these different user segments when prioritiz-
ing trust measures and can, for example, develop segment-specific strategies for
building trust based on typical personas.
Our findings can offer points of departure for practitioners who already today want to make informed decisions on how to build trust in AVs. We present practical examples of the measures many players are already using to build users' trust in AVs. This article further calls on OEMs, legislators and certifiers to work together to build user trust in AVs so that the benefits of autonomous mobility can become a reality.

Keywords: Autonomous vehicles, trust, customer acceptance, certification, regulation, artificial intelligence

1 Introduction

Even though autonomous vehicles (AVs) are still some years down the road, it is already clear that trust will be a decisive factor for customer acceptance [1]. Only when customers trust self-driving cars will the benefits of AVs, such as low-cost driverless mobility services, reduced accident rates and more efficient use of road space [2], become reality. In recent years, many development breakthroughs have been celebrated, such as successful pilots with robo shuttles on public roads [3]. At the same time, however, the first fatal accidents involving AVs were reported [4]. Customers
will therefore pay close attention to whether they can really entrust an AV with full
responsibility for their safety. AV providers will have to actively build this trust. How-
ever, legislators and authorities, as well as certifiers (such as TÜV, ISO and DEKRA),
will also play important roles.
How do customers rate trust in AVs, and how can this trust be strengthened? How exactly can OEMs (original equipment manufacturers; here: car manufacturers) and mobility service providers, regulators and certifiers build it? How does this differ internationally and between different user groups? We investigated these questions in a research project at RWTH Aachen University (partly already published in [5]; further scientific articles currently being published), which combined surveys, interviews and experiments.

2 How customers find trust in AVs

As of today, 45% of German survey respondents would trust a self-driving car, as can be seen in Figure 1. We were able to determine this in a survey with a total of over
800 participants. The more familiar respondents are with assistance systems already
available, the more likely they are to trust an AV. Interestingly, respondents from me-
dium-sized cities seem to be particularly inclined to trust AVs. In fact, medium-sized
cities could be good environments for self-driving cars, as the relief in everyday mo-
toring would be greater than in smaller cities with low traffic density, whilst traffic
situations might be more manageable than in significantly larger cities.
Differences emerge for different modes, as Figure 2 illustrates. Older respondents,
in particular, would have even higher trust in vehicles that were fully autonomous than
in those that were partially autonomous and required their supervision and interaction.
Overall, trust in robo taxis is lowest—respondents seem to be particularly skeptical
about being driven by an unknown AV on their own. A different picture emerges, how-
ever, when other passengers are on board. Respondents would actually trust a robo
shuttle shared with other passengers more than their own private AV.
What exactly, though, apart from potential peer effects, can help customers find trust
in an AV so that they are willing to use it? Even before they start their journeys, cus-
tomers must decide whether they want to entrust the AV with responsibility for their
safety. To trust an AV, customers need to be assured of an AV's functionality and reli-
ability, but should also find its driving maneuvers transparent and comprehensible. To
support this, authorities and certifiers can implement trust-building mechanisms.

[Figure 1 shows the share of respondents agreeing with the question "If your own car could drive fully autonomously, would you trust it?" Overall: 45%. By age: <30 years 44%, 30-49 years 43%, >49 years 53%. By driving skill level: low 32%, medium 45%, high 56%. By residence (thousand inhabitants): <50: 32%, 50-100: 31%, 100-400: 71%, >400: 57%. By familiarity with driving assistance systems: low 32%, medium 34%, high 80%. Source: RWTH Aachen study "Trust in AVs", 2020; total n for analysis: 214.]

Fig. 1. Respondents who had previous experience with driver assistance systems would be especially likely to trust autonomous cars.

[Figure 2 shows the share of respondents agreeing with the question "Would you have trust in the autonomous/automated vehicle?", by automation level / mobility mode and age group:

Automation level / mobility mode    Overall   <30 years   30-49 years   >49 years
Level 3, own car                       42        41           48            39
Level 5, own car                       45        44           43            53
Robo taxi                              33        33           38            29
Robo shuttle                           46        42           62            47

Source: RWTH Aachen study "Trust in AVs", 2020; total n for analysis: 807.]

Fig. 2. Older respondents would trust a private AV the most, but overall, respondents would trust robo shuttles the most.

3 How to build trust in AVs

3.1 Trust-building mechanisms AV providers, authorities and certifiers may implement
What would trust-building mechanisms that are visible, understandable and compelling
[6] to the AV user look like? Through interviews with experts from OEMs and certifiers
and from research and consulting, we identified important trust-building mechanisms
that are currently debated in the industry. Figure 3 provides a detailed overview of
mechanisms that the experts believe are essential for building customer trust. Below,
we present a few of the most important trust-building mechanisms.
Technical elements that can increase customer trust in AVs of course include
measures that strengthen the reliability of AVs by minimizing security risks, for exam-
ple, through redundant systems and multi-level intervention capabilities. Furthermore,
the functionality of on-board sensing and environment detection capabilities can be
strengthened through vehicle-to-X communication with, for example, other cars and
traffic infrastructure. Another important trust-building mechanism will be to make the
functions of the AVs comprehensible in the first place, that is, to make the driving ma-
neuvers transparent, so that the passenger—who often has decades of driving experi-
ence—trusts the AV and is convinced that he or she would have taken the same decision
in a traffic situation. However, AV providers should not only build customer trust
through the technical features, but also clearly highlight their experience in developing
AVs. It should be signaled that everything has been done to make the AVs as trustwor-
thy as possible. Key figures such as test kilometers covered without disengagement in
autonomous driving will be important signals in this context for comparing the trust-
worthiness of different AV manufacturers.
Legislators and authorities will also play important roles in building trust. Of course,
the appropriate legal framework must be created to clarify liability issues as much as
possible. Legislators should also create clear approval criteria, however, such as a cen-
tral catalog of traffic situations that AVs must master and against which their safety is
measured. This catalog could then be used, for example, by local licensing authorities
to determine what requirements AV mobility services must fulfill according to the traf-
fic conditions in their city. Furthermore, the authorities could enforce a cross-manufac-
turer evaluation of critical driving data, which would increase the safety of AVs and
public trust in AVs. On one hand, critical driving data could be evaluated in the event
of accidents by reading out the black boxes. On the other hand, authorities could man-
date the continuous, anonymized transmission of safety-critical data points such as fre-
quency of manual interventions by the control center.
AVs will pose new challenges for certifiers such as TÜV and DEKRA. To assess the
safety of AVs, today's main inspections (Hauptuntersuchung) will not be sufficient.
However, independent certifiers can become important pillars in building trust in AVs
if they develop comprehensive audits for testing AVs, become involved in the devel-
opment process and establish visible trust signals such as star ratings.

Especially at the beginning of the adoption of AVs, reviews from other users could
also play a major role in building customer trust, as has been the case with other mo-
bility innovations such as ride hailing.

Fig. 3. AV providers, authorities and certifiers may increase trust in AVs through a range of trust-
building mechanisms, as the expert interviews suggested.

3.2 Effectiveness of trust-building mechanisms for customers in Germany, the U.S. and China
To quantify how effective such trust-building mechanisms would be, conjoint ex-
periments were conducted with representative samples (in terms of age and gender) for
Germany, the U.S. and China. In pair-wise comparisons, fictitious AVs with potential
configurations of trust-building mechanisms were presented, and respondents were
asked to choose the one they would trust more. Similar results emerged for Germany
and the U.S., as Figure 4 shows. Technical protection and a clear emphasis on the
OEM's development experience are, as expected, particularly important for trust-build-
ing. Ratings from other users, in contrast, play a subordinate role. Slight differences
between the markets are noticeable. For German users, regulation in the approval of
AVs and the liability of the OEM are particularly decisive for their trust. U.S. users, on
the contrary, seem to gain more trust from investigations by certifiers than from legis-
lation. For Chinese respondents, legislation was the most critical factor. This is plausi-
ble in light of the stronger enforcement power of authorities in China. Conversely, Chi-
nese users do not seem to attach much importance to whether the AV provider could
demonstrate many years of experience. A lack of a comprehensive track record in AV
development would hardly diminish trust. This could be due to the fact that many Chi-
nese OEMs have only emerged in recent years; Chinese customers may place less value
on traditional brands than German and U.S. customers. The biggest difference between
the Chinese and western markets, however, was the high relevance of user ratings.
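As a rough sketch of how such relative importances can be derived from pair-wise choices, the code below fits a simple binary choice model to difference-coded attribute levels and normalizes the resulting part-worths. The data, attribute coding and weights are invented for illustration and do not reproduce the study's actual conjoint design.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row codes the attribute-level differences between two fictitious AVs
# over five trust signals; y = 1 if the respondent chose the first profile.
rng = np.random.default_rng(0)
X = rng.integers(-1, 2, size=(500, 5)).astype(float)  # -1, 0, +1 per signal
true_w = np.array([1.2, 0.8, 0.9, 0.9, 0.4])          # invented preference weights
y = rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_w))

model = LogisticRegression().fit(X, y)
partworths = np.abs(model.coef_.ravel())
importance = 100.0 * partworths / partworths.sum()    # normalized to 100%
print(dict(zip(["tech", "provider", "legal", "certifier", "social"],
               importance.round(1))))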

[Figure 4 shows bar charts of the relative importance of the five trust-building signals – technical protection, provider protection, legal protection, certifier protection, and social protection – in percent for Germany, the U.S., and China. Source: RWTH Aachen study "Trust in AVs", 2020; Germany: n = 202; USA: n = 202; China: n = 216.]

Fig. 4. Effectiveness in trust-building—similar results in Germany and the U.S., and higher relevance of user ratings in China.

There were significant differences not only between the various national markets, but also between different user groups, as can be seen in Figure 5. To illustrate this, we present two particularly distinctive segments each for Germany and China. In Germany, the largest user segment identified can be labelled "tech-oriented control seekers." This group attaches great importance to technical protection measures. A high degree of transparency, for example through voice or display information explaining the AV's driving maneuvers, is quite important to these users. Concerns that AVs are associated with a loss of driving pleasure are particularly high in this segment. The segment of "provider-oriented optimists," in contrast, is significantly more open to the potential benefits of AVs and less critical of potential risks. Users in this segment are mostly drivers of premium brands. When choosing an AV they could trust, this user group would primarily be attracted to OEMs that are among the pioneers of the technology. These users are less interested in dealing with the technical details of the AV themselves, but would rather rely on a full AV certification.
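Segments of this kind are typically obtained by clustering respondents on their individual importance vectors. The sketch below illustrates the principle with k-means on invented data; it is not the segmentation procedure used in the study.

import numpy as np
from sklearn.cluster import KMeans

# Invented per-respondent importance vectors over the five trust signals
# (tech, provider, legal, certifier, social); each row sums to 100.
rng = np.random.default_rng(1)
raw = rng.gamma(shape=2.0, scale=1.0, size=(202, 5))
X = 100.0 * raw / raw.sum(axis=1, keepdims=True)

segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit(X)
for k in range(4):
    members = X[segments.labels_ == k]
    share = 100.0 * len(members) / len(X)
    print(f"segment {k}: {share:.0f}% of users, "
          f"mean profile {members.mean(axis=0).round(1)}")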

[Figure 5 shows radar charts of the relative importance of the trust-building signals (technical, provider, legal, certifier, social) for the user segments identified in each market, together with each segment's share of users. Germany: "tech-oriented control seekers" (36%), "provider-oriented AV optimists" (22%), "regulation-oriented users" (21%), "certifier-oriented safety skeptics" (20%). U.S.: "tech- and certifier-oriented safety users" (51%), "regulation-oriented driving enthusiasts" (29%), "regulation-skeptical AV optimists" (20%). China: "indifferent users" (43%), "young urban trust outsourcers" (37%), "reputation seekers" (20%). Source: RWTH Aachen study "Trust in AVs", 2020; Germany: n = 202; USA: n = 202; China: n = 216.]

Fig. 5. Different user segments can be identified in terms of trust preferences.

As described above, the ratings of passengers who already have experience with an AV play a major role for Chinese users. This is particularly pronounced for the segment of "reputation seekers" (see Figure 5), for whom the absence of user ratings would virtually be a no-go. The opinion of other users appears to be quite important to this segment, which could also explain why many of these users would see a high prestige gain in being among the early adopters of AVs. The segment of "young, urban trust outsourcers" is characterized by high incomes and a comparatively low interest in driving themselves; accordingly, this segment has an affinity for AVs. Since they have relatively little car experience, these potential AV users would mainly rely on certification by certifiers. Moreover, for this user group, which mainly lives in metropolitan areas, strict liability of the OEM in case of accidents is particularly important. For OEMs, the user segments we identified in each market can be an important starting point to understand different trust preferences. Thus, AV providers can derive typical personas from each user segment to develop targeted trust-building strategies. Figure 6 shows an example.

[Figure 6 shows an example persona, "Sandra" (44, service operations manager at a telco in Duesseldorf, married, two kids, mainly driving a 2018 VW Tiguan), derived for the German segment of "tech-oriented control seekers", together with the segment's characteristics (socio-demographics, personal car preferences, car attitude, ADAS experience, willingness to switch to an AV, and perceived benefits and risks of AVs). For Sandra, technical protection such as backup sensors and computers and the transparency of the AV's driving maneuvers are of utmost importance; she sees little value in the convenience benefits of AVs and, compared to other segments, would be most concerned about a loss of driving pleasure. Quote from the user interviews: "I need to know what's inside the car in terms of technology, sensors and environment recognition and so on." Source: RWTH Aachen study "Trust in AVs".]

Fig. 6. Targeted trust-building strategies can be derived from personas.

4 Implications for OEMs and mobility service providers

Our study shows that, already today, almost one in two people would trust AVs. How-
ever, in order to achieve even broader, sustainable acceptance of AVs, manufacturers
and AV providers must consistently build trust along the dimensions of functionality,
reliability and transparency.
How transparency, for instance, can be created in practice is demonstrated by examples such as Baidu's test vehicles [7]: the robo taxis inform their passengers in the back seat about upcoming driving maneuvers via a display.

Segmenting potential AV customers by trust preferences can help AV providers to


prioritize those segments that are a good fit for their products and capabilities, and re-
veals options for differentiation between providers. For example, premium OEMs such
as Audi, BMW or Mercedes-Benz could be well positioned to serve the needs of "tech-
oriented control seekers" by emphasizing the extensive technical safety features they
integrate into AVs and giving passengers the freedom to switch to a manual driving
mode if they wish. In contrast, tech players such as Waymo or Uber, which are new to
the automotive industry but among the pioneers in testing AVs on the road, should
emphasize this advantage to appeal to "provider-oriented optimists" with their planned
autonomous mobility services. "Attacker OEMs," such as Chinese OEMs that do not
yet have extensive histories, can compensate for this in the international market by
working closely with certifiers to build users' trust. On the other hand, segmentation can be used to identify white spots, that is, current gaps between the trust-building mechanisms that the target segments want and those the manufacturers have implemented.
In their collective interest, however, AV providers should not only compete for the best technology and the most extensive testing procedures, but also collaborate on a joint definition of common standards. This would allow customers to better identify leading providers and develop greater trust in AVs. The PAVE initiative, for example, is a collaboration of OEMs such as VW, Ford and Daimler; tech players such as AutoX and Deepen AI; and certifiers such as SAE. The initiative pursues the goal of increasing AV safety awareness and public trust in the technology [8]. One such measure could be, for example, the establishment of a common rating platform where prospective AV users could read about the experiences of other users and find statistics on the safety of different AVs. According to the presented research findings, such a platform could increase trust in AVs, especially in the Chinese market.

5 Implications for certifiers

As this research project shows, certifiers, such as ISO, TÜV, SAE or DEKRA, will
be important enablers of customer trust in AVs. In all three markets investigated, ex-
aminations by independent certifiers were strong sources of trust for potential AV cus-
tomers. In the U.S., for example, such approvals were perceived as even more important
than regulation. Especially for customer segments that were unable—or unwilling—to
look more closely at the technical details or the reputation of the manufacturers, inde-
pendent certificates were a key source of trust.
The findings indicated that there was a large segment of customers in China who
were affluent and open to autonomous driving, but had little experience with the tech-
nical details of cars and wanted to outsource the decision on whether to trust the AV to
certifiers. In this respect, it seems only logical that the major German certifiers such as
DEKRA [9] are building up large testing capacities in China and could thus also be-
come partners of new Chinese AV producers, as described above.
From the customer's point of view, certifications were particularly trustworthy not
only if the finished AV was put to the test as a black box, but also if the certifier was
involved in the development process at an early stage. This is in line with the initiatives
of TÜV, DEKRA and Co., which are building up new testing capabilities and develop-
ment support capacities on a large scale. In practice, this could also mean more frequent
main inspections and, above all, regular software checks, as demanded, for example,
by the TÜV association [10]. Thus, in the interplay between providers, authorities and
users, certifiers could take on a central role in building trust in AVs by acting as inde-
pendent trust centers that could control access to vehicle data, for example, in the event
of malfunctions or accidents [11].

6 Implications for policy

AV customers expected legislators to play active roles in building their trust. Authori-
ties should continuously check vehicle updates from OEMs and centrally track and
evaluate malfunctions and accidents. Authorities need to be involved at an early stage
of the technology development. For example, they can support testing under real-world
conditions by setting up AV test zones. The city of Beijing recently established its third AV test zone [12].
a catalog of traffic scenarios that AVs must be able to handle flawlessly, will be im-
portant in this regard. In practice, international public collaboration can also be seen,
such as the planned cross-border test zone in Germany, France and Luxembourg [13].
Notably, our results show that legal protection can be even more effective in building
trust than the measures taken by AV providers themselves. Therefore, AV providers
would be well advised to not oppose strict AV regulation (which might seem intuitive),
but rather to support appropriate initiatives. This is exactly what the Goggo Network
[14] is working towards. Similar to licensing in mobile communications, the initiative
calls for the public award of AV licenses to mobility providers. Through defined trust
standards, a high user acceptance for AVs and investment security for AV developers
should be achieved.

References

1. Hengstler M, Enkel E, Duelli S (2016) Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change 105:105–120
2. Gkartzonikas C, Gkritza K (2019) What have we learned? A review of stated
preference and choice studies on autonomous vehicles. Transportation Research
Part C: Emerging Technologies 98:323–337
3. White J (2020) Waymo opens driverless robo-taxi service to the public in Phoe-
nix. Reuters. https://www.reuters.com/article/us-waymo-autonomous-phoenix-
idUKKBN26T2Y3. Accessed 27 Jan 2021
4. Wakabayashi D (2018) Self-Driving Uber Car Kills Pedestrian in Arizona,
Where Robots Roam. New York Times. https://www.ny-
times.com/2018/03/19/technology/uber-driverless-fatality.html. Accessed 21
Apr 2021

5. Koester N, Salge T-O (2020) Building Trust in Intelligent Automation: Insights into Structural Assurance Mechanisms for Autonomous Vehicles. Forty-First International Conference on Information Systems, India
6. Connelly BL, Certo ST, Ireland RD et al. (2011) Signaling theory: A review and
assessment. Journal of Management 37:39–67
7. Baidu (2020) Building a self-driving car that people can trust. Technology Re-
view. https://www.technologyreview.com/2020/12/16/1014672/building-a-self-
driving-car-that-people-can-trust/. Accessed 03 Feb 2021
8. Stewart L, Musa M, Croce N (2019) Look no hands: self-driving vehicles' pub-
lic trust problem. World Economic Forum. https://www.wefo-
rum.org/agenda/2019/08/self-driving-vehicles-public-trust/. Accessed 03 Feb
2021
9. DEKRA (2020) DEKRA Extends Global Testing Network for Automated and
Connected Driving in China. https://www.dekra.com/en/dekra-extends-global-
testing-network-for-automated-and-connected-driving-in-china/. Accessed 03
Feb 2021
10. TÜV Süd (2020) Homologation of Automated Vehicles: The Regulatory Chal-
lenges. https://bit.ly/3sDgU8T. Accessed 09 Mar 2021
11. Meyer C (2020) Hauptuntersuchung 2.0: So will der TÜV in Zukunft E-Autos
und autonome Fahrzeuge prüfen. Business Insider. https://www.busi-
nessinsider.de/wirtschaft/hauptuntersuchung-2-0-so-will-der-tuev-autos-in-zu-
kunft-pruefen/. Accessed 03 Feb 2021
12. Xinhua (2020) Beijing's 3rd self-driving test zone greenlighted.
http://www.xinhuanet.com/english/2020-11/13/c_139514385.htm. Accessed 03
Feb 2021
13. Bundesministerium für Verkehr und digitale Infrastruktur, Ministère de la Tran-
sition écologique, Le gouvernement du Grand-Duché de Luxembourg (2018)
Franco-German-Luxemburgish cooperation on automated and connected dri-
ving. Bundesministerium für Verkehr und digitale Infrastruktur; Ministère de la
Transition écologique; Le gouvernement du Grand-Duché de Luxembourg.
https://bit.ly/2Pea8sM. Accessed 03 Feb 2021
14. Kugoth J (2019) Lizenz zum autonomen Fahren. Tagesspiegel. https://back-
ground.tagesspiegel.de/mobilitaet/lizenz-zum-autonomen-fahren. Accessed 03
Feb 2021
Building the bridge between automotive SW engineering
and DevOps approaches for automated driving SW
development

Dr. Detlef Zerfowski 1, Dr. Sergey Antonov 2, Christof Hammel 3

1 ETAS GmbH, 70649 Stuttgart, Germany
2 ETAS Ltd., Hospital Fields Rd, York YO10 4DZ, UK
3 Robert Bosch GmbH, 74232 Abstatt, Germany

Abstract. In the past two decades, the automotive industry established its own dedicated SW engineering approach, especially for control-algorithm-based, deeply embedded SW. This approach is based on well-defined SW requirements specifications, often taking into account dedicated hardware designs, whereby both are largely fixed at the start of development. With the recent developments in the automated driving domain, the industry faces new challenges arising with the introduction of µProcessor-based control units and data-driven algorithms. In this paper, we describe a new approach in the domain of automated driving for

• Efficient utilization of high computing performance
• Managing the complexity of the software design
• Re-use of SW from different sources
• Reaching functional safety goals
• Efficient and fast development cycles
• Efficient simulation-based validation of the system

Keywords: Automated driving, automotive middleware, functional safety, SOTIF (ISO/PAS 21448), development tool chains.

1 Automotive SW development meets IT SW development

In the past two decades, the automotive industry established an automotive-specific SW engineering approach. This approach is based on well-defined SW requirements specifications for dedicated, strongly coupled hardware specifications at the start of development. Consequently, SW engineers designed, implemented, verified, and validated the ECU-specific software following a widespread V-model approach.
With increasing computing performance within vehicles' E/E architecture, a strong paradigm shift is happening. Automotive software is increasingly decoupled from hardware [1,2]. Therefore, middleware solutions gain more and more importance and allow software engineering solutions and methodologies from the IT domain to enter the development of in-vehicle software solutions. Managing and integrating an increasing amount of SW coming from different sources (even from outside the automotive domain) into automotive solutions becomes a key success factor. New fast, efficient, and
reliable development approaches need to be established within the automotive world


(like Agile and modern DevOps methodologies). Especially in the data-driven domain
of highly automated driving (HAD), conventional automotive SW engineering meth-
odologies are no longer adequate.

1.1 Need for highly efficient development performance


The decoupling of SW from the hardware allows the introduction of new development paradigms into the automotive domain. Figure 1 gives a high-level overview of the suggested development cycle in the ADAS/AD domain, supporting a highly iterative process along with data-driven development.
From the very beginning of the architectural design and development phase of AD fea-
tures, their implementation is kept abstracted and independent of the underlying target
hardware and Operating System. In the next step, algorithms are mapped to, or as we
call it ‘deployed on’, computing resources and the executable code is built for dedicated
targets. It is important to note that these potential targets are not restricted to in-vehicle
ECUs.
In case the executable is built for an in-vehicle target ECU, the software runs in real time while driving the vehicle. During software development, the AOS middleware provides efficient measurement access APIs that allow data and corresponding meta-information to be captured with high performance in real driving situations. The huge amount of recorded data can be stored offline in a data-lake for later use in highly accurate and reliable re-play and re-compute within a virtual environment.
With the above-described data collection cycle (outer cycle in Figure 1), we enable a much faster development cycle (inner cycle in Figure 1), which does not require repeating the real driving. This is achieved by deploying and building the application software for a virtual environment, e.g. a server cluster, using the earlier collected data from the data-lake. Being able to use cloud-based target builds, automated tests can scale within the simulated environment – verification and validation tests can be performed much faster, leading to much quicker developer feedback loops and providing validation evidence without costly and time-consuming real test drives.

Figure 1: Fast development cycle.



Within classical deeply embedded development, which typically is a real-time-driven software domain, one might be of the opinion that a simulated environment is not able to deliver reproducible results during re-play and re-compute simulations. This is true as long as the timing behavior is tightly coupled with the underlying compute platform. The AOS platform removes this dependency by introducing a platform-agnostic activation of the application components; this means the design of the application compute graph does not need to know about the underlying operating system. Another central mechanism of AOS is event-driven triggering, which allows the activation of application components upon the arrival of new sensor data. Therefore, delays such as latency due to statically scheduled time frames (see Figure 2 on the right) can be avoided. On the other hand, time-wise predictability can be guaranteed much more easily within the time-driven approach, which plays to its strengths in trajectory planning and control tasks. Figure 2 shows a comparison of these two triggering strategies and suggests a smart combination of both mechanisms to fit the characteristics of the ADAS/AD application software.

Figure 2: Event- or data-driven (left) vs time-driven (right) approach.
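The latency effect of the two triggering strategies can be illustrated with a small sketch (hypothetical names and period, not the AOS API): an event-driven component starts as soon as a sample arrives, while a time-driven one waits for its next statically scheduled activation slot.

import math

FRAME_MS = 20.0  # hypothetical period of a statically scheduled time frame

def event_driven_start(arrival_ms):
    return arrival_ms  # the arriving data itself activates the component

def time_driven_start(arrival_ms):
    return math.ceil(arrival_ms / FRAME_MS) * FRAME_MS  # wait for next frame

for arrival in (3.0, 19.0, 21.0):
    e, t = event_driven_start(arrival), time_driven_start(arrival)
    print(f"sample at {arrival:5.1f} ms -> event-driven start {e:5.1f} ms, "
          f"time-driven start {t:5.1f} ms (added latency {t - e:.1f} ms)")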

1.2 Challenges of functional safety requirements


In the past, functional safety requirements have been addressed by ECU-specific development approaches. Safety argumentations are usually based on safety-element-out-of-context ASIL implementations for software components (e.g. a basic software stack according to AUTOSAR Classic) and freedom-from-interference considerations. On the application software level, safety case argumentations for temporal freedom from interference are typically based on the time-deterministic behavior of the software system, realized by static scheduling tables and supported by the underlying basic software. With an increasing number of time slices in safety applications, verification and validation become increasingly complex and time consuming, and eventually come into conflict with requirements on system availability.
Modern high-performance computing systems together with modern SW engineering approaches stand in conflict with this restrictive time-synchronous communication.

Software features dynamically distributed across several ECUs in a service-oriented architecture paradigm will require a functional safety argumentation that cannot be made out of context, separately from the involved ECUs. In particular, purely time-deterministic requirements will become a barrier to introducing data-driven algorithms into evolving functional-safety-relevant domains. As already mentioned, time-cyclic activation and communication results in latency challenges, and trying to improve latency by using increased sample rates leads in turn to a higher computational footprint. Therefore, we propose an extension towards event-driven communication as an approach to address this challenge.

1.3 The validation challenge


Complete in-vehicle system validation covering corner cases in real test drives turns out to be unrealistic with the growing complexity of the systems. Obvious reasons for this are:
1. In-vehicle validation only scales with the number of available test vehicles and test drivers, i.e. it does not scale well.
2. Corner cases will hardly occur during test drives and could be extremely dangerous for the test driver (Figure 3 shows a few examples with rough probabilities of encountering these or similar events during real driving cycles). Test drivers would be endangered as soon as the corner cases approach the physical limits of the vehicle or the driving scenario. Corner cases in real life would not be reproducible at all, especially in public traffic areas.

Figure 3: The validation challenge - system validation with corner cases.

Consequently, the validation of this kind of software-based system requires simulation-based approaches. As a mandatory prerequisite, we need a "high fidelity re-compute" environment to ensure that validation results from simulation can be used for the official release of the software running on public roads.
Only with a highly accurate simulation environment is it possible to rely on the simulation results as evidence for the validation of the software system. Within the simulation, the software needs to behave identically to the previously recorded behavior in real driving situations. This is a prerequisite for generating synthetic driving scenarios in the simulated world, scenarios that we might not be able to face in real driving.

Figure 4: The validation challenge - the simulation needs to behave like reality.

Figure 4 shows the setup for this validation challenge. During real driving situations, intermediate states of the running software algorithms, together with information on the sensed environment, need to be measured and recorded with high bandwidth. At this point it is important to note that the measurement equipment used needs to have access to the internal states of the running software without significantly influencing its runtime behavior.
This can only be achieved if measurement and logging devices are supported by high-performance data-sharing mechanisms within the middleware. For example, data transfer between different processes should not happen by copying data, as this would lead to a high memory and runtime footprint. In particular, dynamically allocated memory should not be used in safety-related application features.

Figure 5: Shared memory concept for inter-process communication and data access for measurement adapters (MTA).

For this reason, our implementation uses shared memory concepts with a single-sender-multiple-receiver approach. For efficiency reasons, the implementation avoids local copies of data from other processes by using zero-copy mechanisms. Due to the fact that processes have their own local address spaces, the corresponding part of the shared memory is mapped into the logical address spaces of the different processes. For safety reasons, the memory space should be fixed before run-time and should not be dynamically allocated.
Transferring data (with high bandwidth) out of the system without influencing the system itself is a considerable technical challenge. It requires optimized data communication mechanisms via specific measurement adaptors (MTA) implemented within the AOS middleware, making the process-internal data available for special-purpose measurement equipment. In this context, the MTA is considered a separate process and uses the shared memory zero-copy approach as well (see Figure 5).
Recording the corresponding data on in-vehicle data storage requires technologies (like PCI Express connectivity) coming from non-automotive domains. (Obviously, extremely high amounts of data cannot be transferred over the air with currently available technologies.) In an offline process, driving scenarios including the corresponding metadata are transferred into a cloud-based data-lake. From there, the data is available for re-play and re-compute in a simulated environment, including visualization using tools from the open-source ROS 2 environment.

1.4 Middleware deployment on different targets


Coming back to the challenge of how to ensure reproducibility of validation results in the simulated world compared to reality: In Section 1.1 we compared the time-driven with the data-driven scheduling paradigms. Since simulations run on a different target system than the real in-vehicle system, the timing behavior on scheduling task level cannot be simulated. (Note: we are not looking for a simulation of the physical target hardware.)
During the real driving scenario, the target system runs on different hardware and very likely even a different operating system compared to the simulation environment (see Figure 6). Therefore, the AOS middleware represents an abstraction layer for the validation itself. Features running on top of the AOS middleware behave in the same way on different underlying software stacks and hardware environments.

Figure 6: The validation challenge - going down one technical level.

Figure 7: AOS middleware is deployable on different target systems.



In Figure 7, different target systems during development towards series readiness are shown from left to right. On the left, AOS including the software development kit is deployed on a standard Linux-based laptop or PC. With this approach, the development environment can easily be distributed within bigger development teams. Getting closer to a series-ready software stack, the AOS middleware carrying the autonomous driving features is deployed on server boards with a QNX operating system. For cloud-based scaling of validation tests, simulation server clusters running the Linux operating system are used.

Especially with the deployment on cloud-based servers, AOS enables ContinuousX approaches for application feature developers. Features can be extended at the developer's desk and, while changes are committed into the configuration management system, virtual tests can start automatically in the background (even with several instances for different test scenarios). The developer immediately gets feedback on his or her implementation from validation tests in the simulated environment. Thus, we have closed the DevOps loop for feature developers of highly automated driving functions, leading to a significant increase in engineering performance.
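A minimal sketch of this ContinuousX idea (with placeholder functions, not a real CI system) could look as follows: each commit triggers the re-computation of many recorded scenarios in parallel, and the developer receives an aggregated verdict.

from concurrent.futures import ProcessPoolExecutor

def run_scenario(scenario_id):
    # Placeholder for deploying the committed feature to one simulation
    # instance and re-computing one recorded scenario from the data-lake.
    return scenario_id, scenario_id % 7 != 0  # invented pass/fail rule

def validate_commit(num_scenarios=100):
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, range(num_scenarios)))
    failed = [sid for sid, ok in results if not ok]
    return "PASS" if not failed else f"FAIL in scenarios {failed}"

if __name__ == "__main__":
    print(validate_commit())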

Coming back to Figure 7, the two examples on the right are closer to in-vehicle targets. Development kits running QNX are the next potential deployment target. Finally, shown on the far right is an in-vehicle ECU with a QNX operating system additionally running an Adaptive AUTOSAR instance (called Vehicle Run Time Environment, VRTE).
Obviously, the runtime behavior of all these systems differs. With the data-driven AOS approach, validation results contribute to the final software release for in-vehicle software. Without the AOS approach, a full system validation on the in-vehicle target system would be required, which physically would not be possible.

2 Summary

In the previous sections, we described the AOS platform, which provides a runtime and development environment to address key challenges in the domain of automated driving and contributes to solving the issues mentioned earlier:

• Efficient utilization of high computing performance (by introducing data-driven activity execution)
• Managing the complexity of the software design (by abstraction of underlying SW stacks and operating systems, as well as hardware)
• Re-use of SW from different sources (by utilizing standardized software stack elements like Linux, QNX, AUTOSAR Adaptive solutions, etc.)
• Reaching functional safety goals (by ensuring reproducibility of feature behavior across different development platforms)
• Efficient and fast development cycles (by establishing a development cycle outside the vehicle)
• Efficient simulation-based validation of the system (by deploying the middleware to cloud-based simulation platforms, for scaling validation tests)

References
1. Zerfowski, Detlef: „Automotive Software: Vertikalisierung versus Horizontalisierung? Die
(R)Evolution der Automotive SW schreitet voran“, Tagungsband Embedded Software En-
gineering Kongress 2018, Sindelfingen, 3.-7. Dezember 2018, Seiten 452-460, ISBN 978-
3-8343-3447-3.
2. Zerfowski, Detlef und Lock, Andreas: “Funktionsarchitektur und E/E-Architektur – eine
Herausforderung in der Automobilindustrie“, 19. Internationales Symposium, 19.-20. März
2019, Tagungsband (Herausgeber: M. Bargende, H.-C. Reuss, A. Wagner, J. Wiedemann),
Volume 2, Seiten 99-110, Springer Vieweg, Wiesbaden, ISBN 978-3-658-25938-9, Online
ISBN 978-3-658-25939-6.
3. Hammel, Christof: Collaborative Software Development in Automotive, ELIV (Electronics
in Vehicle). Baden-Baden 2015
4. Hammel, Christof: Collaborative Automotive Software Engineering, IBM Innovate Conference. Orlando 2014.
5. Hammel, Christof: Challenges in the Automotive Industry: EMF Model Repositories –
Scalability vs Performance, EclipseCon Europe, 2012
A Safety-Certified Vehicle OS to Enable
Software-Defined Vehicles

Jan Becker 1,2, Mehul Sagar 1, and Dejan Pangercic 1

1 Apex.AI, Inc., 979 Commercial Street, Palo Alto, CA, USA
2 Stanford University, Stanford, CA 94305, USA

Abstract. Autonomous driving, connected vehicles, e-mobility, shared mobility
– all mobility disruptors rely on software, but the lack of a unified software
platform is preventing cross-domain software development. In the meantime, the
vehicle compute and network architectures are moving to centralized high-performance
computers, but the software implementation is lagging behind the hardware
architecture. We are now introducing Apex.OS, the first mobility software platform
that is truly integrated end to end. A primary vehicle operating system, robust
and flexible enough to cover major systems throughout the vehicle and the cloud,
enables user-focused development, just as the iOS and Android SDKs do for
embedded devices. This paper describes our approach to an automotive SDK
capable of covering all automotive software domains and certified to ISO 26262
ASIL D.

Keywords: Vehicle Operating System, automotive, SDK, open-source, ROS,
Apex.OS, functional safety, certification, ISO 26262, ASIL D.

1 Introduction

How do we get from assisted to autonomous vehicles?


The automotive industry is moving toward new generations of the E/E-architecture
based on centralized high-performance computers to support the megatrends toward
electric, self-driving, shared, and connected vehicles. But the vehicle software
architecture is lagging behind the hardware architecture.
Today, automotive OEMs are stitching together disconnected software components
to build proprietary platforms, see Fig. 1. In the past, in a distributed E/E-architecture,
distributed and independently functioning electronic control units (ECUs) communicated
with each other over CAN-bus-based communication. In today's architecture, a
central gateway has been introduced to enable some cross-domain communication.
Currently under development is the fourth generation of the E/E-architecture, which
will see the introduction of domain controllers. These are computing platforms that
centralize computing resources within one domain, such as the powertrain, chassis
control, in-vehicle infotainment, or ADAS and vehicle automation. This enables cost
optimization, e.g., through consolidation of software and reduction of wiring harness
cost, but
requires much higher bandwidth network communication and increases the require-
ments for safety and security. The fifth generation, currently being developed, will then
see the introduction of centralized computing infrastructure, in which all major func-
tionality is implemented on one high-performance computing platform, “dumb” satel-
lite sensors are connected via a high-speed ethernet network and separation of domains
is implemented in software.
Overall, the architecture is rapidly moving from a complex system architecture with
simple software to a simple system architecture with complex software. If done
incorrectly, this will pose incredible challenges and risks, such as exploding software
complexity, increased cybersecurity risks, incoherent cross-domain implementation,
and diverging, inconsistent tooling. If done properly, it opens up the opportunity to
introduce modern software development processes and paradigms, a clean and scalable
software architecture, cross-domain consistency and tooling, limited and controllable
cybersecurity risks, and over-the-air updatability, to name just a few.
In this paper, we aim to describe a world, as well as a path to this world, in which
software throughout the whole vehicle is truly integrated end-to-end: a primary vehicle
operating system on the outside – but really an SDK (software development kit) in the
sense of the Android SDK or iOS SDK on the inside – that abstracts complexity away
from developers, provides common and open APIs, and is robust and flexible enough
to cover major systems throughout the vehicle. What is needed to get to software-defined
vehicles? We argue that the following requirements need to be fulfilled:
1. A standardized software architecture with open APIs to enable mutually
compatible solutions ideally across many manufacturers, suppliers, and aca-
demia.
2. An awesome developer experience to enable developer productivity – based
on the understanding that the quality of the developer experience is directly
related to developers' productivity.
3. A software architecture that scales to massive software systems.
4. A software implementation based on modern software engineering practices.
5. Abstraction of the complexity of all underlying hardware and software.
6. All of the above with deterministic, real-time execution, and with automotive
functional safety certification.

We draw an analogy to the field of robotics, which encountered similar challenges
(with some exceptions w.r.t. requirement 6) ten years ago and addressed the first five
by developing ROS (Robot Operating System) as an open-source software framework.
The authors are not aware of a single automotive company that isn't, or at least hasn't
been, using components from ROS or its rich software ecosystem in their development.
But the first generation of ROS had its deficiencies, and the lack of deterministic,
real-time execution and functional safety certification has prevented its adoption
into cars so far.
This has now changed: in 2018, the second-generation ROS 2 was released, with
improvements based on the lessons learned from using ROS 1 in hundreds of robotic
and automotive prototypes over ten years. ROS 2 is based on a proven architecture that
enables scaling to complex software systems. It is also based on a proven and
standardized middleware instead of a homegrown data communication approach. In
addition, with the release of Apex.OS (Apex.AI's proprietary fork of ROS 2) we
addressed real-time and deterministic execution by rewriting the underlying
implementation based on the same open APIs. And we developed a process to take this
implementation through functional safety certification in record time.
To summarize, we will describe the advantages of working with open APIs and an
architecture that has been proven in use by tens of thousands of developers and in
thousands of robots and automated vehicles. We will close by outlining a software
development process that allows taking open-source software through functional
safety certification into production vehicles.

Fig. 1. The vehicle E/E-architecture is evolving through the introduction of domain controllers
and a central high-performance computer [9].
2 Lessons Learned from Robotics

2.1 What is a robotics framework?


A robotics framework is a collection of software tools, libraries, conventions, and APIs
made to simplify the task of developing software for a complex multi-sensor, multi-
actuator system, like a vehicle.
A well-implemented robotics framework is designed with separation of concerns in
mind. These concerns are introduced as the 5Cs in the paper by Vanthienen et al. [15]:
configuration, coordination, composition, communication, and computation. Writing
software this way allows for maximum reusability, testability, reliability, portability,
maintainability, and extensibility of components. In most cases, the use of a robotics
framework dictates the general architectural principles of the developed software – for
example, whether the software is centralized or decentralized, real-time or non-real-time, etc.
A key component is the middleware, which is the glue that holds together the numerous
components of a robotics framework. The most basic task of the middleware is to pro-
vide the communications infrastructure between software nodes in an autonomous ve-
hicle. The typical use case for a robotics framework is to provide the essential interfaces
between high-level (software) and low-level (hardware) components of the system.
These interfaces and components consist of various operating system (OS) specific
drivers that would take a single developer a significant amount of time to develop.
Fig. 2 illustrates this with the example of an autonomous driving stack: the software
framework has to abstract away ECUs, OSes, and physical transport layers, and to
interface with the components needed for offline (e.g., mapping, simulation, testing)
as well as online operation (user applications).

2.2 ROS 1
Various efforts at Stanford University in the mid-2000s involving integrative, embod-
ied AI, such as the STanford AI Robot (STAIR) and the Personal Robots (PR) program,
created in-house prototypes of flexible, dynamic software systems intended for robotics
use - which they named ROS [11]. In 2007, Willow Garage, a nearby visionary robotics
incubator, provided significant resources to extend these concepts much further and
create well-tested implementations. The effort was boosted by countless researchers
who contributed their time and expertise to both the core ROS ideas and to its funda-
mental software packages. Throughout, the software was developed in the open using
the permissive BSD open-source license and gradually has become a widely used plat-
form in the robotics research community.
From the start, ROS was developed at multiple institutions and for multiple robots,
including many institutions that received PR2 robots from Willow Garage. Although it
would have been far simpler for all contributors to place their code on the same servers,
over the years, the "federated" model has emerged as one of the great strengths of the
ROS ecosystem. Any group can start their own ROS code repository on their own serv-
ers, and they maintain full ownership and control of it. They don't need anyone's per-
mission. If they choose to make their repository publicly available, they can receive the
recognition and credit they deserve for their achievements, and benefit from specific
technical feedback and improvements like all open-source software projects.
The ROS ecosystem now consists of tens of thousands of users worldwide, working
in domains ranging from tabletop hobby projects to large industrial automation systems.
Despite its great success, ROS 1 did not find its way into automotive production
programs for the following reasons: low code quality; a non-standard, non-automotive
middleware; a lack of real-time capabilities; no nodes with a managed lifecycle; no
security; a lack of testing and documentation; and no support for automotive ECUs.

Fig. 2. The autonomous driving stack and the role of the framework software in it.

3 ROS 2

3.1 Enter ROS 2


To address the aforementioned shortcomings, the Open Source Robotics Foundation
(OSRF) started a new project in 2013: ROS 2. Open Robotics considered autonomous
vehicles as well as the following use cases before starting development of ROS 2:
• Small, embedded platforms: ROS 2 has a smaller and more optimized codebase
compared to ROS 1, which can run on aarch64 computer architectures, such as
the R-Car H3.
• Real-time systems: ROS 2's communication mechanism, memory management,
and threading model allow it to be made real-time upon hardening.
• Production environments: Coupled with the aforementioned real-time constructs,
ROS 2 can be run on an RTOS such as QNX.
• Prescribed patterns for building and structuring systems: While ROS 2 has the
underlying flexibility that is the hallmark of ROS 1, ROS 2 provides patterns and
supporting tools for features such as life cycle management and static configura-
tions for deployment.
Now let's have a look at how ROS 2 fulfills the promises given in the introduction of
this article.

3.2 A standardized software architecture with open APIs

Writing a standardized software architecture that will satisfy everything from an em-
bedded microcontroller running on a drone or radar sensor to the centralized high-per-
formance ECU for the autonomous car is no easy task. Many different aspects have to
be considered:
1. hardware abstraction layer
2. OS abstraction layer
3. runtime layer
4. support for various programming languages
5. non-functional performance
6. security
7. safety
8. software updates
9. tools for the development, debugging, recording & replay, visualization, simulation
10. tools for the continuous integration and continuous deployment
11. interfaces to the legacy systems (such as e.g., AUTOSAR Classic)
12. execution management in the user applications
13. time synchronization
14. support for hardware acceleration
15. model-based development
In order to integrate these aspects into one coherent framework, the following design
decisions guided the design of ROS 2:
1. Adopt positive lessons learned from the extensive deployment of ROS 1
2. Apply negative lessons learned from the extensive deployment of ROS 1
3. Use hourglass design in order to keep the core of ROS 2 as a single code base
4. Use industry-proven middleware and focus on the layers above the middleware
5. Newly develop mechanisms for security, safety, software updates, interfaces to the
legacy systems (such as e.g., AUTOSAR Classic), execution management in the
user applications, time synchronization, support for hardware acceleration, model-
based development
With these aspects in mind, the architecture shown in Fig. 3 was created for ROS 2.
The largest change from ROS 1 to ROS 2 is that the underlying communication layer
was implemented behind a plug-in-like interface that supports many types of middleware
solutions, with tier-1 support for DDS implementations. On top of the middleware layer
is a middleware abstraction layer, the ROS middleware API, which is used to prevent
DDS vendor lock-in. With this design, implementing the middleware layer for, e.g.,
SOME/IP is rather trivial. On top of this layer are the ROS 2 client libraries, written
in C++ and Python; using these client libraries, developers write application code.
With this so-called hourglass design, the thin client libraries always wrap around the
same code base for operating system and communication.


Fig. 3. Architecture of ROS 2

One of the concepts that proved itself most in ROS 1 is that of a node as the central
logical unit. The node brings together publisher and subscriber patterns, threading,
queues, and the execution model, all abstracted behind an intuitive set of APIs. An
example of a ROS node is depicted in Fig. 4: a node subscribes to one or more data
streams (called topics); the data is then aggregated and processed in a single-threaded
or parallel loop, and the result is published on another topic. The node can also be
parametrized and composed into one or more processes, and it comes with its own
lifecycle for starting and stopping. Nodes also emit diagnostics information such as
logging, heartbeat, and node state information.
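As an illustration, the following minimal sketch shows such a node written against the
open rclcpp API; the node, topic, and message names are chosen for illustration only:

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

// Minimal node: subscribes to one topic, processes the data in the
// subscription callback, and publishes the result on another topic.
class RelayNode : public rclcpp::Node
{
public:
  RelayNode()
  : Node("relay_node")
  {
    pub_ = create_publisher<std_msgs::msg::String>("output_topic", 10);
    sub_ = create_subscription<std_msgs::msg::String>(
      "input_topic", 10,
      [this](std_msgs::msg::String::SharedPtr msg) {
        std_msgs::msg::String out;
        out.data = "processed: " + msg->data;  // stand-in for real processing
        pub_->publish(out);
      });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr sub_;
};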
One of the concepts missing in ROS 1 was that of an executor, which was added in
ROS 2. The executor provides spin functions and coordinates the nodes and callback
groups by looking for available work and completing it, based on the threading or
concurrency scheme provided by the subclass implementation. An example of available
work is executing a subscription callback or a timer callback. The executor structure
allows for a decoupling of the communication graph and the execution model – a
feature much needed when building real-time and deterministic systems.
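A usage sketch (again illustrative, against the open rclcpp API) shows how the
execution model is chosen independently of the node:

#include <memory>
#include "rclcpp/rclcpp.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<RelayNode>();  // node class from the sketch above

  // The executor, not the node, decides how callbacks are scheduled; swapping
  // in a MultiThreadedExecutor would change the execution model without
  // touching the communication graph.
  rclcpp::executors::SingleThreadedExecutor executor;
  executor.add_node(node);
  executor.spin();  // look for available work (subscription/timer callbacks) and run it

  rclcpp::shutdown();
  return 0;
}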
ROS 1 provided standardized messages [2] for diagnostics, geometric primitives,
robot navigation, and common sensors such as laser range finders, cameras, and point
clouds. This provides an agreed-upon contract between different software and hardware
modules, which in turn allows for great flexibility and reusability of developed
software components. These messages were also adopted in ROS 2 but implemented in
the OMG IDL language, which enables features such as message versioning and
forward and backward compatibility.
Fig. 4. ROS node concept

ROS 1 had very good non-functional performance characteristics, especially in terms
of CPU usage and bandwidth, which allowed developers to cope with throughputs of
up to 6 Gb/s of data as produced by, for example, an autonomous vehicle. In ROS 2
this had to be preserved and was even further enhanced by the introduction of, e.g.,
zero-copy transport mechanisms such as iceoryx [3].
These are just a few examples of features proven in the field that could be improved
because a) ROS 1 is an open-source framework [11], b) there is a large community
of users constantly running into the edge cases, and c) it is accepted that the best
software is developed through coding iterations and not through implementation based
solely on specifications.

3.3 An awesome developer experience


The ROS community has developed a varied set of desktop and cloud tools that allow
users to a) introspect and debug their systems, b) record, replay, and visualize data,
and c) emulate and simulate their systems. For the full set of tools, we direct the
reader to a dedicated webpage [14].
For ROS 2, we would like to single out two particular tools that have contributed to
improved developer experience and productivity.
The ADE Development Environment [1] is a modular Docker-based tool that ensures
that all developers in a project have a common, consistent development environment.
ADE creates a Docker container from a base image and mounts additional read-only
volumes in /opt. The base image provides development tools (e.g., vim, udpreplay,
etc.), and the volumes provide additional development tools (e.g., IDEs, large third-party
libraries) or released software versions. Furthermore, ADE enables easy switching
between versions of the images and volumes.

Fig. 5. ADE development environment

Arguably the most significant effort required for the production deployment of
autonomous vehicles (AVs) is the proof that the system is safe for homologation. To
provide such proof, a holistic, software-first approach to testing for autonomous
vehicles must be available. In other words, we need a framework that enables writing
the relevant tests and supports extracting the proofs. To develop such a framework, it
is important to look at the automotive V-model and to understand what kinds of tests
one is expected to write: unit tests, component tests, integration tests, sub-system
tests, system tests, non-functional tests, and acceptance tests. Furthermore, it is
important to understand how input data is generated for these tests (manually created,
manually generated, simulation-generated, vehicle recordings) and in what kinds of
environments the tests are executed: Software-in-the-Loop (SIL), Hardware-in-the-Loop
(HIL), and Vehicle-in-the-Loop (VIL). Lastly, it is important to understand what key
performance indicators are to be extracted from the tests.
There are plenty of tools available for unit and component testing (e.g., [5]), but in
our extensive research we could not find a framework that would satisfy the other
types of tests, cover all the other kinds of input data and environments, and clearly
separate the following phases: test configuration, test setup, test running, and results
logging and extraction. We thus wrote an open-source tool called launch_testing [7],
a framework for the integration, system, sub-system, non-functional, and acceptance
testing of applications written with ROS 2 and Apex.OS. The framework provides a
Python API with the following features:
• The exit codes of all processes are available to the tests.
• Tests can check that all processes shut down normally, or with specific exit codes.
• Tests can fail when a process dies unexpectedly.
• The stdout and stderr of all processes are available to the tests.
• The command line used to launch the processes is available to the tests.
• Some tests run concurrently with the launch and can interact with the running
processes.
• The results of the test can be logged and processed after test shutdown.
• The data can come into the test from a live source, or from a pre-recorded or
simulated source.

3.4 A software architecture that scales to massive software systems


When we talk about an architecture that scales, we mean the following dimensions of
scaling: a) scaling with the number of developers, b) scaling in terms of non-functional
performance, c) scaling in terms of the integration of third-party automotive
components, and d) scaling in terms of centralized and decentralized application
architectures.
Apex.OS and ROS are the most field-proven frameworks worldwide in terms of user
adoption. Per the last ROS community metrics report [12], there are roughly 200,000
users of ROS worldwide, and we know of companies that employ several hundred
engineers working with ROS. Key enablers for this are open APIs, an ecosystem of
tools, widely shared best development practices, and tight integration with various
CI/CD systems.
We described the non-functional performance of Apex.OS and ROS in great detail in
[10]. In general, we look at the efficiency of the abstraction layers and of the transport
protocols, which must result in acceptable CPU utilization, memory consumption,
latency, page faults, context switches, and other such system metrics. As we explain
in [10], it is also important to have a properly designed tool for unbiased measurements
and a methodology that mimics that of the actual application.
The most substantial challenge we had to solve arose in the complex use case depicted
in Fig. 6. Apex.OS is in this case used as the framework for the L4 autonomous driving
application, but also as a first-class interface to the vehicle's proprietary transport
protocols (CAN, SOME/IP) and as a gateway between these safety-critical components
and the cloud-based fleet management software. We also had to provide the capability
to port runnables from the AUTOSAR Classic world into Apex.OS. For porting
AUTOSAR Classic runnables, we followed this approach (a sketch of the wrapper idea
follows the list):
• AUTOSAR Classic runnables are instantiated in Apex.OS nodes (1:1 mapping).
• An RTE wrapper layer translates between RTE calls and the rclcpp API with
publishers and subscriptions:
─ data provided with Rte_Write_XYZ() is written to a message that is sent via a
publisher;
─ data consumed with Rte_Read_XYZ() is read from a sample that was received
via a subscription.
• Signals are flexibly mapped to topics (similar to the AUTOSAR signal-to-service
mapping), e.g., all the signals provided by an SWC are combined in one topic and
sent with one publisher.
• Each update of a signal with Rte_Write() can trigger an update of the topic, or the
topic is updated after the runnable execution (corresponding to the Logical
Execution Time (LET) approach).
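To make the mapping concrete, the following hypothetical sketch shows the idea
behind such an RTE wrapper; the signal, topic, and type names are invented for
illustration and do not reproduce the actual generated wrapper code:

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float32.hpp"

// Hypothetical wrapper: the runnable keeps calling its familiar
// Rte_Write_XYZ() entry point, but the data now leaves via a publisher.
class RteWrapper
{
public:
  explicit RteWrapper(rclcpp::Node & node)
  : pub_(node.create_publisher<std_msgs::msg::Float32>("swc_xyz/signals", 10))
  {}

  void Rte_Write_XYZ(float signal_value)
  {
    std_msgs::msg::Float32 msg;
    msg.data = signal_value;
    pub_->publish(msg);  // signal-to-topic mapping instead of an RTE write
  }

private:
  rclcpp::Publisher<std_msgs::msg::Float32>::SharedPtr pub_;
};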
For SOME/IP communication, we rely on the derived data mapping and serialize/
deserialize the data according to the SOME/IP and Apex.OS specifications. For CAN
communication, we implemented an extension that abstracts away the underlying CAN
device driver (e.g., SocketCAN) and the CAN message definitions. For the
communication of telematics data with the backend in the cloud, we needed to consider
safety-critical and non-safety data exchange, the interfaces to state-of-the-art backend
brokers (e.g., MQTT), redundant communication paths, secure data transport, and more
frequent software updates of the gateway components written with Apex.OS. Apex.OS
enables such a solution because of its clear separation between the data bus and the
APIs accessing the data on the bus. Update mechanisms via tools such as AWS
Greengrass, Mender, etc. are possible because of the well-defined Apex.OS APIs and
the well-defined Apex.OS node lifecycle.

3.5 A software implementation based on modern software engineering practices

To have a robust and maintainable codebase, one needs a) a well-defined but friendly
software development process and b) state-of-the-art software development tools. Such
a SW development process is depicted in Fig. 8 and encompasses all the steps from the
input requirements all the way to CI/CD. The keys to the success of such a development
process are:
• An integrated development environment, which at Apex.AI is centered around
GitLab, GitLab CI, and Docker.
• An integrated IDE, which at Apex.AI is CLion. CLion provides state-of-the-art
features such as code completion and debugging, but also integration of external
tools such as gtest, valgrind, different build tools, doxygen, and tools for code test
coverage.
• The steps implementing the development process must be fast to allow quick
coding iterations.
• The local development environment and the CI/CD must be equivalent, to be able
to reproduce CI failures.
• Integrations with third-party tools, such as the JAMA requirements management
tool, must be custom-tailored to the particular team to allow for quick fixes and
extensions.
• The main code repository should be as monolithic as possible, and all of the
development artifacts (design documents, code, tests, documentation, …) should be
as co-located as possible.
3.6 Abstraction of the complexity of all underlying hardware and software


Software development for embedded automotive applications is usually a lengthy
process because it follows this three-fold pattern:
1. Prototyping in, e.g., Python or Matlab by the research team.
2. Pre-development of applications in, e.g., ADTF, ROS, etc. by the pre-development
team.
3. Product development of the application in a from-scratch-written framework by
the series production team.
This process essentially re-invents the wheel three times and often results in an end
solution that, by the time of series production, a) is already outdated with respect to
the state of the art and b) is flawed with bugs introduced in the handovers between the
different teams. Apex.OS is a framework that can be used by all three groups mentioned
above, which eliminates the need for any rewrites. It also allows well-versed research
and pre-development teams to write the production applications if need be. In the rest
of this chapter, we provide examples of tools that are part of Apex.OS, convenient to
use, and especially applicable to series production systems (a sketch illustrating item 2
follows the list):
1. System monitor: Apex.OS provides system monitoring features through which the
application gets relevant system information, such as CPU load, temperature, memory
usage, and network load, along with the status of the application, thereby abstracting
away the type of operating system and the hardware the particular application is being
executed on.
2. Real-time settings: Apex.OS provides interfaces for the application to set some of
the real-time settings, such as CPU pinning, real-time schedulers, and thread priorities,
thereby abstracting away the type of operating system the particular application is
being executed on.
3. Inter- and intra-ECU communication: Apex.OS provides an abstraction layer for
inter- and intra-ECU communication, where the application just needs to publish its
messages. Underneath, the middleware layer decides the best communication protocol
for a particular situation (shared memory, network protocols such as UDP), thereby
abstracting away from the application the type of communication to be used along
with the type of operating system and hardware.
4. Sensor and actuator interfaces: Apex.OS provides common interfaces for various
kinds of automotive sensors, making it easy to integrate them into an ADAS stack.
Apex.OS also provides interfaces to actuators (CAN, FlexRay, etc.), making them
equally easy to integrate.
5. Application-level virtualization: Our Docker-based containerization solution
provides binary-level isolation as well as process-level runtime isolation, thereby
providing a reliable computing environment across various isolated applications.
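To illustrate item 2, the sketch below shows the kind of OS-specific code that such an
abstraction layer hides. It uses plain POSIX/Linux calls and is not the Apex.OS API
itself; QNX, for instance, exposes different interfaces, and the core index and priority
are arbitrary:

#include <pthread.h>
#include <sched.h>

// Pin the calling thread to CPU core 2 and give it a FIFO real-time priority
// (Linux-specific; an abstraction layer wraps calls like these so that the
// application code stays portable across operating systems).
bool configure_realtime_thread()
{
  cpu_set_t cpuset;
  CPU_ZERO(&cpuset);
  CPU_SET(2, &cpuset);  // core index chosen for illustration
  if (pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset) != 0) {
    return false;
  }
  sched_param param{};
  param.sched_priority = 80;  // priority chosen for illustration
  return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param) == 0;
}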
4 From ROS 2 to Apex.OS Cert – ISO 26262 ASIL D certification of an open-source codebase

4.1 Certification use case


The Apex.OS certification process started in May 2019, when Apex.OS, based on the
Dashing fork of ROS 2, was selected for certification. Apex.AI recruited a head of
safety and contracted TÜV NORD as its rating agency. One of the first things we did
was write a technical safety concept (TSC) for Apex.OS, which defined an assumed
use case and ODD (operational design domain). For the first iteration of Apex.OS Cert,
the assumed use case consisted of a rather simple scenario in which one or more
publishers communicate data to one or more subscribers. Of the 5Cs [15] of a typical
robotics framework (communication, coordination, computation, composition, and
configuration), the derived hazards were restricted to communication-related ones only.
This greatly reduced the scope of the Apex.OS Cert safety concept, but was at the same
time sufficient to meet customers' requirements at the time. Fig. 6 below depicts the
assumed use case of Apex.OS Cert 1.2.0.

Fig. 6. Apex.OS assumed general use case

The next step was to determine the boundary of Apex.OS Cert. We decided to restrict
the boundary to just the source code in the first iteration of certification and to add
libraries to the scope in future releases of Apex.OS Cert. At this point, we wrote a
rudimentary safety case for Apex.OS Cert. The Apex.OS Cert safety case is an internal
document that lays out our safety objectives and the artifacts we will generate to put
forward a convincing safety argument. In the first audit with TÜV NORD, we got tacit
approval for both the aforementioned TSC and the safety case.


Fig. 7. Apex.OS certification safety case
Fig. 7 shows the Apex.OS Cert safety case. While defining the safety case, we also
inventoried all the development tools for which we needed to run TCL (Tool Confidence
Level) reports (ISO 26262-8:2018 [6]) and planned the processes in our FSLC
(functional safety lifecycle), such as change management, safety culture, code review
enforcement, and so on. These plans were also captured in functional safety management
(FSM) artifacts as dictated by ISO 26262-2:2018.

4.2 Certification process


Subsequently, we applied a proprietary thirteen-step process to every package within
the certification scope:
1. Reduction of the feature set of each package
2. Investigation to make APIs memory-static and ensure no blocking calls
3. Static code analysis (SCA)
4. Structural coverage for statement, branch, and MC/DC
5. Notations of designs (modelling diagrams)
6. Application of principles of software architecture and design
7. Control and data flow analysis completed
8. Integration tests and specialized tests completed
9. Requirements and traceability completed
10. Safety analysis (FMEA) completed
11. Safety artifacts generated and submitted
12. Testing on target platform/hardware completed
13. Tool classification and qualification completed
Describing each step in detail is beyond the scope of this paper, but we will discuss a
few key steps from the list below.
• Making APIs memory-static: One of the important limitations of ROS 2 that
rendered it unsuitable for safety-critical applications such as autonomous driving
was that it was not real-time compliant. Non-deterministic runtime memory
allocations, blocking calls, and the usage of standard STL facilities such as
threading were the key reasons for this non-compliance. Apex.AI therefore took a
series of measures to rid the ROS 2/Apex.OS packages included in Apex.OS Cert
of such behaviors. Fig. 8 gives a high-level overview of these measures.
• Structural coverage: Apex.AI invested considerable time in achieving 100%
statement, branch, and MC/DC coverage for all Cert packages, as mandated by
ISO 26262-6:2018 for ASIL D.
• FMEA: Apex.AI conducted an extensive safety analysis for every public API in
the Cert packages to derive additional safety requirements or R&Rs (restrictions
and recommendations) for its users.
• Requirements traceability: There were no formal requirements available from the
ROS 2 fork. Apex.AI wrote several hundred safety and nominal requirements and
traced them to the codebase and tests using a certified requirements management
tool.

Fig. 8. Improvements in Apex.OS to achieve real-time performance

4.3 Certification challenges


Several technical challenges were encountered during this process; we analyze the
major ones below.
Achieving 100% structural coverage: The ROS 2 packages came with low MC/DC
coverage, so we were aware of this challenging task from the very beginning. We
selected a modern, certified coverage tool to help us achieve this goal. ROS 2 is written
in modern C++ with heavily templated constructs. Unfortunately, coverage tools are
still catching up to the challenges of modern C++ and fail to process complex
constructs such as back-to-back lambda functions. We had to work with our tooling
vendors to identify solutions and workarounds. Furthermore, defensive coding
methodologies can make it difficult to reach conditional statements that are gated by
external libraries over which we have little control. We resorted to GoogleMock [4]
techniques for several such cases.

Fig. 9. Use of GoogleMock to get to hard-to-reach code
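To illustrate the technique behind Fig. 9, the following hypothetical sketch uses
GoogleMock to force an external dependency into a failure path that is otherwise hard
to reach; the interface and test names are invented:

#include <gmock/gmock.h>
#include <string>

// Thin interface around an external library call whose failure branch is
// hard to trigger with the real implementation.
class Transport
{
public:
  virtual ~Transport() = default;
  virtual bool send(const std::string & data) = 0;
};

class MockTransport : public Transport
{
public:
  MOCK_METHOD(bool, send, (const std::string &), (override));
};

TEST(SenderTest, CoversTransportFailureBranch)
{
  MockTransport transport;
  // Force the failure return value so that the defensive error-handling
  // branch in the code under test is executed and counted as covered.
  EXPECT_CALL(transport, send(testing::_)).WillOnce(testing::Return(false));
  EXPECT_FALSE(transport.send("frame"));  // stands in for invoking the code under test
}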


Checking for runtime memory allocations: To our knowledge, no suitable off-the-shelf
tools are available for checking runtime memory allocations and blocking calls. We
re-purposed LTTng [8], the Linux tracing tool next generation, to accomplish this task.
Automation was difficult, and many things, such as instrumenting the code, had to be
done manually, which is suboptimal and undesirable for obvious reasons.

Fig. 10. Identifying runtime memory allocations and blocking calls.

Making Apex.OS runtime execution deterministic: ROS 2 uses standard containers,
which allocate memory dynamically. We wrote our own containers for string, vector,
and map/set, not only to make the Apex.OS Cert runtime memory static but also to
make execution as deterministic as possible.
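A minimal sketch of the underlying idea (illustrative only, not the actual Apex.OS
container API; it assumes a default-constructible element type):

#include <array>
#include <cstddef>
#include <stdexcept>

// Fixed-capacity vector: all storage is reserved up front inside the object,
// so push_back never allocates heap memory at runtime.
template <typename T, std::size_t Capacity>
class StaticVector
{
public:
  void push_back(const T & value)
  {
    if (size_ >= Capacity) {
      throw std::length_error("StaticVector capacity exceeded");  // no silent reallocation
    }
    storage_[size_++] = value;
  }
  T & operator[](std::size_t index) { return storage_[index]; }
  std::size_t size() const { return size_; }

private:
  std::array<T, Capacity> storage_{};
  std::size_t size_{0};
};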
Development tool classification and qualification: ROS 2 uses external development
tools for activities such as parsing/configuration (e.g., YAML) and code generation
(e.g., rosidl_generator_cpp). When Apex.AI performed TCL reporting on these external
tools, they were deemed TCL2 (medium confidence) and therefore had to be qualified.
Qualification of external tools is quite challenging, since we have access to neither
their source code nor their test base. Qualifying a large open-source tool such as a
YAML parser (which also includes a library) is prohibitively resource-intensive and
time-consuming. In the case of YAML, Apex.AI redesigned its development flow such
that YAML parsing was broken down into two steps: YAML, in conjunction with a
simple internal tool, is used to generate an offline configuration file, which can be
checked for accuracy and then used to build the Apex.OS binary. Runtime configuration
using YAML is disallowed in Apex.OS Cert. See Fig. 11 below for the new flow that
Apex.AI developed for parsing/configuring its binary. A similar approach was used to
get around the challenge of qualifying external code-generation tools (explaining this
is beyond the scope of this paper).

Fig. 11. New static runtime configuration of Apex.OS Cert binary.


5 Summary and Outlook

We have been able to prove that it is possible to certify an open-source project to the
highest level of integrity, given the right expertise and strategic choices of scope and
tooling. Apex.OS Cert provides manufacturers and suppliers with a fast-track, scalable
process for building a safe automotive software stack. Our customers have demonstrated
its applicability to a wide range of automotive applications, not limited to autonomous
driving. We were able to accomplish development and certification in a very short
amount of time based on our prior expertise in ROS and through the reuse of a proven
architecture and tooling. For future work, Apex.AI will not only expand the technical
safety concept of Apex.OS but also certify libraries, including the important
transport-layer (middleware) libraries. Apex.OS is also in the process of being
integrated with AUTOSAR Adaptive and SOME/IP.

References
1. ADE Development Environment, https://ade-cli.readthedocs.io/en/latest/, last accessed 2021/05/08.
2. Commonly used messages in ROS, https://github.com/ros/common_msgs, last accessed 2021/05/08.
3. Eclipse iceoryx, https://github.com/eclipse-iceoryx/iceoryx, last accessed 2021/05/08.
4. GoogleTest Mocking (gMock) Framework, https://github.com/google/googletest/tree/master/googlemock, last accessed 2021/05/08.
5. GoogleTest Framework, https://github.com/google/googletest, last accessed 2021/05/08.
6. ISO 26262 Road vehicles – Functional safety, https://www.iso.org/obp/ui/#iso:std:iso:26262:-9:ed-2:v1:en, last accessed 2021/05/08 (2018).
7. launch_testing, https://index.ros.org/p/launch_testing/, last accessed 2021/05/08.
8. LTTng tracing framework for Linux, https://lttng.org/, last accessed 2021/05/08.
9. McKinsey & Company, Mapping the automotive software and electronics landscape through 2030, https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/mapping-the-automotive-software-and-electronics-landscape-through-2030, last accessed 2021/05/08 (2019).
10. Pemmaiah, A., Pangercic, D., Aggarwal, D., Neumann, K., Marcey, K.: Performance Testing in ROS 2, https://www.apex.ai/post/performance-testing-in-ros-2, last accessed 2021/05/08 (2020).
11. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.: ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software (2009).
12. ROS Community Metrics Report, http://download.ros.org/downloads/metrics/metrics-report-2020-07.pdf, last accessed 2021/05/08 (2020).
13. ROS core stacks, https://github.com/ros, last accessed 2021/05/08.
14. ROS tools, http://wiki.ros.org/Tools, last accessed 2021/05/08.
15. Vanthienen, D., Klotzbücher, M., Bruyninckx, H.: The 5C-based architectural Composition Pattern: lessons learned from re-developing the iTaSC framework for constraint-based robot programming. Journal of Software Engineering for Robotics, pp. 17–35, www.joser.org (2014).
Theoretical Substitution Model for Teleoperation

Elisabeth Shi1,2 [0000-0001-6742-7023] and Alexander T. Frey2

1 Technical University of Munich, Germany
2 Federal Highway Research Institute (BASt), Germany

Abstract. With emerging teleoperation and driving automation, the diversity of
vehicle guidance possibilities increases. To be able to manage this diversity and
to talk about the multiple possibilities technology offers, we need a straightforward
conceptual basis that provides an overview of how, and to what extent, each entity
contributes to vehicle guidance. We suggest a model that provides a context for
teleoperated and automated driving by acknowledging three different modes by
which vehicle motion control may be executed: (1) the in-vehicle human driver,
(2) a driving automation function, and (3) a teleoperating entity. According to our
substitution model, teleoperation may substitute the human driver or the driving
automation function. At the same time, the model discriminates between different
types of substitution, characterized by whether the teleoperation provides
assistance or sustained driving, is planned or intervening, and by the extent of
substitution. The Substitution Model for Teleoperation strives for theoretical
comprehensiveness while acknowledging that it remains an abstract simplification.

Keywords: Teleoperation Model, Automated Driving, Human Factors.

1 Current lines of research

Throughout the evolution of on-road transportation, the in-vehicle human driver's role
has seemed very clear: the human driver uses steering controls (e.g., steering wheel and
pedals) to execute the dynamic driving task. In the terms of SAE International Standard
J3016 [1], this is the role of the 'conventional driver' ([1], p. 16). Driving automation
functions, too, have been discussed thoroughly, and classification schemes have been
established and developed to account for different kinds of driving automation
functions [1, 2, 3, 4]. Additionally, interactions between those two modes of control
(human, i.e. 'manual driving', and automation, i.e. 'automated driving') have been
addressed in several publications [e.g. 5].
It is the role of teleoperation that has been considered only recently in the context of
automated and manual driving, and it has not yet received much attention. Unsurprisingly,
the focus has been on very practical issues, such as latencies influencing teleoperators'
quality of vehicle motion control [6]. A literature search in the databases IEEE Xplore,
Web of Science, and Scopus using the terms 'teleoperation' and 'automated driving'
yielded only 33 papers in total, published between 1997 and 2021

(descriptive; publication year was not a search criterion). Despite the search term
'automated driving', many of the resulting papers did not deal with automated driving
but with medical surgery, other robots (e.g., welding, service, tentacles), general sensor
development, applications in a military context, educational distance learning, or
drones. Of those that dealt with on-road driving, the majority addressed either questions
related to latency [e.g. 7, 8, 9] or questions related to human-machine interfaces for
human teleoperators [e.g. 10, 11].
The two lines of research on teleoperation in the context of automated driving show
that teleoperation as a concept is indeed associated with automated driving. At the same
time, teleoperation seems to be treated as a stand-alone feature not yet substantially
embedded into automated driving.

2 Objective

The current framework shall serve as a link between teleoperation and automated and
manual driving. Not only the general research topics but also the parallel lines of
research outlined above could be integrated more strongly if we had a common language
and a common ground on the concept of teleoperation per se. This may also facilitate
an overview of which teleoperation concepts are being discussed and researched, or
intentionally disregarded or unintentionally neglected.
Furthermore, with teleoperation entering as another vehicle guidance option, the
number of possibilities for how a vehicle's motion is controlled increases. To be able
to manage this diversity in vehicle guidance possibilities and to talk about the multiple
possibilities technology offers, we need a straightforward conceptual basis that provides
an overview of how each entity contributes to vehicle guidance.
The Substitution Model for Teleoperation aims at providing such a common ground
and common language for research and discussions on teleoperation in automated
driving.

2.1 A theoretical framework


Because the practical implementation of teleoperation is necessarily bound to the
technological limitations of its time, this framework needs to be neutral and open to
future technological developments, and it should not restrict the generation of ideas or
practical progress. To address these demands and meet the objective outlined earlier,
the Substitution Model for Teleoperation shall span the conceivable application field
of teleoperation and should be treated solely as a theoretical framework.

2.2 Discrimination between theoretical framework and practical implementation
To provide an overview of how each entity contributes to vehicle guidance, we need
to achieve theoretical comprehensiveness regarding the possible vehicle guidance
options. The framework pursues theoretical comprehensiveness very strictly (which
will be discussed in section 6). Therefore, we strongly highlight that parts of the
presented theoretical framework are not practically relevant from today's perspective,
and their mention in this article shall not be misunderstood as a recommendation or
claim to translate the theoretical option into practice. The discrimination between the
presented theoretical framework and practical implementations needs to be kept in
mind, especially as this article goes far beyond what is currently being discussed as
practical implementation.

3 Teleoperation in the context of road traffic

Teleoperation (tele, Greek for remote; operatio, Latin for operation) in general
describes the exertion of motion control from outside the moved entity by a third party
[12]. Teleoperation is applied in multiple fields, such as underwater [13], medical [14],
space [15], and road traffic [6]. In the context of automated driving of on-road vehicles
(i.e., SAE Level 3 or higher), teleoperation is frequently discussed as a means to
overcome current limitations of driving automation functions, e.g., as a fallback
regarding safety issues.
For example, in the UrbANT project, funded by the German Federal Ministry of
Education and Research, a delivery robot is being developed. This robot is designed to
transport goods home, especially purchases and groceries, but also to serve as a mobile
parcel station. With a payload of up to 90 kg, the delivery robot can operate for about
four hours. Regardless of the robot's transport structure, which can vary in height, the
technical implementation of the basic platform (approx. 900 × 600 mm) is always
identical: by means of camera and lidar sensors, the vehicle should be able to move
autonomously in public road traffic on pavements at a maximum speed of 6 km/h.
Furthermore, the delivery robot will be able to follow a user automatically. For both
functionalities, the traffic environment is precisely mapped. Normally, the driving
automation will operate the robot safely. However, within the automation's domain, an
unpredictable situation (e.g., an insurmountable obstacle on the route) may occur that
cannot be solved by the automation; this may also include an unexpected complete
failure of the driving automation function. Replacing the driving automation by
teleoperation may solve these technologically challenging situations more efficiently
than seeking the help of a person present on site. This prevents both the delivery robot's
drive from being interrupted and the robot itself from becoming a massive obstacle to
traffic. In the UrbANT project, consequently, a human teleoperator should be able to
resolve a situation that is unsolvable for the driving automation function and to continue
the drive, but also to achieve a minimal risk condition in the traffic area if necessary.
With reference to section 4.1, the teleoperator in the UrbANT project thus performs the
task of remote driving in this case.
Motivated by research conducted in UrbANT, we propose a model that provides a
context for teleoperated and automated driving by acknowledging three different modes
by which vehicle motion control may be executed:
1. the in-vehicle human driver
2. a driving automation function
3. a teleoperating entity

4 Overview of the Substitution Model for Teleoperation

We propose a substitution model in which teleoperation may substitute the conventional
driver or the driving automation function. Both the conventional driver and the driving
automation function are on board, whereas teleoperation takes place from outside the
vehicle. Fig. 1 depicts the three modes of operation (in-vehicle human driver, driving
automation function, teleoperating entity) and the substitution of either of the former
two by the teleoperating entity.

Fig. 1. Substitution Model for Teleoperation. (1) human, (2) automation, (3) teleoperating entity.
In the context of automated driving, teleoperation can substitute the in-vehicle human driver or
the driving automation function.

In terms of the substitution model, it makes no difference whether the in-vehicle human
driver or a driving automation function is substituted; therefore, we subsume those two
as 'primary controllers'. Furthermore, the teleoperator is regarded as a teleoperating
entity that does not need to be human but may be a computerized remote operator.
Besides the three modes of operation, the model also discriminates between different
types of substitution. These types are characterized by whether teleoperation provides
assistance or sustained driving, is planned or intervening, and by the extent of
substitution.

4.1 Teleoperators’ tasks


The teleoperators’ tasks can be divided into two categories: remote assistance and re-
mote driving.
In case of remote driving, the teleoperator performs the driving task and is in control
of the vehicle motion. In the initial example of the child steering the toy car, the child
is the teleoperator and provides remote driving.
5

In case of remote assistance, the vehicle motion control is performed by either the
conventional driver or the driving automation function, and the teleoperator gives in-
formation or advice to the respective primary controller. For example, if a delivery ro-
bot’s driving automation function detects a large dark spot on the ground which cannot
be classified by the driving automation function, a teleoperator might be requested. The
teleoperator then may provide the information to the delivery robot’s driving automa-
tion function that the spot is a shadow. With this information, the driving automation
can continue the ride without further assistance by the teleoperator.
In contrast to remote driving, the primary controller (i.e. driving automation function
or conventional driver) is capable of performing the vehicle motion control, but requires
further information or advice by the teleoperator in order to solve the current situation
in terms of vehicle motion control.
The differentiation between remote assistance and remote driving refers to the task,
not to the teleoperating entity. For example, it is possible that one teleoperating person
performs both tasks.

4.2 Situative conditions of teleoperation


Teleoperation can be planned, or it can intervene in case the primary controller drops
out (e.g., a defect in the driving automation function or an unresponsive conventional
driver).
In case teleoperation is planned, it may follow a phase of manual or automated
driving, or the vehicle can be operated by teleoperation only, just like a remote-controlled
toy car. An example of teleoperation following an automated driving phase might be a
delivery robot that has reached its driving automation function's limit; a teleoperator
may then continue the ride. An example of teleoperation following a phase of manual
driving by a conventional driver might be a teleoperated drive on the motorway that
ends shortly before the exit, when manual vehicle control is taken over.
In the case of intervening teleoperation, the driving automation function has failed or
the conventional driver is absent for some reason. The teleoperator is then required to
perform the remote driving. The UrbANT delivery robot provides an application
example; consider its control principle: if an unpredictable traffic situation occurs, a
teleoperator can enable the vehicle to continue driving or initiate a minimal risk
maneuver. For this, the teleoperator, who is assumed to be human in the UrbANT
project, does not need to permanently monitor the drive and the traffic environment.
Because humans are poor supervisors of automated processes, such monitoring should
generally be discouraged [16, 17]. Instead, the robot signals to the teleoperator that
intervening control is required. As soon as the teleoperator has gained sufficient
awareness of the traffic situation, s/he can perform the teleoperation.
Although it is conceivable that intervening teleoperation may be performed with or
without a direct view of the teleoperated vehicle, the aim in the UrbANT project is to
benefit from teleoperation by a remote control center (i.e., without direct view). The
teleoperator thus uses the video images of the camera sensor system. Of course, this
results in the well-known challenges of latency-afflicted teleoperation (the perception
latency interacts with the vehicle control latency) [7].
5 Applying current classification concepts to the Substitution Model for Teleoperation

To achieve the largest possible comprehensiveness, existing classification schemes for
manual and automated driving can be used for orientation, i.e., the Principles of
Operation framework and SAE Standard J3016. In the following, we apply these
concepts to our Substitution Model for Teleoperation and discuss this approach in
section 6.
The Principles of Operation framework [4] offers a concept to describe the
possibilities for influencing vehicle guidance [18]. Three Principles of Operation are
differentiated: information and warning, sustained automation, and temporary
intervening automation. For further information on the Principles of Operation, please
refer to [3, 4]. We apply the concept to teleoperation and differentiate between
teleoperation providing information and advice, teleoperation providing sustained
driving, and teleoperation providing temporary intervention.

5.1 Teleoperation providing information and advice


Teleoperation providing information and advice is related to the teleoperators' remote
assistance task introduced in section 4.1.
In contrast to other forms of teleoperation, remote assistance does not involve
executing the driving task remotely. Rather, the teleoperating entity provides information
or warnings that the primary controller (either the conventional driver or the driving
automation) might need in order to execute, or to improve the execution of, the driving
task. The primary controller can respond to the provided information or warning by
itself. For example, snow remaining on the ground might constitute a functional limit
of a truck's driving automation function because lane markings can no longer be
detected. As the driving automation detects light snowfall, it requests weather forecast
information. If the teleoperating entity provides the information that the snowfall will
remain light and stop soon, the driving automation might remain active. If the
teleoperating entity provides the information that the snowfall will increase and persist,
the driving automation function might issue a takeover request to the in-vehicle truck
driver. Importantly, the teleoperating entity only provides the information; it is the
primary controller (here the driving automation) that integrates the information and
takes the subsequent decisions and steps by itself.

5.2 Teleoperation providing sustained driving


Teleoperation providing sustained driving is related to the teleoperators' remote driving
task introduced in section 4.1.
Analogous to SAE Standard J3016's [1] differentiation between the degrees to which
an automated driving system supports or releases a human driver in or from performing
the driving task, different degrees of remote driving can be differentiated. By applying
SAE Standard J3016 to remote driving, remote driving can be described in more detail
according to the extent to which the driving task is executed by a teleoperator as a
remote driver instead of by a primary controller inside the vehicle (driving automation
function or conventional driver). From the substitution model's perspective, it makes
no difference which entity (the driving automation function or the conventional driver)
has been substituted. Table 1 summarizes the transfer of SAE Standard J3016 to
teleoperation.

Table 1. SAE Standard J3016 applied to teleoperation

Level 0
Driving automation: The driver continuously performs the entire driving task.
Teleoperation: The primary controller (automation or human driver) continuously performs the entire driving task.

Level 1
Driving automation: The driving automation function performs either the lateral or the longitudinal vehicle motion control. The remaining driving task, including supervising the driving automation function's performance, is performed by the driver.
Teleoperation: The teleoperating entity performs either the lateral or the longitudinal vehicle motion control. The remaining driving task, including supervising the teleoperating entity's performance, is performed by the primary controller (automation or in-vehicle human driver).

Level 2
Driving automation: The driving automation function performs both the lateral and the longitudinal vehicle motion control. The remaining driving task, including supervising the driving automation function's performance, is performed by the driver.
Teleoperation: The teleoperating entity performs both the lateral and the longitudinal vehicle motion control. The remaining driving task, including supervising the teleoperating entity's performance, is performed by the primary controller (automation or in-vehicle human driver).

Level 3
Driving automation: The driving automation function performs the entire driving task within a specific domain. The driver is expected to be available and to respond to a takeover request when a functional limit is approached.
Teleoperation: The teleoperating entity performs the entire driving task within a specific domain. Another controller (automation or in-vehicle human driver) is expected to be available and to respond to a takeover request when a teleoperation limit is approached.

Level 4
Driving automation: The driving automation function continuously performs the entire driving task throughout the entire ride within a specific domain. A driver is not expected to respond to takeover requests by the function.
Teleoperation: The teleoperating entity continuously performs the entire driving task throughout the entire ride within a specific domain. Another controller (automation or in-vehicle human driver) is not expected to respond to takeover requests by the function.

Level 5
Driving automation: The driving automation function continuously performs the entire driving task throughout the entire ride and in all domains. A driver is not expected to respond to takeover requests by the function.
Teleoperation: The teleoperating entity continuously performs the entire driving task throughout the entire ride and in all domains. Another controller (automation or in-vehicle human driver) is not expected to respond to takeover requests by the teleoperating entity.

Note. Level definitions are based on SAE J3016 [1]

5.3 Teleoperation providing temporary intervention


Temporary intervention is needed in case the primary controller drops out. Exemplary
cases could be a) the in-vehicle human driver drops out for medical reasons and needs
to be taken to hospital, or b) the inoperative driving automation function needs to be
fixed in a garage.
We apply Principle of Operation C’s levels of intervention [4] to teleoperated intervention. A first-order differentiation is made according to the reason for intervention: if the primary controller deviates from the expected behavior, the situation is categorized as an abstract hazard; if a collision is imminent, the situation is categorized as a concrete hazard. The abstract hazard receives the index I, the concrete hazard receives the index II. A second-order differentiation is made according to the type and duration of intervention. Within the original Principle of Operation C’s scope, Level α categorizes support of a driver-initiated action, Level β categorizes a short takeover of control by the automation, and Level γ categorizes a long takeover of control by the automation.
Table 2 applies the Principle of Operation C classification scheme for automated inter-
vention to teleoperated intervention.

Table 2. Intervention classification by Principle of Operation C applied to teleoperation

Abstract hazard

Level αI
Automation: The automation supports the driver by providing corrective intervention.
Teleoperation: The teleoperating entity supports the primary controller (in-vehicle human driver or automation) by providing corrective intervention.

Level βI
Automation: The driver is absent. The automation performs the driving task for a short period of time to achieve a minimal risk condition.
Teleoperation: The primary controller is absent. The teleoperating entity performs the driving task for a short period of time to achieve a minimal risk condition.

Level γI
Automation: The driver is absent. The automation performs the driving task for a longer period of time to resolve the abstract hazard by achieving a minimal risk condition or by executing other adequate maneuvers.
Teleoperation: The primary controller is absent. The teleoperating entity performs the driving task for a longer period of time to resolve the abstract hazard by achieving a minimal risk condition or by executing other adequate maneuvers.

Concrete hazard

Level αII
Automation: The automation supports the driver by reinforcing the driver’s initiated maneuver.
Teleoperation: The teleoperating entity supports the primary controller (in-vehicle human driver or automation) by reinforcing the primary controller’s initiated maneuver.

Level βII
Automation: The automation performs the driving task for a short period of time to resolve the concrete hazard. The driver needs to perform the driving task subsequently.
Teleoperation: The teleoperating entity performs the driving task for a short period of time to resolve the concrete hazard. The primary controller (in-vehicle human driver or automation) needs to perform the driving task subsequently.

Level γII
Automation: The automation performs the driving task for a longer period of time to resolve the concrete hazard. Level γI is initiated in case the driver does not perform the driving task subsequently.
Teleoperation: The teleoperating entity performs the driving task for a longer period of time to resolve the concrete hazard. Level γI is initiated in case the primary controller (in-vehicle human driver or automation) does not perform the driving task subsequently.

Note. Definitions are based on Principle of Operation C [4]

At this point, we again highlight the objective of our model: especially with regard to the application of teleoperation in case of a concrete hazard, we do not suggest implementing it in road traffic from today’s perspective.

6 Discussion

The Substitution Model for Teleoperation makes it possible to describe the interplay of teleoperation, automation and in-vehicle human driver in the context of vehicle guidance [18]. So far, teleoperation has rather been portrayed and regarded as a stand-alone feature that researchers agree is related to and needed for emerging automated driving [7, 8, 9, 10, 11]. Yet, the conceptual incorporation of teleoperation into the context of driving automation and manual driving was lacking.
Our Substitution Model for Teleoperation may serve as a common conceptual basis to clearly describe which entity is in control of vehicle guidance and to what extent. Even though we have emphasized theoretical comprehensiveness, we acknowledge that any model can only be an abstract simplification of the real world. As the core differentiation between the three modes of operation (direct manual interaction by a human, automation, teleoperating entity) is very general, it might be applicable to other teleoperation use cases as well. Yet, since we are not experts in those other application fields, we do not presume to evaluate where its application would be sensible.

6.1 Advantages and Disadvantages


Beyond the core differentiation between the in-vehicle human driver, driving automa-
tion function and teleoperating entity, we have applied the Principle of Operation
Framework [4] and the SAE Standard J3016 [1] to teleoperation to describe the extent
to which teleoperation substitutes the primary controller. We followed this course very strictly, which results in a very broad overview of vehicle guidance possibilities. This
approach is also accompanied by several advantages and disadvantages that need to be
addressed.
To begin with, the Substitution Model for Teleoperation offers a basic concept and
language to talk about teleoperation in the context of automated driving. The Substitu-
tion Model for Teleoperation makes it possible to integrate research and discussions
into the broader context. Our objective of approximating theoretical comprehensiveness, so as to remain technology-open and non-restrictive regarding future ideas, led us to apply the Principles of Operation and SAE Standard J3016 to teleoperation. This procedure re-
veals a vast field of vehicle guidance options. This includes both suitable use cases of
teleoperation in automated driving, and rather fanciful and possibly inappropriate use
cases that might be irrelevant for road traffic. For instance, from today’s perspective,
the idea of a teleoperated intervention in case of a concrete hazard seems highly inadequate for resolving that hazard. Usually, an on-board automation is superior to human intervention, if only because of human reaction time. The same applies
to the idea of Level 1 and Level 2 sustained remote driving, meaning an in-vehicle
primary controller needs to supervise the teleoperating entity’s vehicle guidance. Especially in case the primary controller is a driving automation function, this application
of teleoperation seems very fanciful and without added value to road traffic safety. This
leads us to the question of why we even include those use cases.
We strictly followed the course of applying existing classification schemes to reveal
the potential of teleoperation. However, we need to be careful to discriminate between
theoretical possibilities and practical application in road traffic. We did include all use
cases to make it possible to talk about what is suitable and appropriate and what is not.
This allows for informed decisions. On the one hand, we can deliberately decide to
exclude theoretically possible applications, rather than being unaware and ignorant to-
ward possibilities. On the other hand, we can decide which possibilities might be suit-
able for which use cases in particular. We favored theoretical comprehensiveness over practically realistic implementations, because implementation in practice is always limited to the technology available at the time, whereas the conceivable use cases may reach beyond these limits. To remain technology-open and non-restrictive, we therefore decided to be strict, to reveal the theoretical opportunities, and to leave the practical implementation open to today’s and future technology and emerging use cases.
To be clear, we highlight that we do not claim that all combinations of human manual
control, driving automation and teleoperation described in this article shall be realized.
We also do not claim that all combinations of human manual control, driving automa-
tion and teleoperation described in this article are similarly significant for practical im-
plementation. We see that not all of the revealed use cases are sensible. However, as
they are conceivable, we aimed to make them discussable and to provide the respective
language for both including and excluding conceivable teleoperation applications in
practice.
Furthermore, we are aware that by applying the existing concepts we transfer not only the strengths of approximating comprehensiveness but also the blind spots these concepts have. For example, cooperative vehicle guidance is not a focus of either SAE Standard J3016 [1] or the Principles of Operation Framework [4]. Yet we argue that having a primary controller who is responsible for the driving task is a convincing practical criterion, and it seemed most promising to follow schemes that are established and have proved to gain acceptance in the context of automated and manual driving.

References
1. SAE (2018): SAE International Standard J3016. (R) Taxonomy and Definitions for Terms
Related to Driving Automation Systems for On-Road Motor Vehicles.
2. Gasser, T. M. (2013): Legal consequences of an increase in vehicle automation. Consoli-
dated final report of the project group; Report on research project F 1100.5409013.01. Bre-
merhaven: Wirtschaftsverl. NW Verl. für neue Wiss (Berichte der Bundesanstalt für Stras-
senwesen F, Fahrzeugtechnik, 83). Available online at https://bast.opus.hbz-nrw.de/opus45-
bast/frontdoor/deliver/index/docId/689/file/Legal_consequences_of_an_increase_in_vehi-
cle_automation.pdf, checked on 8/6/2019.
3. Gasser, T. M., Frey, A., Seeck, A., Auerswald, R.: Comprehensive definitions for automated
driving and ADAS. (2017)
4. Shi, E., Gasser, T. M., Seeck, A., Auerswald, R.: The Principles of Operation Framework:
A Comprehensive Classification Concept for Automated Driving Functions. SAE Intl. J
CAV 3(1), 27-37 (2020). DOI: 10.4271/12-03-01-0003.
5. Jarosch, O., Gold, C., Naujoks, F., Wandtner, B., Marberger, C., Weidl, G., & Schrauf, M.:
The Impact of Non-Driving Related Tasks on Take-over Performance in Conditionally Au-
tomated Driving - A Review of the Empirical Evidence. In TUM Lehrstuhl für Fahrzeug-
technik mit TÜV SÜD Akademie (Chair), 9. Tagung Automatisiertes Fahren. Symposium
conducted at the meeting of Lehrstuhl für Fahrzeugtechnik mit TÜV SÜD Akademie,
(2019). Retrieved from https://mediatum.ub.tum.de/doc/1535156/1535156.pdf
6. Neumeier, S., Wintersberger, P., Frison, A.-K., Becher, A., Facchi, C., Riener, A.: Teleope-
ration. In: AutomotiveUI '19: Proceedings of the 11th International Conference on Automo-
tive User Interfaces and Interactive Vehicular Applications. pp. 186-197. ACM, New York,
NY, USA (2019).
7. Zhang, T.: Toward Automated Vehicle Teleoperation: Vision, Opportunities, and Chal-
lenges. In IEEE Internet Things J. 7 (12), pp. 11347–11354 (2020). DOI:
10.1109/JIOT.2020.3028766
8. Feiler, J., Hoffmann, S., Diermeyer, F.: Concept of a Control Center for an Automated Ve-
hicle Fleet. In : 2020 IEEE 23rd International Conference on Intelligent Transportation Sys-
tems (ITSC), pp. 1-6, IEEE 23rd International Conference on Intelligent Transportation Sys-
tems (ITSC), Rhodes, Greece (2020).
9. Georg, J.-M., Feiler, J., Diermeyer, F., Lienkamp, M.: Teleoperated Driving, a Key Tech-
nology for Automated Driving? Comparison of Actual Test Drives with a Head Mounted
Display and Conventional Monitors*. In: 21st International Conference on Intelligent Trans-
portation Systems (ITSC), pp.3403-3408, 21st International Conference on Intelligent
Transportation Systems (ITSC), Maui, HI, (2018).
10. Kettwich, C., Dreßler, A.: Requirements of Future Control Centers in Public Transport. In:
12th International Conference on Automotive User Interfaces and Interactive Vehicular Ap-
plications. AutomotiveUI '20: 12th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications. Virtual Event DC USA, New York, NY, USA:
ACM, pp. 69-73 (2020).
11. Graf, G., Hussmann, H.: User Requirements for Remote Teleoperation-based Interfaces. In:
12th International Conference on Automotive User Interfaces and Interactive Vehicular Ap-
plications. AutomotiveUI '20: 12th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications. Virtual Event DC USA, New York, NY, USA:
ACM, pp. 85-88 (2020).
12. Hokayem, P. F., Spong, M. W.: Bilateral teleoperation: An historical survey. Automatica 42
(12), 2035-2057 (2006). DOI: 10.1016/j.automatica.2006.06.027.
13. Giovanni Gerardo, M., Marcheschi, S., Fontana, M., Bergamasco, M.: Dynamics Modeling
of Human-Machine Control Interface for Underwater Teleoperation. Robotica 39 (4), 618-
632 (2021). DOI: 10.1017/S0263574720000624.
14. Mehrdad, S., Liu, F., Pham, M.T., Lelevé, A., Atashzar, S.F.: Review of Advanced Medical
Telerobots. Appl. Sci. 2021, 11, 209. https://doi.org/10.3390/app11010209
15. Cheng, R., Liu, Z., Ma, Z., Huang, P.: Approach and maneuver for failed spacecraft de-
tumbling via space teleoperation robot system. Acta Astronautica 181, 384-395 (2021). DOI:
10.1016/j.actaastro.2021.01.036.
16. Warm, J. S., Dember, W. N., & Hancock, P. A.: Vigilance and workload in automated sys-
tems. Automation and Human Performance: Theory and Applications, pp. 183-200, (1996).
17. Warm, J. S., Matthews, G., & Finomore Jr, V. S. (2018). Vigilance, workload, and stress. In
Performance Under Stress, pp. 131-158. CRC Press.
18. Donges, E.: Driver Behavior Models. In: Winner H., Hakuli S., Lotz F., and Singer C., edi-
tors. Handbook of Driver Assistance Systems: Basic Information, Components and Systems
for Active Safety and Comfort. pp. 19-33. Cham, Springer International Publishing (2016).
https://doi.org/10.1007/978-3-319-12352-3_2.
Physics-Based, Real-Time MIMO Radar Simulation for
Autonomous Driving

Jeffrey Decker1[0000-0002-4922-7922], Dr. Kmeid Saad2[0000-0001-6455-1643], Dan Rey3,
Stefano M. Canta4[0000-0002-4380-264X], Robert A. Kipp5[0000-0003-1183-1339]
1,3,4,5 Ansys Inc., USA
2 Ansys GmbH, Germany

Abstract. Advanced driver assistance systems (ADAS) and autonomous vehicles (AV) require massive amounts of sensor data to test and train driving algorithms and to design sensor hardware. In many practical cases, these data must be generated at or beyond real-time rates of up to 30 sensor frames per second (fps). General-purpose, high-fidelity radar response simulators can take minutes or hours to simulate a single coherent processing interval (CPI) comprised of hundreds of radar chirps over many MIMO channels. This paper presents an end-to-end GPU implementation of the shooting and bouncing rays (SBR) method combined with algorithmic accelerations to achieve over 160 fps for a single-channel radar operating in a realistically complex traffic environment and sustained real-time performance for five single-channel radars or one 20-channel radar. In addition, this paper illustrates, using open and standardized interfaces, the integration of this technology in closed loop simulation.

Keywords: Autonomous Vehicles, MIMO Radar, Closed Loop Simulation.

1 Introduction

Physics-based, real-time simulation is critical to developing advanced driver assistance systems (ADAS) and autonomous vehicle (AV) systems. AV developers have driven over 15 billion miles in simulation, with throughputs exceeding 100 driving years in simulation per day [1]. ADAS/AV requires real-time sensor simulation at up to 30 frames per second (fps) to train and evaluate driving algorithms and support hardware-in-the-loop (HiL) testing.
Millimeter-wave, range-Doppler imaging radars operating at 77 GHz are essential components in modern ADAS/AV sensor suites because they provide all-weather, nighttime, and long-range sensing. The technology behind these radars is rapidly advancing, particularly in the area of multiple-input multiple-output (MIMO) hardware and processing to reduce perception ambiguity and improve angular resolution. These radars emit hundreds of ramped-frequency chirps per sensing frame, while processing
chirp responses from the environment on many receiver channels. Each received chirp comprises hundreds of digitized frequency samples. When considering the need to a) shoot millions of rays to sufficiently interrogate the 3D environment over b) hundreds of chirps at c) hundreds of frequencies and filtered by d) tens of receiving antennas, general purpose electromagnetic scattering solvers may require hours to simulate each coherent processing interval (CPI) (i.e., range-Doppler image frame) on a modern workstation. When real-time throughput is paramount, less rigorous simulation approaches that sacrifice physics are often adopted as a practical compromise.
In this paper, we present an all-GPU implementation of the shooting and bouncing rays (SBR) method that is optimized for the automotive radar application. Known as Ansys’ Real-Time Radar (RTR), it achieves roughly a factor of 3000 speed-up over HFSS SBR+ [2], its more general-purpose all-CPU antecedent. RTR simulates a 1-km urban traffic scene with hundreds of objects at over 30 fps for five single-channel radars or one 20-channel radar. A single-channel radar is simulated at the faster-than-real-time rate of 160+ fps, and a 48-channel radar runs at 13 fps.
RTR incorporates arbitrary 3D scene and actor geometry, layered dielectric material treatments, 3D polarized antenna patterns, and pulse-Doppler and chirp-sequence FMCW waveform processing to generate raw I and I+Q A/D sample outputs for multi-channel radar in dynamically changing driving scenarios. Objects (e.g., vehicles, pedestrians, road, infrastructure, etc.) can be assigned arbitrary positions, orientations, and linear and angular velocities in a scene graph hierarchy through a light-weight API to characterize complex traffic scenarios with negligible simulation overhead. Optional post-processing for range-Doppler image formation is also part of the GPU implementation.
In Section 2, we present a brief review of the SBR methodology, and the multi-chirp algorithmic improvement known as Accelerated Doppler Processing (ADP). Section 3 presents key points of the GPU implementation using NVIDIA CUDA [3], followed by example results and performance (simulation speed) data in Section 4. Finally, Section 5 provides an overview of how to integrate the RTR with a 3rd party driving simulator using the Open Simulation Interface (OSI).

2 Methodology

2.1 Shooting and Bouncing Rays


Real-Time Radar implements the SBR methodology [4] – [6], which is well established as an efficient technique for solving large-scale and geometrically realistic scattering problems. For example, at the 4-mm wavelength of a 77-GHz radar, a scene scale of nominally 200 m corresponds to 50,000 wavelengths. This problem scale is completely impractical with “full-wave” Maxwell equations solvers, but it is readily managed by an asymptotic technique like SBR.

SBR uses geometrical optics (GO) to efficiently extend physical optics (PO) to multiple bounces. Millions of ray tubes are launched in all directions from the radar’s Tx antenna to sample the scene geometry. Escaping rays are ignored. For those rays hitting an object, each carries a GO field initialized at the first-bounce hit point according to the gain pattern of the Tx antenna, the radar’s transmission power, and the decay distance to the hit-point. The total GO field at the hit-point is determined from this incident ray field, the incidence angle off the surface perpendicular, and the Fresnel reflection coefficient of the surface coating treatment. The latter can be efficiently pre-computed based on the bulk electrical properties and thicknesses of the layers and the selected backing material (vacuum, perfect metal, or half-space dielectric). The PO approximation is then used to convert the total GO field into equivalent electric and magnetic currents. These currents are then radiated to each Rx antenna with the free-space Green’s function and spatially filtered according to its gain pattern to yield a received signal as a complex phasor.
Second-bounce rays are launched from the first-bounce hit points, weighted according to the incident GO field and the same Fresnel reflection coefficient. If one of these rays hits an object in the scene, the same process is repeated, leading to second-bounce currents, third-bounce rays, and so on. Multi-bounce signals coherently accumulate at the receivers, each with a phase delay based on the total path length from the Tx antenna through each bounce and finally to the Rx antenna.
SBR also handles penetrable coatings by setting a vacuum backing under the dielectric layers and developing a corresponding set of transmission coefficients. For example, this can be used to represent vehicle windshields. Rays hitting such surfaces will continue to generate reflection rays but will now also generate transmission rays, weighted this time by the transmission coefficients. Transmission rays are otherwise handled in the same manner as reflection rays.
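To make the per-ray bookkeeping concrete, the following minimal Python sketch (not the RTR implementation; the function names and the single-polarization, half-space simplification are our own assumptions) computes the complex phasor contributed by one multi-bounce ray path from its Fresnel reflection coefficients and total path length:

```python
import numpy as np

C0 = 299_792_458.0  # free-space speed of light in m/s

def fresnel_te(eps_r: complex, cos_theta_i: float) -> complex:
    """TE (perpendicular) Fresnel reflection coefficient for a smooth
    half-space dielectric, with air on the incidence side."""
    n2_cos_t = np.sqrt(eps_r - (1.0 - cos_theta_i**2))  # via Snell's law
    return (cos_theta_i - n2_cos_t) / (cos_theta_i + n2_cos_t)

def ray_phasor(freq_hz, path_len_m, cos_incidences, eps_r):
    """Complex contribution of one ray path: the product of per-bounce
    reflection coefficients times the phase delay over the total
    Tx -> bounce(s) -> Rx path length (antenna gain weighting omitted)."""
    k = 2.0 * np.pi * freq_hz / C0  # free-space wavenumber
    refl = np.prod([fresnel_te(eps_r, c) for c in cos_incidences])
    return refl * np.exp(-1j * k * path_len_m)

# Two-bounce ray at 76.5 GHz: 42 m total path over a concrete-like surface
sig = ray_phasor(76.5e9, 42.0, [0.7, 0.9], eps_r=5.0 - 0.5j)
print(abs(sig), np.angle(sig))
```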

2.2 Accelerated Doppler Processing


To measure Doppler velocity, automotive radars transmit, receive, and process hundreds of chirps over each CPI. Hundreds of samples from each chirp are FFT-processed into complex response amplitude vs. range. Then, by FFT processing each range response from multiple chirps across the CPI, the radar forms a range-Doppler image.
Simulating hundreds of chirps in separate million-ray operations, as described in Section 2.1, is not practical while maintaining 30 fps, even in an all-GPU implementation. To address this, RTR implements Accelerated Doppler Processing (ADP). ADP computes all chirp responses within a CPI from a single ray trace of the scene. Using knowledge of the object dynamics within the scene, ADP provides massive acceleration of the radar chirp processing that also yields smoother and more accurate results.
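The range-Doppler processing chain that ADP feeds can be illustrated in a few lines of numpy (a generic sketch of chirp-sequence processing, not RTR's GPU code; the Hann windows mirror the sidelobe control mentioned in Section 4):

```python
import numpy as np

def range_doppler_image(iq: np.ndarray) -> np.ndarray:
    """Form a range-Doppler map from one CPI of chirp-sequence FMCW data.

    iq: complex A/D samples, shape (n_chirps, n_samples_per_chirp).
    Returns a magnitude image of the same shape.
    """
    n_chirps, n_samples = iq.shape
    win_r = np.hanning(n_samples)                  # range sidelobe control
    win_d = np.hanning(n_chirps)                   # Doppler sidelobe control
    rng = np.fft.fft(iq * win_r, axis=1)           # fast-time FFT -> range
    rd = np.fft.fft(rng * win_d[:, None], axis=0)  # slow-time FFT -> Doppler
    return np.abs(np.fft.fftshift(rd, axes=0))     # center zero velocity

# e.g. 200 chirps x 267 samples, as in the configuration quoted in Section 4
img = range_doppler_image(np.zeros((200, 267), dtype=complex))
print(img.shape)  # (200, 267)
```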

3 Realizing Real-Time Performance

RTR is built on NVIDIA’s GPU programming libraries. Computations are performed in CUDA kernels implementing the steps in Table 1 [3]. Ray tracing is performed by OptiX, and cuFFT executes FFTs for signal processing [7], [8]. Software design choices throughout the implementation focus on performance to achieve or exceed the real-time goal of 30 fps.

3.1 Perform All Computation on GPU


All computations are executed on the GPU after a minimal CPU setup phase. The GPU pipeline operates most efficiently when execution proceeds from one CUDA kernel to the next without waiting for results computed on the CPU. Even small computations that do not benefit from GPU execution are done on-GPU to avoid costly GPU-CPU data transfers.
Data structures on the GPU are also matched so the output structures from one kernel are used directly by the next kernel without reorganization. This data-centric design packs data into efficient GPU structures mapped to the computational algorithms for maximum performance.

Table 1. SBR implementation on GPU showing computational steps, looping (indentation), CPU activity, data transfers, and NVIDIA libraries used.

Step (indentation = looping)                  CPU  Data I/O  OptiX  CUDA  cuFFT
Solver Initialization                          ×   To GPU      ×      ×     ×
Loop over Time Steps                           ×
  Update Object Positions                      ×   To GPU      ×      ×
  Loop over Radars
    Multi-Bounce Loop
      Shoot Rays from Tx                                       ×      ×
      Process Hits & Create Next Rays                          ×
      Compute Incident Field (1st Bounce)                             ×
      Compute Reflected & Transmitted Fields                          ×
      Radiate Currents to Rx                                          ×
    Compute Range-Doppler                                             ×     ×
    Transfer Results to CPU                    ×   To CPU             ×

3.2 Optimize Data Transfers between CPU and GPU


Only essential data is transferred between CPU and GPU. As shown in Table 1, RTR only transfers data twice per frame during simulations after the initial solver setup. One transfer at the start of simulation updates object positions, orientations, and velocities. State data are packed into a single structure and transferred in a single CPU-to-GPU operation, saving time compared to transferring the same data in multiple operations. A second transfer moves radar results to the CPU at the end.
Data transfers use pinned memory to optimize I/O between GPU and CPU. Pinned memory is page-locked memory that transfers data over 2x faster than un-pinned buffers.

3.3 Launch Independent GPU Kernels in Parallel


RTR uses CUDA streams to launch kernels in parallel that do not depend on each other. The GPU scheduler overlaps kernels from different streams to avoid idle GPU execution units and optimize throughput. CUDA streams also overlap data transfers with computation to hide the unavoidable latency when retrieving data from the GPU.
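The same two ideas, pinned host buffers and independent streams, can be sketched in Python with CuPy (a simplified stand-in for the CUDA C++ implementation; the buffer size and the FFT stand-in workload are arbitrary assumptions):

```python
import numpy as np
import cupy as cp

def pinned_like(a: np.ndarray) -> np.ndarray:
    """Host copy of `a` backed by page-locked (pinned) memory, so that
    host<->device copies run faster and can be asynchronous."""
    mem = cp.cuda.alloc_pinned_memory(a.nbytes)
    view = np.frombuffer(mem, a.dtype, a.size).reshape(a.shape)
    view[...] = a
    return view

# One packed state structure per frame: a single H2D transfer, not many.
frame_state = pinned_like(np.random.rand(1 << 20).astype(np.float32))
results = pinned_like(np.zeros(1 << 20, dtype=np.float32))

stream_io = cp.cuda.Stream(non_blocking=True)
stream_compute = cp.cuda.Stream(non_blocking=True)

d_state = cp.empty(frame_state.shape, dtype=frame_state.dtype)
with stream_io:
    d_state.set(frame_state)       # async upload on the I/O stream
stream_io.synchronize()            # compute must not start on stale data
with stream_compute:
    d_result = cp.abs(cp.fft.fft(d_state)).astype(cp.float32)
    d_result.get(out=results)      # async download; work queued on the
                                   # other stream can overlap with this
stream_compute.synchronize()
print(results[:4])
```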

4 Results and Performance

In the following example, the vehicle carrying the radar, known as “Ego,” travels a 1-km road with 14 buildings, 70 vehicles, and over 375 streetlights and traffic signals, as shown in Fig. 1. Roads include important radar-reflecting features like curbs and medians. The entire scene is modeled with 5.3 million triangular facets representing 221,000 square meters total surface area, or over 14.5 billion square wavelengths at 76.5 GHz.

Fig. 1. (top) Aerial view of benchmark scene with radar shown as a rainbow Tx antenna pat-
tern. (bottom) Radar point-of-view perspective.

Table 2. RTR performance for single- and multi-channel radar configurations.

Radar Type      Radars  Tx  Rx  Output                 Time (ms)  Rate (fps)
Single-Channel    1      1   1  512x512 Range-Doppler     6.2       161.6
Single-Channel    5      1   1  512x512 Range-Doppler    29.4        34.0
Multi-Channel     1      1  20  Chirp Response           33.4        30.0
Multi-Channel     1      1  48  Chirp Response           76.7        13.0

RTR simulates this scenario at real time for one 20-channel radar or five single-channel radars mounted around the Ego vehicle (see arrangement in Fig. 3). The I+Q channel chirp-sequence FMCW waveform has the following properties: center frequency = 76.5 GHz, bandwidth = 200 MHz, A/D sampling rate = 15 MHz, 267 A/D samples/chirp, CPI duration = 4 ms, and 200 chirps/CPI. This yields resolutions of 0.75 m in range and 0.49 m/s in velocity before application of a Hann window for sidelobe control. Additional configurations are shown in Table 2. RTR outputs either raw chirp responses or range-Doppler images, as shown in Fig. 2. Timing results are measured for an NVIDIA Quadro RTX 6000 workstation GPU. Times are averaged over 3000 frames from five separate runs of a 20-second, 600-time-step scenario. On average, for this complex city scene and the quoted simulation times, RTR shoots 1.4 million rays and processes over 880,000 hits per frame.
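The quoted resolutions follow directly from the standard chirp-sequence FMCW relations; as a check with the stated parameters (assuming the usual definitions, range resolution from the bandwidth B and velocity resolution from the wavelength and CPI duration):

$$\Delta R = \frac{c}{2B} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \cdot 200\ \mathrm{MHz}} = 0.75\ \mathrm{m}, \qquad \Delta v = \frac{\lambda}{2\,T_{\mathrm{CPI}}} = \frac{(3\times 10^{8}/76.5\times 10^{9})\ \mathrm{m}}{2 \cdot 4\ \mathrm{ms}} \approx 0.49\ \mathrm{m/s}$$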

Fig. 2. (left) Raw chirp responses before image formation and (right) 512x512 pixel
range-Doppler image.

Fig. 3. Five radars mounted around Ego with scene views and range-Doppler images. Cam-
era images do not show the full radar field of view.

5 RTR Integration with Driving Simulator


RTR’s open interface enables its integration into modular architectures – a way to manage the complexity of a problem by breaking it down into smaller manageable modules – and other simulation frameworks. In the case of Autonomous Driving (AD) and/or Advanced Driving Assistance Systems (ADAS), Fig. 4 represents an example of how cross-functional modules can be interconnected. Physics-based RTR can be integrated into any simulation environment that calls functions defined in RTR’s programming interface.

[Figure: environment simulation (traffic, driver, vehicle dynamics, …) coupled in a loop with physics-based RTR, post processing, and a custom use-case module]
Fig. 4. Example of a closed loop simulation framework

Adapting RTR’s open interface to OSI [9] makes its integration with 3rd party driving simulators seamless and effortless.

In the following section, we will first introduce the general concept of OSI and proceed to demonstrate RTR’s integration with a 3rd party driving simulator for closed loop simulation.

5.1 Open Simulation Interface


Focusing on environment perception and automated driving functions, OSI is an interface specification for models and components of distributed simulation. It defines a generic interface that ensures modularity, integrability and interchangeability of the simulation framework’s individual components. In addition, OSI was developed to address and align with the emerging standard for communication interfaces of real sensors, ISO 23150 [10]. This will eventually ensure a better correlation between communication interfaces used in both virtual and real worlds.

[Figure: a 3rd party driving simulator sends OSI:SensorView messages to the physics-based RTR simulation]
Fig. 5. Driving Simulator and RTR connection via OSI.

Corresponding to OSI’s message description, the OSI:SensorView message represented in Fig. 5 contains the ground truth data that can be generated by any 3rd party driving simulator. In this message, information about the states of dynamic and static actors is included. A detailed description of OSI’s messages can be accessed from the official GitHub repository [11].
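For illustration, a minimal OSI:SensorView message can be assembled with the osi3 Python bindings (a sketch assuming the OSI 3.x field names; the actor data here are made up):

```python
# pip install open-simulation-interface  (provides the osi3 protobuf bindings)
from osi3.osi_sensorview_pb2 import SensorView

def build_sensor_view(actors):
    """Pack ground-truth actor states into an OSI SensorView message.

    actors: iterable of (id, (x, y, z) position, (vx, vy, vz) velocity).
    """
    sv = SensorView()
    gt = sv.global_ground_truth
    for actor_id, pos, vel in actors:
        obj = gt.moving_object.add()       # one entry per dynamic actor
        obj.id.value = actor_id
        obj.base.position.x, obj.base.position.y, obj.base.position.z = pos
        obj.base.velocity.x, obj.base.velocity.y, obj.base.velocity.z = vel
    return sv.SerializeToString()          # wire format sent to the sensor model

payload = build_sensor_view([(1, (10.0, 2.0, 0.0), (2.5, 0.0, 0.0))])
print(len(payload), "bytes")
```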

5.2 Closed Loop Simulation using RTR


The term closed loop simulation in the context of autonomous vehicle development usually refers to the perception of the surrounding environment via a sensing device, post processing the sensor data into a semantic representation, decision making, and then actuating vehicle controls accordingly.
The following example, represented in Fig. 6, illustrates RTR integrated into a closed loop simulation framework. In this use case, CARLA [12] – an open-source simulator for autonomous driving research – is used as a 3rd party driving simulator. The communication between CARLA and RTR is performed using the ZMQ [13] messaging library, where OSI messages are transferred bi-directionally at a frame rate of 40 fps, thus ensuring real-time performance.
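A minimal Python sketch of such a bridge (the concrete socket pattern used in the project is not specified; a lock-step REQ/REP pair is assumed here, with the payload being a serialized OSI message):

```python
import zmq

ctx = zmq.Context()

# Driving-simulator side: send one serialized OSI:SensorView per frame
# and block on the serialized OSI:SensorData reply. At 40 fps, the full
# round trip must complete within the 25 ms frame budget.
req = ctx.socket(zmq.REQ)
req.connect("tcp://localhost:5555")

def exchange_frame(sensor_view_bytes: bytes) -> bytes:
    req.send(sensor_view_bytes)
    return req.recv()  # OSI:SensorData with the detected hit points

# Sensor-model side (e.g. the RTR process) would hold the matching pair:
# rep = ctx.socket(zmq.REP); rep.bind("tcp://*:5555")
```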

[Figure: scalable CARLA client-server coupled with Ansys RTR via ZMQ — the world state and actors’ update flow to RTR as OSI:SV (Sensor View); detected hit points flow back as OSI:SD (Sensor Data) for actors’ control and sensor rendering]
Fig. 6. Closed loop simulation using CARLA and Ansys RTR.

As represented in Fig. 6, CARLA’s ground truth data are converted into OSI:SV format and directly communicated to RTR. In addition to position, orientation and velocity, the sensor view message also contains information about the 3D model reference that indicates which 3D geometry represents a specific actor. RTR then fetches the corresponding 3D models and their corresponding dielectric material properties.
In Fig. 7, we can see how a vehicle is disassembled into its subcomponents. Each subcomponent is a separate 3D model. Dielectric material properties can be applied to an entire component or portions of a component. Component models also allow us to articulate parts of the vehicle independently, by modifying position and orientation, to model sensor image features like the micro-Doppler effect of rotating wheels (in some cases also referred to as Doppler smear of rotating wheels).

Fig. 7. Vehicle subcomponents featuring adequate assignment of material properties (vehicle body, windshield and car windows, head and tail lamps, wheels, number plates).



For the use case presented in Fig. 6, the main input to the object detection algorithm is the range-Doppler data, which are directly provided by RTR at a frame rate of 40 fps. Several post processing algorithms execute on the range-Doppler data to create the target hit points output by the object detection algorithm. The hit points, converted into OSI:SD format, are then communicated to CARLA, where the server renders them on the screen. Similarly, these hit points could later be used by planning and actuating algorithms to control the ego vehicle’s behavior.

Fig. 8 shows a graphical representation of the scenario at one instant of time. In this use case, the ego vehicle is stationary and the leading vehicle (red) moves at a constant speed of 2.5 m/s. The radar’s mounting position is represented by the orange circle. On the top left part of the image we can see the range-Doppler image produced by RTR. The vertical axis shows range (distance), and the horizontal axis shows relative velocity, centered at zero. The black squares on the leading vehicle render the hit points provided by the object detection algorithm.


Fig. 8. RTR and object detection returns in a closed loop simulation.

6 Conclusion

The SBR methodology is already well established in providing radar response simulation for large-scale and realistic scene geometry. In this paper, we have demonstrated that by combining algorithmic acceleration with an all-GPU implementation of SBR, it is feasible to generate multi-channel range-Doppler radar responses in real time or faster. Finally, using RTR’s open interface, we also demonstrated its integration in a real-time closed loop simulation.

7 Acknowledgment

The authors thank NVIDIA and Baskar Rajagopalan of NVIDIA for providing GPUs
to support RTR.

References

1. Waymo (2020) “Off road, but not offline: How simulation helps advance our Waymo Driver”. [Online]. Available: https://blog.waymo.com/2020/04/off-road-but-not-offline--simulation27.html
2. Ansys (2021) “Ansys HFSS: High Frequency Electromagnetic Field Simulation Software”. [Online]. Available: https://www.ansys.com/products/electronics/ansys-hfss
3. NVIDIA (2020) “CUDA C++ Programming Guide”. [Online]. Available: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html
4. H. Ling, R.-C. Chou and S.-W. Lee, "Shooting and bouncing rays: calculating the RCS of an arbitrarily shaped cavity," IEEE Trans. Antennas Propagat., vol. 37, no. 2, pp. 194-205, Feb. 1989, doi: 10.1109/8.18706.
5. S.-K. Jeng, “Near-field scattering by physical theory of diffraction and shooting and bouncing rays,” IEEE Trans. Antennas Propagat., vol. 46, no. 4, pp. 551-558, Apr. 1998.
6. U. Chipengo, A. Sligar and S. Carpenter, "High Fidelity Physics Simulation of 128 Channel MIMO Sensor for 77GHz Automotive Radar," in IEEE Access, vol. 8, pp. 160643-160652, 2020, doi: 10.1109/ACCESS.2020.3021362.
7. S. Parker, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, K. Morley, A. Robison, and M. Stich, "OptiX™: A General Purpose Ray Tracing Engine," ACM Transactions on Graphics, vol. 29, no. 4, July 2010, doi: 10.1145/1778765.1778803.
8. NVIDIA (2020) “cuFFT Library User’s Guide”. [Online]. Available: https://docs.nvidia.com/cuda/cufft/index.html
9. Hanke, T., Hirsenkorn, N., van-Driesten, C., Garcia-Ramos, P., Schiementz, M., Schneider, S. & Biebl, E. (2017, February 03). A generic interface for the environment perception of automated driving functions in virtual scenarios. Retrieved January 25, 2020, from https://www.hot.ei.tum.de/forschung/automotive-veroeffentlichungen/
10. ISO 23150, https://www.iso.org/standard/74741.html.
11. OpenSimulationInterface, https://github.com/OpenSimulationInterface/open-simulation-interface.
12. Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V.: CARLA: An Open Urban Driving Simulator. PMLR 78:1-16.
13. ZeroMQ, https://zeromq.org/
Validation concept for scenario-based connected test
benches of a highly automated vehicle

Moritz Wäschle1, Kai Wolter1, Chenlei Han2, Urs Pecha3,


Katharina Bause1, Matthias Behrendt1
1 Institute of Product Engineering, Karlsruhe Institute of Technology, Germany
2 Institute for Vehicle System Technology, Karlsruhe Institute of Technology, Germany
3 Institute of Electrical Energy Conversion, University of Stuttgart, Germany

Abstract. Highly automated functions and components for vehicle guidance lead
to increasing demands, while high reliability and safety are still expected. The part-
ners within the SmartLoad project address these challenges with new validation
concepts such as the cross-company or cross-institute connection of test benches. Based
on the IPEK-X-in-the-Loop approach applying mixed physical-virtual models,
use case specific validation environments are modeled, compared and built. The
authors realize the scenario ”cornering of a people mover” with a location-dis-
tributed connection of a simulation environment, a gearbox test bench and an
engine simulation in a closed-loop setup. All partners involved in the network
can implement independent fallback mechanisms and substitute models.
The connection is supported by modeling interrelated elements like scenarios in
the Systems Modeling Language. Furthermore, the technical implementation is
supported using substitute models, a toolchain for deriving concrete scenarios
and the use of an adapted Distributed Co-Simulation Protocol. As a result, the dis-
tributed tests show the interdependencies of different components on distributed
test benches, connected to a total vehicle simulation.

Keywords: distributed validation, connected test-bench development, subsystem validation, IPEK-XiL approach

1 Introduction

Highly automated functions and components for vehicle guidance lead to increasing
demands, while high reliability and safety are still expected. This increase occurs for inter-
acting individual components and for the overall vehicle system interacting with the envi-
ronment. In order to meet these increased demands, an intelligent use of redundant
components or functions can be employed. Therefore, within the project “SmartLoad”
(“New methods for reliability enhancement of highly automated electric vehicles”),
the authors exemplarily investigate the interactions of the redundant steering function and
components in the validation of highly automated vehicles.


In this context, new validation concepts such as the cross-company connection of


test benches or the communication between teleoperator and vehicle via mobile data
links are being investigated. Based on the mixed physical-virtual IPEK-X-in-the-Loop
approach [1], use case and scenario specific validation environments are modeled, com-
pared and built. For example, in this publication the authors realize the scenario: "cor-
nering of a people mover" with a location-distributed connection of a simulation envi-
ronment, a gearbox test bench and an engine simulation in a closed-loop setup. The
planetary gearbox, which is identical in construction to the one in the real vehicle, is physically
mounted on a test bench and additionally equipped with acceleration sensors.

In this paper, a validation concept for building and testing connected validation en-
vironments is presented. The concept includes a model-based approach for describing
and creating such a validation environment as well as a concept based on the Distributed
Co-Simulation Protocol standard developed by the Modelica Association (e.g. [2, 3]). In addition, activities are
derived from a process model for validating connected validation environments. The
influence of network-relevant parameters is investigated and integrated into approaches
to compensate for related problems (e.g. [4,5,6]). For example, measurements are per-
formed to identify fault conditions at subsystem and overall system level (cf. [7]). The
concept includes a (partially) automated coupling of Model-Based Systems Engineer-
ing approaches and necessary partial results of the validation.

1.1 Objective
The objective of the paper is to address the challenges of validation in highly automated
driving with distributed validation environments. The environments depend on scenarios and further elements like requirements, which need to be considered in the validation.

1.2 Questions
The following questions are dealt with:
1. ”How can a validation concept with a closed-loop distribution of component test
bench and complete vehicle model support the validation of highly automated vehi-
cles? How to handle issues in the connection of test benches?”
2. ”How can Model-Based Systems Engineering support the networked validation en-
vironments using the example of a cornering?”

2 Use cases and scenarios

2.1 Description of use cases


In the context of highly automated driving, different systems interact and can achieve
redundant functions. In this paper, the focus lies on the use case of a people mover for
a cornering. This representative use case with real route data allows evaluating and
testing inter-dependencies between different subsystems of the vehicle like the gearbox
and the engine.
The use case is modeled in the Systems Modeling Language (SysML) in the
tool Cameo Systems Modeler. The modeling enables the traceability from stakeholder
needs to maneuvers and validation elements like test parameters. The stakeholder needs
of a quick, secure transportation and a comfortable ride are considered in the use case.
In addition, standards and norms like the European New Car Assessment Programme
are linked to the use case in focus. Hence, a change in standards or new ones may result
in changes in the use case and further elements. Following the classification of scenarios into logical and concrete ones, each use case is related to concrete (product-describing) use cases, just as the general use case of a trailer transport is related to the automatic docking process between trailer and vehicle. These use cases may contain one or multiple
logical and concrete scenarios.

2.2 Modeling in the Systems Modeling Language


According to the DIN SAE SPEC 91381, a scenario is a description of a temporal traffic
constellation [8]. According to the Project “PEGASUS” and further research, a scenario
covers scenes over a time span [9,10]. These scenes can be regarded as snapshots of the
environment [9,10]. The scenario can be derived from a scenario catalog.
A concrete implementation in SysML is done by linking stakeholder needs, use cases
and scenarios. This allows dependency matrices to show the traceability of these and
further elements. Based on the Project “PEGASUS” the scenario of the cornering is
described in two dependent scenarios [11]. Logical scenarios represent a human reada-
ble description of the scenario. The concrete scenario describes the entities and their
relationships with the help of unique parameters in the state space [12].

An example in Fig. 1 shows an Adaptive Cruise Control requirement of the standard


ISO 15622, which influences the scenarios, use cases and parameters. Further parameter diagrams, activity diagrams and instances support the traceability of the necessary data for the concrete scenario and the overall validation. For instance, MBSE can support the suitability
assessment of a validation environment and certain configurations of it. An example
would be the test track in chapter 4, which does not satisfy the minimum curve radius
requirement for the ACC curve capability test.

Fig. 1. Exemplary extract of the Relation Map with a trace from standard to parameter

To sum up, Model-Based Systems Engineering supports the traceability of elements in


the context of distributed validation. This helps to address changes, specify concrete
scenarios and model the validation environment to generate distributed test cases.

3 Technical implementation

3.1 Toolchain
To derive concrete (testing) scenarios the authors develop the so-called SmartLoad toolchain. The use case (see number one in Fig. 2) and the scenario catalog can be used to select already existing logical scenarios (see number two in Fig. 2) according to the PGE - product generation engineering model. PGE describes the development of a new product generation based on a reference system [13]. New generations are derived from reference elements with the three variation types carryover, attribute and principle variation [14]. Hence, the scenario catalog with its reference elements like parameters and scenarios is part of the reference system. The selection is based on prioritization and objective setting, e.g. using methods in the area of functional safety like FMEA. The logical scenario with its human-comprehensible parametrization of boundary conditions contains i.a. the determination of the stochastic distribution. Step five shown in Fig. 2 generates the concrete scenario with an x-fold automatic generation of a logical scenario; a minimal sketch of this sampling step follows below.

Fig. 2. Toolchain concept in the Project “SmartLoad”
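The x-fold derivation can be sketched as follows (parameter names and distributions are invented for illustration; the project's actual scenario catalog is far richer):

```python
import random

# A logical scenario fixes parameter ranges/distributions; a concrete
# scenario pins every parameter to a single value in the state space.
LOGICAL_CORNERING = {
    "curve_radius_m":  lambda: random.uniform(8.0, 25.0),
    "entry_speed_mps": lambda: random.gauss(4.0, 0.8),
    "road_friction":   lambda: random.uniform(0.4, 1.0),
}

def derive_concrete_scenarios(logical: dict, n: int) -> list:
    """x-fold automatic generation: draw n concrete scenarios from the
    logical scenario's stochastic parameter distributions."""
    return [{name: draw() for name, draw in logical.items()} for _ in range(n)]

for scenario in derive_concrete_scenarios(LOGICAL_CORNERING, 3):
    print(scenario)
```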

3.2 State machine and Communication protocol – DCPLite


For the connection of simulations, the Distributed Co-Simulation Protocol (DCP) was
published in 2019 [15]. This protocol is intended to connect different types of simula-
tions distributed across locations and thus ensure efficient use of resources and exper-
tise. In the broadest sense, an interconnection of test benches is also a type of distributed
simulation. Therefore, the described connectivity is based on the DCP. The DCP is very
detailed and contains many states and queries that are unnecessary for the linking of
test benches. Therefore, the protocol was adapted to the use of test benches. The DCP
works according to a master-slave principle. Here, a central node handles communica-
tion with the various participants and coordinates all interactions. This master-slave
principle was adopted for the connected test benches as well. The basic concept of the
finite state machine of the DCP was adopted and could be simplified significantly. The
result is a simplified finite state machine, which is shown in Fig. 3. This contains all
so-called Super-States from the DCP, which are necessary for real-time networking.
The branch of the non-real-time domain from the DCP, however, was removed. This is
not relevant for the test benches, since the test benches must always act synchronously.
Furthermore, some intermediate states were neglected.

Fig. 3. State machine derived for the Project “SmartLoad” (left) and examples for the behavior
for different commands (right)

The commands to change the state of the finite state machine are also taken from the
DCP. This is intended to allow eventual integration of the full standard without having
to completely renew all existing implementations.
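The flavor of such a simplified master-slave state machine can be sketched as follows (the state and command names are illustrative assumptions, not the exact set shown in Fig. 3):

```python
from enum import Enum, auto

class State(Enum):       # assumed super-states; the project's exact
    STOPPED = auto()     # simplified state set is shown in Fig. 3
    CONFIGURED = auto()
    RUNNING = auto()
    ERROR = auto()

# (command, current state) -> next state; anything else is rejected
TRANSITIONS = {
    ("configure", State.STOPPED):    State.CONFIGURED,
    ("run",       State.CONFIGURED): State.RUNNING,
    ("stop",      State.RUNNING):    State.CONFIGURED,
    ("abort",     State.RUNNING):    State.ERROR,
}

class SlaveStateMachine:
    def __init__(self):
        self.state = State.STOPPED

    def handle(self, command: str) -> bool:
        """Apply a master command; return False (-> 'invalid request'
        flag in the reply, cf. Fig. 4) if no transition is defined."""
        nxt = TRANSITIONS.get((command, self.state))
        if nxt is None:
            return False
        self.state = nxt
        return True

sm = SlaveStateMachine()
assert sm.handle("configure") and sm.handle("run")
```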
Since the speed of data transmission is much more important than data integrity when linking test benches, especially in a closed control loop, UDP messages were used for the linking. These have lower reliability, but are significantly faster compared to TCP/IP messages. In order to ensure a certain data plausibility, each test bench involved is responsible for a plausibility check. For the communication between the master and the slave, an int16-encoded value is used. This transmits the various requests from the master to the slave, which can be seen in Fig. 4 on the right side in the first eight bits. Subsequently, another eight bits are reserved to implement various functions. Bits eight and nine allow the slave to inform the master if a request is invalid or if a request cannot be processed at the moment because the slave is busy. The last bit of the int16 value is used as a watchdog toggle bit. Both sides send back the received bit inverted. For example, if the master receives a zero from the slave, it returns a one. The slave in turn sends back the bit received from the master in inverted form. Due to the signal propagation time, this results in a square wave signal on both sides. Both sides can use this square wave to check if the communication is still active. If a change of the value takes too long, the slave can e.g. shut down and come to a controlled stop and the master can terminate the whole network. The rest of the UDP message consists of the values to be transmitted, encoded as float. A representation of two exemplified messages can be seen in Fig. 4.

Fig. 4. Exemplified UDP message between the master and one slave, showing the commands and answers in bits 0 to 7 and further information in bits 8 to 15, followed by several float values
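The described message layout can be illustrated with Python's struct module (a sketch; the exact bit assignment of the status flags is an assumption, and an unsigned 16-bit field is used so the toggle bit can be set without sign handling):

```python
import struct

WATCHDOG_BIT = 15             # last bit of the 16-bit header: watchdog toggle
INVALID_BIT, BUSY_BIT = 8, 9  # slave -> master status flags (assumed order)

def pack_message(command: int, watchdog: int, *values: float) -> bytes:
    """16-bit header (command in bits 0-7, flags in bits 8-14, watchdog
    toggle in bit 15) followed by the payload encoded as float32."""
    header = (command & 0xFF) | ((watchdog & 1) << WATCHDOG_BIT)
    return struct.pack(f"<H{len(values)}f", header, *values)

def unpack_message(data: bytes):
    header, = struct.unpack_from("<H", data)
    n_floats = (len(data) - 2) // 4
    values = struct.unpack_from(f"<{n_floats}f", data, offset=2)
    return header & 0xFF, (header >> WATCHDOG_BIT) & 1, values

# Round trip: the answering side echoes the command and inverts the
# watchdog bit, producing the square-wave liveness signal described above.
cmd, wd, vals = unpack_message(pack_message(0x03, 0, 12.5, -0.7))
reply = pack_message(cmd, wd ^ 1)
```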

3.3 Overview of the signal flows


Fig. 5 illustrates the test bench architecture used in the Project “SmartLoad”. The figure visualizes signal flows and exchange parameters of different models and partners at different locations. The model located at the FAST – Institute for Vehicle System Technology also represents the master of the network and has local access to corresponding substitute models of all other models. This shows that many interfaces are highly complex and lead to new challenges in networking. With the help of the use case of a cornering, an initial verification of the validation concept is carried out.

Fig. 5. Test bench architecture in the Project “SmartLoad”

3.4 Description of the connected subsystems


This chapter introduces the connected test benches. The test benches are modeled in
SysML and linked to other modeling elements like requirements and test results. The
measured results are linked to the requirements in the entire model by updating model
instances.

Vehicle model at FAST – Institute for Vehicle System Technology


Since the driving resistances on the drive motors and the aligning torque on the steering motor are calculated by the vehicle model, a precise vehicle model can improve the realism of the test concept. This vehicle model is built up within the vehicle dynamics software IPG CarMaker. In contrast to a normal vehicle model, it has no model for the drivetrain, because the drivetrain is set up as a physical component. It is nevertheless necessary to equip the vehicle model with the chassis kinematics, so that the aligning torque can be calculated realistically.
The test vehicle (Fig. 6) used in this project is a 1:1.5 scale vehicle. It is driven by two electric motors at the front axle. It has a double wishbone suspension concept. The subsystem of the powertrain as well as the steering system feature redundant functionality.
The vehicle model has been validated by real driving tests. The validation consists of two parts. To validate the longitudinal dynamics, a roll-off test was carried out. The lateral dynamics were validated by cornering driving maneuvers.

Fig. 6. Test vehicle used in the Project “SmartLoad” with setup of the mechatronic system for the longitudinal and transverse guidance of the vehicle (left)

Test bench at IPEK – Institute of Product Engineering


The gearbox test bench set up at KIT consists of two identical motors connected to the
test gearbox in a back-to-back setup.

Fig. 7. Test bench used for the gearbox located at IPEK

In order to compensate for the increase in torque due to the test specimen gearbox, an additional test bench gearbox is used. This reduces the torque downstream of the SuI (System under Investigation) to match the motor power. In addition, torque sensors are located both before and after the test specimen. This allows the torque applied to the test specimen to be used directly for networking. For networking, the output motor (Fig. 7 on the right) is operated in speed mode and the drive motor (Fig. 7 on the left) in torque mode.
A real-time Linux system with an EtherCAT fieldbus is used to control the motors. The real-time system is directly integrated into the connected validation environments as a slave. An ADwin system from Jäger Messtechnik is used for data recording.

Test bench at IEW – Institute of Electrical Energy Conversion


The test bench used for the electric motor of the vehicle is a 210 kW test bench with a maximum continuous torque of 500 Nm and a maximum rotational speed of 16,000 min^-1. The driving machine (DM) is speed-controlled and receives the reference speed n_ref via an EtherCAT fieldbus, which is also used for communication between data recorder and dSPACE real-time system, see Fig. 8.

Fig. 8. Test bench used for the electric motor located at IEW

The System under Investigation (SuI) represents one of the traction motors of the test vehicle and, hence, is torque-controlled. The applied torque T is measured by a sensor and processed by the data recorder, which records T as well as the voltages U and the currents I of the electric machine and the DC-link. An optional battery emulator can be used to emulate the behavior of the vehicle's battery. The communication with the other test benches is realized via an interface of the real-time system to the connected systems, allowing receiving the reference values and sending the measured values for torque and speed, respectively.

An approach for a substitute model that emulates the behavior of the SuI on the test bench is shown in Fig. 9. The model can be used when the test bench is not available, e.g. due to maintenance or connection issues of the network.

Fig. 9. Approach for a substitute model of the SuI

Depending on the operating strategy, different currents i_ref^(d/q) need to be applied to the electric machine. Using field-oriented control, two proportional-integral (PI) current controllers calculate the required voltages u_ref^(d/q) that are generated by the power electronics. The model needs to take into account the dynamics of controller, converter and electric machine. Both converter and electric motor can be approximated as first-order lag systems with time constants τ_pe and τ_em, respectively. While τ_pe depends on the switching frequency of the converter, τ_em depends on the inductance and resistance of the electric machine.
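A minimal discrete-time sketch of this substitute model (the sample time and the two time constants below are placeholders, not the identified values of the real powertrain):

```python
import numpy as np

def first_order_lag(u: np.ndarray, tau: float, dt: float) -> np.ndarray:
    """Discrete first-order lag y' = (u - y)/tau via forward Euler."""
    y = np.zeros_like(u)
    a = dt / tau
    for k in range(1, len(u)):
        y[k] = y[k - 1] + a * (u[k - 1] - y[k - 1])
    return y

# Substitute motor model: reference torque shaped by converter and
# machine lags in series (tau_pe: converter, tau_em: electric machine).
dt = 1e-4                                        # 10 kHz update (assumed)
t = np.arange(0, 0.05, dt)
T_ref = np.where(t > 0.005, 10.0, 0.0)           # 10 Nm torque step
T_act = first_order_lag(first_order_lag(T_ref, tau=2e-4, dt=dt),
                        tau=2e-3, dt=dt)         # tau_pe, tau_em assumed
print(T_act[-1])                                 # settles near 10 Nm
```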

4 Results

4.1 Exemplary test evaluation


Fig. 10 (a) shows the test track used in the Project “SmartLoad” for real driving tests.
On the right side, Fig. 10 (b) shows the track modeled according to the real track in the CarMaker environment.

Fig. 10. Trajectory of the test track at KIT (a) and in CarMaker (b)

Fig. 11 shows measurement data from a successfully performed maneuver. The top plot
(a) shows the torque of the motor handled by the IEW test bench.


Fig. 11. Measurement data of a maneuver with the demanded motor torque (a), the gearbox
torque (b) and the corresponding torque at the wheel (c)

The middle plot (b) shows the gearbox input torque, measured via torque sensors
directly on the test specimen. The bottom plot (c) shows the torque at the left wheel of
the vehicle model. The maneuver shown is used as a reference in the following sections:
it shows a run of the track without provoked errors or failures at any of the partners.

4.2 Influence of the physical gearbox on the distributed validation environments

An ideal gearbox has a constant transmission ratio. This applies to both the speed and
the torque. In real gearboxes, on the other hand, there are variations in the transmitted
power. These can manifest, for example, as a variation in rotational speed or an
unevenly transmitted torque. Causes include backlash between gears, non-ideal gear
meshing, frictional losses and torsional stiffness. In a test setup, the simplest way to
simulate the gearbox would be to assume a constant transmission ratio. This would
reduce the speed by a factor of 16 and increase the torque accordingly. However, it
would neglect a large number of influences. How dominant the influence of a real
gearbox is can be seen in Fig. 12. The figure shows two calculated gear ratios, based
on the speeds (orange) and on the transmitted torque (blue). While the ratio based on
the speeds shows a very constant trend, the ratio based on the torque is subject to
significant variation. Causes for this can be, as mentioned, non-ideal gear meshing,
friction losses, torsional stiffnesses and mass inertias. Of particular significance for the
subsequent system components and the driving dynamics of the vehicle model is the
variation of the torque. In this example setup, the output torque of the transmission is
used as the wheel torque and thus has a direct influence on the accelerations of the
vehicle. A detailed observation or analysis of these influences should therefore be made.
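
As an illustration, the two ratio signals in Fig. 12 can be computed directly from the measured input and output quantities; the following is a minimal sketch assuming the logged speeds and torques are available as NumPy arrays (the function name is ours). For an ideal gearbox with a ratio of 16, both signals would be constant; on the real gearbox, only the speed-based ratio stays close to that value.

```python
import numpy as np

def gear_ratios(n_in, n_out, T_in, T_out, eps=1e-6):
    """Gear ratio calculated from speeds (n_in / n_out) and from torques
    (T_out / T_in); eps masks samples near standstill or zero torque."""
    i_speed = n_in / np.where(np.abs(n_out) > eps, n_out, np.nan)
    i_torque = T_out / np.where(np.abs(T_in) > eps, T_in, np.nan)
    return i_speed, i_torque
```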

[Plot: gear ratio over time; both ratio signals calculated at f = 10 Hz]

Fig. 12. Calculated gearbox ratio based on velocity (orange) and torque (blue)

Simulating all of the above influences would take considerable time and would not
necessarily be practical. This highlights the potential of distributed validation: an early
prototype can be integrated, and its interactions with the remaining subsystems
considered, without the prototype having to be deployed at a different location.
Integration can thus take place at the respective competence carrier.

4.3 Exemplary differences when using a simplified model of one of the partners

In order to evaluate the effects of a model on other components of the vehicle, a test
with a substantially simplified motor model was carried out. Instead of using the model
described in section 3.4, the reference torque T_ref is looped through; hence, no dead
time or time constants are considered. A scenario for this model could be the early stage
of product development, where parameters of the motor are not yet available.

Fig. 13 shows the reference torque T_ref over time for the maneuver described in the
previous section, once with a complex motor model and once with a simplified motor
model.


Fig. 13. Comparison of a maneuver with simplified motor model and complex motor model

In case of the simplified model, the reference torque is generally reduced and, especially
at the start of the maneuver for t < 5 s, considerably smoother. Since no dead time
occurs in the motor model, the desired torque is achieved almost instantaneously,
improving the response behavior of the controller in CarMaker.

Differences induced by the simplified motor model can also be observed in the
torque and speed measurements of the gearbox. Both are now smoother than during the
standard maneuver. It can be concluded that, due to the closed-loop operation, the model
used for one component has an impact on the behavior of the subsequent systems.

4.4 Exemplary faults and their interception mechanisms


Tests were also carried out to check the failure mechanisms of the network. For this
purpose, the UDP connection of one of the partners was interrupted at a given time
while the remaining partners of the network stayed active. To make this possible, the
watchdog described in section 3.2 was used. The watchdog constantly monitors the
status of the UDP connections of the individual network partners. If one of the partners
does not send a changed toggle bit for a defined time, the partner is assumed to be
offline. As soon as this is detected, the master uses a simulation model of the
corresponding partner instead of the values received from the test bench.

For this purpose, the signals sent to the corresponding partner are also continuously fed
to a local simulation model. This model represents the transmission behavior of the
UDP connection and the mechanical elements of the test bench. When switching to the
simulation model, the calculated variables of the simulation model are used instead of
the received UDP messages of the respective partner. Since this model runs continuously
in parallel to the respective partner, it can be used immediately.
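
To make the toggle-bit supervision concrete, a minimal sketch is given below; one toggle bit per partner is assumed to be extracted from each received UDP frame, and the class name, timeout value and usage line are illustrative assumptions, not taken from the project implementation.

```python
import time

class ToggleWatchdog:
    """Marks a partner as offline if its toggle bit has not changed for
    longer than `timeout` seconds; the master then switches to the local
    substitute model of that partner."""

    def __init__(self, partners, timeout=0.05):
        self.timeout = timeout
        self.last_bit = {p: None for p in partners}
        self.last_change = {p: time.monotonic() for p in partners}

    def update(self, partner, toggle_bit):
        # Called for every received UDP frame of `partner`.
        if toggle_bit != self.last_bit[partner]:
            self.last_bit[partner] = toggle_bit
            self.last_change[partner] = time.monotonic()

    def is_offline(self, partner):
        return time.monotonic() - self.last_change[partner] > self.timeout

# In the master loop, e.g.:
# torque = substitute.step(u) if watchdog.is_offline("IEW") else torque_from_udp
```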

Communication fault IEW


Fig. 14 shows measurement data with such a failure. At second 28, the UDP signal from
the IEW test bench was intentionally interrupted. In subplot (b), it can be seen how the
torque remains at a constant level for a short period of time. This continues until the
network master detects a fault state. Afterwards, the substitute model is used and the
torque decreases again. In subplot (a), one can see how this leads to a brief increase in
the rotational speed. This is due to the closed-loop operation of the network. As soon as
the substitute model is activated, however, the operation returns to normal. This shows
a very good tolerance of the network to signal interference, even in closed-loop
operation.

Fig. 14. Measurement data of the IPEK test bench, where the UDP signal of the IEW test
bench was interrupted after approx. 28 seconds.

Communication fault IPEK


Fig. 15 shows a failure scenario in which the UDP signal of the IPEK test bench is lost.
The data in subplot (a) was recorded at the FAST test bench and the data in (b) at
the IPEK test bench. At approximately 28 seconds, the signal is lost and, as Fig. 15 (b)
shows, the test bench located at IPEK identifies the fault and immediately halts and
shuts down its machines. In addition, subplot (a) shows how the network master detects
the error and switches to a substitute model.

Therefore, the remaining test benches can continue their operation. This gives the
CarMaker model a slightly different torque during the time of the UDP failure (orange
line in subplot (a)). In addition, when the gearbox test bench is reconnected, a brief
torque peak can be seen.

Fig. 15. Measurement data from a maneuver during which the UDP signal of the IPEK test bench
was interrupted after approx. 28 seconds. The data was recorded at the FAST test bench for (a)
and at the IPEK test bench for (b)

4.5 Conclusion
The measurement results are implemented in SysML in the tool Cameo Systems
Modeler. Based on the fallback mechanism shown, further safety measures can be
implemented. For example, the fallback mechanisms are linked to the DCPLite state
machine and to requirements of the developers. In the event of a failure of several
partners, a decision can be made in the state machine to shut down the entire network.
At the same time, each participant can independently decide which additional measures
should be implemented locally as soon as no valid signal is available locally. The use
of the models also enables the connected validation environments to be used without
individual partners: if one of the partners is not operational, the corresponding model
can be used and the network remains operational, at least to a limited extent.

5 Discussion, Summary and Outlook

5.1 Discussion
The first research question is answered by a proposed toolchain, a DCPLite standard
and a process model for the validation of connected test benches. The concepts are
applied in the context of the project “SmartLoad” for one specific use case, the
cornering of a people mover.

With the support of Model-Based Systems Engineering, the elements in the context
of distributed validation can be modeled and linked. Furthermore, the described
concepts to support the distributed validation can be modeled and linked to these
elements as well. For instance, the FMEA of the test bench connection or the state
machine of the DCPLite is linked to certain requirements and functions. Hence, MBSE
can support the distributed validation through the creation and adaptation of views and
models. However, the research does not cover a broad range of use cases and focuses
on user-friendly, applicable concepts rather than on one integrated concept.

The purpose of distributed validation must be discussed for each use case, together
with the resulting connection parameters such as sending frequency or real-time loop.
In the project “SmartLoad”, the fastest transfer cycle of 1.5 ms (≈ 667 Hz) and the
average cycle of 4.5 ms (≈ 222 Hz) only allow basic research into interrelations. For
instance, an analysis of the inverter with a switching frequency of more than 16 kHz,
or of the whole bandwidth of structure-borne sound (0 to 1.25 MHz), is not yet
achievable with real-time testing of connected test benches. The frequency alone is not
necessarily the decisive criterion: the dead time of the signal transmission must also be
considered and can have a significant influence, especially in closed-loop operation. In
contrast to real-time testing, sequential testing allows models or test benches to be run
one after another, sending the measured results to the next model or test bench. This
allows systems with high-frequency dynamics to be tested. The disadvantage is that the
interdependencies between the different systems are lost.

5.2 Summary and Outlook


Approaches of MBSE are integrated into the validation of highly automated vehicles.
Thus, based on model-based validation environments, concrete scenario-based test
cases can be derived. With the investigated DCPLite, a new approach for the
cross-location connection of test benches is suggested. The use case of a cornering
people mover demonstrates different fallback options involving the full-vehicle
simulation, a transmission model and a gearbox test bench.
In future applications, the connection of test benches may achieve faster data transport
rates with technologies like direct tunneling of signals without firewalls. Furthermore,
the suggested DCPLite standard, the models and the linkage of test results may be
configured automatically in order to establish the connection quickly at an early stage.

Acknowledgement
This work has been supported by the Federal Ministry for Education and Research
(BMBF) in the project “New methods to increase the reliability of highly automated
electric vehicles (SmartLoad)” with the funding reference number: 16EMO0363.

Automatic emergency steering interventions
with hands on and off the steering wheel

Alexander K. Böhm¹, Luis Kalb¹, Yuki Nakahara², Tsutomu Tamura², Dr. Robert Fuchs²

¹ Chair of Ergonomics, Technical University of Munich, Boltzmannstr. 15, 85748 Garching,
Germany
² System Innovation R&D Department, JTEKT Corporation, 333 Toichichō, Kashihara,
Nara 634-8555, Japan

Abstract. With the continuous automation of the driving task, driving while the
hands are off the steering wheel is progressively becoming reality. This situation
is therefore of interest when evaluating driver reactions to an automatic emergency
steering (AES) system. So far, only reactions with hands on the steering wheel
have been investigated, and this study is a first step towards expanding currently
available knowledge about AES to the case of hands off the steering wheel. A
driving simulator study was conducted in which only steering was possible and
three interaction designs (Manual Driving (MD), MD with AES and Automated
Driving (AD) with AES) were investigated. An obstacle suddenly appearing from
the side of the road while the vehicle was driving at a velocity of 80 km/h was
used as the driving scenario, and the reaction of the driver was evaluated
objectively and subjectively. The results indicate that if an AES intervention
happens while driving automatically (hands off), it is most likely that the steering
wheel will be gripped again while the intervention is still in progress. Compared
to the reactions when the car was driven manually prior to the intervention, no
tendency towards greater opposition to the intervention or differences in the
subjective variables could be found.

Keywords: Automated Emergency Steering, Driver Reaction, Automated Driving

1 Introduction

To support the driver in potentially dangerous situations, there nowadays exists a
multitude of Collision Avoidance Systems (CAS). These systems range from simple
warnings, such as Forward Collision Warning (FCW) systems, to systems that support
the driver's action or even execute actions in the driver's stead, such as Automated
Emergency Braking (AEB), Evasive Steering Assist (EVA) or Automatic Emergency
Steering (AES). Especially the use of AEB in new cars has increased significantly
over recent years and has proven to be quite effective, reducing rear-end crashes at
low speed by 38% [1]. However, once the velocity increases, an evasive maneuver
can avoid crashes where braking alone cannot.

This is because the minimal distance required for an evasive maneuver becomes lower
than the one required for braking [2]. Currently available CAS for evasion are EVA
systems, which support the driver during the evasion by providing additional torque in
the direction of the evasion [2]-[6]. A more advanced version of EVA is AES, which
not only supports the driver in the evasion but executes it fully automatically, similar
to AEB systems for braking. The present study focuses on such an AES system and its
human-machine interaction.
Up until now, CAS were designed under the aspect of supporting or replacing the
actions of a human driver. Since the CAS initiates its specific function if no reaction
occurs until the last possible moment, it does not matter whether the car is being driven
by an automatic driving (AD) system or a human driver. However, it is of great
importance for the human driver whether he/she or a system is in control of the car. In
the former case, his/her hands are generally on the steering wheel and he/she has to pay
attention to all the tasks required for driving (e.g. [7]). In the latter case, the driver can,
depending on the level of automation, take his/her hands off the steering wheel and no
longer needs to pay attention to the driving tasks, but can engage in non-driving related
tasks (NDRT) such as reading or working on a laptop. Since these two cases (manual
driving vs. automatic driving) are completely different for a human, one can imagine
that his/her response to an AES intervention will also be different.
Although multiple studies can be found which investigate the human-machine
interaction with an AES during MD, there are none which evaluate the human reaction
when the vehicle was being driven autonomously before an AES intervention
happened. There is therefore a need for research, and the goal of this study is to extend
currently available knowledge of human interaction with an AES.

2 Theoretical background

As the name of CAS indicates, the situations of interest are imminent collisions where
the driver needs to react quickly in order to avoid an impact. Without any assistance,
most drivers tend to brake even when braking alone cannot avoid a collision. However,
an evasive maneuver can, which becomes even more relevant at higher velocities, as
the available time for a reaction decreases [8]. Equipping the vehicle with an AES
system which executes an evasion in the driver's stead could therefore be helpful to
avoid collisions. The characteristics of such an intervention need to be chosen with
care. Drivers tend to oppose automated steering by applying a torque in the opposite
direction, which in turn reduces the effectiveness of the intervention [9]. This initial
opposition lasts for 200 to 600 ms and is assumed to be the time drivers need to decide
whether to suppress the intervention or start an evasive maneuver [10]. There are also
indications that both the occurrence and the amount of opposition depend on the torque
applied by the AES system, and that excessively high torques lead to more resistance
from the driver [11]. A possible countermeasure could be to inform the driver prior to
the AES intervention, as an auditory cue given before the intervention or at the moment
it starts shows tendencies towards less driver resistance [9], [12].

Not only is it important that the intervention is not hindered by the driver's reaction,
it should also be subjectively accepted or approved by the driver. One possibility to
assess the subjective opinion would be to ask specific questions, for example whether
one would be willing to buy a car equipped with a specific CAS. In the case of an
AES system, it was shown that more people answered “yes” after having experienced
such a system than before the experience [13]. Another possibility is a subjective
scale, preferably standardized and also used by other studies so that results can be
compared across multiple studies by different authors. One such scale was developed
to measure user satisfaction and usability of Advanced Driver-Assistance Systems
(ADAS). It was tested in six studies, two of which were about CAS [14]. Another
scale assesses the criticality of a situation [15]. Although it was initially developed
to investigate driver reactions to errors of a steering system, it was later specified
that the scale could also be used for other disturbances, as it allows drivers to compare
the situations to those of everyday life [16]. In addition to the errors of a steering
system, the scale was also used for a Traffic-Jam-Assistant (TJA) [17], a true positive
activation of an AES intervention [12], as well as a false positive one [9], [18].

3 Research Question & Hypothesis

Research Question. With the increasing automation of the driving task, there already
exist systems [19], [20] that take control of the vehicle and allow the driver to take
his/her hands off the steering wheel. For the future development of AES systems, it
therefore becomes necessary to understand how a driver will react to an AES
intervention with his/her hands initially off the steering wheel, for which no existing
study could be found. The goal of the current study is to extend the knowledge of driver
reactions to an AES intervention, and the research question is thus as follows:

 How does a driver react in response to an automatic emergency steering intervention
during manual and automated driving?

Hypotheses. The focus was laid on a mental state in which the driver is able to perceive
the situation and react to the best of his/her capabilities in case of an emergency.
This is the reason why the driver is not engaged in an NDRT in this study. On the
objective side, multiple studies found that drivers tend to counteract an AES intervention
because it happens before the driver can react to the situation. For this study, it was
assumed that this counteraction would become stronger and a greater amount of steering
by the driver would be observed, as the driver is not in direct control of the vehicle at
the moment the obstacle appears during AD. The first hypothesis reads:

 Drivers who are not engaged in an NDRT will be more likely to steer during an
AES intervention when the vehicle was previously being driven automatically than
when it was being driven manually.

On the subjective side, because the driver is no longer in direct control of the vehicle,
it was assumed that the driver would feel more endangered during AD when an
obstacle suddenly appeared. The second hypothesis thus reads:

 The situation will be perceived as more dangerous when the vehicle was being
driven automatically before the intervention.

4 Methodology

4.1 Participants

A total of 18 persons (4 females) with normal or corrected-to-normal vision participated
in this study. All participants were recruited from the Research Division of JTEKT, and
the study was undertaken during working hours. The participants were on average 37.2
years old (SD 6 years) and had a driving experience of 16 years (SD 6.53 years).
Eight participants did not have any simulator experience prior to this study.
It is important to note that the data of all 18 participants was only used for the
analysis of the initial deviation (see section 5.1). The other analyses (sections 5.2 to
5.4) were done with the data of 14 participants, as the results from four participants
could not be used because a safety mechanism was triggered and the data became
unusable.

4.2 Apparatus
The study was undertaken at the R&D Headquarters of JTEKT in Nara (Japan). A
basic, static driving simulator, shown in Fig. 1, equipped with a mass-produced
column-type EPS steering system made by JTEKT was used. The monitor was a BenQ
XR3501 with a resolution of 2560x1080 pixels, a diagonal length of 35 inches and a
curvature of 2000 R.

Fig. 1. Driving Simulator

Preliminary tests on the simulator showed that it was much more difficult to drive
in the simulation while controlling both the steering wheel and the accelerator/brakes
than in a real car. Because of this difference, the velocity was controlled by the
simulation and only steering was required. The display of the simulator, shown in
Fig. 2, presents the speedometer and the status of the driving system. A generic engine
sound was played at a level of 65 dB, changing frequency according to the RPM of the
simulated engine.

Fig. 2. Display of the driving simulator

4.3 Road & Driving Scenario


A two-lane road with left-hand traffic was used. Although most studies control the
timing of the driving scenario with the Time-to-Collision (TTC) parameter, it was not
available in the software used in this study. As a replacement, the Time-to-Center-of-Gravity
(TTCOG) was used as reference, which means that the actual TTC of this
scenario is slightly lower (by ≤ 0.1 s) than the indicated TTCOG.

The road and the driving scenario are shown in Fig. 3 and Fig. 4. The parameters used
in [13] were taken as a basis and, whenever possible, the same parameters were chosen.
The vehicle is driving at a velocity of 80 km/h when the initially hidden obstacle in the
form of a car first becomes visible at TTCOG = 1.8 s (Fig. 3). The obstacle reaches
its final obstruction of 1.5 m (half a lane width) 0.3 s later (Fig. 4). When the obstacle
has reached its final position, an FCW is given and the automatic steering intervention
is triggered, if included in the condition of the experiment.

Three settings of the driving scenario were created in an attempt to counter possible
learning effects. The obstacle stayed the same, but the obstruction, the scenery as well
as the distance travelled before the obstacle appeared were different.

 Village: The obstruction was created with a suburban house. The driving time
from start of the setting to the appearance of the obstacle was around 2:11 min. A
truck was parked in a parking spot to the left of the road and was passed 1.7 s be-
fore the obstacle appeared.
 Forest: The obstruction was created with a billboard and the driving time was
around 2:11 min. A car driving on the oncoming lane was passed about 3.7 s be-
fore the obstacle appeared.
 Rural: The obstruction was created with a rural house and the driving time was
around 3:15 min. A car driving on the oncoming lane was passed about 9.3 s be-
fore the obstacle appeared.

Fig. 3. Time instant of obstacle appearance

Fig. 4. Obstacle reaches full displacement across the road

4.4 Driving System

Forward Collision Warning. The recommended characteristics according to [21]
were used for the FCW: a five-beep sound, played at 5 Hz with a duty cycle of 50%,
at a tone frequency of 2000 Hz; the sound level rose to 78 dB when the FCW audio
cue was played.
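
As a rough illustration of this specification (not the original implementation), such a cue could be synthesized as follows; the sample rate is an assumption, and the scaling of the playback level to 78 dB is omitted.

```python
import numpy as np

fs = 44100                                          # sample rate in Hz (assumption)
tone_hz, rep_hz, duty, n_beeps = 2000, 5, 0.5, 5    # FCW spec from the text

period = int(fs / rep_hz)      # samples per repetition (200 ms)
n_on = int(period * duty)      # tone samples per beep (100 ms)

t = np.arange(n_on) / fs
beep = np.sin(2 * np.pi * tone_hz * t)
one_cycle = np.concatenate([beep, np.zeros(period - n_on)])
fcw_cue = np.tile(one_cycle, n_beeps)   # 1 s of audio: five 100 ms beeps at 5 Hz
```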

Steering Intervention. The steering intervention is such that an evasion trajectory
y = f(x) is computed at the time instant of activation, with y the lateral coordinate and x
the longitudinal coordinate aligned with the road orientation. Since an evasion
trajectory is very similar to a lane-change trajectory, the results in [22] were taken as a
basis to choose an appropriate trajectory. In order to guarantee that the evasion
trajectory can be followed without exceeding the maximum possible lateral acceleration
a_y,max, the following relation between this acceleration, the longitudinal velocity v_x
and the minimal radius along the trajectory ρ_min was used:

\[
a_{y,max} = \frac{v_x^2}{\rho_{min}} \tag{1}
\]

Because of its analytic simplicity, the ramp sinusoid was chosen:

\[
y(x) = y_e \left[ \frac{x}{x_e} - \frac{1}{2\pi} \sin\left( 2\pi \frac{x}{x_e} \right) \right] \tag{2}
\]

with y_e = 1.5 m and x_e = 33.33 m.
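
As a quick numerical plausibility check of Eqs. (1) and (2), the following sketch (assuming y_e and x_e are given in meters and the vehicle drives at 80 km/h) evaluates the trajectory's minimal radius and the resulting peak lateral acceleration; it yields roughly ρ_min ≈ 118 m and a_y,max ≈ 4.2 m/s².

```python
import numpy as np

# Ramp-sinusoid evasion trajectory, Eq. (2), with the parameters from the text
y_e, x_e = 1.5, 33.33      # lateral offset and evasion length (assumed to be in m)
v_x = 80 / 3.6             # longitudinal velocity in m/s

x = np.linspace(0.0, x_e, 2001)
y = y_e * (x / x_e - np.sin(2 * np.pi * x / x_e) / (2 * np.pi))

# Curvature kappa = y'' / (1 + y'^2)^(3/2); minimal radius rho_min = 1 / max|kappa|
dy = np.gradient(y, x)
ddy = np.gradient(dy, x)
rho_min = 1.0 / np.max(np.abs(ddy / (1 + dy**2) ** 1.5))

a_y_max = v_x**2 / rho_min   # peak lateral acceleration according to Eq. (1)
print(f"rho_min ~ {rho_min:.1f} m, a_y_max ~ {a_y_max:.2f} m/s^2")
```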

Control Strategy. A shared-control architecture according to [23] was used. The
driver can overlay a steering angle and receives haptic feedback pointing towards the
heading of the evasion trajectory. In the case of the AES intervention, full control of
the vehicle is given back to the driver once the intervention is completed.

Automatic Driving System. The implemented AD system possessed two modes, MD
and AD, which were indicated by an LED on the lower right of the display (see Fig. 1).
The abbreviations of the modes, MD and AD, were written above the LED, and
different colors were used for each mode: MD was indicated by white and light blue
was used for AD, as shown in Fig. 5 and Fig. 6. Upon transitioning, a female voice
announced the mode by saying “Manual Driving” or “Automatic Driving”; the sound
was created with [24] and the voice “Kendra (US)”.

Fig. 5. LED during MD mode Fig. 6. LED during AD mode

4.5 Interaction Design

Three different interaction designs were tested. These were:

 MD (control group): The car is driven manually and only an FCW is given at
TTCOG = 1.5 s.
 MD & AES (hands on): In addition to the FCW, the AES intervention also starts
at TTCOG = 1.5 s. Once the intervention has finished, full control of the vehicle is
given back to the driver.
 AD & AES (hands off): Almost the same as MD & AES, but the car is being
driven automatically prior to the AES intervention. The AD system is activated
7.5 s before the obstacle appears.

4.6 Experimental Design

A one-factor within-subject design, with the interaction design as the changing factor,
was chosen. While the order of the scenario settings always stayed the same (Village,
Forest, Rural), the order of the interaction designs differed between participants. This
gives the possible combinations shown in Table 1, where one combination represents
one whole experiment with one participant.

Table 1. All possible combinations of the interaction design with the driving scenario setting

Combination   Village      Forest       Rural
1             MD           MD & AES     AD & AES
2             MD           AD & AES     MD & AES
3             MD & AES     AD & AES     MD
4             MD & AES     MD           AD & AES
5             AD & AES     MD           MD & AES
6             AD & AES     MD & AES     MD

4.7 Dependent Variables

The dependent variables can be divided into objective and subjective variables.

Objective

 Lateral deviation at TTCOG = 1.5 s: initial lateral deviation y0, where y = 0 represents
the lane center and negative values point towards the road center.
 Reaction time during MD: a reaction occurs once the column angle is below −1°
and its angular velocity is below −1°/s.
 Hands-on time: the time the driver needed to put his/her hands back on the steering
wheel during AD & AES.
 Steering factor f_steering: this factor was designed such that it represents the steering
input of the driver during the AES intervention relative to the time over which
his/her hands are on the steering wheel. It compares the actual column angle θ to
the one that would occur without any driver interference, θ_reference. During the
condition MD & AES, the time window starts at t_AES,start, which represents the
point in time when the AES intervention is activated. It ends at t_AES,end, which
stands for the point in time when the AES intervention ends and full control is
given back to the driver.

\[
f_{steering} = \frac{1}{t_{AES,end} - t_{AES,start}} \int_{t_{AES,start}}^{t_{AES,end}} \left( \theta - \theta_{reference} \right)^2 \, dt \tag{3}
\]

During AD & AES, the driver cannot steer as long as he/she does not put his/her
hands on the steering wheel. The time window of the integral is therefore adapted
and starts at t_hands,on.

\[
f_{steering} = \frac{1}{t_{AES,end} - t_{hands,on}} \int_{t_{hands,on}}^{t_{AES,end}} \left( \theta - \theta_{reference} \right)^2 \, dt \tag{4}
\]
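
A direct transcription of Eqs. (3) and (4) into code could look as follows; a minimal sketch assuming uniformly logged angle traces, with the window start passed as t_AES,start for Eq. (3) or t_hands,on for Eq. (4). The function name and arguments are ours.

```python
import numpy as np

def steering_factor(t, theta, theta_ref, t_start, t_end):
    """Mean squared deviation between the actual and the reference column
    angle over [t_start, t_end], i.e. Eq. (3) with t_start = t_AES,start
    or Eq. (4) with t_start = t_hands,on."""
    mask = (t >= t_start) & (t <= t_end)
    dev_sq = (theta[mask] - theta_ref[mask]) ** 2
    return np.trapz(dev_sq, t[mask]) / (t_end - t_start)
```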

Subjective

 Rating on the criticality scale [15], ranging from 0 (imperceptible) to 10 (uncontrollable)
 Did you hear auditory information? (yes/no)
 Did you recognize any steering intervention? (yes/no)
─ If yes: How did you react to it? (steering/nothing/other)
 Rating on the usability and acceptance scale [14], ranging from −2 (rejection/
unusability) to +2 (acceptance/usability). Not asked for the MD interaction design.

4.8 Procedure
Each participant was welcomed directly at the driving simulator, where the study and
the different driving systems used were explained; as opposed to other studies, the
presence of an AES system was also included in the explanation. The participant was
instructed to:

 Avoid collisions with other road users
 Take over control if the system does not act in the way he/she wants
 Take his/her hands off the steering wheel once the AD mode is activated.

A training session was held before the start of the real experiment. It was done on a
separate training circuit (one round ≈ 4.8 km), where all three scenario variants (without
the suddenly appearing obstacle) were placed along the circuit so that they would not
stand out during the experiment. About two and a half rounds were done on the training
circuit, with multiple activations and deactivations of the AD mode as well as a
demonstration of the AES at the end of the second round. The driving times during
the training session were 7:44 min in MD mode, 1:44 min in AD mode with hands
off and 1:29 min with hands on the steering wheel.

5 Results

The data of all 18 participants is only used in Section 5.1. A safety mechanism to
prevent excessive torques on the steering wheel was activated during the evasion of
four participants, rendering their data unusable for further analyses; thus only the data
of the remaining 14 participants was used.

5.1 Initial Deviation


It was observed that some participants drove more to the right of the lane center in
the village scenario setting. The initial lateral deviation y0 at the start of the AES
intervention was therefore compared across the three settings, and the result is shown in
Fig. 7. A Shapiro-Wilk test for normality showed significant deviations from normality
for the village (p < .001) and rural (p = .002) settings. A Friedman's ANOVA revealed
that the means across settings differ significantly (χ²(2) = 12.333, p = .002).

Fig. 7. Initial deviation across driving scenario settings

Wilcoxon signed-rank tests were done as post-hoc tests and a significance level of
0.0167 according to the Bonferroni correction was employed. The initial deviations of
the village setting were significantly different from the forest setting (W = 4,
p < .001, rb = −0.953 (rank-biserial correlation)), but not from the rural setting
(W = 55, p = .196). Lastly, the forest setting showed no significant difference compared
to the rural setting (W = 122, p = .119).

5.2 Overview of Reactions

The observed reactions during all interaction designs could be classified into three
categories:
 Safe: The evasion happened without collision with the obstacle and the
road was not left.
 Collision: The obstacle could not be evaded and a collision happened.
 Off Road: Although the obstacle was evaded successfully, the road was
left during the evasive maneuver.
The results according to this classification are shown in Fig. 8.

Fig. 8. Types of observed evasions across interaction designs

The average reaction time during MD was 0.79 s (SD = 0.2 s).
During AD & AES, the hands-on time could not be analyzed in detail due to the low
number of participants. Nevertheless, an overall classification into three categories,
depending on the time when the steering wheel was gripped again, was done, which is
shown in Table 2. The first two categories, during AES intervention and after AES
intervention, should need no further explanation, but the category “always” signifies
that the hands were not taken off the steering wheel although AD mode had been
enabled. This reaction, where the participants did not let go of the steering wheel, was
only observed during the village setting.

Table 2. Observed gripping reactions during AD & AES


Gripping of Steering Wheel Number of Participants (14 total)
During AES intervention 8
After AES intervention 4
Always 2

5.3 Steering Factor

As explained in section 4.7, the steering factor f_steering is a measure to quantify the
amount of steering done by the human driver during the AES intervention. The
calculated factor for the interaction designs with AES for the 14 participants is shown in
Fig. 9.
A Shapiro-Wilk test for normality showed no significant deviation for the difference
f_steering,MD&AES − f_steering,AD&AES (W(14) = 0.864, p = .092). A paired-samples t-test was
therefore used for further analysis. It revealed that the steering factor for AD & AES
(mean = 0.352, SD = 0.543) was not significantly higher than for MD & AES
(mean = 0.832, SD = 0.383), t(13) = 2.87, p = .993.

Fig. 9. Steering factor for the interaction designs with AES

5.4 Subjective Ratings


The criticality ratings according to [15] as well as the usability/acceptance ratings
according to [14] are shown in Table 3. A Friedman's ANOVA revealed no significant
differences in the criticality ratings across interaction designs (χ²(2) = 1.378,
p = .502). The same was shown with a t-test for the acceptance ratings (t(13) = −0.582,
p = .571) and a Wilcoxon signed-rank test for the usability ratings
(W = 35.5, p = .506).

Table 3. Subjective ratings

Interaction Design   Criticality: Mean (SD)   Acceptance: Mean (SD)   Usability: Mean (SD)
MD                   4.57 (3.93)              –                       –
MD & AES             5.07 (2.76)              0.13 (0.84)             0.56 (0.77)
AD & AES             4.57 (2.62)              0.27 (0.52)             0.93 (0.41)

6 Discussion

The aim of this study was to understand drivers' reactions to an AES intervention
during both manual and automatic driving. Three settings of the same driving scenario
were initially created to counter learning effects, but unfortunately these settings
created another influence on the reaction of the driver. The lateral position of the vehicle
on the road when the obstacle appears differed across settings, and its effect should
therefore always be taken into account when analyzing and interpreting the results of
this study.
Higher collision rates were expected during the MD interaction design, since
parameters similar to the driving scenario in [13] were chosen, where only 7% of the
participants were able to avoid the obstacle without touching it. This difference is
most likely due to the fact that a reaction time of 0.3 s was required to evade the
obstacle safely in [13]. In this study, however, the participants who evaded successfully
had an average reaction time of 0.7 s, which indicates that the scenario was less critical
than the one used in [13]. Another possible explanation is the fact that participants
could only steer, as the option of braking was not available. This eliminated the
possibility that collisions occurred because the participants braked although a collision
could not be avoided by braking alone, as was reported by [9] and [10].
Another big difference in driver reactions compared to other studies is that almost
half of the participants left the road during their evasive maneuver in the MD
interaction design, which was never reported in any of the studies reviewed in section
2. The reason for this difference is believed to be the higher velocity at the start of the
evasion. This means that although the time available for an initial reaction is roughly
the same, subsequent reactions need to be executed much faster, because the TTCOG
of the current study was similar to the TTCs of the other studies.
During the interaction design AD & AES with hands off the steering wheel, 8 out
of 14 participants gripped the steering wheel during the AES intervention. Although
the decision whether to grip the steering wheel again also comes down to a question
of trust in the system, gripping the steering wheel during the intervention seems to be
the most probable reaction. Since this kind of emergency normally occurs quite rarely
on the road and the driver has probably forgotten about the AES system by that point,
the real percentage of people who would grip the steering wheel during the AES
intervention could be even higher.
The hypothesis that drivers would be more likely to steer during AES when the
vehicle was being driven automatically could not be verified by the use of the steering
factor. One reason could be that the vehicle was driving in the middle of the lane during
AD & AES (since the car was being driven automatically), which was normally
not the case when it was driven manually during MD & AES. The higher amount of
steering for MD & AES could be explained by the characteristics of the AES
intervention used in this study, which always aims for a relative lateral deviation of 1.5 m
and does not consider the initial lateral deviation. This means that the intervention
becomes less and less optimal the farther the vehicle is from the lane center.
Although general suppression of AES interventions has also been reported in other
studies [9], [10], it is not clear in the current study whether the suppression happened
because the intervention was judged to be exaggerated or because of an instinctive
reaction to the suddenly moving steering wheel. Another possible explanation could
have been different levels of acceptance or perceived usability. However, no such
tendencies could be found, as neither acceptance nor usability ratings differed between
the two interaction designs with AES, which is similar to the result in [12].
The second hypothesis, that the situation would be perceived as more dangerous
when the car was being driven automatically prior to the evasion, could also not be
verified. The criticality showed no significant difference across interaction designs,
which is in line with the results of [9] and [12]. This is most likely due to individual
scales for danger: for example, one safe evasion was rated 10 (the maximum), whereas
an evasion in which the road was left was rated 5. Another reason for these individual
scales of danger could be the subjects' occupations, which ranged from testing
engineers for real cars to secretaries.

7 Conclusion

The main goal of this study was to extend currently available knowledge on AES to
the situation where the car is being driven automatically and the hands are off the
steering wheel, and a first step in this direction was taken. However, there were factors
that influenced the outcome more than anticipated. The driving scenario setting
proved to be such a factor, as a dependence of the initial lateral deviation at TTCOG =
1.5 s on the scenario setting was discovered. No such influence has been reported
in the reviewed studies, and it certainly supports the conclusion of [12] that further
investigation of the influence of traffic scenario parameters is needed for a better
understanding of driver reactions.
Only a low number of participants could be recruited and the driving simulator was
simple. However, even with these limitations, it is believed that a general reaction to
an AES with hands off the steering wheel could be found. It seems that gripping the
steering wheel during the AES intervention while the car is being driven automatically
is quite likely to occur in reality, since half of the participants did so in this study. A
detailed analysis of the hands-on time was not possible, and a next step should be to
investigate how the point in time of the gripping influences the success and quality of
the automatic evasion maneuver.
Finally, the aspect of a false positive activation of the AES intervention has not
been looked at in any way during this study. Since the hands are off the steering
wheel during AD, the question of controllability becomes quite difficult to answer, and
appropriate intervention designs are required. Future studies on AES interventions
during AD should also investigate this aspect, such that the safety of an automated car
can be guaranteed in the future.

References
1. Fildes, B., Keall, M., Bos, N., Lie, A., Page, Y., Pastor, C., Pennisi, L., Rizzi, M., Thomas,
P., Tingvall, C.: Effectiveness of low speed autonomous emergency braking in real world
rear-end crashes. Accident Analysis & Prevention (81), 24-29 (2015).
2. Zegelaar, P., Bosh, H., Allexi, G., Schiebahn, M., Vukovic, E., Notelaes, S.: Ford Evasive
Steering Assist – Steering You Out of Trouble. In: 27th Aachen Colloquium Automobile
and Engine Technology (2018)
3. ZF Homepage, https://www.zf.com/products/en/cars/products_30987.html, last accessed
2021/04/22.
4. Mercedes-Benz Homepage, https://www.mercedes-benz.com/en/innovation/by-far-the-
best-mercedes-benz-assistance-systems1/, last accessed 2021/04/22.
5. Volvo Homepage, https://www.volvocars.com/en-eg/support/topics/use-your-car/car-
functions/collision-avoidance-assistance, last accessed 2021/04/22.
6. Ford Homepage, https://www.ford.com/technology/driver-assist-technology/evasive-
steering-assist/, last accessed 2021/04/22.
7. Geiser, G.: Mensch-Maschine Kommunikation im Kraftfahrzeug. Automobiltechnische
Zeitschrift (87, 2), 77-84 (1985).
8. Lechner, D., Malaterre, G.: Emergency maneuver experimentation using a driving simula-
tor. SAE Technical Paper (1991).
9. Schieben, A., Griesche, S., Hesse, T., Fricke, N., Baumann, M.: Evaluation of three differ-
ent interaction designs for an automatic steering intervention. Transportation Research Part
F: Traffic Psychology and Behaviour (27), 238-251 (2014).
10. Schneider, N., Purucker, C., Neukum, A.: Comparison of Steering Interventions in Time-
critical Scenarios. Procedia Manufacturing (3), 3107-3114 (2015).
11. Iwano, K., Raksincharoensak, P., Nagai, M.: A study on shared control between the driver
and an active steering control system in emergency obstacle avoidance situations. IFAC
Proceedings Volumes (19), 6338-6343 (2014).
12. Sieber, M., Siedersberger, K.-H., Siegel, A., Farber, B.: Automatic Emergency Steering
with Distracted Drivers: Effects of Intervention Design. 18th IEEE Conference on Intelli-
gent Transportation Systems (2015).
13. Bender, E., Landau, K., Bruder, R.: Driver reaction in response to automatic obstacle
avoiding manoeuvres. VDI Berichte (1931), 219-228 (2006).
14. Van der Laan, J.D., Heino, A., De Waard, D.: A simple procedure for the assessment of
acceptance of advanced transport telematics. Transportation Research Part C: Emerging
Technologies (5,1), 1-10 (1997).
15. Neukum, A., Krüger, H.-P.: Fahrerreaktionen bei Lenksystemstörungen - Untersuchungs-
methoden und Bewertungskriterien. VDI Berichte (1791), 297-318 (2003).
16. Neukum, A., Reinelt, W.: Bewertung der Funktionssicherheit aktiver Lenksysteme: ein
Human Factor Ansatz. VDI Berichte (1919), 161-179 (2005).
17. Naujoks, F., Purucker, C., Neukum, A., Wolter, S., Steiger, R.: Controllability of Partially
Automated Driving Functions – Does it matter whether drivers are allowed to take their
hands off the steering wheel? Transportation Research Part F (35), 185-198 (2015).

18. Heesen, M., Dziennus, M., Hesse, T., Schieben, A., Brunken, C., Löper, C., Kelsch, J.,
Baumann, M.: Interaction design of automatic steering for collision avoidance: Challenges
and potentials of driver decoupling. IET Intelligent Transport Systems (9, 1), 95-104
(2015).
19. Cadillac: CT6 Super Cruise™ Convenience & Personalization Guide,
https://www.cadillac.com/content/dam/cadillac/na/us/english/index/ownership/technology/
supercruise/pdfs/2020-cad-ct6-supercruise-personalization.pdf, last accessed 2021/04/22.
20. Mercedes-Benz Research & Development North America. Introducing DRIVE PILOT: An
Automated Driving System for Highways.
https://www.daimler.com/documents/innovation/other/2019-02-20-vssa-mercedes-benz-
drive-pilot-a.pdf, last accessed 2021/04/22.
21. Sadekar, V.K., Grimm, D.K., Litkouhi, B.B.: An Integrated Auditory Warning Approach
for Driver Assistance and Active Safety Systems. SAE Technical Paper (2007).
22. Sledge, N. H. Jr., Marshek, K. M.: Comparison of Ideal Vehicle Lane-Change Trajectories.
SAE Technical Paper (1997).
23. Shoji, N., Yoshida, M., Nakade, T., Fuchs, R.: Haptic Shared Control of Electric Power
Steering: A Key Enabler for Driver-Automation System Cooperation. ATZ 2019 Confer-
ence on Automated Driving (2019).
24. Free Text-to-Speech and Text-to-MP3 for US English, https://ttsmp3.com/, last accessed
2021/04/22.
Hand Over, Move Over, Take Over - What Automotive
Developers Have to Consider Furthermore for Driver’s
Take-Over

Miriam Schäffer¹, Philipp Pomiersky¹ and Wolfram Remlinger¹

¹ Institute for Engineering Design and Industrial Design, University of Stuttgart,
Pfaffenwaldring 9, 70569 Stuttgart, Germany

Abstract. For the first time, autonomous driving makes it legally possible to
permanently pursue non-driving related tasks. While during highly automated
driving (SAE Level 3) the driver must be constantly ready to take over, this is no
longer the case in fully automated mode (SAE Level 4). Nevertheless, there will
be situations in which a take-over is required. The take-over situations in Level 4
will be more complex, since more activities will be permitted. Automobile
manufacturers must ensure a safe take-over process with the aid of appropriate
vehicle interior design. With the help of the HoMoTo approach presented here,
take-over scenarios can be broken down into substeps, and fixed time values can be
assigned to the individual movement sequences using the Methods-Time
Measurement technique. Two examples show that the application of this method is
suitable for optimizing the take-over process; however, further adjustments to the
procedure are necessary in order to obtain valid results.

Keywords: Autonomous Driving, Take-over Scenarios, Methods-Time Measurement,
SAE Level.

1 Introduction

1.1 Autonomous Driving


Definition. Autonomous driving is defined as the independent execution of the driving
task by the vehicle system [1]. For classification purposes, individual systems ranging
from simple driver assistance systems to autonomous driving are classified into Levels
0 to 5 within the SAE framework, with the system taking over successively more of
the driving task with each level [2].

Levels. Levels 0 to 2 represent driver assistance functions that support the driver in the
longitudinal as well as the lateral direction. In Level 3, vehicle control is highly automated,
which is why the driver is not required to continuously monitor the vehicle. However,
the driver must be permanently able to take over the driving task within a certain time
window (a time budget of 7 to 15 seconds). [1, 2, 3]


In Level 4 vehicles, the automated driving function takes over the complete driving
task continuously during fully automated driving. In certain use cases, these
vehicles reach the destination independently and can do so without returning the driving
task to the driver. [2]
In Level 5, the vehicle reaches the destination autonomously and independently of
the prevailing boundary conditions. The occupants act exclusively as passengers and
have no possibility to intervene in the driving process. [1]

Current State of Development. The market launch of the first Level 3 driving func-
tions in the form of Congestion Assistants up to 60 km/h has been announced for 2021.
A foreseeable further development is the so-called highway pilot, which steers the ve-
hicle up to 130 km/h and carries out overtaking maneuvers independently. [4]
Level 4 and 5 driving functions are currently being tested and further developed in
prototype vehicles by automobile manufacturers, suppliers, entrepreneurs from outside
the industry, service providers and university research groups [5].
Considerable simulation and testing effort is required to develop and validate these
automated functions. Since, from Level 3 onwards, responsibility also passes from the
driver to the vehicle and thus to its manufacturer during the automated journey from a
legal point of view, functional errors and failures not only pose a considerable risk to
road users, but also a significant financial liability risk for the manufacturer [6].

1.2 Taking Over the Driving Task


Level 3. In highly automated driving in Level 3, vehicle control and responsibility are
taken over by the vehicle [6]. Accordingly, the driver has the opportunity to engage in
non-driving-related activities (NDRA). These include reading a book, consuming food,
or relaxing.
However, if the system reaches its limits, the vehicle requests the driver to take over
control of the vehicle (Take-Over Request, TOR) within a defined time window. The
driver must be able to perceive the TOR and take over the driving task (Level 2). The
take-over procedure requires the termination of the passenger activity, including
stowing the items used and preparing and taking the driver's seat, before the
driver can acquire the required situational awareness. The transition from Level 3 to
Level 2 is also accompanied by the transfer of legal responsibility from the vehicle back
to the driver. Due to the continuous readiness to take over in Level 3, the driver can
only temporarily relinquish his driving task and devote himself to a limited selection of
other activities at his driving position. [7]

Limitations of Level 4. Due to the permanent and complete take-over of the driving
task by the system, the driver, by definition, does not have to remain ready to take over
[2]. The occupants of the vehicle can actually devote themselves completely and
persistently to other activities during the complete journey [6], even leaving the driver's
seat unoccupied. Possible NDRA include, in addition to consuming media, sleeping or
working on a laptop.

For practical applications, this means that when a driving request is entered, the
vehicle must make a final assessment of whether it can provide a fully automated trip to
the destination before the actual trip begins. According to the current state of
development, a possible driver take-over during the driving process from Level 4 to Level 2
should be initiated exclusively by the driver and not be required on the vehicle side.
This complete autonomy of the vehicle during fully automated driving in Level 4
will nevertheless be feasible only to a limited extent over a longer introductory phase,
depending on the situation (weather, traffic, etc.) and the area. Due to the complexity
of the routes (intersections, oncoming traffic, pedestrians, etc.), the first Level 4 journeys
will probably only take place on highways and their feeder routes (first/last mile). In
the medium term, the high costs incurred by manufacturers for safeguarding (validation)
and approval will further restrict the widespread use of Level 4. [8]
It is therefore to be expected that a take-over request will also be made to the vehicle
occupants on Level 4 journeys, comparable to the procedure in Level 3, namely
whenever it seems opportune to leave the routes approved by the vehicle manufacturer
for Level 4. An example is the bypassing of a highway congestion on a Level 2 route.
The spatial restrictions on the area of use will mean that a take-over in Level 4 mode
by a human driver will by no means only occur at the start of the journey or in a
stationary vehicle, but also on the road during the journey.

1.3 Research Gap


Current Studies on Take Over Scenarios. Driver take-over has been scientifically
studied with respect to the following aspects: driver state, fatigue, mental workload,
cultural influences, cognitive load/stress, well-being, secondary activity, driver and oc-
cupant posture and movement, driver-vehicle interaction, HMI and GUI design ap-
proaches, misuse, take-over duration, mode confusion, competence deficits or loss, iro-
nies of automation, anticipation of vehicle system behavior, acceptance, user under-
standing, and mental models.
Although previous studies include other influencing factors in the take-over process,
they mainly focus on the change from Level 3 to Level 2. For example, the influence
of driver state and HMI design and interaction [7] as well as the influence of driver
experience on the take-over procedure [9] have been investigated. Furthermore,
although NDRA have been included in studies of the take-over procedure, only their
influence on the fatigue state has been examined [10]. Other authors focus on take-over
quality and duration, but without including the previously performed activity [11, 12].
In the development of an attention and activity assistant, NDRA are considered, but
the studies are mainly limited to Level 3 mode; regarding Level 4, they only refer to
the take-over duration after a sleep phase [13]. In another study, the influence of the
degree of automation on the subjects' handling of a secondary task is investigated for
Levels 1-3 [14].
The "RUMBA" project [15] focuses on Level 4, but investigates the design of the
interior to increase the feeling of safety and comfort only during the exercise of NDRA.
Other studies do analyze driver take-over from Level 4, taking into account the phase
between NDRA and driving, but focus only on the actual driver take-over [16]. In con-
trast, NDRA is not considered in a study investigating the effect of the driver's passive
role and distraction on safe take-over and subsequent driving behavior [17].

Relevance. Current research and development projects are primarily concerned with
the transfer procedure from Level 3 to Level 2. With regard to Level 4, only the actual
take-over phase has been considered so far. However, the take-over of the driving task
in Level 4 is more important in the medium term than previously assumed.
Since the type of activity during passive driving has a considerable influence on the
required take-over time, each activity must be validated separately. Due to the liability-
relevant transfer of responsibility, it can be assumed that the manufacturer will only
allow the occupants to perform activities that it has also validated.
Standardizable, computer-readable and documentable procedures are required for
the simulation-based validation and approval of activities during passive driving and
driver take-over. The procedures must be able to clearly describe activities, actions,
postures and movements of vehicle occupants in interaction with automated driving
functions. In addition, an evaluation based on time requirements, interruptibility,
importance, attention as well as cognitive, psychological or emotional stress is
necessary.
Since such a procedure does not yet exist, this article provides an approach for
describing and evaluating the individual substeps of the driver take-over from Level 4
to Level 2, starting from the most diverse activities and states, in a uniform and
standardized manner.
First, the HoMoTo concept is presented, which divides the take-over scenarios into
individual subtasks in order to describe them systematically. The focus is placed on the
action and movement sequences, the interior design is included, and the various factors
influencing the take-over time are taken into account. In order to evaluate the take-over
procedure on a granular level and under temporal aspects, the Methods-Time
Measurement (MTM) method originating from the labor sciences is drawn upon.
Finally, the modeling of the take-over procedure and the application of the standardized
process language of the Methods-Time Measurement procedure are evaluated and
discussed.

2 HoMoTo – Hand Over, Move Over, Take Over

2.1 HoMoTo as Extended Take-Over Process


Current studies focus on the preparation of the driver and the actual start of the driving
task, the so-called take-over. In some cases, it is assumed that the driver is already at
the driving position and ready to take over; in others, the phase between TOR and take-
over is not considered in relation to the previously performed NDRA. However, for
user-oriented interior design and safe driver take-over, the steps between the driver's
passive role and the take-over must be explicitly defined and investigated, since these
steps can have a decisive influence on the overall take-over time.
We therefore introduce two terms for the driver's decisive action steps before the
take-over: Hand-Over and Move-Over.

Hand-Over first implies the termination of the activities of passive driving. In
addition, it comprises, among other things, handing over to the vehicle the objects used
during passive driving and picking up the objects required for driving. Move-Over
includes the physical, psychological and emotional adaptation of the driver, as well as
the geometric and informational adaptation of the vehicle in preparation for taking over
the driving task. This phase also implies the cognitive and emotional processing or
termination of the NDRA. Depending on the activity performed and the corresponding
modification of the interior, Hand-Over and Move-Over can proceed in parallel or
sequentially (see Fig. 1). Take-Over consists of the driver creating situational
awareness of the current traffic situation and the vehicle as part of it, and finally taking
over the driving task.

Fig. 1. System-initiated expanded transition model from automated to manual driving, figure
modified and expanded based on [16].

2.2 Factors Influencing HoMoTo


The time between TOR and the ultimate performance of the driving task is significantly
influenced by four factors (see Table 1).
In addition to the NDRA, the driver's characteristics in terms of experience,
personality, sociodemographic characteristics, anthropometry, and sensory physiology
have an influence. In addition, the current situational state in terms of mental and
physical condition as well as motivation and strain is relevant.
The vehicle interior, with its specific geometry and dimensional concept, is also a
decisive influencing factor. Appropriately placed storage facilities, stowage spaces and
holders can reduce the duration between ending the activity and taking vehicle control.
The performance of certain NDRA must therefore be taken into account at an early
stage of vehicle development.

Last but not least, the environment, i.e., current traffic events, the traffic situation,
and other road users affect the duration of the take-over. Despite prediction, vehicle
control may have to be handed over at shorter notice, and thus within a smaller time
frame, in a critical situation. In addition, traffic-independent factors such as weather
conditions must be taken into account when designing the take-over process.

Table 1. Factors influencing HoMoTo.

Driver          NDRA               Vehicle                Environment
Traits          Type of activity   Dimensional concept    Traffic situation
Current state                      Functions              Weather

3 Materials and Methods

3.1 Further Elaboration of the HoMoTo-Approach


With the Hand-Over, Move-Over and Take-Over phases, the HoMoTo approach pre-
sented here forms the theoretical basis for the structured analysis of take-over proce-
dures. For a future standardized process description, the individual phases must be bro-
ken down into their basic components. To this end, a selection of take-over procedures
from relevant NDRA is modeled in terms of the timing of their substeps. The commo-
nalities, differences and recurring activities are extracted and grouped into categories.
After structuring the categories in terms of time, a generally applicable process is de-
rived. This paper focuses on the analysis of the newly introduced Hand-Over and Move-
Over phases.

3.2 Application of MTM to HoMoTo


The use of Methods-Time Measurement (MTM) represents one approach for a
standardized process description of HoMoTo. MTM is based on the fact that human
movements consist of highly trained and repetitive basic elements that are often
executed intuitively or in parallel. Since vehicle use also consists of largely automated
motion sequences, a basic applicability of MTM in the vehicle is assumed.

Definition and Methodology of MTM. MTM belongs to the "Systems of
Predetermined Times" and is an analytical-calculative method for determining the time
required to perform an activity. With the aid of method-specific application rules and
considering time-influencing variables, predominantly manual work processes are
described. For this purpose, work processes are divided into individual motion
elements, to each of which defined activity times are assigned. By adding the target
times of the individual elements, the times of entire work sequences can be calculated.
The method was developed in the 1940s under the conditions of industrial mass
production. [18]

Basic Motion Elements of MTM. MTM is based on 19 basic motion elements that can
be used to describe all human movements. The basic motion elements are divided into
three groups, see Table 2. [19]

Table 2. Overview of the basic motion elements of MTM. [19]

Categories                   Motion Elements
Fingers, hand, arm motions   Reach, grasp, move, position, release, apply pressure, disengage
Eye motions                  Eye travel, eye focus
Body, leg, foot motions      Foot motion, leg motion, side step, turn body, walk, bend and arise from bend, stoop and arise from stoop, kneel and arise from kneel, sit, stand

MTM Form. A standardized form (see Table 3) is used to describe workflows using
MTM. Left hand (columns 2-4) and right hand (columns 6-8) are considered separately.
In addition to a brief description of the motion sequence (Description), the form also
records any repetitions of the motion (quantity x frequency, QxF) and the associated
Code. As part of the standardized process language, codes describe a motion with the
aid of abbreviations to which information, such as the type of motion, is clearly as-
signed. For most motions, the code consists of the three parts Type of Basic Motion
(such as G for grasping), Length of Motion (such as 20 for 20 cm) or Angle of Rotation
and the Case of Motion. In addition, there are body movements that can be uniquely
described with fewer abbreviations or only with additional abbreviations. [19]
Based on the code and a metric card, the defined target time in the Time Measure-
ment Unit (TMU) is assigned to each element and summed up, where one TMU corre-
sponds to 0.036 seconds. [19]

Table 3. MTM Form. [19]

No. Description QxF Code TMU Code QxF Description


1

n
∑ Sum
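
This form lends itself directly to a machine-readable representation, which the introduction identifies as a requirement for validating take-over procedures. The following minimal Python sketch (class and field names are our own illustration, not part of MTM or of this contribution) mirrors the structure of the form and performs the summation of the target times and the conversion to seconds described above.

from dataclasses import dataclass, field

TMU_TO_SECONDS = 0.036  # one Time Measurement Unit, as defined above

@dataclass
class MtmRow:
    """One line of the MTM form; left and right hand are recorded
    separately, and the TMU value is looked up from the code via
    the MTM metric card."""
    no: int
    left_desc: str    # description of the left-hand motion ("" if none)
    right_desc: str   # description of the right-hand motion
    code: str         # standardized motion code, e.g. "R30A"
    tmu: float        # target time of the motion in TMU

@dataclass
class MtmForm:
    rows: list[MtmRow] = field(default_factory=list)

    def total_tmu(self) -> float:
        # Adding the target times of the individual elements yields the
        # time of the entire work sequence.
        return sum(row.tmu for row in self.rows)

    def total_seconds(self) -> float:
        return self.total_tmu() * TMU_TO_SECONDS

Applied to the examples in Section 4, such a representation makes the calculated take-over times directly comparable across activities and interior variants.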

Approach. In order to verify to what extent MTM is applicable for the description of
HoMoTo, several analyses are performed. In the first analysis, basic analogies between
the sub-aspects of MTM and vehicle use are elaborated. For this purpose, typical
motion sequences of the driver during manual car driving are considered and the
corresponding MTM motion elements are assigned. This analysis forms the basis for
the further application of the MTM method to HoMoTo.
In the second analysis, it is checked whether the basic motion sequence from the
exercise of an NDRA to the assumption of the driver posture can be described by means
of the MTM process language. This analysis is performed using the upper body posture
as an example. The basic postures of the upper body during the execution of NDRA
identified by Fleischer & Chen (forward, neutral, backward, twisted and leaned) are
used as starting postures [20]. Since the entire motion sequence towards the ready-to-
take-over position at the driver's workstation is to be analyzed, the sitting direction is
included as another factor. Accordingly, the five basic postures are combined with the
three seat direction categories: in driving direction (0°), against driving direction
(180°), and in between (5° to 175°). Based on these 15 initial postures, the resulting
motion sequence towards driver take-over is described and translated into the MTM
process language.
Thirdly, the application of the MTM method to the take-over procedure is
demonstrated exemplarily for two activities, referring back to the motion sequences
elaborated for the upper body postures in the second analysis. For this purpose, the
activities necessary for the take-over of the driving task are first divided into individual
subtasks using the presented HoMoTo approach. Then, the basic MTM elements are
transferred to the elaborated subtasks using the MTM form (see Table 3). As examples,
reading a book in the driver's seat in the direction of travel and watching a movie using
VR glasses while leaning back against the direction of travel are considered. The two
examples differ in terms of their technical complexity and feasibility in the interior of
autonomous vehicles and can therefore highlight possible limitations of the MTM
approach.

4 Results

4.1 Categories of Hand-Over and Move-Over


The results show that the different take-over procedures can be traced back to a
common structure. Table 4 shows the general categories and subcategories that, based
on the analysis, are most appropriate for structuring Hand-Over and Move-Over. Since
the activity during passive travel critically affects HoMoTo, the NDRA, the posture
adopted during it, and the additional elements used to perform the NDRA are listed as
separate categories.
Regarding the timing, the initialization of the two phases Hand-Over and Move-Over
takes place directly after the Take-Over Request. For all other categories, it is not
possible to derive a uniform chronological sequence.
During initialization on the driver and vehicle side, a decision is made as to which
substeps are performed and in which way. For example, an object used in the passive
phase can be placed in the designated tray for the long term, or placed on the passenger
seat for a short time to be used again after a short break. In the initialization phase, it
is also decided which processes run sequentially and which run in parallel.
The Hand-Over phase can be categorized based on the additional items used during
passive driving and during the hand-over scenario. Personal items are those that the
driver himself brings into the vehicle; these include, for example, clothing, mobile
phone and notebook, but also food and drinks. Vehicle elements are provided by the
manufacturer by design and can serve both the NDRA and the driving task; examples
are control elements or integrated entertainment applications. Both personal items and
vehicle elements are transferred from the driver to the vehicle and vice versa during
the hand-over procedure.

Table 4. Categorization of HoMoTo and examples for NDRA (Example 1: reading a book; Example 2: watching a movie with VR glasses).

Activity during passive driving
  NDRA:              E1: Reading a book | E2: Watching a movie with VR glasses
  Initial posture:   E1: Driving posture | E2: Against driving direction, leaning back
  Additional items:  E1: Book, reading glasses | E2: VR glasses
Take-Over Request
Initialization
  Vehicle:  E1: Open tray | E2: Adjust seat
  Driver:   E1: Book in tray, glasses in storage tray, adjust posture | E2: VR glasses on hook, put on shoes, adjust posture
Hand-Over
  Personal items, driver to vehicle:  E1: Book in tray, reading glasses in storage tray | E2: VR glasses on hook
  Personal items, vehicle to driver:  E1: - | E2: Shoes from footwell
  Vehicle items, driver to vehicle:   E1: - | E2: -
  Vehicle items, vehicle to driver:   E1: - | E2: -
Move-Over
  Vehicle, seat:            E1: - | E2: Raise backrest, rotate seat, move forward
  Vehicle, steering wheel:  E1: - | E2: -
  Vehicle, driving pedals:  E1: - | E2: -
  Vehicle, interior:        E1: - | E2: -
  Vehicle, information:     E1: - | E2: -
  Driver, adjust posture:   E1: Hands to steering wheel, feet to pedals, look ahead | E2: Straighten and align body, hands to steering wheel, feet to footwell, feet to pedals, look ahead
  Driver, prepare body:     E1: - | E2: Put on shoes
  Driver, sensory perception:  E1: Process book emotionally | E2: Process movie emotionally
Take-Over

The Move-Over phase is first divided into vehicle-side and driver-side tasks. On the
vehicle side, the second level is subdivided into the interior elements that are adapted
during the move-over. On the driver side, the subcategory "adjust posture" includes
both the adjustment of the body posture itself and the posture-induced, driver-side
adjustment of interior elements. "Prepare body" refers to the restoration of a ready-to-
drive condition; this includes putting on and taking off clothes or cleaning hands after
eating. In this subcategory, Move-Over and Hand-Over are coupled. The sensory
perception subcategory covers shifting the focus from the NDRA to the driving task.
The detailed structuring of HoMoTo confirms that primarily the NDRA itself, the
driver, the vehicle characteristics and environmental influences affect the number, the
temporal sequence as well as the execution speed of the individual steps.

4.2 Suitability of MTM for describing HoMoTo


As expected, the results show a large number of analogies between MTM and vehicle
use. Examples are reaching for the gearshift lever or the turn signal without visual or
haptic control. Basic elements that can be adopted from the MTM process language
include, for hand and arm movements, Reaching (to buttons, knobs, controllers),
Pressing or Turning (them), Grasping (the gearshift lever) and Moving (the lever to
shift). A repetitive foot and leg movement during driving is moving the foot to the
accelerator pedal. In addition, eye motions, such as eye travel to the side and rearview
mirrors, and body rotations to check the blind spot can be described.
The further analysis of the basic representation of motion sequences in the
autonomous driving mode with the MTM process language is shown for the example
of the upper body posture in Table 5. For straightening up from a bent, reclined, or
laterally inclined posture, the Arise from Bend (AB) motion element is used as an
approximation. For the rotated upper body, the Turn Body motion according to Case 1
(TBC1) is likewise used as an approximation. However, essential, time-determining
movements, such as the vehicle-side rotation and straightening of the seat, cannot be
described with MTM elements and must therefore be measured separately (noted as
n.d.). In line with the basic idea of MTM, these can be compared with setup times in
production.

Table 5. Torso postures and subsequent movements.

Posture    Seat direction              Motion sequence                             MTM
Forward    In driving direction        Arise from bend                             AB
           Rotated (5°<x<175°)         Arise from bend, rotate seat (5°<x<175°)    AB, n.d.
           Against driving direction   Arise from bend, rotate seat (180°)         AB, n.d.
Neutral    In driving direction        -                                           -
           Rotated (5°<x<175°)         Rotate seat (5°<x<175°)                     n.d.
           Against driving direction   Rotate seat (180°)                          n.d.
Backward   In driving direction        Arise from bend                             AB
           Rotated (5°<x<175°)         Arise from bend, rotate seat (5°<x<175°)    AB, n.d.
           Against driving direction   Arise from bend, rotate seat (180°)         AB, n.d.
Twisted    In driving direction        Turn body                                   TBC1
           Rotated (5°<x<175°)         Turn body, rotate seat (5°<x<175°)          TBC1, n.d.
           Against driving direction   Turn body, rotate seat (180°)               TBC1, n.d.
Leaned     In driving direction        Arise from bend                             AB
           Rotated (5°<x<175°)         Arise from bend, rotate seat (5°<x<175°)    AB, n.d.
           Against driving direction   Arise from bend, rotate seat (180°)         AB, n.d.
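
Since the paper argues for standardizable, computer-readable process descriptions, Table 5 can be encoded directly as a lookup structure. The following Python fragment is our own transcription of the table (the key strings are assumptions); "n.d." marks the vehicle-side seat motions whose durations MTM cannot provide and which must be measured separately.

# (posture, seat direction) -> MTM codes of the resulting motion sequence
MOTION_SEQUENCES = {
    ("forward",  "in driving direction"):      ["AB"],
    ("forward",  "rotated"):                   ["AB", "n.d."],
    ("forward",  "against driving direction"): ["AB", "n.d."],
    ("neutral",  "in driving direction"):      [],
    ("neutral",  "rotated"):                   ["n.d."],
    ("neutral",  "against driving direction"): ["n.d."],
    ("backward", "in driving direction"):      ["AB"],
    ("backward", "rotated"):                   ["AB", "n.d."],
    ("backward", "against driving direction"): ["AB", "n.d."],
    ("twisted",  "in driving direction"):      ["TBC1"],
    ("twisted",  "rotated"):                   ["TBC1", "n.d."],
    ("twisted",  "against driving direction"): ["TBC1", "n.d."],
    ("leaned",   "in driving direction"):      ["AB"],
    ("leaned",   "rotated"):                   ["AB", "n.d."],
    ("leaned",   "against driving direction"): ["AB", "n.d."],
}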

4.3 Examples
The sequence of the two examples considered is shown in Fig. 2. It should be noted
that this is only one of several possible sequences.

[Figure 2 shows the phases NDRA, HoMoTo and Driving for the examples "Reading Book" and "Watching Movie".]

Fig. 2. Visualization of the start, intermediate and end positions of the examples.

The structuring of the NDRA "reading a book" according to the HoMoTo categories is
shown in Table 4 and the detailed description by means of MTM basic elements is
shown in Table 6. Since movements can occur in parallel, for completeness the codes
are provided with corresponding symbols ([ and |), which are not further discussed here.
It is assumed that the center console contains a large compartment with a roller blind
for objects that is opened automatically by the vehicle, as well as a tray without a lid
for the safe storage of eyeglasses. Since the driver is already in the direction of travel,
it is not necessary to straighten and turn the seat to take over the driving task.

Table 6. NDRA „Reading a book“ described in the MTM process language.

No. Description QxF Code TMU Code QxF Description


Upright sitting position in the driving direction
1 Close book M18A 9,0 M18A Close book
2 Release RL1] 5,6 G2 Regrasp
3 13,3 M30B Bring to tray
4 2,0 RL1 Release
5 13,0 R50A Hand to glasses
6 2,0 G1A Pick up grasp
7 16,8 M45B Bring to storage tray
8 2,0 RL1 Release
9 Hand to steer. wheel R30A 9,5 R30A Hand to steer. wheel
10 20,0 |ET Look ahead
11 |LM20 Foot to pedal
∑ Sum 93,2

If the driver receives a TOR, he must first close the book he is holding in both hands
(1). He must then place the book in the center console with his right hand (2-4) and
stow his reading glasses (5-8). Finally, to take over, he must place his hands on the
steering wheel (9), look ahead (10) and simultaneously guide his foot to the pedal (11).
With the target times from Table 6, this sequence sums to 93.2 TMU, i.e. roughly 3.4 s.
As another example, the NDRA "watching a movie with VR glasses" is structured
using the HoMoTo concept in Table 4 and described in the MTM process language in
Table 7. In this example, the driver is reclined and facing against the direction of travel,
so appropriate seat adjustments must be made for the driver take-over. To stow the VR
glasses used, it is assumed that a corresponding holder is located on a side interior wall
and that the driver's shoes are placed next to the seat.
First, upon receiving a TOR, the driver must remove and stow the VR glasses (1-5).
Then, he must straighten his seat into an upright position (6-9) and rotate his seat by
90° (10-12) to put on his shoes (13-26). After that, he must rotate his seat back to the
driver position (27-29) and move it forward (30-32). Finally, as in the previous
example, he must bring his hands to the steering wheel (33) and direct his gaze forward
(34) while placing his foot on the pedal (35). This sequence sums to 699.8 TMU, i.e.
roughly 25.2 s; note, however, that the durations of the motorized seat movements
(straightening, turning, advancing) are based on assumptions only.

5 Discussion

5.1 Method Suitability


The categorization of the HoMoTo approach makes it possible to divide the transfer
process into substeps and to analyze these substeps with regard to activities, postures,
movements, various influencing factors and the objects used. The tabular listing of all
relevant NDRA creates a clear matrix that can be used to analyze focal points and make
comparisons between the NDRA. However, HoMoTo does not take into account the
temporal reference, which is particularly relevant in the case of overlaps between the
Hand-Over and Move-Over phases. This gap is closed by the MTM method, which
allows the description of action and movement sequences on a granular level and with
standardized time specifications.
A number of examples have demonstrated the basic applicability of MTM to move-
ment sequences in the vehicle interior during manual driving. In addition, the transfer
of MTM elements to basic or upper body postures during an NDRA proved to be suit-
able, even if this cannot describe all movements or interior adjustments.
The results also show that the description of the HoMoTo process with MTM is
possible. The use of the MTM method for the analytical description and temporal
evaluation of the partial actions is assessed as promising. However, not every available
descriptor is applied in the examples presented: movements with repetitions do not
occur in these examples (the QxF fields remain empty). In addition, approximations
have to be accepted, such as Arise from Bend for straightening up from a lying posture
or Foot Motion with strong pressure (FMP) for putting on shoes.

Table 7. NDRA „Watching a movie with VR glasses“ described in the MTM process language.

No. Description QxF Code TMU Code QxF Description


Lying against the driving direction
1 Hand to glasses R50A 13,0 R50A Hand to glasses
2 Pick up grasp G1A 2,0 G1A Pick up grasp
3 Loosen glasses D3E 22,9 D3E Loosen glasses
4 15,8 M40A Bring to hook
5 2,0 RL1 Release
6 9,5 R30A Hand to switch
7 10,6 APA Press switch
8 111,1 n.d. Raise backrest (4 s)
9 31,9 AB Arise from bend
10 4,5 R6A Hand to switch
11 10,6 APA Press switch
12 83,4 n.d. Rotate seat (90°) (3 s)
13 29,0 B Bend
14 Hand to 1st shoe R40A 11,3 R40A Hand to 1st shoe
15 Pick up grasp G1A 2,0 G1A Pick up grasp
16 Bring to foot M20B| 10,5 |M20B Bring to foot
17 |LM15 Leg to shoe
18 19,1 FMP Put on shoe
19 Release 2,0 RL1 Release
20 Hand to 2nd shoe 7,5 R18A Hand to 2nd shoe
21 Pick up grasp G1A 2,0 G1A Pick up grasp
22 Bring to foot M20B| 10,5 |M20B Bring to foot
23 |LM15 Leg to shoe
24 19,1 FMP Put on shoe
25 Release 2,0 RL1 Release
26 31,9 AB Arise from bend
27 9,5 R30A Hand to switch
28 10,6 APA Press switch
29 83,4 n.d. Rotate seat (90°) (3 s)
30 3,4 R4A Hand to switch
31 10,6 APA Press switch
32 83,4 n.d. Move seat forward (3 s)
33 Hand to steer. wheel R60A 14,7 R60A Hand to steer. wheel
34 20,0 |ET Look ahead
35 |LM40 Foot to pedal
∑ Sum 699,8

The method forces developers to structure the individual substeps of the driver take-
over at a granular level, which is essential for an optimized interior design.
Furthermore, the MTM method focuses on time as the dominant factor and thus makes
it possible to identify potential for optimization in terms of time. The standardized
description of the transfer procedures enables a systematic and objective comparison
of different activities and interior variants. This comparability makes it possible to
identify optimization potential with regard to ergonomic interior design at an early
stage of development and allows the take-over process to be designed more
ergonomically, more efficiently and with greater customer orientation.
When comparing the MTM analyses of the two examples, it becomes apparent that
the type of NDRA can make the take-over procedure significantly more complex and
thereby significantly increase the take-over time (here from roughly 3.4 s to roughly
25.2 s). Contributing factors include the items used and the adaptation of the interior
for performing the driving task. In addition, the take-over time is also significantly
increased by technical variables, such as the motorized rotation of the seat. The times
in the examples listed are based on assumptions only. To calculate the total duration of
the HoMoTo phases, the automotive manufacturer must record these times and take
them into account in the take-over analysis.

5.2 Impacts on Automobile Manufacturers


The type and number of possible NDRA while driving will be one of the most important
differentiating features, and thus purchasing criteria, of future automated vehicles of
the various brands. Automobile manufacturers must therefore intensively address the
NDRA desired by future customer groups. In addition to passive safety, a safe take-
over process from the respective NDRA is also necessary for the approval of an
NDRA. In the course of this, the understanding of quality for driving, and thus brand
differentiation, may also change: a comfortable and safe take-over of the driving task
can go hand in hand with a high-quality experience in terms of safety. Above all, the
take-over duration and an interior design that supports a safe take-over play a decisive
role in this context.
The HoMoTo concept presented, in conjunction with the MTM process, enables the
automotive manufacturer to analyze the take-overs from different NDRA in terms of
their movement sequences at a granular level. On the one hand, this enables the manu-
facturer to determine, describe and evaluate the time requirements for each substep and
thus optimize the take-over procedures in terms of time. On the other hand, the sup-
ports, objects and aids required for convenient transfer can be identified. Since the con-
cept can be applied at an early stage of development, late modification costs can be
avoided.

5.3 Limitations
The MTM method is optimized for the field of production ergonomics, not for the
vehicle interior, and the application areas differ to some extent. For example, individual
anthropometric characteristics have a greater influence on the execution of movements
due to the limited space in the interior. Additionally, the variety of individually
preferred motion strategies is of greater importance. Individual actions and movements
of the driver during the take-over procedure cannot be regarded as highly practiced
activities in the sense of the MTM principle and are therefore subject to a higher
uncertainty in the time calculation. The factors mentioned above influence the
standardized Time Measurement Units and their interindividual scatter.
Furthermore, not all conceivable activities of the vehicle occupants can be mapped
with the existing description catalog. In particular, the movements performed by the
vehicle cannot be described with MTM.
The influence of the driver's body dimensions, postures and forces must be analyzed
and described in a separate simulation system such as the 3D human model RAMSIS.
For the influence of the mental and psychological state of the driver during the take-
over process, the authors are currently not aware of any suitable description and analy-
sis procedure. Overall, an adaptation of the MTM method to the vehicle-specific bound-
ary conditions is necessary.

6 Outlook

Further work on this topic will require a more in-depth analysis, application and exten-
sion of the MTM process for use in vehicles. This initially involves a comprehensive
analysis of today's conventional activities in the vehicle. In addition, the development
of a specialized MTM application module appears to make sense, with which all activ-
ities and processes in the vehicle interior of today's and future vehicles can be analyzed
comprehensively and precisely. The detailed validation of the method and measurement
of vehicle-specific Time Measurement Units as well as the movement times of the ve-
hicle interior parts represents another indispensable work package.
A special challenge for further research work will be the coupling with anthropo-
metric as well as cognitive human models. This is absolutely necessary for a consistent
computer simulation of all driver and passenger scenarios in the vehicle interior.

References
1. VDA, https://www.vda.de/dam/vda/publications/2015/automatisierung.pdf, last accessed
2021/04/12.
2. SAE International, https://www.sae.org/standards/content/j3016_201401/preview/, last ac-
cessed 2021/04/12.
3. Zhang, B., de Winter, J., Varotto, S., Happee, R., Martens, M.: Determinants of Take-Over
Time from Automated Driving: A Meta-Analysis of 129 Studies. Transportation Research
Part F: Traffic Psychology and Behaviour 64, 285-307 (2019).
4. Daimler AG, https://www.daimler.com/innovation/case/autonomous/interview-hafner.html,
last accessed 2021/04/19.
5. ADAC, https://www.adac.de/rund-ums-fahrzeug/ausstattung-technik-zubehoer/autonomes-
fahren/technik-vernetzung/aktuelle-technik/, last accessed 2021/04/12.
6. Gasser, T. M., Seeck, A., Smith, B. W.: Rahmenbedingungen für die Fahrerassistenzent-
wicklung. In: Winner, H., Hakuli, S., Lotz, F., Singer, C. (eds.) Handbuch Fahrerassistenz-
systeme – Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort, 3rd
edn., pp. 27–54. Springer Vieweg, Wiesbaden (2015).

7. Radlmayr, J.: Take-Over Performance in Conditionally Automated Driving: Effects of the


Driver State and the Human-Machine-Interface. Technical University of Munich, Munich
(2020).
8. Schwab, B., Kolbe, T. H.: Requirement Analysis of 3D Road Space Models for Automated
Driving. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sci-
ences IV-4(W8), 99–106 (2019).
9. Wright, T. J., Samuel, S., Borowsky, A., Zilberstein, S., Fisher, D. L.: Experienced Drivers
Are Quicker to Achieve Situation Awareness Than Inexperienced Drivers in Situations of
Transfer of Control Within a Level 3 Autonomous Environment. In: Proceedings of the Hu-
man Factors and Ergonomics Society Annual Meeting, pp. 270–273. SAGE Publications,
Los Angeles (2016).
10. Jarosch, O.: Non-Driving-Related Tasks in Conditional Driving Automation. Technical Uni-
versity of Munich, Munich (2019).
11. Gold, C. G.: Modeling of Take-Over Performance in Highly Automated Vehicle Guidance.
Technical University of Munich, Munich (2016).
12. Greatbatch, R. L., Kim, H., Doerzaph, Z. R., Llaneras, R.: Human-Machine Interfaces for
Handover from Automated Driving Systems: A Literature Review. In: Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, pp. 1406–1410. SAGE Publica-
tions, Los Angeles (2020).
13. Robert Bosch GmbH, https://projekt-tango-trucks.com/, last accessed 2021/04/16.
14. Carsten, O., Lai, F. C. H., Barnard, Y., Jamson, A. H., Merat, N.: Control Task Substitution
in Semiautomated Driving: Does It Matter What Aspects Are Automated?. Human Factors
54(5), 747–761 (2012).
15. Bundesministerium für Wirtschaft und Energie, https://www.bmwi.de/Redaktion/DE/
Downloads/P-R/projektsteckbriefe-automatisiertes-fahren.pdf?__blob=publicationFile&v
=6, last accessed 2021/04/10.
16. Marberger, C., Mielenz, H., Naujoks, F., Radlmayr, J., Bengler, K., Wandtner, B.: Under-
standing and Applying the Concept of “Driver Availability” in Automated Driving. In: In-
ternational Conference on Applied Human Factors and Ergonomics, pp. 595–605. Springer,
Cham (2017).
17. Damböck, D.: Automationseffekte im Fahrzeug – von der Reaktion zur Übernahme. Tech-
nical University of Munich, Munich (2013).
18. Schlick, C., Bruder, R., Luczak, H.: Arbeitswissenschaft. 3rd edn. Springer-Verlag, Berlin,
Heidelberg (2010).
19. Bokranz, R., Landau, K.: Handbuch Industrial Engineering: Produktivitätsmanagement mit
MTM. 2nd edn. Schäffer-Poeschel, Stuttgart (2012).
20. Fleischer, M., Chen, S.: How Do We Sit When Our Car Drives for Us?. In: International
Conference on Human-Computer Interaction, pp. 33–49. Springer, Cham (2020).
Navigation with Uncertain Map Data for Automated
Vehicles

Christopher Diehl, Niklas Stannartz and
Univ.-Prof. Dr.-Ing. Prof. h.c. Dr. h.c. Torsten Bertram

TU Dortmund, Institute of Control Theory and Systems Engineering,


44227 Dortmund, Germany

Abstract.
Using external map information for automated driving is beneficial, as it allows an
extension of the sensor range and global navigation to an a priori specified goal.
However, common systems use high-definition maps, which are expensive to construct
and hard to maintain. Therefore, the paper at hand proposes a sensor-independent
approach for navigation based on uncertain map data. This work first builds an
environment model and plans a global route based on publicly available
OpenStreetMap data. Afterward, it plans a trajectory considering the uncertainty in the
map. Experiments in simulation and on real-world data show the efficiency of the
approach.

Keywords: Global Navigation, Motion Planning, Environment Modelling

1 Introduction

Safe and efficient navigation is an indispensable component of automated vehicles.
Due to the limited perceptive field of the vehicle sensors and the restricted on-board
computing capacity, the motion planning problem is often divided into several
subproblems. A global planner first computes a path to the desired goal without real-
time requirements. The vehicle then follows this path in real time by generating a
trajectory that considers kinematic and dynamic constraints. The global path is often
computed based on high-definition map data, which are expensive to construct and
hard to maintain. Publicly available map services like OpenStreetMap (OSM) [1]
provide nearly, but not always, complete information about the road topology.
However, difficulties occur when using these maps for automated navigation. These
are mainly caused by inaccurate and coarse map information and by the uncertainty
about the position of the vehicle relative to the map. As a result, safety-critical
scenarios can occur when the automated vehicle predominantly relies on following the
global path. The work at hand presents a sensor-independent approach to navigation
based on inaccurate maps. The approach can be applied in different scenarios, for
example automated valet parking (see Figure 1 (b)), navigation in one-way street
systems, or guiding shuttle systems in unstructured environments.

© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_11

The remainder of the paper is organized as follows. Section 2 describes related work.
Section 3 introduces the developed navigation approach. Section 4 presents simulations
and real-world experiments with data from our institute’s test vehicle (Figure 1 (a)).
Section 5 concludes the paper and gives a further outlook.


Fig. 1. (a) Test vehicle of the Institute of Control Theory and Systems Engineering. (b) Planning
result based on real-world data. Illustrated are the occupancy grid map in grey, the Open-
StreetMap graph in blue, the planned global route in red, and the determined trajectory in green.

2 Related Work

Various approaches have so far focused on navigation with both high-definition maps
and inaccurate maps in unstructured environments. The authors of [2] use 3D-lidar
scans to localize a robot in an uncertain map. While achieving impressive robustness
for global localization, they rely on the classification of the scans into the classes street
and no street. This approach becomes difficult when using other sensor technologies
like radar sensors. Another approach [3] corrects the inaccurate OSM path using a
Markov-Chain Monte-Carlo technique to match the roads from the map with the sensor
data; for this, semantic terrain information generated from the lidar scans is needed.
The authors of [4] construct a path with a Dijkstra-based global planning algorithm
using a known map and follow this path using nonlinear model predictive control.
Another publication [5] proposes a trajectory planning approach for navigation with
uncertain GPS coordinates in the context of global outdoor navigation, using a
semantic occupancy grid map as the environment model. Salzmann et al. [8] apply a
deep-learning-based method for path planning utilizing camera images and high-level
route information. The authors of [9] propose an autonomous navigation approach
utilizing only OSM information; they perform a topometric registration utilizing
segmented road information from lidar measurements. For an overview of different
motion planning techniques in the context of automated valet parking, the interested
reader is referred to [6].
The contribution at hand builds on the work of Fassbender et al. [5]. In contrast, this
work uses a state-of-the-art classical occupancy grid mapping approach [7] and OSM
information. The algorithm constructs the local grid map with 3D-lidar data from a
realistic simulation and from the introduced test vehicle, which is illustrated in Figure
1 (a). In contrast to other publications, the proposed approach is sensor-independent,
as it uses only occupancy information within the environment model; it can thus also
be used with other sensor technologies like radar or ultrasonic sensors, from which less
semantic information can be extracted. Moreover, this work proposes a strategy for
switching between the goals resulting from a route planning module.

3 Navigation with Uncertain Map Data

This section describes the developed approach for navigation based on uncertain map
information. Therefore, first, it introduces the problem formulation and system archi-
tecture. Afterward, it presents the different modules of the navigation approach.

3.1 System Architecture


The goal of this work is to navigate in an unstructured environment to a specified goal
state. Since the goal can lie far outside the sensing range, this work subdivides the
planning process into two parts. First, a route planning module calculates the shortest
global path. For this, the algorithm assumes access to a publicly available topometric
map (e.g. OSM [1]), as illustrated in the system architecture in Figure 2. Subsequently,
the perception module estimates a local environment model based on raw sensor data.
This work uses 3D-lidar data; however, the proposed approach is sensor-independent,
and the algorithm can, for example, use radar measurements as well. Further sensor
information is provided by a Global Navigation Satellite System (GNSS) and vehicle
odometry measurements.

[Figure 2 block diagram: raw sensor data feeds the perception module, which derives the occupancy grid, binary grid and distance map; uncertain map data feeds the route planning module; both feed the trajectory planning within the motion planning module.]

Fig. 2. System architecture of the developed approach for automated navigation.



3.2 Perception
Based on sensor measurements, the perception module constructs different
environment representations, which are used by the subsequent planning approach.
This work first builds an occupancy grid map [10]: the environment is discretized into
cells of fixed size, which hold occupancy probabilities. The algorithm filters out lidar
measurements that belong to ground reflections, based on the measured height
information. Then this work applies a 2-D lidar SLAM approach [7], whose result is
the occupancy grid map together with an estimate of the relative pose of the ego vehicle
with respect to the map, as visualized in Figure 3 (a). Black denotes cells with a high
occupancy probability, light grey cells are likely not occupied, and green-grey
represents cells whose occupancy state is unknown. Note that this environment
representation can also be constructed with other sensor technologies like radar sensors
[11] or cameras [12]. Afterward, a binarized grid is generated by marking all cells
whose occupancy probabilities fall below a threshold as free space. Moreover, this
work also considers all unknown cells, resulting from missing lidar reflections, as not
occupied to avoid false-positive obstacles in front of the vehicle; this is common
practice in occupancy grid mapping [13]. Note that this may result in an overly
optimistic behavior of the automated vehicle, especially in scenarios with occlusion.
However, during motion planning this work considers the occupancy uncertainty of
the cells, as described in Section 4. Figure 3 (b) visualizes the environment model after
binarization. Lastly, the perception module applies the Euclidean distance
transformation [14] to the binarized grid map and stores the product of the grid
resolution and the distance values. The result is a grid map, illustrated in Figure 3 (c),
whose cells contain the distance (in meters) to their closest obstacle.
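
To make this pipeline concrete, the following Python sketch derives the distance map from an occupancy-probability grid. It is a minimal illustration under stated assumptions (unknown cells encoded as NaN, a free-space threshold of 0.5); the actual grid in this work is produced by the 2-D lidar SLAM approach [7].

import numpy as np
from scipy.ndimage import distance_transform_edt

def build_distance_map(occ_prob, resolution):
    """occ_prob: 2-D array of occupancy probabilities, NaN = unknown
    (an assumption of this sketch). Returns the per-cell distance in
    meters to the closest occupied cell."""
    occ = np.nan_to_num(occ_prob, nan=0.0)  # unknown cells count as free
    free = occ < 0.5                        # binarized grid
    # The Euclidean distance transform assigns every nonzero (free) cell
    # its distance to the nearest zero (occupied) cell, in cell units.
    return distance_transform_edt(free) * resolution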



Fig. 3. Environment representations of the approach. (a) Occupancy grid map. (b) Binarized grid
map. (c) Map after Euclidean distance transformation.

3.3 Motion Planning


The motion planning module first loads the topometric OSM map, as illustrated in the
bottom left of Figure 2. Next, a graph structure is built, and a route from the current
position to a goal position is planned using the A* algorithm. The nodes of the OSM
route then define goal regions for the subsequent local trajectory planning algorithm.
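
To illustrate this step, the following self-contained Python sketch runs A* over a topometric graph. The node coordinates and the adjacency structure are assumed inputs (e.g. parsed from OSM nodes and ways); the straight-line distance serves as an admissible heuristic. This is an illustration, not the implementation used in this work.

import heapq, math

def astar_route(nodes, adj, start, goal):
    """nodes: id -> (x, y) metric coordinates; adj: id -> iterable of
    neighbor ids. Returns the shortest node sequence from start to goal."""
    def h(n):  # admissible straight-line heuristic
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), start)]
    g = {start: 0.0}           # cost from start to each visited node
    parent = {start: None}
    closed = set()
    while open_set:
        _, n = heapq.heappop(open_set)
        if n == goal:          # reconstruct the route by backtracking
            route = []
            while n is not None:
                route.append(n)
                n = parent[n]
            return route[::-1]
        if n in closed:
            continue
        closed.add(n)
        for m in adj.get(n, ()):
            (x1, y1), (x2, y2) = nodes[n], nodes[m]
            cost = g[n] + math.hypot(x2 - x1, y2 - y1)
            if cost < g.get(m, float("inf")):
                g[m] = cost
                parent[m] = n
                heapq.heappush(open_set, (cost + h(m), m))
    return None                # goal not reachable in the graph

The poses of the returned node sequence then serve as the successive goal regions of the local planner.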

Optimal Trajectory Planning. The following notation is partially adopted from [5]
and [16]. The goal of the trajectory planning module is to find a collision-free, kino-
dynamically feasible trajectory 𝜋(𝑡) from the vehicle's start state 𝐬s ∈ 𝑆 at time 𝑡 = 0 to
a goal region 𝕊g ⊆ 𝑆. The state space 𝑆 is composed of an 𝑛-dimensional real space ℝ𝑛
and non-Euclidean rotation groups 𝑆𝑂(2). The set of all allowed configurations at time
𝑡 ∈ [0, 𝑇] is denoted as 𝕊free ⊆ 𝑆 and guarantees collision avoidance. The predicate
𝐷(𝜋̇, 𝜋̈, 𝜋⃛, …) represents differential constraints and ensures the dynamic constraints on
the trajectory and its kino-dynamic feasibility. Let 𝐽(𝜋) be a cost functional, and let
Π(𝑆, 𝑇) denote the set of all continuous functions [0, 𝑇] → 𝑆. The optimal trajectory 𝜋*
is then given by

𝜋* = argmin_{𝜋 ∈ Π(𝑆,𝑇)} 𝐽(𝜋)    (1)

subject to 𝜋(0) = 𝐬s,
           𝜋(𝑇) ∈ 𝕊g,
           𝜋(𝑡) ∈ 𝕊free for all 𝑡 ∈ [0, 𝑇],
           𝐷(𝜋̇, 𝜋̈, 𝜋⃛, …) ≥ 0.

This work describes the state by 𝐬 = [𝑥, 𝑦, 𝜃, 𝑣, 𝑐]ᵀ, where 𝑥 ∈ ℝ and 𝑦 ∈ ℝ denote
the 2-D position in a fixed coordinate system, 𝜃 ∈ 𝑆𝑂(2) is the heading angle between
the longitudinal vehicle axis and the 𝑥-axis of the fixed coordinate system, 𝑣 ∈ ℝ
denotes the vehicle's velocity and 𝑐 ∈ ℝ its curvature.
Sampling-based Solution. This work follows a sampling-based optimization scheme
to find a globally optimal solution for the planned trajectory. In contrast to approaches
based on numerical optimization, sampling-based methods are less prone to getting
stuck in local minima and can deal with non-differentiable cost functions. Following
the ideas of [5], this work constructs a search tree by sampling reachable states.
Let 𝒩 be the set of all nodes. A node 𝑁 ∈ 𝒩 describes a clothoid arc defined by a
start pose, an initial curvature, a rate of change of curvature and a terminal velocity.
Based on this information, the algorithm can compute the pose and curvature at the
end of the clothoid arc for a fixed arc length.
Further, let 𝐽R(𝑁) be the running cost of the path from the root node to 𝑁, and let
𝐽H(𝑁) denote a heuristic cost from 𝑁 to the final node. The cost of a node 𝑁 is given by

𝐽(𝑁) = 𝐽R(𝑁) + 𝐽H(𝑁)    (2)

This work stores the graph's leaf nodes in a priority queue, called 𝒩OPEN ⊂ 𝒩. At the
beginning of the planning stage, the priority queue contains only the root node (i.e. the
start state). The elements are sorted in ascending order by their cost values 𝐽(𝑁), and
the collision-free node with the lowest cost is expanded first. The planner stops
expanding and returns a solution when a runtime limit is reached.
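
The described queue handling can be summarized in a few lines. The sketch below assumes hypothetical helpers expand(node), returning only collision-free children, is_goal(node), and the two cost functions; it illustrates the scheme, not the authors' code.

import heapq, time

def plan(root, expand, is_goal, j_running, j_heuristic, budget_s=0.1):
    """Best-first construction of the search tree with a runtime budget."""
    tie = 0                                 # tie-breaker for equal costs
    n_open = [(j_running(root) + j_heuristic(root), tie, root)]
    n_closed = []                           # nodes satisfying the goal condition
    t0 = time.monotonic()
    while n_open and time.monotonic() - t0 < budget_s:
        _, _, node = heapq.heappop(n_open)  # cheapest collision-free leaf
        for child in expand(node):
            tie += 1
            j = j_running(child) + j_heuristic(child)
            # Goal-satisfying nodes go to the closed queue (see below).
            heapq.heappush(n_closed if is_goal(child) else n_open,
                           (j, tie, child))
    return n_closed[0][2] if n_closed else None  # least-cost solution, if any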

Fig. 4. Two heuristics guide the expansion of the search tree. Green denotes the Dubins path to
the orange goal pose. Purple visualizes the collision-free path.

Heuristics. Similar to [5], the paper at hand uses two heuristics to estimate the cost
𝐽H(𝑁). The first one, 𝐽H,Dubins(𝑁), describes the length of the Dubins path connecting
the start pose 𝐩s = [𝑥s, 𝑦s, 𝜃s]ᵀ and the goal pose 𝐩g = [𝑥g, 𝑦g, 𝜃g]ᵀ. Furthermore, this
work estimates the shortest collision-free path between the start pose 𝐩s and the goal
pose 𝐩g, whereby the vehicle kinematics are approximated by a single integrator and
its shape is approximated by one circle with radius 𝑟 ∈ ℝ⁺₀. Then, the cell with index
𝑏 ∈ ℕ is computed that lies on this path and is closest to the sampled pose
𝐩𝑁 = [𝑥𝑁, 𝑦𝑁, 𝜃𝑁]ᵀ. The second heuristic 𝐽H,obs(𝑁) is then given by the sum of the
distance from 𝐩𝑁 to the cell with index 𝑏 and the length of the path from this cell to 𝐩g.
Figure 4 visualizes the Dubins path and the shortest path. The heuristic cost is then
given by

𝐽H(𝑁) = max(𝐽H,Dubins(𝑁), 𝐽H,obs(𝑁)).    (3)

Clothoid Construction. The planner expands its leaf nodes by constructing clothoid
arcs of fixed arc length 𝐿 ∈ ℝ⁺₀ while considering the non-holonomic constraints as in
[15]. This work follows the method described in [5]. First, a set 𝒱 of potential terminal
velocities for child nodes of 𝑁 is estimated. This set contains both forward and
backward velocities. Assuming a constant acceleration 𝑎 ∈ ℝ with |𝑎| < 𝑎max ∈ ℝ⁺₀, a
set 𝒦 of curvature changes is computed using the maximum steering rate 𝛿̇max ∈ ℝ⁺₀,
the maximum steering angle 𝛿max ∈ ℝ⁺₀, the maximum curvature 𝑐max ∈ ℝ⁺₀ and the
sampled velocity 𝑣s ∈ 𝒱. For a more detailed description, the interested reader is
referred to [5]. The variable 𝑘𝑁 ∈ 𝒦 describes a change of curvature between the
current node and the subsequent leaf node 𝑁l ∈ 𝒩. Then, following [17], this work
estimates the end pose of the clothoid arc by the parametric expressions

𝑥(𝐿) = 𝑥𝑁 + ∫₀ᴸ cos(𝜃(𝜉)) d𝜉,    (4)

𝑦(𝐿) = 𝑦𝑁 + ∫₀ᴸ sin(𝜃(𝜉)) d𝜉,    (5)

𝜃(𝐿) = 𝜃𝑁 + 𝑐𝑁 𝐿 + ½ 𝑘𝑁 𝐿².    (6)

Here, 𝑐𝑁 is the curvature of node 𝑁, and 𝑥(𝐿), 𝑦(𝐿) and 𝜃(𝐿) describe the pose at the
end of the clothoid. This pose is also the new start pose 𝐩𝑁l = [𝑥𝑁l, 𝑦𝑁l, 𝜃𝑁l]ᵀ of the
subsequent leaf node. Solving the integrals in equations (4) and (5) in closed form is
not possible, whereas numerical approaches come with a high computational burden.
Therefore, the algorithm approximates the sine and cosine terms by Taylor series
expansion as described in [18]:

sin(𝜆) = Σₙ₌₀^∞ (−1)ⁿ 𝜆^(2𝑛+1)/(2𝑛+1)! = 𝜆/1! − 𝜆³/3! + 𝜆⁵/5! − 𝜆⁷/7! ± …,    (7)

cos(𝜆) = Σₙ₌₀^∞ (−1)ⁿ 𝜆^(2𝑛)/(2𝑛)! = 𝜆⁰/0! − 𝜆²/2! + 𝜆⁴/4! − 𝜆⁶/6! ± ….    (8)

Using this approximation, the integrals only need to be solved once before planning,
and the solution is obtained at runtime by inserting the parameters.
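
For reference, the end pose of equations (4) to (6) can also be obtained by simple numerical quadrature, as in the Python sketch below; the planner itself uses the Taylor-series approximation above so that the integrals need to be solved only once before planning. Function and parameter names are our own.

import math

def clothoid_end_pose(x_n, y_n, theta_n, c_n, k_n, L, steps=100):
    """Numerically integrate equations (4) and (5) with the midpoint rule
    to obtain the pose at the end of a clothoid arc of length L."""
    x, y = x_n, y_n
    d = L / steps
    for i in range(steps):
        xi = (i + 0.5) * d                              # midpoint of sub-interval
        theta = theta_n + c_n * xi + 0.5 * k_n * xi**2  # equation (6) at xi
        x += math.cos(theta) * d
        y += math.sin(theta) * d
    theta_end = theta_n + c_n * L + 0.5 * k_n * L**2    # equation (6)
    return x, y, theta_end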

Collision Avoidance. After constructing new nodes, the algorithm checks whether
they are collision-free; only then are the newly sampled nodes added to the search tree.
For this, the approach subsamples the arc with a sample distance of 𝐿s ∈ ℝ⁺₀ and checks
whether every subsampled pose is collision-free. The vehicle shape is approximated
by three circles of equal radius 𝑟 ∈ ℝ⁺₀, as illustrated in Figure 5. The center of the first
circle lies in the center of gravity of the vehicle; the centers of the other two circles lie
in the middle of the front and rear axle. The radius is defined by 𝑟 = ½𝑤 + 𝑑safe, where
𝑤 ∈ ℝ⁺₀ is the width of the vehicle and 𝑑safe ∈ ℝ⁺₀ is a safety distance. The algorithm
then determines the grid cells to which the centers of the circles belong and looks up
the corresponding values in the distance map. If such a value is smaller than the radius
𝑟, the pose is subject to a collision. Hence, the algorithm uses at most three queries per
pose, resulting in a fast collision check. If all poses of a clothoid arc are collision-free,
the newly sampled leaf node 𝑁l is added to the queue 𝒩OPEN.
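
A compact sketch of this three-circle check against the distance map is given below. The axle offsets d_front and d_rear are assumptions of the sketch (the circles are placed at the center of gravity and over the two axles, but the offsets are not stated above), as is the row-major (y, x) grid indexing.

import math

def pose_collision_free(x, y, theta, dist_map, resolution, r,
                        d_front=1.4, d_rear=1.4):
    """Check one subsampled pose; dist_map holds, per cell, the distance
    in meters to the closest obstacle."""
    centers = [(x, y),
               (x + d_front * math.cos(theta), y + d_front * math.sin(theta)),
               (x - d_rear * math.cos(theta), y - d_rear * math.sin(theta))]
    rows, cols = len(dist_map), len(dist_map[0])
    for cx, cy in centers:
        i, j = int(cy / resolution), int(cx / resolution)
        if not (0 <= i < rows and 0 <= j < cols):
            return False              # outside the mapped area
        if dist_map[i][j] < r:        # closest obstacle nearer than the radius
            return False
    return True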

Fig. 5. (a) Vehicle shape approximation by three circles for collision checking (b) Circles over-
laid on the distance map.

Goal Region. If the newly generated leaf node 𝑁l satisfies the terminal condition, it is
not inserted into the queue 𝒩OPEN but into another queue called 𝒩CLOSED ⊂ 𝒩. This
queue is also sorted in ascending order by cost values. The target manifold consists of
two parts.
Let 𝑔term be the line running through the goal pose 𝐩g orthogonal to its heading. The
minimum Euclidean distance to this orthogonal line is the first part of the target
manifold. The second part is the angular difference ∆𝜃𝑁l,g between the target pose and
the final pose of the node. The node lies within the target manifold if the pose of 𝑁l is
less than 𝑑𝑁l,g,max ∈ ℝ⁺₀ away from the target line and ∆𝜃𝑁l,g < ∆𝜃𝑁l,g,max ∈ ℝ⁺₀.
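
Under these definitions, the terminal condition reduces to two scalar checks, sketched below with the thresholds later listed in Table 1 (6 m and 𝜋/10 rad); the helper is our own illustration.

import math

def in_goal_region(x, y, theta, goal, d_max=6.0, dtheta_max=math.pi / 10):
    """goal = (x_g, y_g, theta_g). The pose must lie within d_max of the
    line through the goal pose orthogonal to its heading, with a heading
    difference below dtheta_max."""
    gx, gy, gtheta = goal
    # Distance to the orthogonal line = projection onto the goal heading.
    d_line = abs((x - gx) * math.cos(gtheta) + (y - gy) * math.sin(gtheta))
    dtheta = abs(math.atan2(math.sin(theta - gtheta), math.cos(theta - gtheta)))
    return d_line < d_max and dtheta < dtheta_max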

Cost function. After adding the node 𝑁l, its running cost results in

𝐽R(𝑁l) = 𝑤0 𝐿s(𝑁l) + 𝑤1 𝑐max(𝑁l) + 𝑤2 𝑑obs(𝑁l) + 𝑤3 (1 − 𝑜(𝑁l)) + 𝑤4 𝑘𝑁l + 𝑤5 𝑣𝑁l    (9)

Here 𝑤0, 𝑤1, …, 𝑤5 denote manually tuned weights. 𝐿s(𝑁l) is the total length of the
path from the root node to the respective node 𝑁l. The maximum curvature of the path
is denoted by 𝑐max(𝑁l), and 𝑑obs(𝑁l) is the smallest distance of the path to an obstacle.
𝑜(𝑁l) is the number of occupancy grid cells with unknown occupancy state within a
square of edge length 𝑑edge around the node. 𝑘𝑁l and 𝑣𝑁l are the curvature change and
the velocity of the node 𝑁l.
Further, this work applies the pruning approach of [5] that filters out nodes whose
final states are almost identical. Hence, it prevents a too large search tree and limits the
required computing time. After reaching its time limit, the planner returns the least-cost
trajectory corresponding to the node at the top of the queue 𝒩CLOSED . Note that [5]
additionally computes a velocity profile such that the final state of the trajectory has a
velocity of zero.

Goal Region Switching. The route planning module provides a set
𝒫 = {𝐩g¹, 𝐩g², …, 𝐩gᴾ} of 𝑃 ∈ ℕ target poses, which form a reference path. To enable
navigation guided by this path, the goal pose switches as soon as the vehicle meets the
following conditions. First, the distance 𝑑s,g between the vehicle and the target line
must be less than 𝑑s,g,max ∈ ℝ⁺₀. Furthermore, the angular difference ∆𝜃s,r between the
vehicle and the target pose must satisfy ∆𝜃s,r < ∆𝜃s,r,max ∈ ℝ⁺₀. The threshold values
are determined empirically, taking into account GPS and localization inaccuracies.
Figure 6 visualizes the conditions for changing the goal pose.

4 Experimental Results

This section evaluates the developed approach in a realistic simulation and on real-
world data. The proposed approach is implemented in C++ in ROS, running on an
AMD Ryzen 5 3600 processor with 3.6 GHz base clock speed. The parameter
configuration is listed in Table 1.

The goal of our experiments is to answer the following questions: (Q1) Is navigation
with the approach possible using different levels of map and localization accuracies?
(Q2) How efficient is the planning strategy compared to other methods?


Fig. 6. Example scenario of goal region switching. The vehicle approaches the goal region 𝕊g
and fulfills the conditions for the distance 𝑑s,g to the target line and the heading difference ∆𝜃s,r.
Hence it switches its goal pose from 𝐩g¹ to 𝐩g². Blue denotes the OSM graph; red and green
illustrate the planned route and trajectory, respectively.

Table 1. Parameters used during experiments.


Variable        Value
𝐿               5 m
𝐿s              1 m
𝛿max            𝜋/10 rad
𝛿̇max            0.5 rad s⁻¹
𝑎max            2 m s⁻²
𝑤               1.76 m
𝑑safe           0.2 m
∆𝜃𝑁l,g,max      𝜋/10 rad
∆𝜃s,r,max       𝜋/10 rad
𝑑𝑁l,g,max       6 m

Fig. 7. The urban simulation scenario. The green rectangle denotes the ego vehicle. The red lines
denote the planned route.

4.1 Simulative Evaluation


The utilized simulation environment [19] uses Gazebo and implements an urban sce-
nario. It simulates a vehicle equipped with a 3D lidar (OS1-64) sensor. This sensor is
also used in the real-world experiment. The simulation environment is visualized in
Figure 7.
To answer question (Q1), this work simulates different accuracy levels of the route.
This is motivated by the fact that OSM maps are constructed by users who often rely
on low-cost GNSS measurements; these can be error-prone, and route inaccuracies can
also occur if the global vehicle localization has an erroneous estimate. Figure 8 (a)
visualizes the planning performance for a high-accuracy route, where the vehicle
should drive straight and follow the lane. Figure 8 (b) illustrates the planning
performance in a left-turn scenario: the route is inaccurate and passes through
obstacles, which could result from inaccurate map data or from a localization error.
Hence, strictly following the route would cause a collision. However, the proposed
approach plans a collision-free trajectory in both scenarios. This underlines that the
method is suitable for navigation in unstructured environments using different
topometric map accuracies.
To answer question (Q2), this work further compares the planning performance to an
alternative path planning approach. The other method first samples a state lattice [20]
using an exact steering function; as a result, kinematically feasible Dubins paths [21]
connect the sampled poses. This work discretizes the state space in the longitudinal
vehicle direction with a resolution of 5 m and in the lateral direction with a resolution
of 0.5 m. For each sampled position, this work also generates heading proposals in the
interval [−𝜋/4 rad, 𝜋/4 rad] with a resolution of 𝜋/8 rad. Then an A* graph search
estimates the least-cost, collision-free path that satisfies the previously described
terminal condition. In the following, we call this approach the state space sampling
planner (SSSP).
This experiment compares the performance and efficiency of both planning
algorithms. For a fair comparison, the weights 𝑤1, 𝑤4, 𝑤5 in the cost function are set
to zero, since these cost terms do not have any counterpart within the SSSP. Both
planners are configured such that they need the same computational time of
approximately 100 ms.

Figure 9 shows the open-loop planning results in green; the vehicle should follow the
red route. In both time steps, the SSSP plans a much shorter path, but it is not able to
plan a path around the corner in the first time step. This is due to the fixed grid size of
the sampling in the state space. Note that this behavior would not occur if the
discretization were increased; however, this would also increase the computation time.
In contrast, the proposed trajectory planner uses an iterative sampling scheme that is
informed by the two described heuristics and is therefore more sample-efficient. This
is also confirmed in Table 2. This experiment runs both planners in parallel and
measures the average number of collision-free nodes 𝑉c ∈ ℕ over the whole scenario,
which consists of 494 planning iterations. The proposed planner based on [5] generates
more collision-free nodes than the SSSP and explores the free space more efficiently.

(a) (b)
Fig. 8. Comparison of the trajectory planning results at different map accuracy levels. (a) Planning
result using accurate map data; here, the route to follow lies approximately in the middle of
the actual road. (b) Planning performance using inaccurate route information; the route to follow
is not collision-free.

(a) (b)

Fig. 9. Comparison of the planning performance in two different time steps. (a) Proposed trajec-
tory planner based on [5]. (b) SSSP.

Table 2. Average number of collision-free nodes in the simulation.


Metric This work SSSP
𝑉c 347 207

4.2 Evaluation using Real-World Data


This work further evaluates the proposed approach on real-world data from the test
vehicle, which is shown in Figure 1 (a). The test vehicle is equipped with an Ouster
OS1-64 3D lidar sensor. Moreover, in this experiment, we use additional data from a
Real-Time Kinematic GNSS (RTK-GNSS) with a coupled Inertial Measurement Unit
(IMU). All components are connected via CAN to a host PC, running ROS as the mas-
ter. This experiment uses the proposed trajectory planner to answer question (Q1).

Localization Concepts. In this experiment, two different localization concepts are
tested. The first concept uses global information from the RTK-GNSS. The high-precision
mode was intentionally not used in order to model realistic localization errors of up to 1.3 m.
Current approaches (e.g. [22]) which use lidar, GNSS, and IMU measurements can
achieve similar global localization accuracies. Using this localization scheme results in
successful open-loop navigation in multiple scenarios. Illustrative results are beyond
the scope of this paper. The second concept assumes that the vehicle is accurately
localized at the beginning of the experiment, e.g. by averaging over multiple GNSS
measurements while the vehicle is standing still. An initial heading estimate can be obtained
when the vehicle starts moving, using the first GNSS measurements and assuming that
the vehicle drives on a straight line. Afterwards, the vehicle determines its relative pose
with respect to the start location using the SLAM approach [7]. Note that this approach
does not use additional GNSS information after the initial global localization. In the
remainder of this section, the second approach is used.
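The initialization step of the second concept can be illustrated with a small sketch; the helper name and the (east, north) coordinate format are assumptions, not the authors' implementation:

import math
import numpy as np

def initial_pose_from_gnss(fixes_standstill, fixes_moving):
    # Average the fixes recorded while standing still -> start position.
    p0 = np.mean(np.asarray(fixes_standstill, dtype=float), axis=0)
    # The first fixes while moving, assumed to lie on a straight line,
    # give the initial heading estimate.
    q = np.asarray(fixes_moving, dtype=float)
    d = q[-1] - q[0]
    theta0 = math.atan2(d[1], d[0])
    return p0, theta0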

Results. The data was collected during a manually driven run in an automated valet parking
scenario. Figure 10 (a) shows a satellite image of the scene and Figure 10 (b) visualizes
the OSM graph in blue and the driven route in red. Further, it illustrates the constructed
global occupancy grid map in grey. Note that some parts of the OSM graph run
very close to obstacles due to map and localization errors. Strictly following the path
would lead to a collision here. Figure 10 (c) shows the successful open-loop trajectory
planning results over multiple time steps. This demonstrates the applicability of the
approach in real-world scenarios.

5 Conclusion and Future Work

The contribution at hand proposed an approach for navigation in unstructured environments
with uncertain low-cost map data. The environment representation used in this
work can be constructed with several sensor technologies, which makes the method
sensor-independent. After constructing the environment model, the vehicle plans its route
based on topometric map data. This provides a set of goal poses. A subsequently described
planning module computes a trajectory towards a goal region defined around
the goal pose. It is based on an existing sampling-based trajectory planning approach.
The search tree construction is guided by multiple heuristics, and afterwards a minimum-cost
trajectory is returned. This work further described a strategy for goal pose
switching. The approach was first evaluated in a realistic simulation environment.
Further, this work compared it to another sampling-based path planning approach.

(a)

(b)

(c)
Fig. 10. Evaluation of the navigation approach on real-world data. (a) Satellite image of the
automated valet-parking scenario (image source: Google Maps). (b) OSM graph (blue) and
driven route (red). (c) Trajectory planning results in three time steps. Additionally illustrated are
the goal pose (orange arrow), target line (orange line), and constructed occupancy grid (grey).

This comparison demonstrated the sample efficiency of the proposed trajectory planner.
A second experiment showed the planning results on real-world data from the test ve-
hicle. This work also described two localization strategies which can be used in com-
bination with the proposed planning approach.
Future work will couple the method with a model predictive control approach based
on numerical optimization [4] and test the method in a closed-loop experiment. Further,
we achieved first results using the proposed environment model in conjunction with
radar sensors. Lastly, we plan to apply topometric registration techniques using a classical
occupancy grid map to make our approach more robust to large localization and
map errors.

Acknowledgment
We thank Christoph Wunsch for his comprehensive contribution to the research pre-
sented in the article at hand.

References
1. OpenStreetMap contributors: OpenStreetMap, https://www.openstreetmap.de/, last accessed 2021/04/12.
2. Ruchti, P., Steder, B., Ruhnke, M., Burgard, W.: Localization on OpenStreetMap data using a 3D laser scanner. In: 2015 IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 5260-5265 (2015).
3. Suger, B., Burgard, W.: Global outer-urban navigation with OpenStreetMap. In: 2017 IEEE International Conference on Robotics and Automation, Singapore, pp. 1417-1422 (2017).
4. Rösmann, C., Makarow, A., Bertram, T.: Online Motion Planning based on Nonlinear Model Predictive Control with Non-Euclidean Rotation Groups. In: European Control Conference (accepted) (2021).
5. Fassbender, D., Mueller, A., Wuensche, H.J.: Trajectory Planning for Car-Like Robots in Unknown, Unstructured Environments. In: 2014 International Conference on Intelligent Robots and Systems (IROS), IEEE, Chicago, IL, USA, pp. 3630-3635 (2014).
6. Banzhaf, H., Nienhüser, D., Knoop, S., Zöllner, J.M.: The future of parking: A survey on automated valet parking with an outlook on high density parking. In: 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, pp. 1827-1834 (2017).
7. Hess, W., Kohler, D., Rapp, H., Andor, D.: Real-time loop closure in 2D LIDAR SLAM. In: 2016 IEEE International Conference on Robotics and Automation, Stockholm, Sweden, pp. 1271-1278 (2016).
8. Salzmann, T., Thomas, J., Kühbeck, T., Sung, J., Wagner, S., Knoll, A.: Online Path Generation from Sensor Data for Highly Automated Driving Functions. In: IEEE Intelligent Transportation Systems Conference, Auckland, New Zealand, pp. 1807-1812 (2019).
9. Ort, T., et al.: MapLite: Autonomous Intersection Navigation Without a Detailed Prior Map. In: IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 556-563 (2020).
10. Elfes, A.: Using occupancy grids for mobile robot perception and navigation. In: Computer, vol. 22, no. 6, pp. 46-57 (1989).
11. Diehl, C., Feicho, E., Schwambach, A., Dammeier, T., Mares, E., Bertram, T.: Radar-based Dynamic Occupancy Grid Mapping and Object Detection. In: IEEE 23rd International Conference on Intelligent Transportation Systems, Rhodes, Greece, pp. 1-6 (2020).
12. Posada, L.F., Velasquez-Lopez, A., Hoffmann, F., Bertram, T.: Semantic Mapping with Omnidirectional Vision. In: IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, pp. 1901-1907 (2018).
13. Homm, F., Kaempchen, N., Ota, J., Burschka, D.: Efficient occupancy grid computation on the GPU with lidar and radar for road boundary detection. In: 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, pp. 1006-1013 (2010).
14. Rosenfeld, A., Pfaltz, J.L.: Distance Functions on Digital Pictures. In: Pattern Recognition, vol. 1, no. 1, pp. 33-61 (1968).
15. Kong, J., Pfeiffer, M., Schildbach, G., Borrelli, F.: Kinematic and dynamic vehicle models for autonomous driving control design. In: IEEE Intelligent Vehicles Symposium (IV), Seoul, Korea (South), pp. 1094-1099 (2015).
16. Paden, B., Čáp, M., Yong, S.Z., Yershov, D., Frazzoli, E.: A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles. In: IEEE Transactions on Intelligent Vehicles, vol. 1, no. 1, pp. 33-55 (2016).
17. Brezak, M., Petrović, I.: Real-time Approximation of Clothoids With Bounded Error for Path Planning Applications. In: IEEE Transactions on Robotics, vol. 30, no. 2, pp. 507-515 (2014).
18. Vázquez-Méndez, M., Casal Urcera, G.: The Clothoid Computation: A Simple and Efficient Numerical Algorithm. In: Journal of Surveying Engineering, vol. 142, 04016005:1-9 (2016).
19. Wil Selby Homepage, https://www.wilselby.com/2019/05/ouster-os-1-ros-gazebo-simulation-in-mcity-and-citysim/, last accessed 2021/04/09.
20. Likhachev, M., Ferguson, D.: Planning Long Dynamically-Feasible Maneuvers for Autonomous Vehicles. In: Proceedings of Robotics: Science and Systems (2008).
21. Dubins, L.E.: On curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents. In: American Journal of Mathematics, vol. 79, pp. 497-516 (1957).
22. Chang, L., Niu, X., Liu, T.: GNSS/IMU/ODO/LiDAR-SLAM Integrated Navigation System Using IMU/ODO Pre-Integration. In: Sensors (Basel), 20(17):4702 (2020).
Scenario Generation for Virtual Test Driving on the Basis
of Accident Databases

Alexander Frings

IPG Automotive GmbH, Karlsruhe, Germany

Abstract. In the vehicle development process, virtual validation and scenario-


based testing will play an even greater role than before. Time-to-market and cost
savings are just two drivers of this trend. A crucial point for the use of virtual test
driving is that not all scenario variations can be discovered in the real test drive
or reconstructed on the proving ground.
To ensure vehicle safety in all conceivable traffic situations, it is imperative
to consider an extremely large number of scenarios in the tests. This can be
achieved, among other things, by relying on scenario definitions from accident
databases, which are a valuable source of scenarios because they classify types
of accidents that have actually occurred as well as their severity.
The challenge here is to transfer the large amount of data into the simulation
in order to make it usable in virtual test driving. This means, for example, going
beyond a replay simulation to use logical scenarios to generate variations of the
meaningful parameters in conjunction with the existing test strategy.
For this purpose, IPG Automotive developed a solution called ScenarioRRR
(Record, Replay, Rearrange) to transform measurement data into scenarios. This
solution can be used with accident databases and other sources to provide the
relevant elements for virtual test driving. The captured trajectories of dynamic
road users can thus be placed on road definitions to significantly increase test
diversity and coverage.
In this paper, experiences from different use cases with accident databases and
resulting boundary conditions will be explained.

Keywords: Vehicle development, virtual test driving, accident databases.

1 Motivation

In 2019, the Federal Statistical Office recorded 2.6 million accidents, 384,000 injuries
and 3046 fatalities on public roads in Germany [1]. In the European Union as a whole,
the number of road fatalities is around 22,800 [2]. Even though the numbers are slightly
declining, driver assistance systems and autonomous driving functions should
contribute to reducing them further.
New virtual development methods are needed to fully validate these complex systems.
Because vehicle safety must be ensured in all imaginable situations, it is crucial to test
a very large number of scenarios. Since there will always be corner cases that are not


known and unexpected, it will never be possible to test all relevant scenarios, which
means that complete test coverage cannot be achieved.
So-called scenario-based testing using virtual test driving, combined with data-driven
development methods for the identification of scenarios, can help to reduce this
limitation as much as possible.
In real testing, there are many test parameters that are difficult or impossible to handle.
Virtual test driving offers the possibility of handling some of these aspects, such as
traffic situations that are difficult to reproduce and rarely occurring critical situations,
by creating corresponding scenarios. The scenarios are a fundamental
component of virtual test driving. A scenario contains the time-based representation of
dynamic traffic situations in a static environment model, which consists of the road
layout, topology, road markings, etc., as well as infrastructure like stationary traffic
signs, buildings or temporary changes such as construction sites. Scenarios also include
the definition of all dynamic road users and objects, for example other vehicles and
pedestrians.
An extremely large number of driven or simulated kilometers is necessary to validate
autonomous driving functions. However, it is imperative to have the right scenarios
available. The more situations are analyzed and transformed into virtual test runs, the
more critical scenarios can be tested. A high number of scenarios helps to challenge
developed driving functions.
There are several options for recreating the real environment in the virtual world. To
reduce ambiguity in virtual validation, it is necessary to take advantage of all available
sources of test scenarios. Examples include test cases derived from specified system
requirements, accident databases, standardized and normed tests, and records from
field tests.

2 Accident Databases as a Basis for Virtual Test Driving

This paper will mainly focus on the application of accident databases for simulation.
These are an essential source for scenarios because they classify real types of accidents
and their severity. They show situations in which, in the past, a driving maneuver was
not carried out as intended, resulting in a collision, either with another road user or
with surrounding objects. It is neither known nor relevant which advanced driver
assistance systems (ADAS) were already installed in the vehicle, nor whether and how they
contributed to the accident.
Accident databases usually contain speed profiles and trajectories of all involved
road users for about five seconds prior to the accident. The dimensions of those in-
volved in the accident are provided, as well as the object class to which they belong. In
addition, the road in the environment is roughly described by indicating lane markings,
signs, walls and the lane boundaries. In some cases, a longitude and latitude are also
given to determine the position, which can later be used to extract further information.
Likewise, the estimated vehicle state can be supplemented by other signals such as ac-
celeration with the help of simulation environments.
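As an illustration only, a record with these fields could be modeled as in the following Python sketch; this is a hypothetical schema, not the actual format of any accident database:

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ParticipantTrack:
    object_class: str                         # e.g. "car", "pedestrian", "truck"
    length_m: float                           # standardized dimensions
    width_m: float
    trajectory_xy: List[Tuple[float, float]]  # roughly 5 s prior to the collision
    speed_profile_mps: List[float]

@dataclass
class AccidentRecord:
    participants: List[ParticipantTrack]
    lane_markings: List[dict]                 # coarse road description
    signs_and_walls: List[dict]
    origin_lat_lon: Optional[Tuple[float, float]] = None  # not always given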

2.1 User groups of accident databases

There are different user groups of accident databases, for which it can be very beneficial
to connect these databases to a simulation environment like the open integration
and test platform CarMaker (see Fig. 1). These groups and their different use cases are
described in more detail below. First, there are accident researchers looking at the
consequences of an accident. For them it is crucial that the recorded event of the accident
is rendered faithfully. They commonly work in the field of passive safety, but also in
the field of active safety, with the task of reducing the influence of the impact point on
the severity of the accident. For their task, the recorded accident event must be reproduced
exactly. This is complicated because assumptions were already made when
generating the trajectories and velocity profiles for the databases. On the one hand, the
velocity plot has to be accurate. On the other hand, both the point of collision and the
angle at which the vehicles collide have to match precisely. To this end, the position of
the ego vehicle is prescribed externally, which relocates the vehicle. In this case, the
simulation is no longer a closed-loop vehicle dynamics simulation.

Fig. 1. Different user groups and use cases with crash data

Performing a re-simulation based on the database is also an option. This is an interesting


approach for the second user group: Function developers and engineers working on
active safety and accident avoidance in the field of function research and development.
Their task is to develop new functions and increase the safety of vehicles. They perform
a closed-loop simulation with the virtual prototype of the new vehicle under test (VuT).
The virtual prototype is a digital twin, i.e. a parameterized vehicle model of the VuT
and all of its relevant subcomponents. It has the ability to use different types of sensor
models as well as to integrate ECU models.

2.2 Requirements for the use of accident databases for real-time full vehicle
simulation

Today, systems such as automatic emergency braking (AEB), adaptive cruise control
(ACC), forward collision warning (FCW), lane keeping assist (LKA), evasive steering
assist and highway pilot are often tested and validated with accident databases. Here,
the integration of driving functions plays an important role in determining what would
(not) have happened if a new function had been implemented. In this use case, instead
of driving the actual trajectory from the accident database, the user might need to drive
an intended trajectory. The intended trajectory describes the expected trajectory if the
real driver or an integrated ADAS had not reacted, i.e. without braking by the driver or
an AEB and without a human avoidance maneuver. For example, in many data sets, a
"wrenching" of the steering wheel can be observed shortly before a collision. Such an
intervention is usually recognizable in the accident data, so that the expected
uncontrolled behavior can be derived.
In other words, the vehicle state from the database is used until a determined state is
reached. Then the input is replaced by an estimated intended course. This allows for
observation of how the system – namely vehicle, sensors and functions – reacts to the
unaltered situation.
In both cases (trajectories from the database and intended course), it may also be necessary
to extend the data of the accident database to obtain a more comprehensive, longer
scenario: as already mentioned, it can generally be assumed that the five seconds prior
to the collision are stored in the database. However, this data is not sufficient to enable
correct testing with sensor models and driving functions. Therefore, the scenario is
extended towards the front, i.e. before the time of the first data point in the database (see
Fig. 2). This way, a start from standstill (relevant for HIL applications), but above all a
start of the scenario with the relevant objects outside the sensor range of the VuT, is
made possible. Furthermore, the time after the collision can also be extended in the
scenario. This way, for example, the vehicle behavior after an avoided collision can be
examined to see whether the vehicle avoided the pedestrian but drove into oncoming traffic
or left the road instead.
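A minimal sketch of such a front extension is given below, assuming a straight-line, constant-speed extrapolation backwards from the first recorded state; the function name and the default lead-in time are illustrative:

import numpy as np

def extend_front(traj_xy, speeds, dt, lead_in_s=5.0):
    traj_xy = np.asarray(traj_xy, dtype=float)
    speeds = np.asarray(speeds, dtype=float)
    # Unit vector of the initial driving direction.
    heading = traj_xy[1] - traj_xy[0]
    heading = heading / np.linalg.norm(heading)
    n = int(lead_in_s / dt)
    # Extrapolate backwards at the initial speed so the scenario starts
    # before the first recorded data point, e.g. with the relevant objects
    # still outside the sensor range of the VuT.
    steps = np.arange(n, 0, -1)[:, None]
    prefix_xy = traj_xy[0] - steps * heading * speeds[0] * dt
    return (np.vstack([prefix_xy, traj_xy]),
            np.concatenate([np.full(n, speeds[0]), speeds]))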

Fig. 2. Starting point of the standard scenario (top) and the extended scenario (bottom)

Since in many cases it cannot be clearly determined which road user is responsible
for the accident, it is crucial to be able to adopt different ego perspectives for the sim-
ulation of vehicle-to-vehicle collisions. This means that the overall vehicle simulation
is carried out from the perspective of each vehicle involved in the accident. Often, a
change in the behavior of one of the involved parties already leads to a reduction in
the severity of the accident.

2.3 Generating the highest possible test coverage

For the aforementioned user groups, but also for those responsible in the area of
vehicle safety and validation, it is necessary to vary the parameters of the scenario based
on the requirements shown, so that the test coverage can be as high as possible. This
results in a very high number of possible parameters and values which,
due to the sheer scope, can only be tested with the help of simulation. The type of
sensors used, the requirements of the system under test (SuT) and the operational design
domain (ODD) all play a role in the choice of parameter variation. In the first step, a
variation of the dynamic behavior of road users is often required, for example, to answer
the question of whether the SuT would also have reacted correctly if the pedestrian had
walked onto the road at a different speed or if the tested vehicle had turned at a higher
speed.
In addition, it is a requirement to vary the environmental conditions such as weather
or sun position in order to be able to investigate their influence on camera-based sys-
tems and a possible fallback level in the sensor fusion.
Moreover, it is possible to increase the variance in testing perception by varying the
representation of the 3D objects involved (with the help of an extensive database of 3D
objects), for example by replacing an adult pedestrian with a child or a black SUV with
a differently colored compact car. At this point, it should be mentioned that accident
databases usually only provide approximate dimensions that are standardized for many
objects. Furthermore, it can be of interest to use the imported scenario description as a
starting point for variations of the country-specific features, e.g. to vary the parameters
of the road markings such as color and width, but also to vary the traffic signs.
In a final step, it may now be necessary to create a completely new functional sce-
nario [3] by adding more road users to the imported scenario in order to further chal-
lenge the driving function (in the context of ODD) and increase the test coverage. It
may for example be relevant to add vehicles that lead to a temporary occlusion of other
participants or limit the decision space of ADAS.
In general, and especially due to the options presented for expanding the source data,
automating the import of an entire accident database while retaining existing structures
also plays an important role.
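As an illustration of such a parameter variation, the following sketch expands a logical scenario into concrete scenarios via a full-factorial combination; the parameter names and value ranges are assumptions:

import itertools

variations = {
    "pedestrian_speed_mps": [0.8, 1.4, 2.0, 3.0],      # dynamic behavior
    "ego_turn_speed_kph":   [20, 30, 40],
    "weather":              ["clear", "rain", "fog"],  # environmental conditions
    "pedestrian_model":     ["adult", "child"],        # 3D-object representation
}

def concrete_scenarios(space):
    # Expand a logical scenario into concrete scenarios (full factorial).
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

print(sum(1 for _ in concrete_scenarios(variations)))  # 4 * 3 * 3 * 2 = 72 runs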

3 Automatic Scenario Generation Using Accident Databases

3.1 Open integration and test platform CarMaker

With its different sensor model classes (ideal ground truth, high fidelity and raw
signal interface) for the typical sensor technologies camera, radar, lidar and ultrasonic,
the CarMaker product family with its derivatives TruckMaker and MotorcycleMaker
offers a suitable starting point for testing ADAS.

Fig. 3. Principle of closed-loop scenario-based testing with CarMaker and its sensor models

By using CarMaker, accident scenarios can be created from the perspective of pas-
senger cars. For this purpose, traffic objects of all classes can be used, from pedestrians
to e-scooter drivers and commercial vehicles. In addition, multi-axle trucks and com-
mercial vehicles can be simulated by using TruckMaker or powered two-wheelers can
be selected as VuT in MotorcycleMaker. In order to meet the requirement of integrating
sensor models and control systems in multiple vehicles of a simulation, CarMaker of-
fers the feature SimNet which allows the networking of several vehicles under test in a
shared simulation environment. This way, the influence on the accident can also be
investigated if several vehicles have advanced safety systems for vehicle control. The
models of the driving functions can be integrated via the Functional Mock-up Interface
(FMI), for example. Likewise, the data exchange between the simulation environment,
sensor models and the control function can be carried out via standardized interfaces
such as Open Simulation Interface (OSI) (see Fig. 3).

3.2 Toolbox ScenarioRRR

In order to provide maximum support for the automated creation of scenarios,
IPG Automotive has developed a toolbox called "ScenarioRRR" which helps to generate
scenarios from different types of input data, for example raw sensor measurement
data or object lists from perception functions. Furthermore, functions specifically for
importing data from accident databases (see Fig. 4) into the simulation environment were
included. The basic idea behind this toolbox is to support the automated creation of
scenarios from different kinds of measurement data, such as descriptions from accident
databases, annotated sensor data or object lists from driving functions.

Fig. 4. Data processing for scenario generation with ScenarioRRR

Currently, this solution supports, among others, the GIDAS (German In-Depth Ac-
cident Study) [4] in the PCM format of version 4 and 5, CIDAS (China In-Depth Acci-
dent Study), and other small, local or proprietary accident databases. The trajectories
and speed profiles contained in the accident data are used as target specifications for
the scenarios. A distinction can be made as to whether the original trajectory or the
intended course is to be used. The options described above for extending the scenario
in both directions are also available. Further information which is often generated with
the help of simulation tools, such as signals from the brake light switch or target
accelerations, is intentionally not used as input data for the simulation: the speed profile and
the trajectory are followed by the virtual prototype as accurately as possible. Since the
virtual prototypes differ from the vehicles originally involved in the accident, and since
sensor models and integrated functions are used to test systems, small deviations
usually do not play a significant role.
The existing description of the road is also used to automatically create a road net-
work including the road furniture. For each object type of the road elements defined in
the accident database specification, a 3D object from the object library is defined as
default representation. They are selected based on the underlying dimensions. The var-
iation of the 3D objects of vehicles, as well as the change of the object class of each
traffic participant can be carried out in a straightforward manner.
Nevertheless, experience has shown that the road descriptions in the databases are
rather insufficient for a representative simulation, so suppliers of accident databases
also offer road networks in standardized simulation formats such as ROAD5 or
OpenDRIVE. For this reason, the ScenarioRRR toolbox was extended to allow trajectories
from the data (in this case the accident database) to be placed on a referenced road
network. This means that recorded trajectories can also be linked to road networks that
were exported from (high-precision) map data using the given GPS position. Typically,
HERE HD Live Maps [5], OpenStreetMap [6] or customer-specific solutions are used
in this context.
This possibility has proven to be particularly advantageous if, as described above,
the description of the movement before and after the actual accident occurrence is to be
expanded. Usually, the road geometry recorded in the database is only suitable for
reconstructing the short sequence that is included, but not for an extension.
Furthermore, if the maneuver extraction is to be used for parameter variation, an
accurate description of the road network is required. The maneuver extraction is the
abstraction of a closed maneuver-based description for each traffic object, including the
ego vehicle.

Fig. 5. Example maneuver extraction for a longitudinal maneuver description

The aim of the maneuver extraction is to obtain logical scenarios in which the behavior
of the ego vehicle and all road users is described by several mini-maneuvers, i.e.
partial descriptions which indicate target velocities (or accelerations) or phases of
steady-state driving. This allows for easy variation of the parameters in order to vary
not only the longitudinal speed but also the lateral descriptions of the trajectory in a
simple and targeted manner. For example, the description available up to this point in
the form of a velocity profile is broken down into several mini-maneuvers. If the threshold
values for maneuver identification are chosen appropriately, the initial scenario and
the reconstructed scenario differ only slightly after maneuver extraction. In this case,
only the description form of the maneuver has changed, but not its visualization compared
to replay scenarios (see Fig. 5). In order to be able to make a statement on the
deviation, a corresponding calculation of the deviation has already been added to the
toolbox. With the help of the maneuver extraction, it is also possible to classify and
evaluate what has happened. This is particularly important when processing sensor data
as a data source, whereas this type of information for classifying scenarios in accident
data was already provided during data collection. For example, lane changes and
overtaking maneuvers can be detected automatically, and relevant parameters, such as
distances between approaching vehicles or relative speeds, can be determined. Such an
evaluation makes it possible to interpret the recorded data in subsequent steps. Using a
genuine road network, it is now also possible to add further traffic objects, i.e. to perform
a data augmentation of the original scenario.
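A minimal sketch of such a maneuver extraction for the longitudinal direction is given below; the acceleration thresholding is one simple way to segment a speed profile into mini-maneuvers and is an assumption, not the toolbox's actual algorithm:

import numpy as np

def extract_mini_maneuvers(speeds, dt, a_thresh=0.3):
    # a_thresh (m/s^2) controls how closely the reconstructed logical
    # scenario matches the recording.
    speeds = np.asarray(speeds, dtype=float)
    accel = np.gradient(speeds, dt)
    label = np.where(accel > a_thresh, "accelerate",
                     np.where(accel < -a_thresh, "decelerate", "hold"))
    segments, start = [], 0
    for i in range(1, len(label) + 1):
        if i == len(label) or label[i] != label[start]:
            # Each mini-maneuver: (type, start time, duration, target speed).
            segments.append((str(label[start]), start * dt,
                             (i - start) * dt, speeds[i - 1]))
            start = i
    return segments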

4 Conclusion and Outlook

The requirements for handling accident databases for use in closed-loop simulation
applications were presented and corresponding solutions for automated scenario gener-
ation for CarMaker were introduced.
The ScenarioRRR toolbox makes it possible to transform measurement data into sce-
narios and can be combined with accident databases and other sources to provide the
relevant elements for virtual test driving. The captured trajectories of dynamic road
users can be placed on road definitions to significantly increase test diversity and cov-
erage. The toolbox also provides solutions for various use cases to enable the user to
create logical scenarios and parameter variations.
IPG Automotive continues to work on expanding its toolbox for further use cases, for
example for easy variation of inner-city junction scenarios. Furthermore, it is possible
to carry out customer-specific adaptations for other input data formats. To increase the
degree of automation, integration into customer-specific solutions for requirements and
test management of scenarios is recommended.

References
1. Destatis Homepage, https://www.destatis.de/EN/Home, last accessed 2021/04/20.
2. European Commission Homepage, https://ec.europa.eu/commission/presscorner/de-
tail/de/qanda_20_1004, last accessed 2021/04/20.
3. Pegasus-Projekt Homepage, https://www.pegasusprojekt.de/files/tmpl/PDF-
Symposium/04_Scenario-Description.pdf, last accessed 2021/04/20.
4. Gidas-Projekt Homepage, https://www.gidas-projekt.de/index-11.html, last accessed
2021/04/20.
5. HERE HD Live Map Homepage, https://www.here.com/platform/automotive-services/hd-
maps, last accessed 2021/04/20.
6. OpenStreetMap Homepage, https://www.openstreetmap.de/, last accessed 2021/04/20.
A modular test strategy for Highly Autonomous Driving
based on Autonomous Parking Pilot and Highway Pilot

Andreas Bossert1, Stephan Ingerl1 and Stefan Eisenhauer1


1 ITK engineering GmbH, Im Speyerer Tal 6, 76761 Rülzheim, Germany

Abstract. A big challenge in introducing highly autonomous driving systems is
the verification and validation of such systems. The complexity
of the system environment and the sheer endless number of possible situations and
influencing factors, paired with the enormous importance of safety and security,
require new strategies for verification and validation.
In the following paper we introduce a holistic and modular approach to a test
strategy for Highly Automated Driving functionality. The four building blocks
of a HAD effect chain, Perception, Thinking, Planning and Acting, as well as the
whole system are addressed. We present the underlying example architecture
and its system configuration, which is concretized for the functions of a (Mall)
Parking Pilot and a Highway Pilot. Finally, we show the experiences of using this
strategy for the verification and validation of specific driving functions.


1 Introduction

On the way to highly automated and autonomous driving systems, the vehicle (AV)
needs to be able to handle a vast number of known and unknown situations [1]. This
large number of different scenarios results from the combination of different parameters
as they are structured in the 6-Layer Model and used in the PEGASUS project [3].
Especially the transfer of responsibility for the driving task from the human being
to the AV leads to rapidly increasing complexity within the AV system. Because of
the tremendous variation in scenarios of the environment, which was previously handled
by the driver, the AV system needs an adapted system architecture, much more software
and hardware functionality, and additional safety and security mechanisms. All this
functionality needs to be developed within the same time frame as before, and to make
sure that all these modules work together, faster release cycles are needed. Validating
the whole functionality in one release cycle by driving on the road is not feasible anymore,
because a huge number of miles would be needed [2]. Also, it is highly recommended
to hit the critical scenarios, but the critical scenarios are typically the dangerous ones
for the test driver.


Therefore, strategies are needed to reduce complexity and to accelerate test execution.
One strategy can be functional decomposition, the so-called "slicing the elephant" [5].
Graab et al. [6] use a five-layer decomposition, compared to Burton, who divides the
functionality into four functional components. The information processing (also
called Perception) consists of Layer 1 (information reception) and Layer 2
(information processing) [6]. Slicing the Highly Automated and Autonomous
Driving (HAD) functionality into several components typically leads to separate
test strategies for each discipline, and exactly this results in incompatible strategies
that no longer fit together. Using methods like search-based techniques to optimize
these technical components or parts of them leads to an optimized technical component,
but not to an optimized HAD functionality. Typically, these optimized components
do not lead to an optimized product; they can even lead to an unsafe system. This
can happen if one component relies on prerequisites that the previous one cannot
fulfill. In this paper, an overall HAD test strategy is presented that is highly interlocked
across its different technical components and avoids such unsafe situations. At
the end, the two domain functionalities "Autonomous Parking Pilot" (APP) and
"Highway Pilot" (HWP) are introduced to provide a practical basis for proving our
strategy and as examples for further activities like defining a toolchain and testing HAD
functionalities.

1.1 Scope of classical test strategy


Traditional test strategies for non-automated vehicles (i.e. level 0 to 2 according to
SAE [8]) have to deal with closed systems gaining more and more connections to
the outer world. Connections like mobile data, infotainment interfaces
(like Bluetooth or WiFi) and ADAS sensors have extended the number of external
influences on the vehicle. Even so, the driver is still responsible for the dynamic
driving task (DDT) and for the vehicle's behavior. The driver's ability to
control the vehicle in critical situations assumes that all necessary
components are designed, developed and approved according to ISO 26262 [9]. In
this environment it is sufficient to focus on requirements-based testing, extended with
some additional test design techniques, executed on test benches (like HiL) and
complemented by real driving tests conducted by test drivers. These strategies therefore
focus on testing HW components with their integrated SW and on the integration of
these components into the vehicle, which is tested in the real world.

1.2 Scope of autonomous driving test strategy


With the transfer of the responsibility for the behavior of the vehicle from the human
driver to the AV, the AV must deal with exceptional requirements for safety and security.
Starting at the beginning of the trip, when the DDT is handed over to the vehicle,
and continuing until the DDT is handed back to the human driver, many new requirements
arise. Challenging situations, caused for example by road works, reflections and
edge cases as mentioned by Koopman [11], have to be managed by the AV. Surveying
the real-world environment and its infrastructure is a complex and difficult
task. Even though the AV has the advantage of looking in several directions at the same time,
which a human being is not able to do, the risk of misinterpreting the situation will be hard to
reduce. The basis for managing these challenges is a complex data management process
with the underlying techniques and tools, comprehensive quality metrics and measurements,
new concepts for validation in a virtual world, and a new management mindset. When
executing this huge amount of data in a simulation environment, it is highly recommended
to validate the simulation and its results against the real world.

1.3 Challenges
Testing HAD functionality brings up many additional challenges compared to classical
verification and validation strategies for vehicles within SAE levels 0-2. These
challenges can be categorized into three different areas: Environment, Vehicle and Testing.
Complex situations in the environment are caused by road works, different
weather conditions and road users, and especially by combinations of these. Scenarios
like conflicting crossing lane markers or the need to cross a solid line are examples of
unusual situations in connection with road works. Weather conditions like snow, rain,
fog and wind can even lead to the need for a full stop, or at least for reduced speed. On
the other hand, these conditions are not easy for the AV to detect and quantify. All
these influences, in combination with the non-deterministic behavior of the other road
users, are a daily struggle for human drivers as well. Another aspect within the environment is
the perception of objects like pedestrians, children, animals and other dynamic objects
(like a ball). These objects are sometimes very hard to detect because of reflections,
poster advertising and many more situations, as described in [11]. Additionally, the
infrastructure in parking garages, on highways and in cities is not really prepared and
optimized for the needs of autonomous driving.
The huge amount of data needed for training and testing must be handled, and an
analysis is needed to determine whether the quality of the data is good enough. The data
must be analyzed to extract ground truth information and to identify and categorize the
objects within a scene. In this mass of data and different scenarios, there is the need to
find exactly those scenarios that match the attributes one is searching for. Defining the
structure of the labels, labeling the data and storing it is a big effort, especially with
respect to data privacy. Therefore, new concepts and technologies are needed.
The vehicle itself, with its increasing number of sensors, the necessity of redundancy
in terms of safety, and its complex end-to-end infrastructure, is a very large
enhancement compared to vehicles driven by human beings. The internal communication
has increased to get all the information from the sensors to the ECUs handling the driving
functionality, but also the external connection to the environment and to the cloud for
additional information like HD maps leads to a bigger effort in development and testing.
The testing activities themselves become more complex and difficult because of the
complexity of the use cases that must be handled, the huge number of test executions
needed for every release, the new technologies that have to be dealt with, and
the need to develop reliable simulations. Testing these complex use cases can normally
not be reproduced on public streets and, due to the complexity, not even on special
test tracks. Some of these test cases are critical and unsafe. Regarding special weather
conditions or special phenomena, real drive testing is not feasible. To reproduce such
situations, testing must be relocated from the vehicle to simulation. This can be done by
re-simulating recorded data, but what is to be done if the vehicle acts in a direction where no
data was recorded? Therefore, and to simulate safety-critical scenarios, a digital world
is needed. This virtualization representing the world must be good enough to deliver
trustworthy data and must be representative. In general, all of this is a new field of testing
where little experience exists. Thinking in the direction of the development process, these tests
must be repeated for each release, and a higher number of test cases must be executed
in one release.

1.4 Concept of HAD V&V strategy


On the one hand, many test strategies exist that focus on the overall functionality;
on the other hand, there are test strategies that focus on only one part or on subparts of the HAD
functionality like "Perception", "Thinking", "Planning" and "Acting". Most of the overall
strategies do not break the strategy down to these functional components, so
a separate strategy has been developed for each component. Typically, the different
approaches of the different layers do not fit together and are tested separately.
They are optimized with their own focus, both in development and in testing. Each part
works fine individually, but together they will not work properly, and the integrated
HAD functionality will lead to unknown unsafe behavior. Interlocking these separated
strategies and working with a common data structure is highly recommended
and is the basis of a harmonization of the functional components. Activities in the direction
of harmonization and standardization can be seen in subparts like the Open Simulation
Interface [12] in the component "Perception". The same harmonization and
interlocking mechanisms are needed between all components, not only in the interfaces,
but also in a consistent data structure. The principles of such a HAD test strategy are
creating synergies between the different strategies, raising the test depth (e.g. by
virtualization) and reducing the test effort. To be as efficient as possible, the test process
and the testing must work in an automated pipeline for continuous testing and continuous
delivery up to system level.

2 Reference Architecture for autonomous driving functions

At the beginning of the development of the HAD test strategy, we built a reference
architecture for the autonomous driving function. The architecture consists of the
four building blocks shown in Fig. 1.
The sensors in the Perception block are categorized into two types of sensor clusters,
one called system sensors and the other surround detection sensors. The
system sensor cluster contains, for example, inertial measurement units (IMU), steering
angle, drive torque, brake pressure, temperature and odometry, but also connections
like V2X, HD maps and GNSS (global navigation satellite system). A special
position is held by the sensor of the driver monitoring system, which detects the attention
of the driver for hand-over activities, especially at levels below SAE level 5 (fully
self-driving). The cluster of surround detection sensors (described in [13]) consists of
cameras, lidars, radar, ultrasonic sensors and microphones. The Perception of this cluster
contains sub-functions like the conversion of the physical signals into electronic signals,
a preprocessing part, sometimes a 2D-to-3D transformation, object and feature
detection and classification, and the sensor fusion.

Fig. 1. Reference Architecture of HAD V&V strategy with functional building blocks.

A typical interface from Perception to Thinking contains information from both
kinds of sensor clusters, such as a free-space grid map of the surroundings, the localization
of the ego vehicle in the grid map, and a classified object list (after tracking) incl. velocity
information and orientation. On the other side, the system sensors deliver data about
infrastructure (V2X), maps and the global position, but also quantities like steering angle,
drive torque and brake pressure.
The Thinking component summarizes the subsequent actions: prediction of the behavior
of the other traffic participants, safety checks, scene understanding, mission planning
and decision making. For local route planning through a given scenario, it is necessary
to know how the other traffic participants will behave in the near future. The AV must
predict this behavior from information like pose, velocity and direction given by the
Perception. The traffic rules are the common rules for all traffic participants, so the AV
also needs to know and understand which traffic rules currently apply. Therefore, it is
necessary to classify the scene into the actual driving domain [14]. From the planning
of the trip down to the trajectory planning for a concrete mission, planning is necessary
to reach the next waypoint within the route. This mid-term route depends on the traffic
rules and requires a decision on which maneuvers (lane change, turn over, ...) are
necessary to reach the goals of the mid-term route. The selected maneuvers and the
understanding of the scenery are the input to the Planning component.

Depending on the detected driving domain (DD) and the planned maneuvers, the
corresponding motion planner is selected. For example, in parking situations a different
planner is used than for highway (high-speed) scenarios. The motion planner
calculates a trajectory within the safety regions and the given maneuvers from Thinking.
The calculation of the trajectory is influenced by parameters regarding the road
surface, safety and comfort aspects.
The trajectory represents the nominal value for the Acting block with its parts like
velocity, steering and stability control.
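The interfaces between the four building blocks described above can be summarized in a compact sketch; the type names and field choices are simplifications assumed for illustration, not the actual architecture definition:

from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class PerceptionOutput:                    # Perception -> Thinking
    free_space_grid: np.ndarray            # free-space grid map of the surroundings
    ego_pose: Tuple[float, float, float]   # localization of the ego vehicle in the grid
    objects: List[dict]                    # classified object list incl. velocity, orientation

@dataclass
class ManeuverPlan:                        # Thinking -> Planning
    driving_domain: str                    # e.g. "parking", "highway"
    maneuvers: List[str]                   # e.g. ["lane_change_left", "follow_lane"]
    safety_envelope: List[dict]            # safe driving corridor (lanelets) over time

@dataclass
class Trajectory:                          # Planning -> Acting (nominal values)
    samples: List[Tuple[float, float, float, float, float]]  # (t, x, y, heading, v)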

3 Test strategies for building blocks of an AD functionality

Each building block of an AD functionality has its own test strategy derived from the
overall test strategy. Every test strategy has its own requirements for the global data
structure and process.

3.1 Perception test strategy


The verification and validation of the specific sensor is typically the supplier's task,
but for further validation a model of the physical behavior of the sensor is needed.
This model must be qualified against the real behavior of the sensor. For a risk-based
verification and validation, an analysis of all physical and phenomenological aspects
of the edge and corner cases of the specific sensor is indispensable. The preprocessing
and the subsequent object detection and classification are tested with the physical model
or the phenomenological model of the sensor by raytracing. This is done for each
sensor itself, and it depends on the architecture (field of view) whether there is a common
detection and classification for more than one sensor.
The fusion is tested in its cascades and integrated step by step, always
using the same format and structure in the same simulation infrastructure (the same
infrastructure should be used all over the project, with different views and focus). Regarding
verification, and based on the ODD analysis, all functional detections based on the requirements
of a normal driving situation shall be tested in a simulated environment and validated
against real driving results.

Fig. 2. Building blocks of Perception and its reference to the OSI Interfaces

The whole test process shall be included in a continuous testing framework (continuous
integration/development/testing). Especially for the testing part, a regression
test strategy must be developed. The regression test strategy shall be based on the specific
functionality and the progress in development (see quality gates and the
maturity model of the functionality).
For all sensors and the parts mentioned above, a standardized interface description is highly
recommended (see Fig. 2). The most widely used standard in the field of perception is
the ASAM Open Simulation Interface standard [12], which is currently being extended for
current and future needs.

Test Process
All parts of the Perception should be verified against their broken-down requirements
before integration starts. Typically, each part uses behavior models at its interfaces
for verification and validation activities. Several pre-integrations are possible
within the preprocessing, the fusion of the overlapping sensor views, the object detection
& classification and especially the different steps of fusion. After the integration,
it must be validated that all parts work together properly regarding performance,
stability and fallback. Thus, all parts are tested with the common ground truth data,
adapted to their own needs and as much as possible in virtual simulation.

3.2 Thinking test strategy


The interpretation of the scenario seen through the "eyes" of the sensors is a very
challenging task. It is based on the information of the perception system and highly
dependent on the configuration and performance of the sensors and the functionality
of the sensor fusion. Not every functionality within the highly complex autonomous
driving task needs all the information provided by the surround detection sensors, the
system sensors (see chapter 2) and the sensor fusion. For example, for speed
limit/traffic sign detection or lane centering, only specific information is needed.

For the building block Thinking, no standard exists regarding the interfaces between
the different functionalities, as the ASAM Open Simulation Interface does for Perception.
Only the input to Thinking, as the output of Perception, is standardized within
OSI. This makes a general test strategy difficult, and a standardization would be very
helpful. The behavior prediction of the traffic participants can be tested very easily
with respect to the interface format given by the object list and the additional environment
information, but on the other hand, predicting the behavior of other drivers
can be very hard. Some companies focus on models of drivers and other traffic participants
like pedestrians, which can be used to simulate their behavior. On the other side,
analyses of field data, accident datasets and behavior datasets like [15] can be the
basis of test cases and of simulating the behavior of traffic participants.
Determining the driving domain becomes fundamental to understanding the scenario
and concluding which traffic rules are valid. Neither the description of the ODD nor the
description of the DD is standardized and formalized. Currently, the ASAM OpenODD
concept project [16] works on such a standardized description. A description in a
machine-readable format is necessary to use it in development and testing. Some typical
properties of the given scenario, the behavior of the traffic participants and the map
information lead to the current driving domain.
When the AV knows the DD, the environmental features and the behavior of the
traffic participants, it can identify the safe areas over time. With this safety envelope
and the route information from the navigation, the AV can plan its mission for the
next stage of the trip. Therefore, planning and decision making are necessary for the
maneuver that is needed to reach the next stage.

Test Process
All parts of the Thinking should be verified against their broken-down requirements
before integration starts (similar to Perception). Typically, each part uses behavior
models at its interfaces for verification and validation activities. After the integration,
it must be validated that all parts work together properly regarding performance, stability
and fallback. So, all parts are tested with the common ground truth data, adapted to
their own needs and as much as possible in virtual simulation.

3.3 Planning test strategy


The driving maneuvers are translated into trajectories within the safe driving corridor
(lanelets). These trajectories are the input for the Acting. The motion planner must
be able to calculate a comfortable trajectory within the safe regions quickly. Timing
is an elementary factor, because the AV must be able to react to a playing kid
popping up from behind a parked car. There can be additional actions to verify the
safety of the computed trajectory.

Test Process
The Planning component should first be validated against requirements at its interfaces
with the help of the common data pool. The test cases represent different scenarios
with variations of the lanelets, which are derived from traffic participant prediction,
variations in vehicle state and obstacle position, different maneuver planning and
changing drivable zones. To validate the functionality of the motion planning, it is
necessary to use additional methods like search-based testing to identify the critical
cases by searching for corner and edge cases. After identification of these cases, an
optimization of the functionality can be done. The information about such corner and
edge cases should be mirrored back to the global tooling and test data pool to build up
corresponding test cases for the system tests.
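A minimal sketch of search-based testing in this sense is given below, using plain random search as the simplest instance; the parameter space, the criticality metric (a distance margin) and the function names are assumptions:

import random

def search_critical_cases(simulate, space, iterations=500):
    results = []
    for _ in range(iterations):
        # Sample one concrete scenario from the continuous parameter space.
        params = {k: random.uniform(lo, hi) for k, (lo, hi) in space.items()}
        margin = simulate(params)  # e.g. min. distance to obstacles in metres
        results.append((margin, params))
    results.sort(key=lambda t: t[0])
    return results[:10]            # the most critical corner-case candidates

# Assumed example parameter space:
space = {"obstacle_offset_m": (-2.0, 2.0), "ego_speed_mps": (1.0, 15.0)}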

3.4 Acting test strategy


Verification and validation of the Acting seem to be a classical and well-established field
with a lot of experience from the past few years, but there are some new aspects and
parameters. The classical assistance and drive dynamics functions like electronic
stability control (ESC), anti-lock braking system (ABS) and traction control system
(TCS) are very well known and fulfill the highest ASIL level (ASIL D) of the ISO
26262 standard [9]. But with the situation that the nominal values of the braking,
drive and steering systems come from the autonomous driving function and no longer
from the driver, a lot of fallback and redundancy is added. Several pieces of
information are routed back as an input and basis of the driving task. Losing some of this
information will mean losing the driver or lead to a misinterpretation of the current
vehicle status through wrong input parameters. As far as SW redundancies and
mechanisms exist, they can be tested in a simulated and virtual world. Especially with regard to the
system tests of the overall functionality, this is an important aspect. This means that
models representing the real actuators like gear, engine and drive shaft, but also wheels
and axle load distribution, must be modelled and simulated.

Test Process
Functional tests should be done as far as possible in a virtual simulation right
from the beginning. The influences of the redundancy and of the dropout of one path
shall be tested in the virtual world with physical models of the environment and the
surroundings. These models shall be developed and delivered in a format that
fulfills the requirements for re-use on system level for testing, but also as plant or
system models for other component tests.
Hardware dependencies of the Acting component must be tested on the test track. A
specially trained driver is needed, and the vehicle must be prepared with equipment to
force these redundancy-check test cases.

4 Approach of a holistic test strategy

For an efficient and effective HAD test strategy it is necessary to highly interconnect
the test strategies of the different functional components. On the one hand this needs
to be done in process and on the other hand in tooling and data structure.
The basic pillars of testing a HAD functionality are:

 Avoid manual tests in the vehicle.
 Use SiL testing intensively.
 Automate tests
─ as far as possible,
─ in all test levels,
─ to handle variants and new features.
 Do use-case testing on system and vehicle level.
 Do functional testing on the software test levels.
 Establish an efficient test infrastructure.

These pillars lead to a harmonized and synchronized toolchain with a central data pool and a minimum of tool breaks. The data structure consists of several layers similar to the layers defined by PEGASUS [4] and the formats defined by ASAM [12], [16]. But there is still a gap for logical scenarios and test cases that should be closed for future needs and current projects. Therefore, a need for standardization and harmonization is obvious.
Our strategy uses a risk-based testing approach, because testing all scenarios is not possible. The strategy therefore focuses on the scenarios with the highest risk. The key to identifying these scenarios is to analyze the system in terms of required functionality, architecture, and environmental usage (ODD analysis). Additional risks typically result from system changes during development. Tracking the errors and the probability of errors is necessary to identify the regions of failures and to estimate the risks. The use of automated risk identification methods like search-based testing helps to find the corner cases of the developed algorithms. The results of previous test campaigns used to identify regions of high risk, and the involvement of all stakeholders, are additional sources. After the start of sales, a concept for field data acquisition, replay, and analysis is essential to improve the functionality and reduce risk. All these activities lead to a better and more intelligent selection of the test cases.
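
As an illustration of such a risk-based selection, the short sketch below ranks scenarios by a simple risk score combining failure probability, severity, and ODD exposure. The scenario names and all numbers are purely illustrative assumptions; the point is only that test-case selection becomes a computable ordering instead of a manual choice.

scenarios = [
    {"id": "cut_in_wet_road",   "failure_rate": 0.08, "severity": 9,  "odd_exposure": 0.30},
    {"id": "lane_change_jam",   "failure_rate": 0.02, "severity": 5,  "odd_exposure": 0.60},
    {"id": "debris_on_highway", "failure_rate": 0.01, "severity": 10, "odd_exposure": 0.05},
]

def risk_score(s):
    # Exposure-weighted expected severity; error tracking from previous
    # test campaigns would feed the failure_rate estimate.
    return s["failure_rate"] * s["severity"] * s["odd_exposure"]

# Execute the highest-risk scenarios first.
for s in sorted(scenarios, key=risk_score, reverse=True):
    print(f"{s['id']}: risk = {risk_score(s):.3f}")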
The high number of test cases and the methods of automated risk identification require a test infrastructure that can execute a huge number of test cases within a minimum amount of time. Driving millions of kilometers in the real world is no longer feasible, so the tests must be done in simulation and with highly parallelized execution. Virtualization also helps to close the integration gap between the test levels of software test and system integration test. Extending the approach of continuous integration and continuous development to the needs of testing, in combination with closing this gap, makes the process much more efficient. Additional concepts and a well-fitting strategy for using the same models across the integration levels and the different functional components result in very effective verification and validation.
A seamless process with a high degree of automation is essential for testing HAD functions. This refers to the areas of tooling, integration, data loop, and data format. Especially a high-performance data loop covering real data, labeling, identification of new scenarios, and the balance between real data and synthetic data is the key factor for a successful validation of self-driving cars.
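
A minimal sketch of such a highly parallelized execution is shown below, assuming a hypothetical run_sil_test() wrapper that starts one virtual simulation run (in practice a container or cloud job) and returns a verdict; the wrapper body is a placeholder.

from concurrent.futures import ProcessPoolExecutor

def run_sil_test(test_case_id):
    # Hypothetical wrapper: launch one SiL run, evaluate its KPIs,
    # and return a verdict.
    return test_case_id, "passed"

if __name__ == "__main__":
    test_cases = [f"TC_{i:05d}" for i in range(10_000)]
    # Scale max_workers to the size of the available compute cluster.
    with ProcessPoolExecutor(max_workers=64) as pool:
        results = dict(pool.map(run_sil_test, test_cases))
    failed = [tc for tc, verdict in results.items() if verdict != "passed"]
    print(f"{len(failed)} of {len(results)} test cases failed")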

5 Examples “Highway Pilot” and “Autonomous Parking Pilot”

We started by defining two functionalities of an autonomous driving car. Together they have to cover low-speed and high-speed scenarios; the driving tasks shall be challenging, but reachable in the near future. The decision fell on the two functionalities Autonomous Parking Pilot (APP) and Highway Pilot (HWP).
The functionality of the APP extends the Automated Valet Parking functionality described in [17]. It is a little more challenging because we do not transfer safety decisions to the environment. On the other side, it is not as challenging as an urban scenario with its open-world context. We assume that the mall environment used in our example transfers an HD map to the car containing the current restrictions (temporarily closed streets, parking restrictions, and information about road work as far as available). But such a map can never be good enough to cover all situations, for example when a road worker changes a sign or the route within a construction site.
The HWP contrasts with the mall scenario because of the possible high speed and the different use cases the car must deal with. Even though an AV driving on a highway has a well-defined ODD, there are several surprises and challenging situations, like fast motorbikes in the left lane, helicopters landing on the road, or airplanes that seem to cross the highway. But even normal situations like entering the highway or overtaking another car can cause difficulties.
At first, we analyzed the given driving domain for each functionality and defined use cases. In a next step we developed a typical sensor configuration (see Fig. 3) fitting the functionality to get an impression of the sensor view of the AV. The safety aspects are covered by an analysis of common safety argumentation chains and checklists like GSN [18], UL 4600 [19], and RSS [20]. We then carried out a spike with one specific use case of each of the functionalities above. These examples are the basis for the development of our test strategies.

5.1 Use cases of Autonomous Parking Pilot


Together with the analysis and definition of the ODD, we developed use cases for the functionality APP and defined a reference sensor architecture (see Fig. 3).

Fig. 3. Example sensor architecture with field of view for Autonomous Parking Pilot

 APP_UC1: Drop-off
 APP_UC2: Drive between Drop-off/Pick-up zone and parking area
 APP_UC3: Search for a free parking lot
 APP_UC4: Park into a parking lot
 APP_UC5: Drive to Pick-up

In the following, each use case is described:


APP_UC1: The driver arrives at the Drop-off zone of the mall. The driver and the passengers want to leave the car. After leaving the car, the driver starts the autonomous parking functionality via phone or another mobile device.
APP_UC2: The car starts from the Drop-off zone and drives through the streets of the mall area to reach the area with the parking lots. During this drive, several obstacles and situations can occur that the car has to deal with.
APP_UC3: The vehicle reaches the parking area and starts searching for a free parking lot. There can be restrictions on using a parking lot, such as spaces reserved for handicapped people, taxis, police, or trucks.
APP_UC4: The car drives into the identified parking lot without damaging other cars or itself. Static and dynamic obstacles must be detected and categorized correctly to decide whether another parking lot is needed or whether the dynamic obstacle will move away after a short period of time.
APP_UC5: The driver orders the car to the Pick-up zone of the mall via mobile device. The car must drive out of the parking lot, plan the route to the Pick-up zone (similar to UC2, just in reverse), and drive to the Pick-up zone.

5.2 Use cases of Highway Pilot


Together with the analysis and definition of the ODD, we developed use cases for the functionality HWP and defined a reference sensor architecture.

 HWP_UC1: Enter motorway and hand over
 HWP_UC2: Pass motorway entrance (and exit)
 HWP_UC3: Pass construction site
 HWP_UC4: Overtake another car
 HWP_UC5: Cut-in/cut-out scenario
 HWP_UC6: Change lane for motorway junction
 HWP_UC7: Takeover request due to an emergency
 HWP_UC8: Highway exit reached and driver must take over
 HWP_UC9: End of traffic jam reached

In the following, each use case is described:


HWP_UC1: The car is driving on the on-ramp and the driver activates the Highway Pilot. The Highway Pilot drives onto the motorway.
HWP_UC2: The car passes an entrance ramp where other cars want to enter the highway.
HWP_UC3: The vehicle passes a construction site where lanes are temporarily shifted and some lanes cross others. There can also be special road signs, and workers' vehicles may enter the highway in an unusual way.
HWP_UC4: The car must overtake a slower car. Various scenarios are possible, such as a faster car approaching from behind in the ego lane or a neighboring lane, the car in front accelerating, or the car in front also changing lanes.
HWP_UC5: A cut-in occurs when another car moves into the ego lane between the ego vehicle and the car in front; a cut-out occurs when the car in front suddenly changes lane and a static obstacle (or a much slower vehicle) becomes visible.
HWP_UC6: The planned route requires several lane changes, also during a traffic jam.
HWP_UC7: An internal failure of the car or an internal safety alert triggers a takeover request.
HWP_UC8: The planned exit is reached, and the driver must take over.
HWP_UC9: The end of a traffic jam is reached at high speed.
How to find the best test cases for such a functionality is described in [21].

6 Conclusion

In this work we compared classical test strategies with the new challenges that test strategies for HAD have to deal with. We described where the challenges of automated and autonomous driving come from and how they lead to a complexity that has never been seen before in the field of driving. Handling this complexity, the safety requirements, and the need for validation call for new strategies to manage the testing effort. Typical strategies are to search for the critical test cases and to reduce the complexity by dividing the functionality into smaller functional components. The disadvantage of separate test strategies, which can be observed very often, is that they can lead to functional insufficiencies and to inefficient and ineffective validation. With our HAD strategy we showed why and how all these strategies can be interconnected and become highly efficient. We introduced a typical architecture as reference for the system under test and described the functionality of the building blocks of a dividing test strategy. The specific test strategy for each building block and its individual properties and methods were described. These methods, tools, and processes are highly interconnected and synchronized, derived from the pillars of the holistic test strategy. In preparation for an evaluation, two functionalities of an AV were described: the Autonomous Parking Pilot and the Highway Pilot. These functionalities and the defined use cases are the basis of a proof of concept and of the definition of the data structure and validation toolchain.

References
1. Koopman, P., Wagner, M.: Challenges in autonomous vehicle testing and validation. SAE
International Journal of Transportation Safety 4(1), 15-24 (2016)
2. Wachenfeld, W., Winner, H.: The release of autonomous vehicles. In: Autonomous Driving, pp. 425–449. Springer (2016)

3. Maike Scholtes, Lukas Westhofen, Lara Ruth Turner, Katrin Lotto, Michael Schuldes,
Hendrik Weber, Nicolas Wagener, Christian Neurohr, Martin Bollmann, Franziska Körtke,
et al. 2020. 6-Layer Model for a Structured Description and Categorization of Urban Traf-
fic and Environment. arXiv preprint arXiv:2012.06319 (2020).
4. “Project Pegasus” [Online]. Available: https://www.pegasusprojekt.de/en/about-
PEGASUS.
5. Amersbach, Chr., Winner, H.: Functional Decomposition: An Approach to Reduce the
Approval Effort for Highly Automated Driving, 8. Tagung Fahrerassistenz, Lehrstuhl für
Fahrzeugtechnik mit TÜV SÜD Akademie, München (2017)
6. B. Graab, E. Donner, U. Chiellino, and M. Hoppe, “Analyse von Verkehrsunfällen hin-
sichtlich unterschiedlicher Fahrerpopulationen und daraus ableitbarer Ergebnisse für die
Entwicklung adaptiver Fahrerassistenzsysteme,” in TU München & TÜV Süd Akademie
GmbH (Eds.), Conference: Active Safety Through Driver Assistance. München, 2008.
7. S. Burton, R. Hawkins, Assuring the safety of highly automated driving: state-of-the-art
and research perspectives, University of York (April 2020)
8. SAE J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Au-
tomated Driving Systems. Online: https://www.sae.org/standards/content/J3016_201806,
(2018)
9. ISO 26262:2018: Road vehicles – Functional safety (2018)
10. P. Koopman, M. Wagner, “Toward a Framework for Highly Automated Vehicle Safety
Validation”, WCX World Congress Experience (April 2018)
11. P. Koopman, “Ensuring Autonomous Vehicle Perception Safety”, Autonomous Think
Tank Gothenburg, Sweden (April 2019)
12. ASAM Open Simulation Interface, https://www.asam.net/standards/detail/osi/ , (April
2021)
13. E. Yurtsever (Member, IEEE), J. Lambert, A. Carballo (Member, IEEE), K. Takeda (Sen-
ior Member, IEEE), “A Survey of Autonomous Driving: Common Practices and Emerging
Technologies”, https://arxiv.org/pdf/1906.05113.pdf (April 2020)
14. Thatcham Research, “Defining Safe Automated Driving - Insurer Requirements for High-
way Automation” (September 2019)
15. A. Rasouli, I. Kotseruba, J. K. Tsotsos, “Are they going to cross? A benchmark dataset and
baseline for pedestrian crosswalk behavior”, Proceedings of the IEEE International Con-
ference on Computer Vision Workshops, pages 206-213 (2017)
16. ASAM OpenODD, https://www.asam.net/project-detail/asam-openodd/ , (26.10.2020)
17. Bosch Mobility Solutions, “Automated Valet Parking”, https://www.bosch-mobility-
solutions.com/en/products-and-services/passenger-cars-and-light-commercial-
vehicles/automated-parking/automated-valet-parking/, retrieved 22nd April 2021.
18. Uber ATG, „Safety Case Framework” https://www.uber.com/us/en/atg/safety/safety-case-
framework/, retrieved 3rd September 2020.
19. UL 4600, “1. Proposed First Edition of the Standard for Safety for the Evaluation of Au-
tonomous Products”, (13th December 2019)
20. S. Shalev-Shwartz, S. Shammah, A. Shashua, “On a Formal Model of Safe and Scalable
Self-driving Cars”, arXiv:1708.06374v6 [cs.RO] (27 October 2018)
21. F. Hauer, A. Pretschner, B. Holzmüller, „Fitness Functions for Testing Automated and Au-
tonomous Driving Systems“, (2019)
Mastering the Data Pipeline for Autonomous Driving

Patrik Moravek and Bassam Abdelghani

dSPACE GmbH, Rathenaustraße 26, 33102 Paderborn, Germany
Pmoravek@dspace.de, babdelghani@dspace.de

Abstract.
Autonomous driving is at hand, for some at least. Others are still struggling to produce basic ADAS functions efficiently. What is the difference between the two? It is how the data is treated and used. The companies on the front line realized long ago that data plays a key and central role in progress and that development processes must be adapted accordingly. Those companies that have not adapted their processes are still struggling to catch up and are wasting time and resources.

This article discusses the key aspects and stages of data-driven development
and points out the most common bottlenecks. It does not make sense to focus
on just one part of the data-driven development pipeline and neglect the others.
Only harmonized improvements along the entire pipeline will allow for faster
progress. Inconsistencies in formats and interfaces are the most common source
of project delays. Therefore, we provide a perspective from the start of the data
pipeline to the application of the selected data in the training and validation
processes and on to the new start of the cycle. We address all parts of the data
pipeline including data logging, ingestion, management, analysis, augmenta-
tion, training, and validation using open-loop methods.

The integrated pipeline for the continuous development of machine-learning-based functions without inefficiencies is the final goal, and the technologies presented here describe how to achieve it.

Keywords: data analysis; data augmentation; data collection; data management; data pipeline; data replay; data recording; data retrieval; data selection; ingestion pipeline; reprocessing; training; validation.

1 Introduction

Big data is not what is standing in the way of the automotive industry’s attempt to automate driving. It is the way the data is used and harnessed that is hindering pro-
gress toward driverless mobility. It is easy to collect a lot of data, it is easy to store
this data, and it is easy to retrieve the data. However, it is not easy to do this efficient-
ly. Efficient means not wasting resources and time. Data is great and it is considered
“the new oil,” but too much oil can choke the engine and even cause the engine to


stall. Therefore, it is not the data itself but the process and appropriate tools that ad-
vance the progress.
There is a huge amount of data being collected by vehicle sensors, adding up to dozens of terabytes per vehicle per day. And while a total data lake can reach hundreds of petabytes (e.g., BMW’s D3 data center features more than 200 PB of storage [1]), current estimates say only 5% of the data is actually accessed and used productively.
Reliable autonomous operation is an ambitious goal due to the complexity of the problem to solve. It is not a well-defined problem within a limited scope that would allow traditional problem-solving approaches to succeed. The complexity of the environment, the real world, in which autonomous vehicles must be operated productively is enormous. And even an artificial limitation of the relevant conditions, such as a restricted area or weather conditions (often defined in an ODD – Operational Design Domain), does not make it possible to explicitly describe the intended operation in the form of if-then-else rules. That is the reason why a completely different approach to solving this problem has been adopted. It is a general heuristic approach that does not intend to solve the problem with the highest accuracy and completeness, but rather aims to solve the problem pragmatically in an efficient manner. It has been proven that a more general code framework that can be configured and tuned with specific examples of the problem to solve results in significant advances in autonomous driving. It can be called programming by examples or, as Andrej Karpathy, the director of AI development at Tesla, calls it, SW 2.0 [2].
The clear separation in naming (SW 2.0 vs. classic software) reflects a completely new programming paradigm with regard to the problem-solving method. It is no longer the written code that defines whether the program successfully solves the task. Rather, the parametrization of the code has the most significant impact.
In conventional software development, a programmer writes instructions line by line to define what is executed automatically. In the new approach, only a few lines of code are written that define a model, and the parameters of the model are set afterward. The success or failure of the program thus depends to a very large extent on the configuration of the parameters. The parameters are not set manually; there are thousands or even millions of parameters that have to be adjusted. Fortunately, this is done automatically by describing the right examples of how the output should relate to an input. In other words, the model is fed input data and its output is compared with the expected output. The difference is back-propagated in the model, adapting the parameters to achieve a smaller difference in the next run. After this optimization task and many iterations, the model parameters are set to deliver an output that is “sufficiently close” to the expected result. The model is then adjusted to the provided examples (data) and returns the expected results. This is good because the model automatically returns the expected output based on the provided input without having to explicitly define every step in the process. Only the model is hard-coded, while the parameters are defined automatically. The issue is that the model will only work as long as the provided input corresponds to the examples given to the model during parametrization (training of the model). If the model has not seen a similar input, it will most likely not deliver the expected output. The consequence is that the success of the development is determined by the data the model sees during the training process.
Due to this fact, the roles in the development process are split between programmers who write the code to implement the model and data engineers who prepare the data for the training phase. After the initial phase, the progress of the project is measured based on how much the training data, and to a lesser extent the code, has improved. This means that the focus of the development optimization changes completely. The focus is not on code advances, code versioning, or integration with all the tools that help streamline the code development and testing process. Completely new requirements for supporting tools have emerged. The development process must adopt this fundamental change of focus and place data at the center of attention. The processes must be wrapped around data and allow for the efficient use of the data in various development steps. The environment must be able to handle large amounts of data, as the trained model must not be surprised by new situations (input) on the road.

Fig. 1. Data pipeline and the cycle of data-driven development in the automotive industry

The structure of the paper is aligned with the main domains of the data pipeline and
the data cycle supported by dSPACE (as depicted in Fig. 1). Each chapter below is
dedicated to one key topic. Nevertheless, since the main narrative of the data pipeline
is "seamlessness," the interconnections and mutual influences are highlighted in the
individual chapters.

2 Onboard Data Pipeline

The journey of the data through the data pipeline starts at the vehicle. The vehicle is
the first hub and the source of the data to be used to advance autonomous driving
functions. One vehicle can be seen as a single source of data when looking at it from
the perspective of a data collection campaign including several vehicles. At the same
time, the vehicle itself represents a data collection network. At the vehicle level, the
data sources are all sensors, buses, and networks that transmit information from inter-
nal vehicle sensors and ECUs. Simple recording means merely shuffling the data from
IO interfaces to data storage as fast as possible. More advanced logging introduces
functions that are applied to the data between their reception and their storage. This is
a data pipeline at the vehicle level. Both perspectives have their own challenges and
aspects, and thus they can be split up for further consideration.
Acquisition of data at the vehicle level has become yet another challenging topic in
the general pursuit of autonomy. Unlike legacy acquisition technologies, data logging
systems for ADAS/AD must cope with new interfaces, protocols, high data rates, and
potentially huge data volumes produced by environmental sensors.
The vehicle and the data collection system are also the first place where the data
pipeline can be optimized. The internal vehicle bus and network architecture can be
quite complex, and failures or inconsistencies can occur when acquiring data. This is
not infrequent and such failures must be identified early on in order to avoid wasting
time (when collecting corrupted data) and storage space.
The most common practice currently is for in-vehicle data logging systems to collect all of the data and for the quality of the data to be checked while ingesting the data or in the data center. This method wastes resources in the vehicle. Most obviously, the size of the in-vehicle storage must compensate for the inefficiency. In addition, the test drives must be interrupted more frequently than necessary just to swap storage drives or upload the data from the vehicle to an ingestion station.
This inefficient process has been identified, and the emerging trend is to equip ve-
hicles with an additional computer to analyze the incoming data while driving. This is
an intermediate step as it is not optimal due to the increased complexity of the vehicle
measurement equipment. An additional computer requires additional power, space,
mounting, a high bandwidth connection, and precise synchronization with the rest of
the system. The final goal is for the data logger itself to be able to analyze the incom-
ing traffic and decide what is to be recorded and what is to be thrown away. This
poses additional requirements for the data logger to supply enough computational
power for the analysis. In addition, the logger has to offer efficient means to set up
such a data analysis in the software framework or logging application itself.
Several levels of detail can be distinguished when curating and analyzing data.
First, the most basic requirement is to detect any missing data streams, which would
indicate a problem with an in-vehicle component. This might happen if some of the
sensors or ECUs do not boot properly after restarting the system. Additionally, the
data rate of the data streams can be checked to indicate configuration issues. Analyz-
ing the headers and content of the received packages and messages offers far-reaching
possibilities for understanding the conditions and surroundings of the ego vehicle.
Such a content-based analysis not only checks the data quality but also optimizes
the entire data cycle. If the data logger is situation-aware, relatively simple rules can
be defined to improve the data flow. Such rules can trigger the tagging of the data for
later use, define filters on the data logging pipeline, or enable a triggered recording to
store only data of a defined interest.
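
The sketch below shows what such a situation-aware trigger rule might look like. The frame fields and the returned action are assumptions for illustration, not an actual logger API.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    ego_speed: float        # m/s, decoded from bus data
    detected_objects: list  # output of lightweight onboard perception

def rule_pedestrian_close(frame):
    # Tag the data and request a triggered recording around the event
    # whenever a pedestrian is detected nearby.
    for obj in frame.detected_objects:
        if obj["class"] == "pedestrian" and obj["distance_m"] < 20.0:
            return {"tag": "pedestrian_close", "record_window_s": (-10.0, 5.0)}
    return None  # no action: the data may be filtered out

frame = Frame(12.5, 8.3, [{"class": "pedestrian", "distance_m": 14.2}])
action = rule_pedestrian_close(frame)
if action:
    print("trigger:", action)  # store 10 s before and 5 s after the event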
The triggered recording can be enhanced with an over-the-air transmission feature.
In this case, the data is not only stored in the vehicle storage but immediately sent out
to a data center to be available for data scientists as soon as possible. Such a data flow
shortens the entire data cycle as much as possible and allows for the accelerated im-
provement of the neural networks employed.
Achieving in-depth insight into the data from the data logger has several prerequi-
sites. First, the data logging has to be centralized to have a comprehensive view of the
data from all devices. Then, it is necessary to provide support for all interfaces (CAN,
FlexRay, Ethernet, GMSL, etc.) and understand the communication protocols to ac-
cess the signal and data level. And potentially, it must be possible to interpret the
signals and data themselves.
Interpreting signals is quite trivial (assuming it is possible to read the communication matrix), while interpreting data from complex environmental sensors is the challenging part. It is exactly what ADAS/AD ECUs are intended for, with the difference that the reliability requirements are much lower. Much lower levels of perception performance are accepted: the detection is only for informative purposes and does not have any safety implications. False positives might just mean a bit more data is recorded. False negatives at greater distances are not much of a worry, as buffering is in place anyway; once the objects are closer, the probability of detection increases, and a specified interval before the detection is recorded. It is not necessary to understand the complete situation. Only certain aspects of the situation, certain objects, or certain maneuvers are of interest in a particular phase of development and testing. This simplifies the logging configuration and the in-vehicle data pipeline. In addition, it can be dynamically adapted based on actual needs.

Fig. 2. Logging data during road testing involves a specific logging method that incorporates a
minimal level of delay and jitter to not affect the system under test

The road testing phase provides another option to optimize the data logging pro-
cess. The combination of a performant computing platform and a data logger in one
device makes it easy to deploy a perception algorithm together with the logging pipe-
line. Certain KPIs for the algorithm under test (e.g., uncertainty level) can act as a
trigger to store the relevant time interval in the recording and even to send it over-the-
air to the data center to speed up the feedback loop.
When an ADAS/AD computer (or its prototype) is under test in the vehicle, the computational and logging platforms are separated, but the same synergy effect can be achieved (see the data logging architecture in Fig. 2). KPIs for algorithms under test can trigger the recording. In this case, not only the sensor and bus data are saved, but the state of the ECU under test can also be saved consistently. This allows for detailed and efficient debugging with a complete data snapshot of the situation. Furthermore, it is possible not only to log externally exposed function parameters but also to store the memory dump.

3 Ingestion Pipeline

Ever-improving environmental sensors with increasing resolution produce ever-increasing amounts of data. Cameras in particular are the drivers for data bandwidth and storage requirements. Parking cameras might feature 1.2 or 2 MPx of resolution, but the imagers in the more common cameras found in SAE L2+ vehicles provide 5 to 8 MPx. Recently, the company Ambarella announced that it would be launching an intelligent AI vision processor capable of processing the output of 8k-resolution cameras at up to 60 frames per second with low power consumption [3]. This will make it possible to introduce cameras with an 8k resolution in the automotive domain as well (unlike the 4k cameras seen today). If data from such cameras were logged in raw form (as is currently done), a single 8k camera running at 30 frames per second would require a recording bandwidth of at least 11.5 Gbit/s. Currently, typical (aggregated) data rates range from a few gigabits per second to 40–50 Gbit/s, making it possible to fill even large storage drives very quickly.
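
These figures can be verified with simple back-of-the-envelope arithmetic; the sketch below assumes a 12-bit raw pixel format, which is an assumption about the sensor, not a value from the text.

# Raw bandwidth of a single 8k camera (assumed format: 12-bit raw pixels).
width, height = 7680, 4320
bits_per_pixel = 12
fps = 30

bandwidth_gbps = width * height * bits_per_pixel * fps / 1e9
print(f"{bandwidth_gbps:.1f} Gbit/s")  # ~11.9 Gbit/s, consistent with >= 11.5 Gbit/s

# Time to fill a 16 TB in-vehicle drive at an aggregated 40 Gbit/s:
hours = 16 * 8e12 / 40e9 / 3600
print(f"{hours:.1f} h")  # well under one hour of driving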
When in-vehicle storage (usually SSD or HDD types) is full, data must be trans-
ferred to a storage location that can be accessed by developers and test engineers. If
we ignore the over-the-air transmission of selected sequences for now, there are two
basic means of transferring collected data. One option is to ingest data from in-vehicle
storage to a data center directly. Directly means that the drives (SSDs or HDDs) are
transported to the data center where the data is finally stored. The data is copied from
the in-vehicle storage to the server storage, while the drives are erased and returned to
the drivers. The second option is to not transport the storage drives themselves but
rather to transmit the recorded data. If all available storage drives in the vehicle are
full, the driver drives to a place with a good connection, and the data is transmitted
via the Internet (usually via dedicated links with high bandwidth such as Express
Route Direct from Microsoft [4]). Both approaches have their pros and cons, and the
goal is to optimize the cost/transfer time ratio.
Whatever approach is selected, the ingestion pipeline starts by reading the data
from the vehicle drives. As a rule, data that cannot (or will not) be used should not be
ingested. This is especially true for corrupted or incomplete data. Such data would
only take up space without any added benefit. If the quality check is not performed in
the vehicle, the ingestion pipeline is another possibility for maximizing storage and
thus saving costs.
In addition to the bandwidth requirements, implementing an ingestion pipeline can
be computationally demanding. Therefore, the processing takes place in the data cen-
ter itself or in what is referred to as a collocation center. Such facilities offer much
cheaper computational power than is available in the vehicle and have fewer re-
strictions in terms of space and power supply.
The ingestion pipeline is an abstraction of the hardware and location where it is
run. It represents logical steps that the data goes through to become available for us-
ers.
Quality control is the first stage in the ingestion pipeline. It is a step that pays off, as low-quality data does not have to be ingested: it will never be used and can thus be deleted. Low quality refers to incomplete data, corrupted data, inconsistent data, or data with an error flag (from the data source).
Automated annotations are another stage. The primary purpose of annotation in
this step is to provide a possibility to analyze and search for data early on. This allows
for more informed decision-making concerning further data processing. At this stage,
annotation functions are executed to derive basic information about the data. Usually, these functions are not computationally expensive, since they have to process all of the data (instead of only preselected data). Strategies may differ, but easily derived annotations conducted in this phase include map-based annotations, weather and day-time annotations, and annotations derived directly from bus data.
After this stage, users can analyze statistical information based on the data: for ex-
ample, the distribution of road types, road infrastructure, or weather conditions such
as rain, fog, and storms. In addition, information about traffic signs is extracted from
the map data to provide further insight on road sections of interest.

Fig. 3. Data ingestion pipeline ensuring only quality data are ingested and automatically prepar-
ing data for searches and analyses in compliance with GDPR

Another step involves the preparation of data for visualization. Users expect detailed
visualization options that will allow them to see the vehicle route on a map and to
preview data from different sensors. Therefore, highly compressed previews opti-
mized for web visualization are generated for every recording. In addition, the pre-
views can be anonymized (faces and license plates are blurred or artificially generat-
ed) to comply with GDPR regulations and to simplify the organizational measures.
Ingestion can be executed fully automatically as a part of the overall data pipeline. It all starts with the preconfigured upload station. At the push of a button, the data is retrieved from the vehicle SSD (or hard drive) and processed by the ingestion pipeline functions without any human interaction. At the end of the pipeline, the test or data engineers are able to access, preview, and search the data to accomplish their tasks.
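
Conceptually, the ingestion pipeline can be thought of as a chain of stages that each recording passes through; the following sketch mirrors the stages described above with placeholder implementations.

def quality_check(recording):
    # Reject incomplete, corrupted, inconsistent, or error-flagged data
    # by returning None; placeholder logic.
    return recording

def auto_annotate(recording):
    # Derive map-based, weather, day-time, and bus-signal annotations.
    return recording

def generate_previews(recording):
    # Create compressed, anonymized previews for web visualization.
    return recording

INGESTION_STAGES = [quality_check, auto_annotate, generate_previews]

def ingest(recording):
    for stage in INGESTION_STAGES:
        recording = stage(recording)
        if recording is None:
            return None  # low-quality data is never ingested
    return recording     # data is now searchable and previewable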

4 Data Annotation, Analysis, and Selection

In a broader sense, annotation is the extraction of structured information from unstructured data. It is necessary to make the unstructured data easier to understand and
use. In the context of autonomous driving, raw video, lidar, and radar data is consid-
ered unstructured. Annotations are metadata and, to a certain extent, they result from
a processing stage that makes it possible to execute higher level abstraction algo-
rithms on the data itself. For example, when calculating the number of objects in a
scenario or calculating the distribution of distances to other objects, it is first neces-
sary to recognize the objects. Another example is searching for a particular aspect or
situation. Running a search on the raw data is not feasible. Therefore, metadata ex-
tracted from the raw data is used to improve search responsiveness.
An overview of the purposes and requirements of annotations in the process of da-
ta-driven development is shown below.

• Filtering and reducing data


Reducing the volume of data to data that significantly adds value is one of the
main goals of data-driven development. Annotations make it possible to define
rules for in-vehicle or ingestion pipelines that either delete the recorded data or
speed up the utilization of data by sending it off the vehicle via wireless links.
Any annotation type can be used for filter rules and their quality requirements
may differ. If a filter is used to preselect data, some false positive bias is allowed as
it “only” increases the used wireless bandwidth. If a filter is used to reduce data,
more confidence is required in the annotation to not delete useful situations that do
not have sufficient representation in the already collected dataset.
One efficient way of reducing the data volume for AI training is to eliminate frames that do not add balance to the training data set, i.e., frames that are too similar to others. In this case, a specific description vector of a frame is used as an annotation and compared with the vectors of other frames. For more details, see [5].

• Analyzing data
Without annotation, a great deal of important metadata would be missing. By
annotating the data, it is possible to gain insight into the distribution of different
aspects of interest such as highway versus rural roads, the number of merge points
on the highway, or the number of surrounding vehicles (see example graphic in
Fig. 4).

Annotations for data analysis are also necessary to evaluate whether the goals set for data logging campaigns are achieved. A predefined number of recordings for a specific situation is usually the goal that determines the length of the campaign.

• Searching for and selecting data


Possessing data without being able to find something efficiently is a waste of re-
sources. Annotations can be used to quickly search without intensive computation-
al tasks being performed every time data is requested from a data lake.
Building on the search, data can be structured and organized to prepare dedicat-
ed purpose-built datasets to facilitate the execution of further stages in the data
pipeline like labeling, augmentation, and even AI training and validation processes.
The most common search query is a text string describing a particular aspect of interest (e.g., night, rain, bicycle, distance <50 m, etc.). During AI validation, usually some particular situation or frame causes the trained model to fail, and the remedy is to expand the training dataset. Therefore, the search aims to find data that is similar to the problematic frame. Instead of describing the picture in human terms (e.g., urban situation, rain, etc.), the problematic frame itself can be used as a query that looks for similar pictures or situations. This innovative and promising approach is described in more detail in [6].

• AI training
AI training requires high-quality annotations (also called labels). The more pre-
cise the labels, the better the training result and the better the performance of the
final ADAS/AD function [7]. For perception functions, both static and dynamic
objects need to be annotated with their size, position, orientation, and other aspects
to provide sufficient ground-truth information.

• Validation
Validation always needs a reference. When validating perception and fusion
functions, annotations serve as this reference (i.e., ground truth). As the name im-
plies, the ground truth is a precise description of reality that is captured in the sen-
sor data.

Fig. 4. Example of a distribution analysis for speed data of road sections

There are several types of annotations that are extracted and used in the process of data-driven development. Often the term “tag” is used for situational annotations describing general aspects of the situation, such as rainy or pedestrian crossing, and the term “label” is used to identify an object in the data precisely. However, there are overlaps, and such differentiated terminology is not always unequivocal.

• Manual annotations – the driver or an assistant manually adds any kind of information that can be used for the purposes mentioned above (usually not the annotations for reference data).
• Ego vehicle tags – these tags are derived from ego parameters extracted during the analysis of bus and network communication (e.g., speed, acceleration, yaw angle).
• Weather and day-time annotations – information about the actual weather situation, for example from public services that provide the information based on position and time, or by analyzing camera data.
• Map-based annotations – maps are a rich source of useful information that can serve the purpose of annotation. Road types, road infrastructure, traffic signs, and speed limits are just a few examples of tags that can be extracted.
• Feature vector – a special type of annotation calculated from the image and used for the similarity search (see the sketch after this list).
• Object detections and object properties – identification of objects in the scene and their properties based on neural networks or derived from manual annotation. Examples include static objects such as guardrails and streetlamps and dynamic objects such as other traffic participants (e.g., pedestrians, trucks, buses, etc.).
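
The sketch below illustrates how such a feature vector enables both redundancy filtering and query-by-example search using cosine similarity; the embedding itself is assumed to come from a neural network, in the spirit of [5] and [6].

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_redundant(frames, vectors, threshold=0.95):
    # Keep a frame only if it is not too similar to an already kept frame.
    # A production system would use an approximate nearest-neighbor index
    # instead of this O(n^2) loop.
    kept, kept_vectors = [], []
    for frame, vec in zip(frames, vectors):
        if all(cosine_similarity(vec, k) < threshold for k in kept_vectors):
            kept.append(frame)
            kept_vectors.append(vec)
    return kept

def find_similar(query_vector, vectors, top_k=10):
    # Query-by-example: rank stored frames by similarity to a problematic frame.
    scores = [cosine_similarity(query_vector, v) for v in vectors]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]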

It is also worth mentioning that annotations are not free and entail certain costs. If the annotations are extracted automatically, the cost of computational resources must be considered. If a human annotator performs the job, the personnel costs must be considered. For high-quality annotations used for AI training and validation, a combination of automated processing and human quality control is often the preferred method, as it optimizes the costs.

5 Data Replay

Autonomous vehicles can only gain the trust of consumers if the technology is proven
to work in all cases, emphasizing the role of automated testing. Key technologies for
autonomous driving such as environment perception and sensor fusion require a dif-
ferent approach to testing. Data replay, a relatively new testing strategy, has estab-
lished itself as an efficient and cost-effective testing and validation method for these
technologies.
Various methods can be used to validate an autonomous driving system (see Fig.
5). Test drives with test vehicles and AD prototypes can be used to test all vehicle
components under realistic conditions. However, test drives are cost intensive. More-
over, critical traffic situations are rarely observed on the real road. Another possibility
for validation is environment simulation. With synthetically generated data from high-
fidelity simulators (e.g., dSPACE ASM), hazardous situations can easily be simulated
and manipulated. For example, it is possible to change the weather and the time of
day of a test for the same operational design domain. Although the quality of sensor
simulations is steadily increasing, synthetic data will never truly match the real world
due to the simulation paradox.

Fig. 5. Applicability analysis of different tests according to “safety first” for automated driving.

Therefore, replaying recorded sensor data, known as data replay (i.e., data repro-
cessing), has established itself as another key test methodology. Data that was previ-
ously recorded during test drives is used for validation. When replaying the data, the
recording conditions must be recreated exactly as recorded, thus allowing for street
conditions to be replicated in the laboratory. When using real-time data replay, the
environment perception algorithms are supplied with recorded data around the clock,
the system responses are measured, and the object detection quality is evaluated con-
tinuously. This ensures that the new software version of the AD stack is tested proper-
ly against real-world data and scenarios.

When testing with recorded and/or simulated data, it is possible to distinguish between two basic test methods. Software-based validation, such as software replay, focuses on validating an algorithm without considering time constraints or connected bus systems. Testing can be performed before a hardware prototype is available. The second level is hardware-based validation, such as hardware replay, in which the central computer or sensor ECU is connected to the test system and supplied with synthetic and/or real data. This makes it possible to test all of the hardware as well as the electrical interfaces and the software in the conditions closest to on-road testing.
An example architecture (see Fig. 6) can be used to better understand the challenges of data replay. In this architecture, the sensors of the autonomous vehicle generate a data volume of 70 Gbit/s. The data replay test station must stream these 70 Gbit/s around the clock while ensuring that the device under test does not recognize that it is not installed in a real vehicle driving on the road. The challenge lies in the nature of the streamed bits. It should be noted that raw camera data, radar data, lidar data, and especially data from CAN and Ethernet (SOME/IP) packets are extremely different. Nevertheless, all these data streams must be played back in a synchronized manner, within microseconds, to ensure correct test conditions. Taking the software replay scenario into account, an additional layer of requirements is added: the correct simulation of the virtual image of the device under test. Not only does this need to be accurate, it must also contain an accurate simulation of the virtual buses attached to this virtual device under test. In addition, the whole software replay test has to run as fast as possible, ideally faster than real time.
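
The core timing problem of replay can be illustrated with a simple software loop that restores the original relative timestamps. This is a sketch under the assumption of a merged, timestamp-sorted sample list and a hypothetical send() output driver; a software sleep loop like this cannot guarantee microsecond jitter, which is why hardware replay systems use dedicated timing hardware instead.

import time

def send(interface, payload):
    # Hypothetical output driver writing to the sensor/bus emulation.
    pass

def replay(samples, speed=1.0):
    # samples: list of (timestamp_s, interface, payload), sorted by time,
    # merged from all recorded camera, radar, lidar, and bus streams.
    wall_start = time.perf_counter()
    rec_start = samples[0][0]
    for ts, interface, payload in samples:
        target = wall_start + (ts - rec_start) / speed
        delay = target - time.perf_counter()
        if delay > 0:
            time.sleep(delay)      # wait until the original relative time
        send(interface, payload)

# speed > 1.0 corresponds to faster-than-real-time software replay.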

Fig. 6. An example architecture of a hardware replay test station

The data replay methodology can be enhanced even further. Introducing failures and manipulating the streamed data makes multiple test variants possible. The real-time manipulation of the streamed bus/network data makes it possible to validate the bus security stack of the device under test. Moreover, manipulating sensor data and introducing failures can test the reaction of the perception algorithm to vandalism attacks.
Nevertheless, the data replay test station, be it software or hardware, is part of a bigger data and software delivery pipeline. Continuous testing and over-the-air software updates mean that the faster a test campaign is carried out, the faster car manufacturers can monetize new features. A crucial factor for this is the transfer of data from the data lake to the data replay test station. An optimal approach here is to bring the data replay test stations close to the measurement data, thus saving time and money. With regard to colocation with the data storage, this is more challenging for hardware data replay systems than for pure software data replay systems.
With such an approach, the environment perception components can be tested 24/7 against thousands of driven kilometers. Being infrastructure-agnostic is required to pursue this endeavor, as data could be stored in a native public cloud, on-site, or in hybrid cloud/edge constellations. Especially for hybrid cloud constellations, there is a great need for a single data access point to manage and run data replay test campaigns, as it simplifies the testing process significantly. The Intempora Validation Suite (IVS) abstracts these different data infrastructures and provides a single web interface to control both software and hardware data replay test stations.
Data replay is becoming increasingly established as a centerpiece of autonomous
driving homologation as it guarantees a high level of confidence in the vehicle’s per-
ception stack. Nevertheless, data replay testing poses challenges regarding the data
replay test bench due to the large number of sensor and bus interfaces and with regard
to the infrastructure and efficient data storage [8].
A smooth transition between software and hardware data replay testing accelerates the entire validation process. However, the increasingly common use of public cloud services complicates the situation: the public cloud infrastructure is inaccessible to test engineers, and the data must be co-located with the device under test. A new level of cooperation is needed among automotive OEMs, tooling providers, and the IT companies providing the infrastructure (Fig. 7).

Fig. 7. Scalable data replay from dSPACE in the cloud and colocation center

6 Data Beyond Reality

Validating ADAS or AD systems with recorded data from real traffic situations alone is not enough. The real world is very complex, traffic situations are varied, and the validation timeline is limited. The vehicles must be prepared for situations that were not encountered during the data collection or testing phase. This is where simulation comes into play. The realistic simulation of sensors and the optimization of the total number of test cases are major challenges.
Regardless of the challenges, the need for synthetic data is evident. Therefore, well-defined data pipelines are able to seamlessly employ both real and synthetic data during the development process. As synthetic data is becoming more and more realistic, both data sources can be used for training and validation. Data management systems must be able to distinguish between these two types of data and maintain the connection if synthetic data is derived directly or indirectly from collected data.
There is also a new approach to expanding the data sources for training and validation used in data management and the data pipeline. This is referred to as data augmentation, and it is a combination of real collected data and synthetically generated data. There are two ways to augment a dataset. One is to replicate real data in the simulated environment with additional traffic participants. This becomes a fully simulated output for the system under test. The second option is to integrate a simulated object into the collected data. Most of the output is still collected data (the exact data recorded from the sensors), but that data is overlaid with artificially created objects or effects.
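
The second option can be sketched as a simple alpha-blend of a rendered object into a recorded camera frame. This is a deliberately simplified illustration; a real augmentation tool additionally matches lighting, lens distortion, and sensor noise, and updates the ground-truth labels.

import numpy as np

def overlay_object(frame, sprite, alpha, top_left):
    # frame, sprite: HxWx3 uint8 arrays; alpha: HxW mask in [0, 1]
    # giving the per-pixel opacity of the synthetic object.
    y, x = top_left
    h, w = sprite.shape[:2]
    region = frame[y:y + h, x:x + w].astype(np.float32)
    blended = alpha[..., None] * sprite + (1.0 - alpha[..., None]) * region
    frame[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame  # recorded data overlaid with an artificial object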

7 Closing the Loop

It is indisputable that data from real traffic situations plays a key role in the development of new cars and will continue to do so in the years to come. ADAS functions are mandated by regulatory authorities, and there is no way around this. Even for basic ADAS functions, machine learning models are employed. With the complexity of the functions increasing and vehicle autonomy growing, more and more neural network models and their parameters are being integrated into car software. Working efficiently with data, and not just with code increments, is a challenging task for companies that are not prepared, meaning they do not have the proper infrastructure and have not adopted adequate methods to progress as expected by the market. Many companies have designed internal tools to support data-driven development. However, it is evident that such approaches are difficult to scale in terms of data volumes, team size, and new function development. Internal tool development binds resources and keeps companies from achieving their main goal. Fortunately, commercial solutions have emerged to help build an automated, flexible, and scalable pipeline with stable features and secure maintenance to keep the pipeline running during the critical phases of development projects.

References
1. BMW Group. [Online]. Available: https://www.press.bmwgroup.com/global/article/detail/T0293764EN/the-new-bmw-group-high-performance-d3-platform-data-driven-development-for-autonomous-driving?language=en. [Accessed: April 19, 2021].
2. A. Karpathy, "Software 2.0," November 11, 2017. [Online]. Available:
https://karpathy.medium.com/software-2-0-a64152b37c35. [Accessed: April 8, 2021].
3. Ambarella, "Ambarella introduces CV5 high performance AI vision processor for single 8K and multi-imager AI cameras," January 11, 2021. [Online]. Available: https://www.ambarella.com/news/ambarella-introduces-cv5-high-performance-ai-vision-processor-for-single-8k-and-multi-imager-ai-cameras/.
4. Microsoft, "Microsoft docs," [Online]. Available: https://docs.microsoft.com/en-
us/azure/expressroute/expressroute-erdirect-about. [Accessed: May 9, 2021].
5. P. Moravek, "Smart Data Logging – Part I: Reducing Redundancy," dSPACE, February
15, 2021. [Online]. Available: https://www.dspace.com/en/inc/home/news/engineers-
insights/smart-data-logging-redundancy.cfm. [Accessed: May 5, 2021].
6. D. Hansenklever, "Dataset Enrichment Leveraging Contrastive Learning," December 7, 2020. [Online]. Available: https://towardsdatascience.com/dataset-enrichment-leveraging-contrastive-learning-ea399901f24. [Accessed: May 2, 2021].
7. M. Mengler, "Quality — The Next Frontier for Training and Validation Data," May 17,
2018. [Online]. Available: https://understand.ai/blog/annotation/machine-
learning/autonomous-driving/2018/05/17/quality-training-and-validation-data.html. [Ac-
cessed: May 13, 2021].
8. dSPACE, "Validating ADAS/AD components using recorded real-world data," [Online]. Available: https://www.dspace.com/en/inc/home/applicationfields/our_solutions_for/driver_assistance_systems/data_driven_development/data_replay.cfm#179_55577. [Accessed: April 20, 2021].
Smart Fleet Analysis
with Focus on Target Fulfillment and Test Coverage

Erich Ramschak, Rainer Voegl, Philipp Quinz, Michael Erich Hammer, Rudolf Freidekind

AVL List GmbH, Graz, Austria

Abstract
A new method is described for the over-the-air analysis of fleet data on a backend server in order to optimize driving routes and to evaluate the performance fulfillment level of ADAS. With this method, insufficient coverage of maneuver diversity, for example, can be corrected immediately, and new driving routes can be adjusted to avoid inefficiently wasted kilometers during the real-road validation of assistance features in the fleet. Thus, one of the most relevant demands from vehicle manufacturers is addressed, namely a significant reduction of the validation effort without compromising the safety, quality, and reliability of ADAS functions.

Testing vehicle fleets under real conditions on public roads is an essential part of the validation of assistance functions. The primary goal is to evaluate the safety, reliability, robustness, and target-compliant performance of assistance functions under the greatest possible variety of environmental conditions, traffic situations, and driving maneuvers. The challenge is that the great effort of collecting the test data is compounded by the time-consuming effort of data analysis. In the worst case, relevant findings only become available after the test program has already finished.

The new over-the-air method consists of the four core elements driver guidance, secure data transmission, automated maneuver evaluation, and web-based data enrichment including a statistical analyzer on a backend server. Useful applications, including practical examples and figures, are highlighted in the next chapters. Advantages over conventional approaches are discussed. Finally, an outlook on potential extensions is given.

Keywords: ADAS, AD, fleet monitoring, performance optimization, test coverage, route optimization, over-the-air data analysis, data enrichment


1 Introduction

1.1 Objective
Before driving assistance functions (ADAS) and automated driving functions (AD) are launched on the market, vehicles undergo comprehensive fleet test campaigns under real-world environment and traffic conditions. The target is, on the one hand, to validate robust, safe, and reliable operation under all practical circumstances and, on the other hand, to evaluate the features' performance with regard to the high expectations and acceptance of end consumers.
A fleet test program in Europe, for example, usually covers driving routes through 28 countries with a high diversity of urban, rural, expressway, highway, straight, curved, and mountain road types, including different weather, daylight, and traffic conditions. For the validation of 15 basic ADAS L0 to L2 features, for example, 10,000–20,000 km are tested per country. Thus, the total driving distance rises to about 300,000–500,000 km. Furthermore, in the case of ADAS components introduced for the first time, like sensors or perception functions, these test distances can easily double.
Hence, real-road fleet validation is a considerable time- and cost-consuming effort, and yet essential to ensure safe and reliable operation.

1.2 Status Quo and Demand for Improvements


If the above-mentioned example is considered for all 47 European countries, with a total network of 5.4 million km of road length and different road type distributions in each country (see Fig. 1), it becomes obvious that route planning for a fleet test campaign is a highly complex and demanding task. Hence, manageable solutions must be found to reduce the validation effort.

[Figure: stacked bar chart showing the road type distribution (motorways, main or national roads, secondary or regional roads) per European country on a 0-100% scale; "other roads" without hard surface excluded]

Fig. 1. European Road Types Statistics [1]

A common approach for route planning is to define statistical distribution targets for road types, weather and light conditions, and driving maneuvers or scenarios with respect to the total validation distance (see Table 1), instead of simply accumulating kilometers in each country.

Light condition     Weather      Road type
daylight    60%     dry    50%   urban         20%
night       25%     wet    30%   inner-urban   15%
twilight    10%     rain   15%   rural         45%
backlight    5%     snow    5%   highway       20%

Table 1. Example for environmental route statistics (targets)


However, defining the most suitable driving routes and estimating the fulfillment of the statistical targets relies mainly on empirical knowledge, since many events, such as traffic density or changes in weather conditions, occur rather unpredictably.

Once the fleet validation program has started, it is therefore not clear whether the statistical target ranges will be met by the end of the test campaign. The evaluation of test coverage through fleet data analysis usually takes weeks, and daily route adaptations are not feasible. As a result, gaps in the test coverage can occur. For multiple test scenarios this means the validation may not be completed as planned.

In the best case, based on online fleet monitoring, a test program with daily optimization would finish earlier, providing the same test coverage while reducing time and cost.
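To make the idea of daily coverage tracking concrete, the following minimal sketch (Python; all kilometer figures are invented for illustration, only the road-type targets are taken from Table 1) compares the coverage driven so far against the target distribution and reports the remaining gaps that daily route optimization would have to close:

    # Illustrative sketch: compare the coverage driven so far against the
    # road-type targets from Table 1 (the driven kilometers are invented).
    targets = {"urban": 0.20, "inner-urban": 0.15, "rural": 0.45, "highway": 0.20}

    def coverage_gaps(km_per_road_type: dict, targets: dict) -> dict:
        """Return, per road type, the share still missing relative to its target."""
        total_km = sum(km_per_road_type.values())
        return {rt: max(share - km_per_road_type.get(rt, 0.0) / total_km, 0.0)
                for rt, share in targets.items()}

    driven = {"urban": 900, "inner-urban": 300, "rural": 2500, "highway": 1300}
    print(coverage_gaps(driven, targets))
    # -> urban and inner-urban still under target; rural and highway already covered

The same delta computation applies to weather and light-condition targets; the gaps it reports are what drives the route adjustments discussed in the following chapters.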

2 Over the Air Fleet Monitoring with Backend Analysis

Data acquisition with high data quality is the key to efficient data analytics. In the specific case of fleet monitoring, real-time over-the-air data collection enables a completely new way of tracking and operating the fleet. It is essential to combine measurement data from the vehicle as well as metadata from the environment or the driver in a centralized data storage. Integration and automation of data acquisition and data transport are key to data quality, which is the central foundation for later analytical assessment. Centralizing the data is a prerequisite for all the objectives and benefits mentioned above.

A key capability of AVL's data analytics pipeline is the central persistence of all data analytics results, e.g. the coverage of attributes as described in chapter 1.2. For this purpose, the scripts previously used for evaluation in the distributed test vehicles (currently the predominant practice in the industry for analyzing fleet data) are executed on a powerful and scalable analytics platform to enable automatic processing. Centralizing the analytics scripts enables re-use as well as clean and proper versioning and governance of code and results.

This accessibility of the measurement data allows decoupling the data from the original test request. All measurement data is evaluated for suitability by constantly refined analytics scripts. This leads to a maximally synergetic use of each mile driven. Hence, with more test miles as well as further test programs, and therefore more evaluation of measurement data, every trip contributes to a higher coverage for each of the attributes.

Fig. 2. Instant analytics and insights on the fleet test show potential overlaps of the testing program (route planning) and enable synergies to reduce the testing effort.

2.1 Software Solution Architecture

AVL implemented the software solution AVL Fleet Analytics Service (FASER) depicted in Fig. 2. This software solution satisfies the targets and provides the benefits described above. It is operated as a web-based software service for fleet operation.

Fig. 3. Component Architecture of FASER

Fig. 3 describes the high-level topology of the solution architecture. All components have well-defined interfaces and communicate either via the API ecosystem (REST and gRPC) or, for the event-driven parts of the logic, via a messaging bus (Apache Kafka and AMQP).
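As an illustration of the event-driven part, the following minimal sketch (Python with the confluent-kafka client; the broker address, topic name and handler are assumptions for illustration, not FASER internals) consumes measurement-arrival messages from a Kafka topic and hands them to a processing step:

    from confluent_kafka import Consumer

    def trigger_analytics(payload: bytes) -> None:
        """Placeholder for handing a measurement-arrival message to the pipeline."""
        print(f"received {len(payload)} bytes of measurement metadata")

    consumer = Consumer({
        "bootstrap.servers": "broker:9092",  # assumed broker address
        "group.id": "fleet-analytics",       # assumed consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["measurement-arrivals"])  # hypothetical topic name

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue  # no message within the timeout
            if msg.error():
                raise RuntimeError(msg.error())
            trigger_analytics(msg.value())
    finally:
        consumer.close()

Decoupling ingestion from processing in this way is what allows the pipeline to scale with the number of vehicles and trips.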
Some of the core capabilities in the scope of target fulfillment for fleet operation are:

 A completely automated analytics pipeline from data income (via message arrival at FASER, originating from the data acquisition system in the vehicle) to the persistence of the coverage attributes in the result database (see chapter 2.2).
 Within the processing engine, the measurement data is connected with metadata (a minimal enrichment sketch follows this list). Examples are:
─ Map matching of GPS points onto drivable road segments
─ Fetching metadata from map providers (e.g. the OpenStreetMap Overpass API) to enrich the detected events or even enable event detection (e.g. road types)
─ Weather/environment data enrichment of the collected data during data transformation
─ Traffic information extraction (e.g. average speed per road segment) or enhancement (via third-party traffic APIs)
 With the help of a connected mobile digital logbook, running on handheld devices such as mobile phones or tablets, additional label information is collected during test execution. For example, the driver can tag situations on the road which might be hard to extract by automated perception (odd vehicles on the road, special types of vulnerable objects on the road, maintenance work and configuration updates on the ego vehicle, …). Again, this metadata is connected to the events in the central result database (see chapter 2.2).
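The following minimal sketch (Python; the public Overpass endpoint is a real service, but the search radius and tag handling are illustrative assumptions, not AVL's implementation) shows how the road type around a GPS point could be fetched from OpenStreetMap for event enrichment:

    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"  # public Overpass endpoint

    def road_types_near(lat: float, lon: float, radius_m: int = 25) -> list:
        """Return the OSM 'highway' tags of all ways within radius_m of a GPS point."""
        query = (
            "[out:json];"
            f'way(around:{radius_m},{lat},{lon})["highway"];'
            "out tags;"
        )
        response = requests.get(OVERPASS_URL, params={"data": query}, timeout=30)
        response.raise_for_status()
        elements = response.json().get("elements", [])
        return [e["tags"]["highway"] for e in elements if "highway" in e.get("tags", {})]

    # e.g. road_types_near(47.0707, 15.4395) might return ['primary', 'residential']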

2.2 Central Database for Analytics Results

One of the core novelties of this solution is the persistence of all analytics results in a central result database. The previously described synergy is technically achieved by the following principles:

 The event detection and characteristic value calculation algorithms are defined independently of the actually executed route/test case.
 Any event detection is applied to any measurement data as long as the necessary signal information is available.
 All necessary metadata (e.g. software version, calibration version) are connected to the analytics results (event and characteristic value).

Finally, this requires a common data model for all results within the Result component. It is achieved with only four major business objects (a data-model sketch follows this list):

 Events have a certain EventType (EventTypes are built as a tree of dependent EventTypes)
 SessionMetadata (e.g. measurement files, with all necessary metadata to be contained in Result) contains a list of all Events which belong to a session
 Characteristic values are, in the end, Aggregates which belong to an Event
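A minimal sketch of this data model (Python dataclasses; the four object names follow the text above, while the concrete fields and units are illustrative assumptions):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class EventType:
        """EventTypes are organized as a tree of dependent types."""
        name: str
        parent: Optional["EventType"] = None

    @dataclass
    class Aggregate:
        """A characteristic value belonging to an Event."""
        name: str      # e.g. "min_distance_to_tof" (illustrative)
        value: float

    @dataclass
    class Event:
        """A detected occurrence of an EventType within a measurement."""
        event_type: EventType
        start_s: float  # assumed unit: seconds into the measurement
        end_s: float
        aggregates: List[Aggregate] = field(default_factory=list)

    @dataclass
    class SessionMetadata:
        """Per-session metadata, e.g. one measurement file, plus its Events."""
        measurement_file: str
        software_version: str
        calibration_version: str
        events: List[Event] = field(default_factory=list)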

On top of this general result representation, the Result component serves API endpoints (e.g. histogram, heatmap, violin, categorical, …) that present the values for a given filter condition in a form that is highly compatible with the visualization component, which creates the charts in the frontend. It is the Result component, together with the visualization on top, that enables the engineers to perform completely interactive and explorative investigations on the data collected across the complete fleet of vehicles and across all measurement trips.
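As a usage illustration, such an endpoint could be queried roughly as follows (the path and parameter names are hypothetical, chosen only to illustrate the filter-plus-chart-type pattern):

    GET /api/results/histogram?eventType=cut-in&aggregate=min_ttc&country=AT&softwareVersion=1.4.2

The response would then contain pre-binned histogram data for the minimum time-to-collision of all detected cut-in events matching the filter, ready to be rendered by the visualization component.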

3 Route Optimization for Best Scenario Diversity

Since testing and validation under real-world conditions is very time consuming, any effort to reduce the duration to a minimum is worthwhile. At the same time, the data diversity reached and the test cases and scenarios covered should be maximized. Therefore, AVL's approach to route optimization is very important for increasing the efficiency of the validation program. In chapter 1, an example of the requirements (or boundary conditions) for reaching a certain data diversity is shown in Table 1. To meet them, it is necessary to know right from the start whether certain routes fulfill the mandatory requirements. The route optimization proceeds from a high level of abstraction to more and more detail, until specific routes are defined. The process is shown in Fig. 4.

[Figure: three-step process chain]
Boundary conditions: road type distribution, scenarios, test cases, weather conditions, time-of-day distribution
Initial route planning: plan a time table for the sequence of countries to be driven; plan in detail how many kilometers shall be covered in each country
Detailed route planning: plan the exact route with Points of Interest (POIs); calculate the road type distribution; calculate static KPIs

Fig. 4. Process for route planning, from requirements to detailed routes


At first, boundary conditions such as countries, road type distribution and vehicle availability must be considered. This leads to an initial route planning with a time sequence of countries and a concrete time plan.

Fig. 5. Example of an initial route planning in Europe

Fig. 5 shows an example of an initial route planning across Europe. The color of the countries represents their priority and therefore how much time and mileage should be spent and covered there, respectively. Dark blue countries represent a very high priority, blue countries a medium priority and light blue countries a lower priority. The numbers together with the arrows show the order in which the countries will be visited by the vehicle. Obviously, this example just gives an overview for one vehicle. For a whole fleet, the initial planning is performed for every single vehicle or group of vehicles, respectively.

With the initial route planning done, the detailed route planning follows. The dedicated routes for each vehicle must be defined in order to fulfill the required road statistics as well as time-of-day distributions. AVL's toolchain supports planning and comparing different specific routes in terms of these coverage KPIs and deciding on the routes which best fit the boundary conditions. An example of the comparison of three different routes from Graz to Gothenburg is illustrated in Fig. 6.

Fig. 6. Example for comparison of three different routes from Graz to Gothenburg

The drop-down box of the selected Route 3 shows only some of the calculated KPIs. Route 1 in light blue and Route 2 in orange are shown on the map and can be easily compared with each other. In this example, the KPIs for each route are as listed in Table 2. Depending on the specific target requirements, the route which fits best will be chosen (a scoring sketch follows Table 2).

KPIs/Routes              Route 1   Route 2   Route 3
Tunnels                       11        16        13
Toll Booths                   17        19        23
Sharp Curves                  41        36        31
Secondary Highway (%)       1.21      1.73      2.54
Roundabouts                   16        14         9

Table 2. Example of the KPIs for three different routes from Graz to Gothenburg
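A minimal sketch of how such a comparison could be automated (Python; the KPI values come from Table 2, while the target profile, weights and the simple deviation metric are illustrative assumptions, not AVL's actual selection logic):

    # Illustrative route scoring: smaller weighted deviation from a target
    # KPI profile is better. Targets and weights are assumed values.
    routes = {
        "Route 1": {"tunnels": 11, "toll_booths": 17, "sharp_curves": 41, "roundabouts": 16},
        "Route 2": {"tunnels": 16, "toll_booths": 19, "sharp_curves": 36, "roundabouts": 14},
        "Route 3": {"tunnels": 13, "toll_booths": 23, "sharp_curves": 31, "roundabouts": 9},
    }
    targets = {"tunnels": 15, "toll_booths": 20, "sharp_curves": 35, "roundabouts": 12}
    weights = {"tunnels": 1.0, "toll_booths": 0.5, "sharp_curves": 1.0, "roundabouts": 0.8}

    def deviation(kpis: dict) -> float:
        """Weighted absolute deviation from the target profile (lower is better)."""
        return sum(weights[k] * abs(kpis[k] - targets[k]) for k in targets)

    best = min(routes, key=lambda name: deviation(routes[name]))
    print({name: round(deviation(k), 1) for name, k in routes.items()}, "->", best)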
A well-prepared route planning is the basis for a successful and efficient real-world fleet validation. The strengths and advantages of an automated data evaluation are described in the following chapter.

4 Automated Performance Evaluation Under Real-World Conditions

During a fleet validation, the vehicle with the ADAS functions is tested under real-world conditions as end customers will encounter them during day-to-day usage. The evaluation takes place in all dedicated target markets. To ensure a good coverage of the predefined ODDs (Operational Design Domains) and DDTs (Dynamic Driving Tasks), accurate route planning and optimization is key for efficient testing, as discussed before. Not only the driven mileage is an indicator of robustness, but also the fulfillment of certain coverage KPIs such as:
 road conditions and types (highway, rural, city, etc.) and static POIs (Points of Interest) like tunnels, roundabouts or bridges;
 the environment in terms of light and brightness and the weather conditions, such as light, medium or heavy rain, a low-standing sun and so on;
 the traffic conditions as well as the traffic participants;
 driving tasks like different dynamic maneuvers; and
 physical parameters like vehicle speed, acceleration and the position in the lane, to name just a few.
Key to the performance evaluation of the ADAS features is the automated maneuver detection based on certain signals from the environmental perception. These signals include information about the vehicle under test (e.g. speed and acceleration) and information about other traffic participants, such as other vehicles and VRUs (Vulnerable Road Users), as well as the relations to them (e.g. distances to other traffic participants, relative speeds, distance to lane borders and time to collision). Based on these signals, which are transferred over the air to the fleet monitoring backend, ADAS-relevant scenarios (see Fig. 7) are automatically detected within AVL's fleet monitoring framework.

Fig. 7. Examples for dynamic driving scenarios which are automatically detected

The scenarios range from simple follow maneuvers, as encountered in stop-and-go traffic, over more complex maneuvers in which a lane change is performed, either by the ego vehicle (in blue) or by the TOF (Target Object Front, in red), to even more complex maneuvers in which three or more vehicles are involved. All these maneuvers are detected automatically, and certain relevant KPIs (Key Performance Indicators) are calculated. Most of them represent an extremum, such as the maximum or minimum of a certain signal like speed, acceleration or the distance to another vehicle. Based on these parameters, the system requirements and performance targets can be verified.

For the validation of the ADAS features under, e.g., unforeseen but relevant scenarios, the input from the test driver is essential. This input is logged with a digital tagging device in the vehicle, which can be either a smartphone or a tablet. The driver has the possibility to tag every noticeable situation, whether due to certain environmental conditions or to an unexpected or wrong reaction of the system. With these false positives and false negatives, together with the general coverage statistics as well as the activation and availability statistics of the systems, the feature performance can be analyzed easily. The analysis enables a clear statement whether the dedicated ADAS feature can be considered mature and therefore validated under real-world conditions.

The whole validation process of ADAS features is shown in Fig. 8. The over-the-air transfer of the needed vehicle signals is the precondition for the subsequent analysis. The scenario detection and the KPI calculation are fully automated. With the in-vehicle evaluation by the driver regarding false reactions of the ADAS feature, together with a general subjective evaluation of the automated driving performance, the validation can be performed without the need for co-drivers. The enabler for public road testing is the objective interpretation of the system behavior against the system requirements and the corresponding acceptance criteria of end consumers.

[Figure: process steps]
1. OTA data transfer
2. Automated scenario detection in the fleet monitoring backend
3. KPI calculation in the fleet monitoring backend
4. In-vehicle validation by the test driver (false positives/negatives)
5. Performance evaluation of the ADAS features
6. Verification against requirements and performance targets
7. Validation based on test driver input and coverage analysis

Fig. 8. Automated performance evaluation process

Fig. 9. Example for coverage analysis

In Fig. 9 an exemplary coverage analysis is shown. Different ODDs as well as environmental conditions are examined. They give an overview of the circumstances under which the ADAS features have been tested and can therefore be considered mature or not. If certain boundary conditions or corner cases are missing, further testing might be necessary in order to verify and validate the system under all conditions. This is especially important if a poor performance in one or more environments or critical situations has been identified.
To sum up, AVL's approach offers a high level of automation and therefore a high potential for increasing the efficiency of real-world testing, saving time but also significant cost, because the validations are done without co-drivers. The clustering of data according to certain dynamic driving scenarios, together with the data enrichment with environmental conditions, makes it possible to filter and understand large amounts of data. Relevant events and weaknesses can be identified, strengths highlighted, and necessary improvements shown. Thus, fleet monitoring and automated performance evaluation offer a valuable solution in the ADAS validation process.

5 Summary of Advantages

The discussed solution introduces a clear separation of measurement data from analytics results (visualized in Fig. 2). It enables faster coverage analysis and performance investigation across all measurement files. The maximum of information is extracted from every driven mile without time-intensive raw data analysis. In addition, data enrichment via information from online channels and high-definition maps improves the value of test campaigns without adding cost-intensive equipment to the fleet. By following the client/server pattern and providing content via API-based backend services over a web frontend, the same state of data and information can be served to a wide range of stakeholders in a consistent state.

The clear target is a high level of automation from data creation to information presentation (an automated data pipeline). This allows much faster human-supervised control loops. It is especially important for real-world fleet testing, where the collection of geo-localized scenarios is essential. It is ensured that corresponding route modifications can be made even before the fleet vehicles leave the area of interest, without losing valuable opportunities to test.

Finally, maximizing the digitization of the whole fleet execution and monitoring process increases efficiency significantly, for example by substituting co-drivers in the vehicle, lowering the workforce needed for data inspection and cutting unwanted waste kilometers which do not contribute to the testing purpose.

6 Outlook

The presented methodology already provides a high level of maturity. However, more improvements are in the pipeline. A new point is data reduction on the edge (in the vehicle) via similarity measures for scenarios. This means that high-volume data is only collected for relevant scenarios. Relevant in this context means either new, in the sense of situations unseen in the database according to a scenario cluster density analysis, or critical, in the sense of either poor perception performance or critical ADAS/AD function behavior.
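A minimal sketch of such a density-based novelty criterion (Python/NumPy; the feature-space distance, radius and neighbor count are illustrative assumptions, not the actual similarity measure):

    import numpy as np

    def is_novel(scenario, stored, radius=1.0, min_neighbors=5):
        """Treat a scenario as 'new' if fewer than min_neighbors stored scenarios
        lie within radius of it in a (normalized) scenario feature space."""
        if len(stored) == 0:
            return True
        distances = np.linalg.norm(np.asarray(stored) - np.asarray(scenario), axis=1)
        return int((distances < radius).sum()) < min_neighbors

    # e.g.: transfer the full high-volume recording only if
    # is_novel(current_features, database_features) returns True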

Another future topic concerns data lifecycle management in the backend. The same criteria and methods as used for the selective collection of data in the vehicle can also be utilized in the backend to balance the density of scenario data. The aim is to maximize the diversity in the scenario database while at the same time minimizing the storage demand. This optimized real-world scenario set can then be utilized to extract input for virtual testing in simulation, e.g. by defining corresponding OpenSCENARIO® and OpenDRIVE® files.

To further increase test efficiency, an automatic control loop will be realized. The current coverage KPIs from test execution will be compared with the target distribution. Based on the delta, the route planning will be adjusted automatically and updated routes will be sent to the fleet for execution. Furthermore, intelligent maneuver proposals can be provided (according to the degrees of freedom the driver still has in public traffic) to speed up the process of achieving maximum test coverage. This enables the combination of verification, typically performed on a proving ground, with real-world validation.

Finally, enhancing the in-vehicle labeling capability to capture as much driver perception and label information as possible will increase the value of the collected data. Speech-to-text with follow-up NLP (natural language processing) to extract as much semantics as possible is key for that purpose.

References
1. Data source: Statistical Pocketbook 2020, Chapter 2.5 Infrastructure, Eurostat; image: AVL
