Automatisiertes Fahren 2021
Vom assistierten zum autonomen Fahren
7th International ATZ Conference
Proceedings
A steadily growing body of knowledge is needed today to understand the increasingly complex technology of modern motor vehicles. Functions, operating principles, components, and systems are evolving rapidly, and current knowledge spreads into the professional community in ever shorter cycles, particularly from conferences, symposia, and congresses. This Proceedings series provides quick access to this information: it compiles the specialized knowledge required to understand state-of-the-art automotive technology, structured along the conferences and congresses it stems from, and makes it available as books on Springer.com as well as electronically via SpringerLink and Springer Professional. The series is aimed at vehicle and engine engineers as well as students looking for up-to-date expertise related to questions from their field of work. Professors and lecturers at universities with a focus on automotive and engine technology will find here compilations of events they could not attend themselves. For assessors, researchers, and development engineers in the automotive and supplier industries, as well as for service providers, the proceedings can offer valuable answers to highly topical questions.
Automatisiertes Fahren 2021
Vom assistierten zum autonomen Fahren
7. Internationale ATZ-Fachtagung

Editor:
Torsten Bertram
Technische Universität Dortmund
Dortmund, Germany
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2021

This work, including all of its parts, is protected by copyright. Any use not expressly permitted by copyright law requires the prior consent of the publisher. This applies in particular to reproduction, editing, translation, microfilming, and storage and processing in electronic systems.

The use of general descriptive names, trademarks, company names etc. in this publication does not imply that such names may be used freely by anyone. Even in the absence of a specific notice, their use is subject to the rules of trademark law, and the rights of the respective trademark owner must be observed.

The publisher, the authors, and the editors assume that the details and information in this work are complete and accurate at the time of publication. Neither the publisher nor the authors or editors give a warranty, expressed or implied, with respect to the content of the work or for any errors or statements. The publisher remains neutral with regard to geographical designations and territorial references in published maps and institutional addresses.
Abstract. IAV developed a novel data fusion approach to ensure comprehensive environmental perception for autonomous driving in urban environments. The approach is highly scalable, easily adaptable, and sensor independent while processing heterogeneous data. It is based on a dynamic occupancy grid in conjunction with a sophisticated combination of inverse sensor models that take the sensor properties into account. The approach has been successfully used within the HEAT project and is discussed in this context.
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_1
1 Introduction
Taking the conditions of the HafenCity into account, IAV developed the data fusion algorithm for the environmental perception of the HEAT shuttle from scratch. Starting from an initial sensor setup, it was reasonable to expect this setup to change, both due to the nature of development projects and due to the differing conditions of the individual sub-routes. As a consequence, IAV designed the algorithm to be independent of the specific sensor sources and highly scalable while being easily configurable at the same time. The latter was realized by using a modular design principle.

The sensor setup is introduced and the necessity of a highly efficient data fusion algorithm is motivated in Section 2. Section 3 is devoted to the data fusion algorithm. Its performance is discussed in Section 4, followed by a summary and conclusion in Section 5.
The arrangement of the sensor setup is shown schematically in Fig. 2. The two long-range LiDAR sensors are mounted on the roof of the HEAT shuttle. Four mid-range LiDAR sensors are mounted on the bottom corners of the HEAT shuttle. Together they observe the mid- to far-distance area around the shuttle and are utilized for the localization. The other four mid-range LiDAR sensors are mounted on the edges of the roof and are tilted downwards. They are used to observe the surroundings of the HEAT shuttle as well as for localization and lane detection. The former is particularly important for the detection of pedestrians and lanes in the immediate vicinity of the HEAT shuttle.

The Radar sensors are used to observe the mid-range distance. They operate in a different wavelength region than the LiDAR sensors; therefore, they have different detection capabilities and are influenced differently by adverse weather conditions, thus providing additional safety through redundancy. In addition, Radar sensors can provide dynamic information about objects, in contrast to LiDAR sensors.

The front camera faces the driving direction and provides object and lane information in the near-to-mid-range region. The four area view cameras observe the direct vicinity of the HEAT shuttle and extract object information in this area.
Fig. 2. Sensor setup of the HEAT shuttle in top and side view. Blue: cameras; green: Radar sensors; orange: LiDAR sensors. Far right: infrastructure unit.
This extensive sensor setup provides a comprehensive perception of the HEAT shuttle environment but generates an enormous amount of data, necessitating efficient data processing. To handle this data volume, IAV developed a novel and highly scalable algorithm which uses a minimal amount of computational resources (i.e., no graphics processing unit is necessary) and is able to process heterogeneous data.
3 Data processing
High-level fusion offers modularity and simplicity. However, each sensor has less information available to extract the objects compared to the feature- and low-level fusion architectures. As a consequence, the classification is more challenging.

IAV therefore provides a hybrid multilevel fusion approach combining the fusion of raw data with features and objects.
Fig. 3. Structure of different data fusion schemes, adapted from [3]. The hybrid scheme was
developed by IAV.
The data processing uses an extended occupancy grid approach, which is introduced in 3.3. In the first step, the data of all point cloud sensors is fused with the occupancy grid using inverse sensor models that take the dynamic and meta information of each feature into account, followed by the calculation of the grid cell occupancies (see 3.4). Subsequently, the data of all object sensors is projected onto the occupancy grid (see 3.5). Afterwards, the occupancy grid of the current time step is fused with the occupancy grid of the last time step by performing a Bayesian update (see 3.6). In the next step, morphological operations can be applied to the occupancy grid, if necessary, and the grid cells are clustered (see 3.7). The final step is devoted to the tracking and creation of dynamic objects (see 3.8).
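The update cycle described above can be condensed into a short sketch. The toy grid size, the tanh normalization, the box-shaped object footprint, and the normalized update rule are illustrative simplifications, not the production implementation:

```python
import numpy as np

N = 50  # toy grid, 50 x 50 cells (the shuttle used 500 x 500)

def fusion_cycle(grid_prev, points, objects, eps=0.01):
    """One schematic fusion cycle: point counts -> measured occupancy,
    object projection, then temporal fusion with the previous grid.
    All models are deliberately trivialized; this is not IAV's API."""
    counts = np.zeros((N, N))
    for (i, j), weight in points:                 # inverse sensor models add counts
        counts[i, j] += weight
    occ_m = 0.5 + 0.5 * np.tanh(counts / 4.0)     # counts -> measured occupancy
    for (i0, i1, j0, j1), conf in objects:        # project object boxes
        box = occ_m[i0:i1, j0:j1]
        occ_m[i0:i1, j0:j1] = np.maximum(box, 0.5 + 0.5 * conf)
    # temporal fusion with the previous grid (normalized Bayesian-style update)
    occ = occ_m * ((1 - eps) * grid_prev + eps * (1 - grid_prev))
    free = (1 - occ_m) * (eps * grid_prev + (1 - eps) * (1 - grid_prev))
    return occ / (occ + free)
```

Cells that receive neither point counts nor objects keep an occupancy of 0.5 (unknown), while hit cells move towards 1.0.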
In the context of the HEAT shuttle, a Cartesian grid consisting of rectangular grid cells is used. For the sake of computational simplicity, it was assumed that the grid cells are independent of each other. Each grid cell has a measure consisting of the occupancy as well as a dynamic layer. An occupancy occ of 0.5 means unknown, while occ → 1.0 means fully occupied and occ → 0.0 means free. The dynamic layer can be used to store, e.g., object classification data, a drivability value, velocity, or road information.

During the operation of the HEAT shuttle, a Cartesian grid of 100 m × 100 m with grid cells of size 0.2 m × 0.2 m was used, resulting in ~250k grid cells to describe the environment.
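With these dimensions, the grid layout can be reproduced directly; only the choice of a two-channel velocity estimate as the dynamic-layer content is an assumption (the text lists several possible contents):

```python
import numpy as np

# Grid dimensions as used during HEAT operation: 100 m x 100 m at 0.2 m cells.
SIZE_M, CELL_M = 100.0, 0.2
n = int(round(SIZE_M / CELL_M))        # 500 cells per axis -> 250,000 cells

occupancy = np.full((n, n), 0.5)       # 0.5 = unknown, ->1.0 occupied, ->0.0 free
point_count = np.zeros((n, n))         # temporary measurement quantity per cell
dynamic_layer = np.zeros((n, n, 2))    # e.g. a (vx, vy) velocity estimate per cell
```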
In detail, each grid cell contains its occupancy and the dynamic layer as well as a temporary quantity describing the measurement. The latter was chosen to be a point count, which is proportional to the number of data points associated with the respective grid cell, in order to fuse the point cloud data of different sensors. To achieve this, an inverse sensor model is defined for each unique sensor, taking the properties of the sensor into account. An example of an inverse sensor model for a single data point with no dynamic information is shown in Fig. 5. However, the projection of dynamic information can be done in a similar fashion.
Fig. 5. Examples of an inverse sensor model of a point cloud source (left) and of an object
projection model (right).
The exemplary sensor has a finite resolution in both the distance measurement and the angle determination. As a consequence, the measurement uncertainty can be visualized as an arc around the data point.
For each sensor, two quantities must be defined: the added point count value and how it is distributed inside the uncertainty area. The weighting of the different sensors is done by carefully choosing the absolute values of the added point count. The distribution function describes the properties of the point cloud sensor. The simplest case would be a uniform distribution, while more complex models can use more sophisticated functions. The example inverse sensor model in Fig. 5 shows a linear decrease of the point count towards the edges of the arc.
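Such an arc-shaped inverse sensor model with a linear decrease can be sketched as follows; the triangular weighting, the sampling density, and the parametrization via range/angle resolutions are assumptions for illustration:

```python
import numpy as np

def arc_point_counts(r, phi, dr, dphi, total=1.0, n=7):
    """Distribute one data point's added point count over its uncertainty arc
    (range resolution dr, angular resolution dphi), decreasing linearly
    towards the edges of the arc."""
    R, P = np.meshgrid(np.linspace(r - dr, r + dr, n),
                       np.linspace(phi - dphi, phi + dphi, n))
    # triangular weight: 1 at the measured point, 0 at the arc border
    w = (1 - np.abs(R - r) / dr) * (1 - np.abs(P - phi) / dphi)
    w = w / w.sum() * total            # conserve the sensor's total point count
    return R * np.cos(P), R * np.sin(P), w  # sample positions (x, y) and counts
```

The returned samples are then accumulated into the point counts of the grid cells they fall into; normalizing to `total` keeps the per-sensor weighting intact.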
The occupancy of a grid cell with a non-zero point count is calculated via

occ_m = 0.5 + 0.5 · tanh(point count / N_c)   (1)

with N_c a normalization constant. In the corresponding expression for unoccupied grid cells, λ_md denotes the probability of a missed detection, d denotes the distance to the ego vehicle, and D_c denotes a normalization constant modelling the increase of the occupancy, and thus the uncertainty, of an unoccupied grid cell with increasing distance. The minimum function in that expression ensures that the occupancy of an unoccupied grid cell will never exceed 0.5 and thus never be treated as occupied.
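Eq. (1) maps any non-negative point count into the half-open interval [0.5, 1.0); only the value of N_c below is an assumption:

```python
import math

def occ_from_count(point_count, n_c=4.0):
    """Measured occupancy of a grid cell from its point count, Eq. (1).
    The value of the normalization constant N_c is an assumption."""
    return 0.5 + 0.5 * math.tanh(point_count / n_c)
```

A count of zero yields exactly 0.5 (unknown), and the occupancy saturates towards 1.0 as more points accumulate in the cell.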
In detail, IAV developed object sensor models which take into account the specifics of the object-providing sensor, such as field of view, accuracy, or classification capability. To project an object onto the grid, three quantities must be known: its dimension, its confidence, and how the confidence is distributed within the object. The dimension determines onto which grid cells the object is projected, while the confidence determines how much trust is placed in the measurement. The confidence is transformed into an occupancy representation using

occ = 0.5 + 0.5 · f(confidence)   (3)

where f(confidence) is a function with a codomain of [0, 1) describing how the confidence is distributed within the object. This function is chosen to model the object sensor characteristics. The trivial case is the uniform distribution, used if nothing is known about the sensor.
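A non-uniform f(confidence) along the lines of Eq. (3) can be sketched as a footprint whose confidence decays from the sensor-facing side to the occluded one; the linear decay and the confidence values are assumptions chosen to mimic Fig. 5:

```python
import numpy as np

def object_occupancy(length_cells, width_cells, near_conf=0.9, far_conf=0.2):
    """Occupancy footprint of a camera-provided object via Eq. (3):
    confidence is highest on the side in direct line of sight and decays
    linearly towards the occluded far side."""
    conf = np.linspace(near_conf, far_conf, width_cells)  # across the object
    return 0.5 + 0.5 * np.tile(conf, (length_cells, 1))   # Eq. (3) per cell
```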
A non-trivial example is shown in Fig. 5. A car is inside the field of view of an area view camera located at the left side of the HEAT shuttle and is thus observed by it. The camera provides the car as a preprocessed object enriched with meta information. The front left corner of the car is in direct line of sight of the camera; as a consequence, the confidence has its maximal value there. Additionally, the camera can observe parts of the front bumper as well as of the left side of the car. In contrast, the camera cannot observe the far side of the car, resulting in a decreased confidence there.
To update the occupancy of the associated grid cells, the confidence distribution function is used to calculate an occupancy at each grid cell position. The latter is then fused with the occupancy of the grid cell using a Bayesian update rule similar to the one discussed in section 3.6.
In detail, a one-to-one mapping between the grid cells of both occupancy grids is performed. In a first step, all grid cells associated with a dynamic object at t − 1 are motion compensated. Let C_o be a grid cell with center point r associated with a dynamic object O, which has a velocity v. Then the properties of C_o are shifted to the grid cell at the position r′ via

r′ = r + v · Δt   (4)

where Δt is the time difference between the two time steps. C_o is then treated as unoccupied and its occupancy is calculated using the expression for unoccupied grid cells given above.
Subsequently, each grid cell C_t^i at time t and position r′ is fused with a grid cell C_{t−1}^j at time t − 1 and position r if Eq. (4) is satisfied with v being the velocity of the shuttle. The grid cells are fused using a Bayesian update rule, taking the measured occupancies introduced above into account [9]:

occ(C_t^i) = occ_m(C_t^i) · ((1 − ε) · occ(C_{t−1}^j) + ε · (1 − occ(C_{t−1}^j))),   (5)

free(C_t^i) = occ_m(C_t^i) · (ε · occ(C_{t−1}^j) + (1 − ε) · (1 − occ(C_{t−1}^j))).   (6)
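Eqs. (4) to (6) can be written out directly for one pair of mapped cells; only the value of the mixing parameter ε is an assumption, as the text does not give it:

```python
import numpy as np

EPS = 0.02  # mixing parameter epsilon; the actual value is not given in the text

def motion_compensate(r, v, dt):
    """Eq. (4): shift a dynamic cell's properties along its object velocity."""
    return r + v * dt

def bayes_update(occ_m_t, occ_prev):
    """Eqs. (5) and (6), written out literally for one pair of mapped cells."""
    occ = occ_m_t * ((1 - EPS) * occ_prev + EPS * (1 - occ_prev))
    free = occ_m_t * (EPS * occ_prev + (1 - EPS) * (1 - occ_prev))
    return occ, free
```

Note that, as written, occ and free always sum to occ_m, so the pair still has to be normalized to yield a probability.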
Fig. 6. Flow chart of the interacting multiple model approach with two Kalman filters.
In general, the IMM algorithm combines an arbitrary number of filter models, which allows for a more precise tracking and prediction of the dynamic state of objects compared to a single filter. This improved precision is achieved at the cost of a higher computational demand.

However, following a minimal-resources approach, IAV opted for two filters in the context of the HEAT shuttle. To be more precise, the IMM used combines two extended Kalman filters using the constant velocity (CV) and the constant turn rate and velocity (CTRV) motion models. The CV model is well suited to describe the straight movement of objects but is less effective for describing a track with a high curvature, while the opposite is the case for the CTRV model.
The flow diagram is visualized in Fig. 6. x denotes the state vector, μ the conditional model probability vector, and λ the model likelihood. For the CV model the state vector is defined as x_CV = (x, y, φ, v)^T, where x and y are the position, φ is the driving direction, and v is the absolute velocity value. The state vector of the CTRV model is similarly defined as x_CTRV = (x, y, φ, v, ω)^T, where ω denotes the angular velocity. With Δt being the time difference between the two time steps, the CV model state transition for the position is given by

x(t) = x(t − 1) + v · cos(φ) · Δt
y(t) = y(t − 1) + v · sin(φ) · Δt

while the CTRV model yields

x(t) = x(t − 1) + (v/ω) · (sin(φ + ω · Δt) − sin(φ))
y(t) = y(t − 1) + (v/ω) · (cos(φ) − cos(φ + ω · Δt))
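The CTRV position update can be implemented directly; the fallback to the straight-line CV limit for ω ≈ 0 is a standard numerical guard, not taken from the text:

```python
import math

def ctrv_predict(x, y, phi, v, omega, dt):
    """Position update of the CTRV motion model; for omega ~ 0 it falls back
    to the CV (straight-line) limit to avoid division by zero."""
    if abs(omega) < 1e-9:
        return x + v * math.cos(phi) * dt, y + v * math.sin(phi) * dt
    x_new = x + v / omega * (math.sin(phi + omega * dt) - math.sin(phi))
    y_new = y + v / omega * (math.cos(phi) - math.cos(phi + omega * dt))
    return x_new, y_new
```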
4 Performance evaluation
For the performance evaluation of the data fusion algorithm, the average run time was evaluated as a function of the number of input dynamic objects and point cloud data points.

The computation time for the projection of an input object is directly proportional to its size. Therefore, a truck or bus will produce a higher computational load than a car or a pedestrian. However, in a typical urban scene, the number of trucks and buses will be much smaller than the number of cars and pedestrians. As a consequence, and for the sake of simplicity, the input dynamic objects were chosen to be moving cars with a size of 4.5 m × 2.5 m and standard deviations of the length and width of 1.0 m and 0.75 m, respectively.
The point cloud data points were a mixture of Radar and LiDAR data points. To resemble the shuttle sensor setup, the ratio of LiDAR to Radar data points was set to 200:1. The dynamic grid had a size of 100 m × 100 m with a grid cell size of 0.2 m × 0.2 m. The evaluation of the data fusion algorithm was performed with the HEAT shuttle implementation within a single thread on an i7-8850H processor.
Fig. 7. Average run time as a function of the number of input objects and point cloud data points (in 1000 counts). The semi-transparent plane is the fit to the data given in Eq. 7.

The figure shows a linear increase of the average run time with an increasing number of point cloud data points for a constant number of input dynamic objects, and vice versa. It should be noted that the absolute numbers depend on the hardware used.
As described in sections 3.4 and 3.5, each sensor source is treated independently, but all sources are fused with the same occupancy grid. Because of this independence, it is straightforward to add sensors to, or remove them from, the data fusion algorithm. As a consequence, the algorithm is highly adaptable to a changing number and type of sensor sources.

All input dynamic objects are directly fused with the occupancy grid. As a consequence, it is irrelevant whether N input objects are provided by 1, 2, or N different sensors. The situation is similar for the point cloud sensors: they are all directly fused with the occupancy grid via the point count values of the occupancy grid cells, which are directly related to their occupancies. Therefore, the performance of the data fusion algorithm is independent of the number of sensors providing the input data. It only depends on the size of the input data, i.e., the number of input dynamic objects and point cloud data points. This shows the high scalability of the data fusion.
The performance of the algorithm and thus the average run time could be improved
by utilizing the multithreading capabilities of the CPU or by employing a GPU.
In summary, IAV presented a novel approach to efficiently fuse the data of a large number of different sensors. The approach was motivated by the development, and successfully applied during the operation, of the HEAT shuttle in Hamburg's HafenCity.

The data fusion algorithm was specifically demonstrated using the sensor setup of the HEAT shuttle, providing raw point cloud sensor data as well as preprocessed objects. However, the algorithm is not bound to these two types of sensor data but can be applied to other sources of information as well, such as road markings, map information, or light signals.

The data fusion algorithm is highly scalable and easily adaptable while running with minimal computational resources, i.e., without the support of a graphics processing unit. Its application is not restricted to shuttles; it can also be used, e.g., in automated valet parking or infrastructure sensor data fusion.
References
1. HOCHBAHN HEAT project page, https://www.hochbahn.de/hochbahn/hamburg/de/Home/Naechster_Halt/Ausbau_und_Projekte/projekt_heat
2. Hamburg HafenCity Homepage, https://www.hafencity.com/de/ueberblick/daten-fakten-zur-hafencity-hamburg.html, last accessed 2021/02/08
3. Aeberhard, M., Kaempchen, N.: High-level sensor data fusion architecture for vehicle surround environment perception, Proc. 8th Int. Workshop Intell. Transp., 665 (2011)
4. Pietzsch, S., Vu, T. D., Burlet, J., Aycard, O., Hackbarth, T., Appenrodt, N., Dickmann, J., Radig, B.: Results of a precrash application based on laser scanner and short-range Radars, IEEE Transactions on Intelligent Transportation Systems 10(4), 584-593 (2009)
5. Kaempchen, N., Buehler, M., Dietmayer, K.: Feature-level fusion for free-form object tracking using laserscanner and video, IEEE Intelligent Vehicles Symposium, 453-458 (2005)
6. Mahlisch, M., Schweiger, R., Ritter, W., Dietmayer, K.: Sensorfusion using spatio-temporal aligned video and LiDAR for improved vehicle detection, IEEE Intelligent Vehicles Symposium, 424-429 (2006)
7. Takizawa, H., Yamada, K., Ito, T.: Vehicles detection using sensor fusion, IEEE Intelligent Vehicles Symposium, 238-243 (2004)
8. Floudas, N., Polychronopoulos, A., Aycard, O., Burlet, J., Ahrholdt, M.: High level sensor data fusion approaches for object recognition in road environment, IEEE Intelligent Vehicles Symposium, 136-141 (2007)
9. Nègre, A., Rummelhard, L., Laugier, C.: Hybrid sampling Bayesian occupancy filter, IEEE Intelligent Vehicles Symposium Proceedings, 1307-1312 (2014)
10. Genovese, A.: The interacting multiple model algorithm for accurate state estimation of maneuvering targets, Johns Hopkins APL Technical Digest 22(4), 614-623 (2001)
Digital Twin to design and support the usage of
alternative drives in municipal vehicle fleets
1 Introduction
For several years, reducing the environmental impact of mobility has clearly been a political objective at the national and regional levels in Germany ([1], [2]). For municipal governments and local operators of public vehicle fleets, this translates into the task of integrating battery electric or fuel cell electric vehicles into their operations.

However, substituting a substantial part of a fleet comprising only fossil-fueled vehicles with battery electric or fuel cell vehicles imposes several challenges on the fleet operators. These challenges comprise, e.g., refueling strategies (charging strategies in the case of battery electric vehicles) and questions of how to organize and potentially refurbish depots to cope with these strategies. Furthermore, routes or schedules served
by combustion engine vehicles may have to be adjusted due to different (typically smaller) ranges of battery electric vehicles. In addition, the investment for battery
electric and fuel cell vehicles is currently also comparatively high, which simply does not allow replacing a whole existing fleet of combustion engine vehicles at once. Instead, a sequence of steps towards a zero-emission operation over a period of several years will be quite common and needs to be thoroughly planned. The stepwise conversion of a vehicle fleet in turn results in the necessity of operating mixed vehicle fleets, possibly comprising vehicles with fossil fuel, batteries, and fuel cells.
All these constraints lead to a setting where it is sensible and consistent to evaluate future scenarios utilizing available digital technologies such as digital twinning, simulation, and optimization. Making use of digital scenarios of their operation was exactly the objective of the two fleet operators Hanauer Straßenbahn GmbH (HSB) and Hanau Infrastructure Service (HIS), responsible for public buses (HSB) and garbage collection trucks (HIS) in the city of Hanau in the federal state of Hesse, Germany, when they joined a research project together with the technology provider SimPlan AG (SimPlan) and the Frankfurt University of Applied Sciences (FRA UAS). Coming from the current state with only diesel-powered vehicles, the main subject of the project was the development of digital twins for the processes of both fleet operators, allowing future scenarios with different fuel types of the vehicles in the fleet to be assessed. The perfect support for this joint undertaking was provided by a grant of the Hessian Ministry of Economics, Transport, Urban and Regional Development (HMWVL) as part of their initiative to promote electric mobility ([5]).
This paper presents some of the work conducted by the partners in the project named SimCityNet. The remainder of the paper is organized as follows. In section 2 the as-is situations of the two fleet operators HSB and HIS are explained in more detail. Section 3 introduces the models developed to support decision making with respect to different fleet mix scenarios, and section 4 presents some of these scenarios including initial and preliminary results. The paper concludes with a summary and an outlook on the still ongoing project activities and beyond.
HSB and HIS are both striving to integrate battery electric and fuel cell vehicles into their future operations in the best possible way. As a starting point, the as-is situation of both operators is described in the following two subsections.
The HSB runs a fleet of about 65 diesel buses in the city of Hanau. 40% of the vehicles are articulated buses, with the remaining 60% being standard ones. A timetable with twelve bus lines is offered to the public, with operation starting on some lines before 5 am and continuing until past midnight, at cycles of 15, 20, or 30 minutes depending on the line, the time of day, and the weekday. In addition, buses are dispatched for school traffic around 7 am and around midday. HSB today operates a single depot located near the main train station of the city. The depot is currently neither equipped to charge many battery electric buses simultaneously nor to refuel fuel cell vehicles, i.e., it would have to be significantly refurbished if it were to maintain the respective vehicles. So far, the HSB has gained some experience in testing the handling and operation of single electric buses, but not yet with the large-scale integration of battery electric or fuel cell buses into the fleet.
As is quite common for most timetable-driven traffic systems, the HSB is facing the so-called vehicle scheduling problem (VSP, [3]). Trips, defined by the lines and the timetable, need to be assigned to vehicles and must be combined into so-called schedules. This needs to be done in such a way that each trip is covered exactly once and that the resulting sequence of trips per vehicle is feasible for the respective vehicle (i.e., each vehicle must have enough time to get from the endpoint of one trip to the starting point of the next trip). The HSB uses a well-proven and established software suite (init MOBILE PLAN, software vendor init SE) to support daily operation and to compute the vehicle schedules. However, the limited range of battery electric buses imposes additional restrictions on the VSP which, until recently, have not been considered in many solution approaches ([6], [10]). Quite typically, most of the schedules computed for diesel buses are not feasible for battery electric buses. This is also true for the schedules the HSB uses and thus needs to be considered in the scenarios discussed in section 4.
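The range restriction that invalidates diesel schedules can be illustrated with a deliberately simple check: a toy model with no en-route charging and fixed consumption, and with invented distances and ranges:

```python
def schedule_feasible(trip_km, vehicle_range_km):
    """A schedule (sequence of trip distances) is range-feasible for a
    vehicle if its total distance fits within the vehicle's range.
    Toy model: no en-route charging, fixed consumption."""
    return sum(trip_km) <= vehicle_range_km

diesel_schedule_km = [45, 60, 55, 70]  # illustrative daily blocks of trips
```

A 230 km schedule built for a diesel bus then fails the same check for a battery bus with, say, 180 km of usable range, which is exactly why the schedules have to be recomputed for mixed fleets.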
The HIS operates a fleet of sixteen garbage collection vehicles with a type-dependent capacity between 8.7 and 11.7 tons. Four different types of waste, so-called fractions, need to be collected: general waste (RM), paper and cardboard (PPK), plastic (LVP), and organic waste (BIO). Nine of the vehicles are equipped to collect general waste, four are able to load paper and cardboard, three can collect organic waste, and again four will pick up plastic. This implies that not all vehicles can carry every fraction and that some vehicles are equipped to collect more than one fraction. However, fractions must not be mixed, i.e., on every given collection tour one vehicle is assigned to the collection of one fraction. Whenever the capacity of the collection vehicle is exhausted, and in any case at the end of a tour, it needs to head for one of three disposal sites outside of Hanau.
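The capacity rule above translates into a small counting routine; the bin loads and the capacity in the test are invented values, not HIS data:

```python
def disposal_visits(bin_loads_t, capacity_t):
    """Count disposal-site visits on one tour: the truck unloads whenever
    the next bin would exceed its capacity, plus one mandatory visit at
    the end of the tour. Loads and capacity are in tons."""
    visits, load = 0, 0.0
    for w in bin_loads_t:
        if load + w > capacity_t:
            visits += 1        # head to a disposal site and return empty
            load = 0.0
        load += w
    return visits + 1          # final visit at the end of the tour
```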
The number of customers (companies, households, etc.) the HIS serves depends on the fraction and varies between roughly 13,000 and 17,000, and the number of collected bins ranges from 14,000 to 20,000. Again depending on the fraction, and to a certain extent also on customer preferences, the bins are collected in cycles of 7, 14, or 28 days. Service times are between 6 am and 2:30 pm or 2 pm, depending on the weekday. A typical tour of a collection vehicle comprises around 500 customers and 600 bins.
It is quite common to decompose the planning process for garbage collection operators into several steps ([14]). For each fraction, the customers are divided into districts so that all customers in each district can be served on a single workday. The breakdown into districts needs to consider the length of the resulting tours, the number of containers in a tour, and the estimated weight of the collected waste in order to obtain a reasonably balanced workload per district. The decomposition into districts is a mid-term planning task, since customers expect a collection schedule to remain unchanged for, e.g., the next year or at least for a couple of months to come. The computation of tours within given districts has to be done on a day-to-day basis. It is a short-term problem because the conditions within a district are constantly changing: customers are added or removed, road works may have to be considered, and the amount of waste differs from cycle to cycle, causing variations in the number of visits to the respective disposal site.
In parallel to evaluating the impact of battery electric or fuel cell vehicles in the fleet, the HIS had a project running on re-shaping the collection districts in Hanau. Since the results of this project were not yet available for the modelling activities in the SimCityNet project, it was decided to work with today's districts and to assess the assignment of alternative vehicles to today's tours.
The modelling and digitization activities in the project were twofold in a double sense: models had to be designed and implemented both for the bus operation of the HSB and for the garbage collection tours of the HIS, and it took simulation models to assess future fleet operations as well as optimization models to support vehicle scheduling and tour planning for future scenarios.
hydrogen refueling at the depot with respect to the number of charging or refueling points or the usable electrical power may be set up. And finally, the schedules have to be connected with the simulation model. This can be done either by importing schedules generated with the init MOBILE PLAN software application, by importing manually modified schedules, or by invoking a heuristic specifically developed for the scheduling problem of the HSB.

This heuristic and the underlying mathematical model adapt and combine vehicle scheduling approaches found in the literature (e.g., [9], [10], [13]) for the situation at the HSB. For today's all-diesel fleet, the heuristic approach produces results almost identical to the schedules the HSB currently has in use. And for mixed fleet scenarios, it allows a fast calculation of alternative schedules adjusted to the (shorter) ranges of battery electric and fuel cell buses.
From a technical perspective, the modelling approach for the HIS is quite similar. An AnyLogic simulation model including a visualization on the map of Hanau (Fig. 2) is available for an assessment of the vehicle operations, and a mathematical model plus a heuristic solution procedure are available for automatic tour calculation. The visualization shows the garbage collection trucks operating in several districts for a waste fraction (in the case of Fig. 2 it is general waste), and it displays the remaining range of each vehicle as well as the fuel type.
However, as a closer look at Fig. 2 reveals, there are some differences in several details. Firstly, the simulation model and the visualization are comprehensive in the sense that each location of a waste bin is marked on the map. And as briefly mentioned, the collection districts differ from day to day and from fraction to fraction, with a cycle of at most four weeks. Hence, the application allows switching between weeks ("Woche") and weekdays ("Wochentag").

The simulation model for the HIS also has some additional parameters on top of the ones described in section 3.1. For each vehicle, it needs to be set up which fractions may be collected. And for the collection process itself, settings on the amount of waste and the service time per bin need to be made. Here, constant values may be used, but random numbers are also offered, bringing some stochastic variation into the simulation model.
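Such a stochastic per-bin parametrization can be sketched as follows; the distribution choice (a normal draw truncated at zero) and all parameter values are assumptions, not project data:

```python
import random

def sample_bin(mean_kg=12.0, sd_kg=3.0, mean_s=25.0, sd_s=5.0):
    """Draw the amount of waste (kg) and the service time (s) for one bin.
    Constant values or random draws are both possible in the model; here a
    normal draw truncated at zero is used."""
    kg = max(0.0, random.gauss(mean_kg, sd_kg))
    sec = max(0.0, random.gauss(mean_s, sd_s))
    return kg, sec
```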
During each simulation run, several performance indicators are collected, such as
the utilization of each vehicle, the distance driven per vehicle, the amount of waste
collected, tours to a loading station, and tours to deposit sites. Fig. 3 presents an
example of the tour distances per vehicle, weekday, and fraction. The codes starting
with "HU-" are the IDs (e.g. the number plates) of the vehicles.
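Collecting such indicators per vehicle during a run can be sketched as a small accumulator; the indicator names mirror those mentioned in the text, while the update API is an illustrative assumption:

```python
from collections import defaultdict

class KpiCollector:
    """Accumulates per-vehicle performance indicators during a simulation run."""
    def __init__(self):
        self.kpis = defaultdict(lambda: {"distance_km": 0.0,
                                         "waste_kg": 0.0,
                                         "deposit_tours": 0})

    def drove(self, vehicle_id, km):
        self.kpis[vehicle_id]["distance_km"] += km      # distance driven

    def collected(self, vehicle_id, kg):
        self.kpis[vehicle_id]["waste_kg"] += kg         # waste collected

    def deposit_tour(self, vehicle_id):
        self.kpis[vehicle_id]["deposit_tours"] += 1     # tours to a deposit site
```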
The mathematical model for the tour planning of the garbage collection vehicles is
designed closely along the model presented by Tirkolaee et al. [12]. The implemented
solution procedures are based on heuristic approaches by Golden and Wong [4]
(opening procedure) and Muyldermans and Pang [8] (improvement procedure). How-
ever, it became apparent that a major part of the mathematical modelling process
starts earlier and lies in adequately handling specificities such as u-turns (typically not
allowed for garbage collection trucks and hence to be prevented in the models), one-sided
or two-sided bin collection (with two-sided collection applied e.g. in one-way
streets, but in other types of streets as well), or the processing of remote bins (i.e.
bins not directly positioned at a passable street). The modelled network covers
all these aspects.
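One common way to prevent u-turns in such a network model is to plan on directed arcs instead of junctions: each directed traversal of a street becomes a node, and the reverse traversal is simply not listed among its successors. The sketch below illustrates this generic idea; it is not the formulation of [12] or of the cited heuristics:

```python
def build_arc_graph(edges, allow_u_turns=False):
    """Turn a street network into an arc-based graph: nodes are directed
    street traversals (u, v), and a transition (u, v) -> (v, w) exists for
    every street leaving v. Dropping the reverse arc (v, u) forbids u-turns."""
    arcs = set()
    for u, v in edges:
        arcs.add((u, v))
        arcs.add((v, u))          # streets assumed passable in both directions
    succ = {}
    for (u, v) in arcs:
        nxt = [(x, w) for (x, w) in arcs if x == v]
        if not allow_u_turns:
            nxt = [a for a in nxt if a != (v, u)]
        succ[(u, v)] = nxt
    return succ
```

Note that a truck entering a dead-end street then has no successor arc at all, which is exactly the kind of specificity that has to be resolved explicitly in the real network model.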
Overall, the implemented heuristic approach allows computing realistic garbage
collection tours. In connection with the simulation model, it also provides an option
for dynamically re-calculating tours that have been interrupted by unforeseen
visits to a deposit site.
The preceding two sections presented simulation models and mathematical models.
These types of models are an essential part of so-called digital twins, as steps 6 and 7
in Fig. 4 indicate.
However, it takes many more steps to turn (simulation) models into digital twins.
First of all, it takes a permanent connection and interaction between the digital "sib-
ling" and the real world. Via this connection, data collected during operation in
the real world is transferred into the digital world in an appropriate way (the
discussion of what appropriate means in this context is beyond the scope of this
article). In the digital world, the data is used to execute model-based analyses, and
the results of these analyses are transferred back to improve real-world operations.
Such an integrated, automated digital twin is not within the reach of the
SimCityNet project. So, in the sense of the three categories presented by Kritzinger
et al. [7], namely "digital model" (manual data exchange between physical and digital
objects), "digital shadow" (automatic data transfer from physical to digital objects
and manual transfer back), and "digital twin" (automatic data exchange between
physical and digital objects), the HSB models in their current state may be classified
as a digital shadow, whereas the HIS models are closer to the state of a digital
model.
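The three categories of Kritzinger et al. [7] can be read as a simple decision rule over the two data-flow directions, which the following sketch makes explicit:

```python
def classify(to_digital_automatic: bool, to_physical_automatic: bool) -> str:
    """Classify a model along Kritzinger et al.'s categories by whether the
    data flow in each direction between the physical and the digital object
    is automatic or manual."""
    if to_digital_automatic and to_physical_automatic:
        return "digital twin"
    if to_digital_automatic:
        return "digital shadow"
    return "digital model"
```

Under this rule, the HSB models (automatic transfer into the digital world, manual transfer back) land in "digital shadow", while the largely manual HIS data handling lands in "digital model".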
The reason is that data preprocessing and transfer for the HIS models is to a large
extent manual. The data for the HSB models, however, is (semi-)automatically de-
rived from the data in the HSB's IT systems. An automated transfer of analytical
results from the digital models back into the real-world IT is currently not intended.
However, it should be noted that the main purpose of the digital models in the HSB
case and the HIS case is to provide mid-term decision support. It may very well be
that the rather technology- and interface-motivated understanding of digital twins
discussed in this section is not fully appropriate for this kind of model application.
At the time of the preparation of this paper, the work on the project was still a couple
of months away from being finished, and these last months are exactly the project
phase dedicated to the numerical assessment of different scenarios. Hence, this
section presents an overview of the planning scenarios the fleet operators want to
have assessed and a few initial, still preliminary calculations.
Table 1 presents an overview of the fleet mix scenarios the two operators want to
investigate further. Both have clear yet different preferences. The HSB does not ex-
pect battery electric buses alone to be a preferable technology for the future but sees
bigger potential in fuel cells. The situation for the HIS with respect to battery electric
vehicles is even more difficult. While the number of offered battery electric bus mod-
els is increasing, the selection of battery electric garbage collection vehicles is still
limited; the vehicles of potential vendors are still in a preliminary test phase. An
additional challenge is imposed by the relatively high power consumption of the ve-
hicles during the collection phase and, specifically in the HIS setup, the one-way
distances of 30 km to 40 km between the collection districts in Hanau and the disposal
sites. All these constraints, in combination with the larger range of fuel cell vehicles,
lead to the HIS preferences shown in Table 1. Nonetheless, the HIS wants to use the
model to understand what a fleet with only battery electric garbage collection vehicles
would mean for its processes.
• Depot capacity for charging / refueling the fleet in parallel, as a percentage of the
fleet size. A value of e.g. 50% would mean that 50% of the battery electric vehicles
may be charged at the same time.
• Battery capacity usage, defining a threshold that should not be exceeded when
charging and not be fallen below when operating the vehicle. A value of e.g. 15%
would mean that during a normal charging and operation cycle the charging state
of the battery should stay between 85% (= 100% - 15%) and 15%.
• For the collection vehicles, the assignment of fractions to vehicles is another
relevant setting. In that respect, the HIS foresees a scenario similar to the as-is sit-
uation: vehicles are assigned to more than one fraction, in some cases only one,
but typically to all four.
• For the buses, scenarios with and without a loss in range due to the use of bat-
tery electric energy for heating should be evaluated.
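The battery capacity usage parameter above translates into simple arithmetic: a threshold of 15% leaves a usable state-of-charge window of 70%, so a vehicle's effective range shrinks accordingly. The following sketch assumes, purely for illustration, that the nominal range corresponds to a full 0-100% discharge:

```python
def usable_share(threshold_pct: float) -> float:
    """Share of the battery usable in a normal cycle when charging stops at
    (100 - threshold)% and operation stops at threshold%."""
    return (100 - 2 * threshold_pct) / 100

def effective_range_km(nominal_range_km: float, threshold_pct: float) -> float:
    """Range available under the SOC window, assuming the nominal range
    corresponds to a full discharge (an illustrative assumption)."""
    return nominal_range_km * usable_share(threshold_pct)
```

For example, with a 15% threshold a nominal 180 km battery electric bus would effectively retain about 126 km, which further sharpens the range gap to fuel cell vehicles discussed above.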
The different fleet mixes combined with the other parameters lead to several doz-
en simulation configurations for the HSB and the HIS. As already indicated, the
project was still ongoing at the time this paper was prepared. Hence, only some
initial calculations can be presented here.
These initial optimizations and simulations led to results like the one displayed in
Fig. 5. It shows the number of buses required with today's diesel fleet and the
corresponding results for a pure fuel cell, a pure battery electric, and a mixed fleet.
The assumed ranges were 500 km for diesel buses, 350 km for fuel cell buses, and
180 km for battery electric buses. Only the operations on a Monday, the peak day of
the week for the HSB, were considered. The bus lines remained unchanged, but the
schedules were of course re-calculated considering the ranges and the re-fueling
times of the buses in the fleet.
One rather obvious explanation for the results is that using short-range instead of
long-range buses leads to shorter schedules. These shorter schedules, in combination
with the re-charging times, increase the number of required vehicles. Apparently, this
effect sets in for a range below 200 km, whereas a range of 350 km allows operating
very much like today. This result is not necessarily an objection against a certain type
of power but rather against vehicles with insufficient ranges. What came as a bit of a
surprise for the project team is that a fleet mix with an (almost) equal share of diesel,
fuel cell, and battery electric buses shows no increase in the needed number of vehi-
cles. This indicates that intelligent combinations of vehicles and schedules may allow
the integration of buses with lower ranges. And it demonstrates how useful these vir-
tual scenarios are in avoiding surprises in real-world fleet operations.
However, as already pointed out, these computations have to be considered pre-
liminary. The available results are still undergoing careful validation and in-depth
analysis, which is also the reason for deliberately not showing concrete values on the
y-axis of the chart in Fig. 5. More results will follow as the preferences and
parameters introduced at the beginning of this section are investigated.
This paper presented a comprehensive simulation and optimization approach for bus
and garbage collection truck operations of the HSB and the HIS in the city of Hanau.
The modeling approaches allow a detailed investigation of fleet mix scenarios and are
a step towards a digital twin of the fleet operations. Open issues and tasks with re-
spect to digital twinning were discussed in the paper. Finally, the presented prelimi-
nary results underline how challenging the integration of electric vehicles into exist-
ing fleets is.
This paper also demonstrates the extent of work that is still ahead on several levels
in integrating fuel cell and battery electric vehicles into fleet operations. On the level
of the presented project, several more results have to be generated and more analyses
have to be conducted for the HSB and the HIS over the next couple of months to
support decision making for the city of Hanau. On a technical and modeling level,
there is also room for improvement with respect to optimization approaches and the
according solution procedures, with respect to the integration of simulation and
optimization, and last but not least with respect to digital twinning. And on the level
of zero-emission fleet operations, it will be interesting to follow the progress towards
technically and economically improved vehicles. Models to virtually support and
assess this progress are ready to be used.
References
1. Ammermann, H., Ruf, Y., Lange, S., Fundulea, D., Martin, A.: Fuel cell electric buses -
potential for sustainable public transport in Europe, Roland Berger GmbH, Munich, Sep-
tember 2015.
2. BMUB - Federal Ministry for the Environment, Nature Conservation, Building and Nucle-
ar Safety Homepage, https://www.bmu.de/fileadmin/Daten_BMU/Pools/Broschueren/
klimaschutzplan_2050_en_bf.pdf, last accessed 2021/04/06.
3. Bunte, S., Kliewer, N.: An overview on vehicle scheduling models. Public Transport
1(4), 299–317 (2009).
4. Golden, B., Wong, R.: Capacitated arc routing problems. Networks, 11(3), 305–315
(1981).
5. HMWVL - Hessian Ministry of Economics, Transport, Urban and Regional Development,
Homepage for Electric Mobility, https://wirtschaft.hessen.de/verkehr/elektromobilitaet/
foerderung-der-elektromobilitaet-hessen, last accessed 2021/04/06.
6. Jefferies, D., Göhlich, D.: A comprehensive TCO evaluation method for electric bus sys-
tems based on discrete-event simulation including bus scheduling and charging infrastruc-
ture optimisation. World Electric Vehicle Journal 11(3), 56 (2020).
7. Kritzinger, W., Karner, M., Traar, G., Henjes, J., Sihn, W.: Digital Twin in manufacturing:
A categorical literature review and classification, IFAC PapersOnLine 51(11), 1016–1022
(2018).
8. Muyldermans, L., Pang, G.: A guided local search procedure for the multi-compartment
capacitated arc routing problem. Computers & Operations Research 37(9), 1662–1673
(2010).
9. Rinaldi, M., Parisi, F., Laskaris, G., D'Ariano, A., Viti, F.: Optimal dispatching of electric
and hybrid buses subject to scheduling and charging constraints. In: 21st International
Conference on Intelligent Transportation Systems (ITSC), pp. 41–46, Maui, HI, USA
(2018).
10. Rogge, M., van der Hurk, E., Larsen, A., Sauer, D.: Electric bus fleet size and mix problem
with optimization of charging infrastructure. Applied Energy 211, 282–295 (2018).
11. Schele, M., Kühn, M.: Vorsprung durch Digitale Zwillinge. Smart Engineering 2017(5),
12–15 (2017).
12. Tirkolaee E., Goli A., Weber GW., Szwedzka K.: A novel formulation for the sustainable
periodic waste collection arc-routing problem: a hybrid multi-objective optimization algo-
rithm. In: Golinska-Dawson P. (ed.) Logistics Operations and Management for Recycling
and Reuse. EcoProduction (Environmental Issues in Logistics and Manufacturing).
Springer, Berlin, Heidelberg (2020).
13. Wen, M., Linde, E., Ropke, S., Mirchandani, P., Larsen, A.: An adaptive large neighbor-
hood search heuristic for the Electric Vehicle Scheduling Problem. Computers & Opera-
tions Research 76, 73–83 (2016).
14. Wøhlk, S., Laporte, G.: A districting-based heuristic for the coordinated capacitated arc
routing problem. Computers & Operations Research 111, 271–284 (2019).
Paving the way for trust in autonomous vehicles –
How OEMs, authorities and certifiers can foster
customers’ trust in AVs
Nils Köster and Torsten-Oliver Salge
1 RWTH Aachen University, Institute for Technology and Innovation Management (TIM),
Kackertstr. 7, 52072 Aachen, Germany
1 Introduction
Even though autonomous vehicles (AVs) are still some years down the road, it is
already clear that trust will be a decisive factor for customer acceptance [1]. Only when
customers trust self-driving cars will the benefits of AVs, such as low-cost driverless
mobility services, reduced accident rates and more efficient use of road space [2],
become reality. In recent years, many development breakthroughs have been cele-
brated, such as successful pilots with robo shuttles on public roads [3]. At the same
time, however, the first fatal accidents involving AVs were reported [4]. Customers
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_3
will therefore pay close attention to whether they can really entrust an AV with full
responsibility for their safety. AV providers will have to actively build this trust. How-
ever, legislators and authorities, as well as certifiers (such as TÜV, ISO and DEKRA),
will also play important roles.
How do customers rate their trust in AVs, and how can this trust be strengthened?
How exactly can OEMs (original equipment manufacturers; here: car manufacturers),
mobility service providers, regulators and certifiers strengthen it? How does this differ
internationally and between different user groups? We investigated these questions in
a research project at RWTH Aachen University (partly already published in [5]; further
scientific articles currently in publication) which combined surveys, interviews and
experiments.
As of today, 45% of German survey respondents would trust a self-driving car, as can
be seen in Figure 1. We determined this in a survey with a total of over 800
participants. The more familiar respondents are with already available assistance
systems, the more likely they are to trust an AV. Interestingly, respondents from me-
dium-sized cities seem to be particularly inclined to trust AVs. In fact, medium-sized
cities could be good environments for self-driving cars, as the relief in everyday mo-
toring would be greater than in smaller cities with low traffic density, whilst traffic
situations might be more manageable than in significantly larger cities.
Differences emerge for different modes, as Figure 2 illustrates. Older respondents,
in particular, would have even higher trust in vehicles that were fully autonomous than
in those that were partially autonomous and required their supervision and interaction.
Overall, trust in robo taxis is lowest—respondents seem to be particularly skeptical
about being driven by an unknown AV on their own. A different picture emerges, how-
ever, when other passengers are on board. Respondents would actually trust a robo
shuttle shared with other passengers more than their own private AV.
What exactly, though, apart from potential peer effects, can help customers find trust
in an AV so that they are willing to use it? Even before they start their journeys, cus-
tomers must decide whether they want to entrust the AV with responsibility for their
safety. To trust an AV, customers need to be assured of an AV's functionality and reli-
ability, but should also find its driving maneuvers transparent and comprehensible. To
support this, authorities and certifiers can implement trust-building mechanisms.
[Fig. 1 chart: "If your own car could drive fully autonomously, would you trust it?"
Share of agreeing respondents, 45% overall, broken down by residence (thd.
inhabitants) and by familiarity with driving assistance systems. Source: RWTH
Aachen study "Trust in AVs", 2020; total n for analysis: 214.]
Fig. 1. Respondents who had previous experience with driver assistance systems would be espe-
cially likely to trust autonomous cars.
[Fig. 2 chart: trust by automation/autonomy level and mobility mode (Level 3, own
car, robo taxi, robo shuttle), overall and by age group (<30, 30-49, >49 years).
Source: RWTH Aachen study "Trust in AVs", 2020; total n for analysis: 807.]
Fig. 2. Older respondents would trust a private AV the most, but overall, respondents would trust
robo shuttles the most.
Especially at the beginning of the adoption of AVs, reviews from other users could
also play a major role in building customer trust, as has been the case with other mo-
bility innovations such as ride hailing.
Fig. 3. AV providers, authorities and certifiers may increase trust in AVs through a range of trust-
building mechanisms, as the expert interviews suggested.
[Fig. 4 chart: perceived effectiveness of trust-building mechanisms (technical,
provider, legal, certifier, and social protection) in Germany, the USA and China.
Source: RWTH Aachen study "Trust in AVs", 2020; Germany: n = 202; USA:
n = 202; China: n = 216.]
Fig. 4. Effectiveness in trust-building—similar results in Germany and the U.S., and higher rel-
evance of user ratings in China.
There were significant differences not only between the various national markets,
but also between different user groups, as can be seen in Figure 5. To illustrate this, we
present two particularly distinctive segments for Germany and China. In Germany, the
largest user segment identified can be labelled as "tech-oriented control seekers." This
group attaches great importance to technical protection measures. A high degree of
transparency, for example through voice or display information explaining the AV's
driving maneuvers, is quite important to these users. Concerns that AVs come with
a loss of driving pleasure are particularly high in this segment. The segment of "pro-
vider-oriented optimists," in contrast, is significantly more open to the potential bene-
fits of AVs and less critical of potential risks. Users in this segment are mostly drivers
of premium brands. When choosing an AV they could trust, this user group would pri-
marily be attracted to OEMs that were among the pioneers of the technology. These
users are less interested in dealing with the technical details of the AV themselves but
would rather rely on a full AV certification.
[Fig. 5 chart: user segments identified per market. Source: RWTH Aachen study
"Trust in AVs", 2020; Germany: n = 202; USA: n = 202; China: n = 216.]
Fig. 5. Different user segments can be identified in terms of trust preferences.
As described above, the ratings of passengers who already have experience with an
AV play a major role for Chinese users. This was particularly pronounced for the seg-
ment of "reputation seekers" (see Figure 5), for whom the absence of user ratings would
virtually be a no-go. The opinion of other users appears to be quite important to this
segment. This could also explain why many users would see a high prestige gain in
being among the early adopters of AVs. The segment of "young, urban trust out-
sourcers" is characterized by high incomes and comparatively low interest in driving
themselves; accordingly, this segment also has an affinity for AVs. Since they have
relatively little car experience, these potential AV users would mainly rely on
certification by certifiers. Moreover, for this user group, which mainly lives in
metropolitan areas, strict liability of the OEM in case of accidents is particularly
important. For OEMs, the user segments we identified in each market can be an
important starting point to understand different trust preferences. Thus, AV providers
can derive typical personas from each user segment to develop targeted trust-building
strategies. Figure 6 shows an example.
[Fig. 6: example persona "Sandra" (44, service operations manager at a telco in
Duesseldorf, diploma in Business Administration, married, two kids, drives a 2018
VW Tiguan about 15 thd. km/year) for the segment "tech-oriented control seekers":
technical protection, i.e. backup sensors and computers, is of utmost importance;
high weight on transparency, i.e. that driving maneuvers are announced and
explained; sees low value in the convenience benefits of AVs; most concerned
about a loss of driving pleasure compared to other segments. The figure also
profiles the segment against the cross-segment average in socio-demographics,
personal car preferences and car attitude.]
Our study shows that, already today, almost one in two people would trust AVs. How-
ever, in order to achieve even broader, sustainable acceptance of AVs, manufacturers
and AV providers must consistently build trust along the dimensions of functionality,
reliability and transparency.
How transparency, for instance, could be created in practice is demonstrated by ex-
amples such as Baidu's test vehicles [7]. The robo taxis inform their passengers in the
back seat about upcoming driving maneuvers via a display.
As this research project shows, certifiers, such as ISO, TÜV, SAE or DEKRA, will
be important enablers of customer trust in AVs. In all three markets investigated, ex-
aminations by independent certifiers were strong sources of trust for potential AV cus-
tomers. In the U.S., for example, such approvals were perceived as even more important
than regulation. Especially for customer segments that were unable—or unwilling—to
look more closely at the technical details or the reputation of the manufacturers, inde-
pendent certificates were a key source of trust.
The findings indicated that there was a large segment of customers in China who
were affluent and open to autonomous driving, but had little experience with the tech-
nical details of cars and wanted to outsource the decision on whether to trust the AV to
certifiers. In this respect, it seems only logical that the major German certifiers such as
DEKRA [9] are building up large testing capacities in China and could thus also be-
come partners of new Chinese AV producers, as described above.
From the customer's point of view, certifications were particularly trustworthy not
only if the finished AV was put to the test as a black box, but also if the certifier was
involved in the development process at an early stage. This is in line with the initiatives
of TÜV, DEKRA and Co., which are building up new testing capabilities and develop-
ment support capacities on a large scale. In practice, this could also mean more frequent
main inspections and, above all, regular software checks, as demanded, for example,
by the TÜV association [10]. Thus, in the interplay between providers, authorities and
users, certifiers could take on a central role in building trust in AVs by acting as inde-
pendent trust centers that could control access to vehicle data, for example, in the event
of malfunctions or accidents [11].
AV customers expected legislators to play active roles in building their trust. Authori-
ties should continuously check vehicle updates from OEMs and centrally track and
evaluate malfunctions and accidents. Authorities need to be involved at an early stage
of the technology development. For example, they can support testing under real-world
conditions by setting up AV test zones. The city of Beijing recently established
its third AV test zone [12]. Collaboration among cities, for example to jointly develop
a catalog of traffic scenarios that AVs must be able to handle flawlessly, will be im-
portant in this regard. In practice, international public collaboration can also be seen,
such as the planned cross-border test zone in Germany, France and Luxembourg [13].
Notably, our results show that legal protection can be even more effective in building
trust than the measures taken by AV providers themselves. Therefore, AV providers
would be well advised to not oppose strict AV regulation (which might seem intuitive),
but rather to support appropriate initiatives. This is exactly what the Goggo Network
[14] is working towards. Similar to licensing in mobile communications, the initiative
calls for the public award of AV licenses to mobility providers. Through defined trust
standards, a high user acceptance for AVs and investment security for AV developers
should be achieved.
References
Abstract. In the past two decades, the automotive industry established its own dedi-
cated SW engineering approach, especially for control-algorithm-based, deeply embed-
ded SW. This approach is based on well-defined SW requirements specifications, often
taking into account dedicated hardware designs, whereby both are largely fixed at the
start of development. With the recent developments in the automated driving domain,
the industry faces new challenges arising with the introduction of µProcessor-based
control units and data-driven algorithms. In this paper, we describe a new approach in
the domain of automated driving for
In the past two decades, the automotive industry established an automotive-specific SW
engineering approach. This approach is based on well-defined SW requirements speci-
fications for dedicated, strongly coupled hardware specifications at the start of develop-
ment. Consequently, SW engineers designed, implemented, verified, and validated the
ECU-specific software following a widespread V-model approach.
With increasing computing performance within vehicles' E/E architecture, a strong
paradigm shift is taking place. Automotive software is increasingly decoupled from hard-
ware [1,2]. Therefore, middleware solutions gain more and more importance and allow
software engineering solutions and methodologies from the IT domain to enter the
development of in-vehicle software. Managing and integrating increasing
amounts of SW coming from different sources (even from outside the automotive do-
main) into automotive solutions becomes a key success factor. New fast, efficient, and
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_4
scenarios in the simulated world, which we might not be able to face in real driving
scenarios.
Figure 4 shows the setup for this validation challenge. During real driving situations,
intermediate states of the running software algorithms, together with information on
the sensed environment, need to be measured and recorded with high bandwidth. At this
point it is important to note that the measurement equipment used needs access
to the internal states of the running software without significantly influencing its run-
time behavior.
This can only be achieved if measurement and logging devices are supported by high-
performance data sharing mechanisms within the middleware. For example, data trans-
fer between different processes should not happen by copying data, as this would lead to
a high memory and runtime footprint. In particular, dynamically allocated memory should
not be used in safety-related application features.
Figure 5: Shared memory concept for inter process communication and data access for
measurement adapters (MTA).
For this reason, our implementation uses shared memory concepts with a single-sender-
multiple-receiver approach. For efficiency reasons, the implementation avoids local
copies of data from other processes by using zero-copy mechanisms. Because processes
have their own local address spaces, the corresponding part of the shared
memory is mapped into the logical address spaces of the different processes. For safety
reasons, the memory space should be fixed before run-time and should not be dynami-
cally allocated.
Transferring data (with high bandwidth) out of the system without influencing the sys-
tem itself is a considerable technical challenge. It requires optimized data communica-
tion mechanisms via specific measurement adapters (MTA) implemented within the
AOS middleware, making the process-internal data available for special-purpose meas-
urement equipment. In this context, the MTA is considered a separate process and
uses the shared memory zero-copy approach as well (see Figure 5).
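As an illustration of the single-sender-multiple-receiver idea (not the actual AOS implementation, which is in-vehicle middleware), the following sketch allocates a fixed-size shared-memory segment once up front, lets a sender pack a record into it in place, and lets a reader such as a measurement adapter attach to the same segment and unpack without an intermediate copy of the buffer between processes. The segment name and record layout are made up for the example:

```python
from multiprocessing import shared_memory
import struct

RECORD = struct.Struct("<qd")   # sequence number + one sensor value (16 bytes)
SEG_NAME = "aos_demo"           # illustrative segment name, not from the AOS

def create_segment():
    """Allocate the fixed-size segment once; no dynamic allocation later."""
    return shared_memory.SharedMemory(name=SEG_NAME, create=True,
                                      size=RECORD.size)

def publish(shm, seq, value):
    """Single sender: write the record in place into the mapped buffer."""
    RECORD.pack_into(shm.buf, 0, seq, value)

def read(name=SEG_NAME):
    """Any receiver: attach to the existing segment and unpack in place."""
    shm = shared_memory.SharedMemory(name=name)   # attach, do not create
    try:
        return RECORD.unpack_from(shm.buf, 0)
    finally:
        shm.close()
```

A real implementation additionally needs synchronization (e.g. sequence counters or lock-free ring buffers) so that readers never observe a half-written record; that part is omitted here.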
Recording the corresponding data on in-vehicle data storage requires technologies (like
PCI Express connectivity) coming from non-automotive domains. (Obviously, ex-
tremely high amounts of data cannot be transferred over the air with currently available
technologies.) In an offline process, driving scenarios including corresponding meta-
data are transferred into a cloud-based data lake. From there, the data is available for
replay and re-compute in a simulated environment, including visualization using tools
from the open source ROS 2 environment.
task level cannot be simulated. (Note: we are not looking for a simulation of the phys-
ical target hardware.)
During the real driving scenario, the target system runs on different hardware and very
likely even a different operating system compared to the simulation environment.
Therefore, the AOS middleware represents an abstraction layer for the validation
itself. Features running on top of the AOS middleware behave in the same way on
different underlying software stacks and hardware environments.
In Figure 7, different target systems during development towards series readiness are
shown from left to right. On the left, the AOS including the software development kit
is deployed on a standard Linux-based laptop or PC. With this approach, the develop-
ment environment can easily be distributed within bigger development teams.
Getting closer to a series-ready software stack, the AOS middleware carrying the au-
tonomous driving features is deployed on server boards with a QNX operating system.
For cloud-based scaling of validation tests, simulation server clusters running the
Linux operating system are used.
Especially with the deployment on cloud-based servers, the AOS enables ContinuousX
approaches for application feature developers. Features can be extended at the devel-
oper's desk, and while changes are committed into the configuration management
system, virtual tests can start automatically in the background (even with several
instances for different test scenarios). The developer immediately gets feedback on the
implementation from validation tests in the simulated environment. Therefore, we
closed the DevOps loop for feature developers for highly automated driving, leading
to a significant engineering performance increase.
Coming back to Figure 7, the two examples on the right are closer to in-vehicle targets.
Development kits running QNX are the next potential deployment target. Finally, on
the far right, an in-vehicle ECU with a QNX operating system additionally running an
Adaptive AUTOSAR instance (called Vehicle Runtime Environment, VRTE) is
shown.
Obviously, the runtime behavior of all these systems differs. With the data-driven AOS approach, validation results contribute to the final software release for in-vehicle software. Without the AOS approach, a full system validation on the in-vehicle target system would be required, which would not be physically feasible.
2 Summary
In the previous sections, we have described the AOS platform, which provides a runtime and development environment to address key challenges in the domain of automated driving and contributes to solving the issues mentioned earlier.
A Safety-Certified Vehicle OS to Enable
Software-Defined Vehicles
1 Apex.AI, Inc., 979 Commercial Street, Palo Alto, CA, USA
2 Stanford University, Stanford, CA 94305, USA
1 Introduction
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_5
requires much higher bandwidth network communication and increases the requirements for safety and security. The fifth generation, currently under development, will then see the introduction of a centralized computing infrastructure, in which all major functionality is implemented on one high-performance computing platform, "dumb" satellite sensors are connected via a high-speed Ethernet network, and the separation of domains is implemented in software.
Overall, the architecture is rapidly moving from a complex system architecture with simple software to a simple system architecture with complex software. If done incorrectly, this poses enormous challenges and risks, such as exploding software complexity, increased cybersecurity risks, incoherent cross-domain implementation, diverging and inconsistent tooling, and so on. If done properly, it opens up the opportunity to introduce modern software development processes and paradigms, a clean and scalable software architecture, cross-domain consistency and tooling, limited and controllable cybersecurity risks, and over-the-air updatability, just to name a few.
In this paper, we aim to describe a world, as well as a path to this world, in which software throughout the whole vehicle is truly integrated end-to-end: a primary vehicle operating system on the outside, but really an SDK (software development kit) in the sense of the Android SDK or iOS SDK on the inside, which abstracts complexity away from the developers, provides common and open APIs, and is robust and flexible enough to cover major systems throughout the vehicle. What is needed to get to software-defined vehicles? We argue that the following requirements need to be fulfilled:
1. A standardized software architecture with open APIs to enable mutually
compatible solutions ideally across many manufacturers, suppliers, and aca-
demia.
2. An awesome developer experience to enable developer productivity – based
on the understanding that the quality of the developer experience is directly
related to their productivity.
3. A software architecture that scales to massive software systems.
4. A software implementation based on modern software engineering practices.
5. Abstraction of the complexity of all underlying hardware and software.
6. All of the above with deterministic, real-time execution, and with automotive
functional safety certification.
In addition, with the release of Apex.OS (Apex.AI's proprietary fork of ROS 2), we addressed real-time and deterministic execution by rewriting the underlying implementation based on the same open APIs. And we developed a process to take this implementation through functional safety certification in record time.
To summarize, we will describe the advantages of working with open APIs and an architecture that has been proven in use by tens of thousands of developers and in thousands of robots and automated vehicles. We will close by outlining a software development process that allows taking open-source software through functional safety certification into production vehicles.
Fig. 1. The vehicle E/E architecture is evolving by introducing domain controllers and a central high-performance computer [9].
2.2 ROS 1
Various efforts at Stanford University in the mid-2000s involving integrative, embodied AI, such as the STanford AI Robot (STAIR) and the Personal Robots (PR) program, created in-house prototypes of flexible, dynamic software systems intended for robotics use, which they named ROS [11]. In 2007, Willow Garage, a nearby visionary robotics incubator, provided significant resources to extend these concepts much further and create well-tested implementations. The effort was boosted by countless researchers who contributed their time and expertise to both the core ROS ideas and its fundamental software packages. Throughout, the software was developed in the open using the permissive BSD open-source license and has gradually become a widely used platform in the robotics research community.
From the start, ROS was developed at multiple institutions and for multiple robots,
including many institutions that received PR2 robots from Willow Garage. Although it
would have been far simpler for all contributors to place their code on the same servers,
over the years, the "federated" model has emerged as one of the great strengths of the
ROS ecosystem. Any group can start their own ROS code repository on their own serv-
ers, and they maintain full ownership and control of it. They don't need anyone's per-
mission. If they choose to make their repository publicly available, they can receive the
recognition and credit they deserve for their achievements, and benefit from specific
technical feedback and improvements like all open-source software projects.
The ROS ecosystem now consists of tens of thousands of users worldwide, working
in domains ranging from tabletop hobby projects to large industrial automation systems.
Despite its great success, ROS 1 did not find its way into automotive production programs for the following reasons: low code quality; a non-standard, non-automotive middleware; lack of real-time capabilities; no nodes with a managed lifecycle; no security; lack of testing and documentation; and no support for automotive ECUs.
Fig. 2. Autonomous Driving stack and the role of the Framework Software in it.
3 ROS 2
• Prescribed patterns for building and structuring systems: While ROS 2 has the
underlying flexibility that is the hallmark of ROS 1, ROS 2 provides patterns and
supporting tools for features such as life cycle management and static configura-
tions for deployment.
Now let's have a look at how ROS 2 fulfills the promises given in the introduction of
this article.
Writing a standardized software architecture that will satisfy everything from an em-
bedded microcontroller running on a drone or radar sensor to the centralized high-per-
formance ECU for the autonomous car is no easy task. Many different aspects have to
be considered:
1. hardware abstraction layer
2. OS abstraction layer
3. runtime layer
4. support for various programming languages
5. non-functional performance
6. security
7. safety
8. software updates
9. tools for the development, debugging, recording & replay, visualization, simulation
10. tools for the continuous integration and continuous deployment
11. interfaces to the legacy systems (such as e.g., AUTOSAR Classic)
12. execution management in the user applications
13. time synchronization
14. support for hardware acceleration
15. model-based development
In order to integrate these aspects into one coherent framework, the following design decisions were made for ROS 2:
1. Adopt positive lessons learned from the extensive deployment of ROS 1
2. Address the negative lessons learned from the extensive deployment of ROS 1
3. Use hourglass design in order to keep the core of ROS 2 as a single code base
4. Use industry-proven middleware and focus on the layers above the middleware
5. Newly develop mechanisms for security, safety, software updates, interfaces to the
legacy systems (such as e.g., AUTOSAR Classic), execution management in the
user applications, time synchronization, support for hardware acceleration, model-
based development
With these aspects in mind, the architecture shown in Fig. 3 was created for ROS 2. The largest change from ROS 1 to ROS 2 is that the underlying communication layer was implemented behind a plug-in-like interface for many types of middleware solutions, with tier-1 support for DDS implementations. On top of the middleware layer sits a middleware abstraction layer called the ROS middleware API, which is used to prevent DDS vendor lock-in. With this design, implementing the middleware layer for, e.g., SOME/IP is rather trivial. On top of this layer are the ROS 2 client libraries, written in C++ and Python, with which developers write application code. With this so-called hourglass design, the thin client libraries always wrap around the same code base for the operating system and communication.
Fig. 3. Architecture of ROS 2
One of the concepts that proved itself the most in ROS 1 was that of the node as a central logical unit. The node component brings together different publisher and subscriber patterns, threading, queues, and the execution model, all abstracted behind an intuitive set of APIs. An example of a ROS node is depicted in Fig. 4: with a node, we can subscribe to one or more data streams (called topics); the data is then aggregated and processed in a single-threaded or parallel loop, and the result is published on another topic. The node can also be parametrized and composed into one or more processes, and it comes with its own lifecycle for starting and stopping. Nodes also emit diagnostic information such as logging, heartbeat, and node state information.
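The node pattern described above can be illustrated with a conceptual sketch in plain Python (this is not the actual rclpy API; the `Bus` and `FusionNode` names are ours): a node subscribes to topics, aggregates the incoming data, and publishes the result on another topic.

```python
"""Conceptual sketch of the node pattern: topic-based publish/subscribe
with a node that aggregates two inputs and publishes a fused output."""
from collections import defaultdict

class Bus:
    # Minimal topic-based publish/subscribe transport.
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
    def publish(self, topic, msg):
        for cb in self.subscribers[topic]:
            cb(msg)

class FusionNode:
    # Subscribes to two input topics; once both have been seen,
    # the aggregated data is published on an output topic.
    def __init__(self, bus):
        self.bus = bus
        self.latest = {}
        bus.subscribe("camera", lambda m: self.on_input("camera", m))
        bus.subscribe("lidar", lambda m: self.on_input("lidar", m))
    def on_input(self, topic, msg):
        self.latest[topic] = msg
        if len(self.latest) == 2:  # both inputs present: process and publish
            self.bus.publish("fused", dict(self.latest))

bus = Bus()
node = FusionNode(bus)
fused = []
bus.subscribe("fused", fused.append)
bus.publish("camera", {"obj": 1})
bus.publish("lidar", {"dist": 4.2})
```

In real ROS 2 code, `Bus` corresponds to the middleware layer and `FusionNode` to an `rclpy`/`rclcpp` node; the point here is only the shape of the abstraction.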
One of the concepts missing in ROS 1 was the executor, which was added in ROS 2. The executor provides spin functions and coordinates the nodes and callback groups by looking for available work and completing it, based on the threading or concurrency scheme provided by the subclass implementation. An example of available work is executing a subscription callback or a timer callback. The executor structure allows for a decoupling of the communication graph and the execution model, a feature much needed when building real-time and deterministic systems.
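The "look for available work and complete it" behavior can be sketched in a few lines of plain Python (again an illustration of the concept, not the ROS 2 `Executor` API):

```python
"""Illustrative executor: callbacks become queued work items, and the
spin loop completes available work independently of how or where the
callbacks were registered, decoupling graph from execution."""
from queue import Queue, Empty

class Executor:
    def __init__(self):
        self.work = Queue()
    def schedule(self, callback, data):
        # e.g. an arriving subscription message or an expired timer
        self.work.put((callback, data))
    def spin_once(self):
        # Look for one piece of available work and complete it.
        try:
            callback, data = self.work.get_nowait()
        except Empty:
            return False
        callback(data)
        return True
    def spin(self):
        while self.spin_once():
            pass

log = []
ex = Executor()
ex.schedule(log.append, "subscription callback")
ex.schedule(log.append, "timer callback")
ex.spin()
```

A real-time variant would replace the spin loop's scheduling policy (single-threaded here) without touching the callbacks themselves, which is exactly the decoupling the text describes.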
ROS 1 provided standardized messages [2] for diagnostics, geometric primitives, robot navigation, and common sensors such as laser range finders, cameras, and point clouds. This allows for an agreed-upon contract between different software and hardware modules, which in turn allows for great flexibility and reusability of developed software components. These messages were also adopted in ROS 2 but implemented in the OMG IDL language, which enables features such as message versioning and message forward and backward compatibility.
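Why versioned message definitions give forward and backward compatibility can be sketched with plain Python dataclasses rather than OMG IDL (the `LaserScanV1`/`LaserScanV2` names are illustrative, not the actual ROS message definitions): fields added in a later version carry defaults, so data from an older sender remains decodable.

```python
"""Sketch of message versioning: a v2 reader decodes v1 wire data
because the field added in v2 has a default value."""
from dataclasses import dataclass, field

@dataclass
class LaserScanV1:
    ranges: list = field(default_factory=list)

@dataclass
class LaserScanV2:
    ranges: list = field(default_factory=list)
    intensities: list = field(default_factory=list)  # added in v2, defaulted

old_wire_data = {"ranges": [1.0, 2.0]}   # produced by a v1 sender
msg = LaserScanV2(**old_wire_data)       # a v2 reader still decodes it
```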
etc.), and the volumes provide additional development tools (e.g., IDEs, large third-
party libraries) or released software versions. Furthermore, ADE enables easy switch-
ing between versions of the images and volumes.
Arguably the most significant effort required for the production deployment of autonomous vehicles (AVs) is the proof that the system is safe for homologation. To provide such proof, a holistic, software-first approach to testing for autonomous vehicles must be available. In other words, we need a framework that enables writing the relevant tests and supports extracting the proofs. To develop such a framework, it is important to look at the automotive V-model and to understand what kinds of tests one is expected to write: unit tests, component tests, integration tests, sub-system tests, system tests, non-functional tests, and acceptance tests. Furthermore, it is important to understand how input data is generated for these tests (manually created, manually generated, simulation generated, vehicle recordings) and in what kinds of environments the tests are executed: Software-in-the-Loop (SIL), Hardware-in-the-Loop (HIL), and Vehicle-in-the-Loop (VIL). Lastly, it is important to understand what kind of key performance indicators are to be extracted from the tests.
There are plenty of tools available for unit and component testing (e.g. [5]), but in our extensive research we could not find a framework that would satisfy the other types of tests, cover all other input data and environments, and clearly separate the following phases: test configuration, test setup, test running, and results logging and extraction. We thus wrote an open-source tool called launch_testing [7], which is a framework for the integration, system and sub-system, non-functional, as well as acceptance testing of applications written with ROS 2 and Apex.OS. The framework provides a Python API and offers the following features:
• The exit codes of all processes are available for the tests.
• Tests can check that all processes shut down normally, or with specific exit codes.
• Tests can fail when a process dies unexpectedly.
• The stdout and stderr of all processes are available to the tests.
• The command lines used to launch the processes are available to the tests.
• Some tests run concurrently with the launch and can interact with the running
processes.
• The results of the test can be logged and processed post test shutdown.
• The data can come into the test from a live source or pre-recorded or simulated
source.
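The kind of process-level checks listed above can be illustrated with a standard-library sketch (launch_testing itself provides a richer API around these ideas): launch a process, then assert on its exit code and captured output.

```python
"""Sketch of process-level test checks: run a child process, then make
its exit code and stdout available to assertions."""
import subprocess
import sys

# Launch a stand-in "node" process (here: a trivial Python one-liner).
proc = subprocess.run(
    [sys.executable, "-c", "print('node running'); raise SystemExit(0)"],
    capture_output=True, text=True,
)

exit_ok = proc.returncode == 0             # process shut down normally
stdout_ok = "node running" in proc.stdout  # stdout available to the test
```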
• Each update of a signal with Rte_Write() can either trigger an immediate update of the topic, or the topic is updated after the runnable execution (corresponding to the Logical Execution Time (LET) approach).
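The two topic-update modes can be sketched as follows; the `SignalMapper` class and its method names are our illustration of the mapping described above, not code from a concrete AUTOSAR integration:

```python
"""Sketch of Rte_Write()-to-topic mapping: immediate mode publishes on
every write; LET mode buffers writes and publishes only when the
runnable finishes, so only the last value per signal goes out."""

class SignalMapper:
    def __init__(self, publish, let=False):
        self.publish = publish  # function sending one topic update
        self.let = let          # Logical Execution Time mode
        self.pending = {}
    def rte_write(self, signal, value):
        if self.let:
            self.pending[signal] = value   # defer until runnable ends
        else:
            self.publish(signal, value)    # update the topic immediately
    def runnable_done(self):
        for signal, value in self.pending.items():
            self.publish(signal, value)
        self.pending.clear()

let_updates = []
m = SignalMapper(lambda s, v: let_updates.append((s, v)), let=True)
m.rte_write("speed", 10)
m.rte_write("speed", 12)   # overwrites: only the last value is published
m.runnable_done()

immediate_updates = []
m2 = SignalMapper(lambda s, v: immediate_updates.append((s, v)), let=False)
m2.rte_write("speed", 10)  # published at once
```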
For SOME/IP communication, we rely on the derived data mapping and serialize/deserialize the data according to the SOME/IP and Apex.OS specifications. For CAN communication, we implemented an extension that abstracts away the underlying CAN device driver (e.g., SocketCAN) and the CAN message definitions. For the communication of telematics data with the backend in the cloud, we needed to consider safety-critical and non-safety data exchange, the interfaces to state-of-the-art backend brokers (e.g., MQTT), redundant communication paths, secure data transport, and more frequent software updates of the gateway components written with Apex.OS. Apex.OS enables such a solution because of its clear separation between the data bus and the APIs accessing the data on the bus. Update mechanisms via tools such as AWS Greengrass, Mender, etc. are possible because of the well-defined Apex.OS APIs and the well-defined Apex.OS node lifecycle.
The next step was to determine the boundary of Apex.OS Cert. We decided to restrict the boundary to just the source code in the first iteration of certification and to add libraries to the scope in future releases of Apex.OS Cert. At this point, we wrote a rudimentary safety case for Apex.OS Cert. The Apex.OS Cert safety case is an internal document that lays out our safety objectives and the artifacts we will generate to put forward a convincing safety argument. In the first audit with TÜV NORD, we received tacit approval for both the aforementioned TSC and the safety case.
Fig. 7. Apex.OS certification safety case
Fig. 7 shows the Apex.OS Cert safety case. While defining the safety case, we also
inventoried all the development tools that we need to run TCL (Tool Confidence Level)
reports (ISO 26262-8:2018 [6]) and planned our processes in FSLC (functional safety
lifecycle) such as our change management, safety culture, code review enforcements
and so on. These plans were also captured in functional safety management (FSM) ar-
tifacts as dictated in ISO 26262-2:2018.
Fig. 8. Improvements in Apex.OS to achieve real-time performance
Checking for runtime memory allocations: To our knowledge, no applicable tools for checking runtime memory allocations and blocking calls are available. We repurposed LTTng [8], the Linux tracing toolkit next generation, to accomplish this task. Automation was difficult, and many things, such as instrumenting code, had to be done manually, which is suboptimal and undesirable for obvious reasons.
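As a loose, Python-level analogue of the idea (the authors instrumented their C++ code with LTTng; `allocates` is our own helper, not part of any described tooling), the standard-library tracemalloc module can flag whether a code path performs allocations:

```python
"""Sketch of an allocation check: trace memory around a function call
and report whether it left new allocations behind."""
import tracemalloc

def allocates(fn) -> bool:
    # True if fn leaves new traced allocations behind after returning.
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    fn()
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after > before

store = []
def allocating():
    store.append(bytearray(1_000_000))  # clearly allocates ~1 MB
```

A real-time code path under test would be expected to fail such a check if it allocated after initialization.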
We have been able to prove that it is possible to certify an open-source project to the highest level of integrity with the right expertise and strategic choices of scope and tooling. Apex.OS Cert provides manufacturers and suppliers with a fast and scalable process to build a safe automotive software stack. Our customers have shown applicability to a wide range of automotive applications, not limited to autonomous driving. We have been able to accomplish development and certification in a very short amount of time based on our prior expertise in ROS and through the reuse of a proven architecture and tooling. For future work, Apex.AI will not only expand the technical safety concept of Apex.OS but also certify libraries, including the important transport-layer (middleware) libraries. Apex.OS is also in the process of being integrated with AUTOSAR Adaptive and SOME/IP.
References
1. ADE environment, https://ade-cli.readthedocs.io/en/latest/, last accessed 2021/05/08.
2. Commonly used messages in ROS, https://github.com/ros/common_msgs, last accessed
2021/05/08.
3. Eclipse iceoryx, https://github.com/eclipse-iceoryx/iceoryx, last accessed 2021/05/08.
4. GoogleTest Mocking (gMock) Framework, https://github.com/google/googletest/tree/master/googlemock, last accessed 2021/05/08.
5. GoogleTest Framework, https://github.com/google/googletest, last accessed 2021/05/08.
6. ISO 26262 Road vehicles – Functional safety,
https://www.iso.org/obp/ui/#iso:std:iso:26262:-9:ed-2:v1:en, last accessed 2021/05/08
(2018).
7. launch-testing, https://index.ros.org/p/launch_testing/, last accessed 2021/05/08.
8. LTTng tracing framework for Linux, https://lttng.org/, last accessed 2021/05/08.
9. McKinsey & Company, Mapping the automotive software and electronics landscape through 2030, https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/mapping-the-automotive-software-and-electronics-landscape-through-2030, last accessed 2021/05/08 (2019).
10. Pemmaiah, A., Pangercic, D., Aggarwal, D., Neumann, K., Marcey, K., Performance Test-
ing in ROS 2, https://www.apex.ai/post/performance-testing-in-ros-2, last accessed
2021/05/08 (2020)
11. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.,
ROS: an open-source Robot Operating System. In: ICRA Workshop on Open Source Software (2009).
12. ROS Community Metrics Report, http://download.ros.org/downloads/metrics/metrics-report-2020-07.pdf, last accessed 2021/05/08 (2020).
13. ROS core stacks, https://github.com/ros, last accessed 2021/05/08.
14. ROS tools, http://wiki.ros.org/Tools, last accessed 2021/05/08.
15. Vanthienen, D., Klotzbücher, M., Bruyninckx H. The 5C-based architectural Composition
Pattern: lessons learned from redeveloping the iTaSC framework for constraint-based robot
programming. In: 9th Journal of Software Engineering for Robotics, pp. 17–35.
www.joser.org (2014).
Theoretical Substitution Model for Teleoperation
From the evolution of on-road transportation, the in-vehicle human driver's role seems very clear: the human driver uses steering controls (e.g., steering wheel and pedals) to execute the dynamic driving task. In the terms of SAE International Standard J3016 [1], this refers to the role of the 'conventional driver' ([1], p. 16). Driving automation functions have also been discussed thoroughly, and classification schemes have been established and developed to account for different kinds of driving automation functions [1, 2, 3, 4]. Additionally, interactions between these two modes of control (human, i.e. 'manual driving', and automation, i.e. 'automated driving') have been addressed in several publications [e.g. 5].
It is the role of teleoperation that has been considered only recently in the context of automated and manual driving and has not yet received much attention. Unsurprisingly, the focus has rather been on very practical issues, such as latencies influencing
teleoperators' quality of vehicle motion control [6]. A literature search in the databases IEEE Xplore, Web of Science, and Scopus using the terms 'teleoperation' and 'automated driving' yielded only 33 papers in total, published between 1997 and 2021 (descriptive; publication year was not a search criterion).
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_6
Despite the search term 'automated driving', many of the resulting papers did not deal with automated driving, but with medical surgery, other robots (e.g. welding, service, tentacles), general sensor development, applications in military contexts, educational distance learning, or drones. Of those that dealt with on-road driving, the majority addressed either questions related to latency [e.g. 7, 8, 9] or questions related to human-machine interfaces for human teleoperators [e.g. 10, 11].
The two lines of research on teleoperation in the context of automated driving show
that teleoperation as a concept is indeed associated with automated driving. At the same
time, teleoperation seems to be treated as a stand-alone feature not yet substantially
embedded into automated driving.
2 Objective
The present framework shall serve as a link between teleoperation and automated and manual driving. Not only the general research topics, but also the parallel lines of research outlined above could be integrated more strongly if we had a common language and common ground on the concept of teleoperation per se. This may also facilitate an overview of which teleoperation concepts are being discussed and researched, intentionally disregarded, or unintentionally neglected.
Furthermore, with teleoperation entering as another vehicle guidance option, the possibilities of how a vehicle's motion can be controlled increase. To be able to manage this diversity in vehicle guidance possibilities and to talk about the multiple possibilities technology offers, we need a straightforward conceptual basis that provides an overview of how each entity contributes to vehicle guidance.
The Substitution Model for Teleoperation aims at providing such a common ground
and common language for research and discussions on teleoperation in automated driv-
ing.
options. The framework pursues theoretical comprehensiveness very strictly (which will be discussed in section 6). Therefore, we strongly highlight that parts of the presented theoretical framework are not practically relevant from today's perspective, and their mention in this article shall not be misunderstood as a recommendation or claim to translate the theoretical options into practice. The distinction between the presented theoretical framework and practical implementations needs to be kept in mind, especially as this article goes far beyond what is currently being discussed for practical implementation.
Teleoperation (Greek tele, 'remote'; Latin operatio, 'operation') in general describes the exertion of motion control from outside the moved entity by a third party [12]. Teleoperation is applied in multiple fields, such as underwater [13], medical [14], space [15], and road traffic [6]. In the context of automated driving of on-road vehicles (i.e. SAE Level 3 or higher), teleoperation is frequently discussed as a means to overcome current limitations of driving automation functions, e.g. as a fallback regarding safety issues.
For example, in the UrbANT project funded by the German Federal Ministry of Education and Research, a delivery robot is being developed. This robot is designed to transport goods home, especially purchases and groceries, but also to serve as a mobile parcel station. With a payload of up to 90 kg, the delivery robot can operate for about four hours. Regardless of the robot's transport structure, which can vary in height, the technical implementation of the basic platform (approx. 900 × 600 mm) is always identical: by means of camera and lidar sensors, the vehicle should be able to move autonomously in public road traffic on pavements at a maximum speed of 6 km/h. Furthermore, the delivery robot will be able to follow a user automatically. For both functionalities, the traffic environment will be precisely mapped. Normally, the driving automation will safely operate the robot. However, within the automation's domain an unpredictable situation (e.g. an insurmountable obstacle on the route) may occur which cannot be solved by the automation. This may also include an unexpected complete failure of the driving automation function. Replacing the driving automation by teleoperation may solve such technologically challenging situations more efficiently than seeking the help of a person present on site. This prevents both the delivery robot's drive from being interrupted and the robot itself from becoming a massive obstacle to traffic. In the UrbANT project, consequently, a human teleoperator should be able to resolve a situation that is unsolvable for the driving automation function and to continue the drive, but also to achieve a minimal risk condition in the traffic area if necessary. With reference to section 4.1, the teleoperator in the UrbANT project thus performs the task of remote driving in this case.
Motivated by research conducted in UrbANT, we propose a model that provides a
context for teleoperated and automated driving by acknowledging three different modes
by which vehicle motion control may be executed:
1. in-vehicle human driver,
2. driving automation function,
3. teleoperating entity.
Fig. 1. Substitution Model for Teleoperation. (1) human, (2) automation, (3) teleoperating entity.
In the context of automated driving, teleoperation can substitute the in-vehicle human driver or
the driving automation function.
In case of remote assistance, the vehicle motion control is performed by either the
conventional driver or the driving automation function, and the teleoperator gives in-
formation or advice to the respective primary controller. For example, if a delivery ro-
bot’s driving automation function detects a large dark spot on the ground which cannot
be classified by the driving automation function, a teleoperator might be requested. The
teleoperator then may provide the information to the delivery robot’s driving automa-
tion function that the spot is a shadow. With this information, the driving automation
can continue the ride without further assistance by the teleoperator.
In contrast to remote driving, in remote assistance the primary controller (i.e. driving automation function or conventional driver) remains capable of performing the vehicle motion control, but requires further information or advice from the teleoperator in order to solve the current situation in terms of vehicle motion control.
The differentiation between remote assistance and remote driving refers to the task,
not to the teleoperating entity. For example, it is possible that one teleoperating person
performs both tasks.
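The task/entity distinction drawn above can be formalized in a short sketch (the naming is ours, introduced only for illustration, and is not part of the model itself): the teleoperation task determines who performs vehicle motion control, independently of which primary controller is involved.

```python
"""Sketch of the task/entity distinction: remote driving substitutes
the primary controller, remote assistance only advises it."""
from enum import Enum, auto

class Controller(Enum):
    IN_VEHICLE_HUMAN = auto()
    DRIVING_AUTOMATION = auto()
    TELEOPERATOR = auto()

class TeleopTask(Enum):
    REMOTE_DRIVING = auto()     # teleoperator performs motion control
    REMOTE_ASSISTANCE = auto()  # primary controller keeps motion control

def motion_controller(primary: Controller, task: TeleopTask) -> Controller:
    # Remote driving substitutes the primary controller;
    # remote assistance leaves motion control with it.
    if task is TeleopTask.REMOTE_DRIVING:
        return Controller.TELEOPERATOR
    return primary
```

The same function applies whether the primary controller is the in-vehicle human driver or the driving automation function, mirroring the point that the differentiation refers to the task, not to the teleoperating entity.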
a remote driver instead of a primary controller inside the vehicle (driving automation function or conventional driver). From the substitution model's perspective, it is irrelevant which entity (the driving automation function or the conventional driver) has been substituted. Table 1 summarizes the transfer of SAE Standard J3016 to teleoperation.
Abstract hazard
Concrete hazard
αII. Automation: The automation supports the driver by reinforcing the driver's initiated maneuver. Teleoperation: The teleoperating entity supports the primary controller (in-vehicle human driver or automation) by reinforcing the primary controller's initiated maneuver.
βII. Automation: The automation performs the driving task for a short period of time to resolve the concrete hazard. The driver needs to perform the driving task subsequently. Teleoperation: The teleoperating entity performs the driving task for a short period of time to resolve the concrete hazard. The primary controller (in-vehicle human driver or automation) needs to perform the driving task subsequently.
γII. Automation: The automation performs the driving task for a longer period of time to resolve the concrete hazard. Level γI is initiated in case the driver does not perform the driving task subsequently. Teleoperation: The teleoperating entity performs the driving task for a longer period of time to resolve the concrete hazard. Level γI is initiated in case the primary controller (in-vehicle human driver or automation) does not perform the driving task subsequently.
Note. Definitions are based on Principle of Operation C [4].
At this point, we highlight and refer to the objective of our model: especially with regard to the application of teleoperation in the case of a concrete hazard, we do not suggest implementing this in road traffic from today's perspective.
6 Discussion
The Substitution Model for Teleoperation makes it possible to describe the interplay of teleoperation, automation, and the in-vehicle human driver in the context of vehicle guidance [18]. So far, teleoperation has rather been portrayed and regarded as a stand-alone feature that researchers agree is related to and needed for emerging automated driving [7, 8, 9, 10, 11]. Yet, the conceptual incorporation of teleoperation into the context of driving automation and manual driving has been lacking.
Our Substitution Model for Teleoperation may serve as a common conceptual basis to clearly describe which entity is in control of vehicle guidance and to what extent. Even though we have emphasized theoretical comprehensiveness, we acknowledge that any model can only be an abstract simplification of the real world. As the core differentiation between the three modes of operation (direct manual interaction by a human, automation, teleoperating entity) is very general, it might be applicable to other teleoperation use cases as well. Yet, since we are not experts in those other application fields, we do not presume to evaluate sensible applications.
theoretical possibilities and practical application in road traffic. We included all use cases to make it possible to talk about what is suitable and appropriate and what is not. This allows for informed decisions. On the one hand, we can deliberately decide to exclude theoretically possible applications, rather than being unaware of or ignorant toward possibilities. On the other hand, we can decide which possibilities might be suitable for which use cases in particular. We favored theoretical comprehensiveness over practically realistic implementations, because implementation in practice is always limited to the technology available at the time, while the conceivable use cases may reach beyond these limits. To remain technology-open and non-restrictive, we therefore decided to be strict in revealing the theoretical opportunities and to leave the practical implementation open for today's and future technology and emerging use cases.
To be clear, we do not claim that all combinations of human manual control, driving automation, and teleoperation described in this article shall be realized, nor that all of these combinations are equally significant for practical implementation. We recognize that not all of the revealed use cases are sensible. However, as they are conceivable, we aimed to make them discussable and to provide the language needed both to include and to exclude conceivable teleoperation applications in practice.
Furthermore, we are aware that by applying the existing concepts we do not only transfer their strength of approximating comprehensiveness; we also transfer their blind spots. For example, cooperative vehicle guidance is a focus of neither SAE Standard J3016 [1] nor the Principles of Operation Framework [4]. Yet, we argue that having a primary controller who is responsible for the driving task is a convincing practical criterion, and it seemed most promising to follow the schemes that are established and have gained acceptance in the context of automated and manual driving.
References
1. SAE (2018): SAE International Standard J3016. (R) Taxonomy and Definitions for Terms
Related to Driving Automation Systems for On-Road Motor Vehicles.
2. Gasser, T. M. (2013): Legal consequences of an increase in vehicle automation. Consolidated final report of the project group; Report on research project F 1100.5409013.01. Bremerhaven: Wirtschaftsverl. NW Verl. für neue Wiss (Berichte der Bundesanstalt für Strassenwesen F, Fahrzeugtechnik, 83). Available online at https://bast.opus.hbz-nrw.de/opus45-bast/frontdoor/deliver/index/docId/689/file/Legal_consequences_of_an_increase_in_vehicle_automation.pdf, checked on 8/6/2019.
3. Gasser, T. M., Frey, A., Seeck, A., Auerswald, R.: Comprehensive definitions for automated
driving and ADAS. (2017)
4. Shi, E., Gasser, T. M., Seeck, A., Auerswald, R.: The Principles of Operation Framework:
A Comprehensive Classification Concept for Automated Driving Functions. SAE Intl. J
CAV 3(1), 27-37 (2020). DOI: 10.4271/12-03-01-0003.
5. Jarosch, O., Gold, C., Naujoks, F., Wandtner, B., Marberger, C., Weidl, G., & Schrauf, M.:
The Impact of Non-Driving Related Tasks on Take-over Performance in Conditionally Au-
tomated Driving - A Review of the Empirical Evidence. In TUM Lehrstuhl für Fahrzeug-
technik mit TÜV SÜD Akademie (Chair), 9. Tagung Automatisiertes Fahren. Symposium
conducted at the meeting of Lehrstuhl für Fahrzeugtechnik mit TÜV SÜD Akademie,
(2019). Retrieved from https://mediatum.ub.tum.de/doc/1535156/1535156.pdf
6. Neumeier, S., Wintersberger, P., Frison, A.-K., Becher, A., Facchi, C., Riener, A.: Teleope-
ration. In: AutomotiveUI '19: Proceedings of the 11th International Conference on Automo-
tive User Interfaces and Interactive Vehicular Applications. pp. 186-197. ACM, New York,
NY, USA (2019).
7. Zhang, T.: Toward Automated Vehicle Teleoperation: Vision, Opportunities, and Chal-
lenges. In IEEE Internet Things J. 7 (12), pp. 11347–11354 (2020). DOI:
10.1109/JIOT.2020.3028766
8. Feiler, J., Hoffmann, S., Diermeyer, F.: Concept of a Control Center for an Automated Ve-
hicle Fleet. In : 2020 IEEE 23rd International Conference on Intelligent Transportation Sys-
tems (ITSC), pp. 1-6, IEEE 23rd International Conference on Intelligent Transportation Sys-
tems (ITSC), Rhodes, Greece (2020).
9. Georg, J.-M., Feiler, J., Diermeyer, F., Lienkamp, M.: Teleoperated Driving, a Key Tech-
nology for Automated Driving? Comparison of Actual Test Drives with a Head Mounted
Display and Conventional Monitors*. In: 21st International Conference on Intelligent Trans-
portation Systems (ITSC), pp.3403-3408, 21st International Conference on Intelligent
Transportation Systems (ITSC), Maui, HI, (2018).
10. Kettwich, C., Dreßler, A.: Requirements of Future Control Centers in Public Transport. In:
12th International Conference on Automotive User Interfaces and Interactive Vehicular Ap-
plications. AutomotiveUI '20: 12th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications. Virtual Event DC USA, New York, NY, USA:
ACM, pp. 69-73 (2020).
11. Graf, G., Hussmann, H.: User Requirements for Remote Teleoperation-based Interfaces. In:
12th International Conference on Automotive User Interfaces and Interactive Vehicular Ap-
plications. AutomotiveUI '20: 12th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications. Virtual Event DC USA, New York, NY, USA:
ACM, pp. 85-88 (2020).
12. Hokayem, P. F., Spong, M. W.: Bilateral teleoperation: An historical survey. Automatica 42
(12), 2035-2057 (2006). DOI: 10.1016/j.automatica.2006.06.027.
13. Muscolo, G. G., Marcheschi, S., Fontana, M., Bergamasco, M.: Dynamics Modeling of Human-Machine Control Interface for Underwater Teleoperation. Robotica 39 (4), 618-632 (2021). DOI: 10.1017/S0263574720000624.
14. Mehrdad, S., Liu, F., Pham, M.T., Lelevé, A., Atashzar, S.F.: Review of Advanced Medical
Telerobots. Appl. Sci. 2021, 11, 209. https://doi.org/10.3390/app11010209
15. Cheng, R., Liu, Z., Ma, Z., Huang, P.: Approach and maneuver for failed spacecraft de-
tumbling via space teleoperation robot system. Acta Astronautica 181, 384-395 (2021). DOI:
10.1016/j.actaastro.2021.01.036.
16. Warm, J. S., Dember, W. N., & Hancock, P. A.: Vigilance and workload in automated sys-
tems. Automation and Human Performance: Theory and Applications, pp. 183-200, (1996).
17. Warm, J. S., Matthews, G., & Finomore Jr, V. S. (2018). Vigilance, workload, and stress. In
Performance Under Stress, pp. 131-158. CRC Press.
18. Donges, E.: Driver Behavior Models. In: Winner, H., Hakuli, S., Lotz, F., Singer, C. (eds.) Handbook of Driver Assistance Systems: Basic Information, Components and Systems for Active Safety and Comfort, pp. 19-33. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-12352-3_2.
Physics-Based, Real-Time MIMO Radar Simulation for
Autonomous Driving
1 Introduction
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_7
Automotive radars emit hundreds of ramped-frequency chirps per sensing frame, while processing chirp responses from the environment on many receiver channels. Each received chirp comprises hundreds of digitized frequency samples. When considering the need to a) shoot millions of rays to sufficiently interrogate the 3D environment over b) hundreds of chirps at c) hundreds of frequencies, filtered by d) tens of receiving antennas, general-purpose electromagnetic scattering solvers may require hours to simulate each coherent processing interval (CPI) (i.e., range-Doppler image frame) on a modern workstation. When real-time throughput is paramount, less rigorous simulation approaches that sacrifice physics are often adopted as a practical compromise.
In this paper, we present an all-GPU implementation of the shooting and bouncing rays (SBR) method that is optimized for the automotive radar application. Known as Ansys' Real-Time Radar (RTR), it achieves roughly a factor of 3000 speed-up over HFSS SBR+ [2], its more general-purpose all-CPU antecedent. RTR simulates a 1-km urban traffic scene with hundreds of objects at over 30 fps for five single-channel radars or one 20-channel radar. A single-channel radar is simulated at the faster-than-real-time rate of 160+ fps, and a 48-channel radar runs at 13 fps.
RTR incorporates arbitrary 3D scene and actor geometry, layered dielectric material treatments, 3D polarized antenna patterns, and pulse-Doppler and chirp-sequence FMCW waveform processing to generate raw I and I+Q A/D sample outputs for multi-channel radar in dynamically changing driving scenarios. Objects (e.g., vehicles, pedestrians, road, infrastructure, etc.) can be assigned arbitrary positions, orientations, and linear and angular velocities in a scene graph hierarchy through a light-weight API to characterize complex traffic scenarios with negligible simulation overhead. Optional post-processing for range-Doppler image formation is also part of the GPU implementation.
In Section 2, we present a brief review of the SBR methodology and the multi-chirp algorithmic improvement known as Accelerated Doppler Processing (ADP). Section 3 presents key points of the GPU implementation using NVIDIA CUDA [3], followed by example results and performance (simulation speed) data in Section 4. Finally, Section 5 provides an overview of how to integrate RTR with a 3rd-party driving simulator using the Open Simulation Interface (OSI).
2 Methodology
SBR uses geometrical optics (GO) to efficiently extend physical optics (PO) to multiple bounces. Millions of ray tubes are launched in all directions from the radar's Tx antenna to sample the scene geometry. Escaping rays are ignored. Each ray hitting an object carries a GO field initialized at the first-bounce hit point according to the gain pattern of the Tx antenna, the radar's transmission power, and the decay distance to the hit point. The total GO field at the hit point is determined from this incident ray field, the incidence angle off the surface normal, and the Fresnel reflection coefficient of the surface coating treatment. The latter can be efficiently pre-computed based on the bulk electrical properties and thicknesses of the layers and the selected backing material (vacuum, perfect metal, or half-space dielectric). The PO approximation is then used to convert the total GO field into equivalent electric and magnetic currents. These currents are then radiated to each Rx antenna with the free-space Green's function and spatially filtered according to its gain pattern to yield a received signal as a complex phasor.
Second-bounce rays are launched from the first-bounce hit points, weighted according to the incident GO field and the same Fresnel reflection coefficient. If one of these rays hits an object in the scene, the same process is repeated, leading to second-bounce currents, third-bounce rays, and so on. Multi-bounce signals coherently accumulate at the receivers, each with a phase delay based on the total path length from the Tx antenna through each bounce and finally to the Rx antenna.

SBR also handles penetrable coatings by setting a vacuum backing under the dielectric layers and developing a corresponding set of transmission coefficients. For example, this can be used to represent vehicle windshields. Rays hitting such surfaces will continue to generate reflection rays but will now also generate transmission rays, weighted this time by the transmission coefficients. Transmission rays are otherwise handled in the same manner as reflection rays.
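The coherent multi-bounce accumulation described above can be sketched as follows. This is a deliberately simplified scalar model (unit antenna gains, real-valued reflection coefficients, 1/R spreading loss), not RTR's actual polarimetric implementation:

```python
import cmath
import math

C = 3.0e8  # speed of light in m/s

def path_phasor(segment_lengths_m, reflection_coeffs, freq_hz):
    # The total Tx -> bounce(s) -> Rx path length sets the phase delay;
    # each bounce multiplies the amplitude by its reflection coefficient.
    total = sum(segment_lengths_m)
    k = 2 * math.pi * freq_hz / C        # wavenumber
    gain = 1.0 / total                   # simplified spreading loss
    for r in reflection_coeffs:
        gain *= r
    return gain * cmath.exp(-1j * k * total)

def received_signal(paths, freq_hz):
    # Multi-bounce contributions accumulate coherently at the receiver.
    return sum(path_phasor(segs, refl, freq_hz) for segs, refl in paths)

# A one-bounce path plus a weaker two-bounce path at 76.5 GHz (values made up).
paths = [
    ([30.0, 30.0], [0.7]),              # Tx -> target -> Rx
    ([30.0, 5.0, 31.0], [0.7, 0.5]),    # Tx -> target -> ground -> Rx
]
s = received_signal(paths, 76.5e9)
```

In the real solver, the per-bounce weight is the polarimetric Fresnel coefficient and the per-path amplitude follows from the PO currents; the structure of the coherent sum is the same.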
Table 1. Mapping of RTR simulation steps to CPU work, data I/O, and GPU libraries (OptiX, CUDA, cuFFT); an x marks where each step executes, and indentation shows loop nesting.

| Step                                      | CPU | Data I/O | OptiX | CUDA | cuFFT |
|-------------------------------------------|-----|----------|-------|------|-------|
| Solver Initialization                     |  x  | To GPU   |   x   |  x   |   x   |
| Loop over Time Steps                      |  x  |          |       |      |       |
|   Update Object Positions                 |  x  | To GPU   |   x   |  x   |       |
|   Loop over Radars                        |     |          |       |      |       |
|     Multi-Bounce Loop                     |     |          |       |      |       |
|       Shoot Rays from Tx                  |     |          |   x   |  x   |       |
|       Process Hits & Create Next Rays     |     |          |   x   |      |       |
|     Compute Incident Field (1st Bounce)   |     |          |       |  x   |       |
|     Compute Reflected & Transmitted Fields|     |          |       |  x   |       |
|     Radiate Currents to Rx                |     |          |       |  x   |       |
|     Compute Range-Doppler                 |     |          |       |  x   |   x   |
|   Transfer Results to CPU                 |  x  | To CPU   |       |  x   |       |
In the following example, the vehicle carrying the radar, known as "Ego," travels a 1-km road with 14 buildings, 70 vehicles, and over 375 streetlights and traffic signals, as shown in Fig. 1. Roads include important radar-reflecting features like curbs and medians. The entire scene is modeled with 5.3 million triangular facets representing 221,000 square meters of total surface area, or over 14.5 billion square wavelengths at 76.5 GHz.

Fig. 1. (top) Aerial view of benchmark scene with radar shown as a rainbow Tx antenna pattern. (bottom) Radar point-of-view perspective.
Table 2. Simulation timing for single- and multi-channel radar configurations.

| Radar Type     | Radars | Tx | Rx | Output                | Time (ms) | Rate (fps) |
|----------------|--------|----|----|-----------------------|-----------|------------|
| Single-Channel | 1      | 1  | 1  | 512x512 Range-Doppler | 6.2       | 161.6      |
| Single-Channel | 5      | 1  | 1  | 512x512 Range-Doppler | 29.4      | 34.0       |
| Multi-Channel  | 1      | 1  | 20 | Chirp Response        | 33.4      | 30.0       |
| Multi-Channel  | 1      | 1  | 48 | Chirp Response        | 76.7      | 13.0       |
RTR simulates this scenario in real time for one 20-channel radar or five single-channel radars mounted around the Ego vehicle (see arrangement in Fig. 3). The I+Q channel chirp-sequence FMCW waveform has the following properties: center frequency = 76.5 GHz, bandwidth = 200 MHz, A/D sampling rate = 15 MHz, 267 A/D samples/chirp, CPI duration = 4 ms, and 200 chirps/CPI. This yields resolutions of 0.75 m in range and 0.49 m/s in velocity before application of a Hann window for sidelobe control. Additional configurations are shown in Table 2. RTR outputs either raw chirp responses or range-Doppler images, as shown in Fig. 2. Timing results are measured for an NVIDIA Quadro RTX 6000 workstation GPU. Times are averaged over 3000 frames from five separate runs of a 20-second, 600-time-step scenario. On average, for this complex city scene and the quoted simulation times, RTR shoots 1.4 million rays and processes over 880,000 hits per frame.

Fig. 2. (left) Raw chirp responses before image formation and (right) 512x512 pixel range-Doppler image.
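The quoted resolutions follow from the standard FMCW relations dR = c / (2B) and dv = lambda / (2 T_CPI); a quick check against the waveform parameters above:

```python
# Chirp-sequence FMCW resolution check for the quoted waveform parameters.
c = 3.0e8            # speed of light, m/s
f0 = 76.5e9          # center frequency, Hz
bandwidth = 200e6    # chirp bandwidth, Hz
t_cpi = 4e-3         # CPI duration, s

range_res = c / (2 * bandwidth)           # range resolution, m
wavelength = c / f0
velocity_res = wavelength / (2 * t_cpi)   # velocity resolution, m/s
```

This reproduces the stated 0.75 m range resolution and 0.49 m/s velocity resolution.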
Fig. 3. Five radars mounted around Ego with scene views and range-Doppler images. Camera images do not show the full radar field of view.
Fig. 4. Example of a closed-loop simulation framework (RTR physics-based simulation, post-processing, custom use case)
Adapting RTR's open interface to OSI [9] makes its integration with 3rd-party driving simulators seamless. In the following section, we first introduce the general concept of OSI and then demonstrate RTR's integration with a 3rd-party driving simulator for closed-loop simulation.
Fig. 5. Scalable client-server architecture with a CARLA/OSI interpreter: the client handles actors' control and sets world conditions via ZMQ (SV: Sensor View, SD: Sensor Data)
As represented in Fig. 6, CARLA's ground-truth data are converted into OSI:SV format and directly communicated to RTR. In addition to position, orientation, and velocity, the sensor view message also contains a 3D model reference that indicates which 3D geometry represents a specific actor. RTR then fetches the corresponding 3D models and their associated dielectric material properties.
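The actor-to-geometry lookup described above can be sketched as follows; this is a schematic stand-in for the relevant SensorView content, with illustrative field names rather than the actual OSI protobuf schema:

```python
# Schematic stand-in for per-actor sensor view content (illustrative fields,
# not the real OSI protobuf message definition).
def make_actor_entry(actor_id, position, orientation, velocity, model_ref):
    return {
        "id": actor_id,
        "position": position,          # (x, y, z) in m
        "orientation": orientation,    # (roll, pitch, yaw) in rad
        "velocity": velocity,          # (vx, vy, vz) in m/s
        "model_reference": model_ref,  # names the 3D geometry for this actor
    }

def resolve_geometry(entry, model_library):
    # RTR-style step: fetch the 3D model (with its dielectric materials)
    # named by the actor's model reference.
    return model_library[entry["model_reference"]]

# Hypothetical model library entry and actor.
library = {"sedan_a": {"mesh": "sedan_a.obj", "materials": ["steel", "glass"]}}
entry = make_actor_entry(7, (12.0, -3.5, 0.0), (0.0, 0.0, 1.57),
                         (8.3, 0.0, 0.0), "sedan_a")
geom = resolve_geometry(entry, library)
```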
In Fig. 6, we can see how a vehicle is disassembled into its subcomponents. Each subcomponent is a separate 3D model. Dielectric material properties can be applied to an entire component or to portions of a component. Component models also allow us to articulate parts of the vehicle independently, by modifying position and orientation, to model sensor image features like the micro-Doppler effect of rotating wheels (in some cases also referred to as Doppler smear of rotating wheels).
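The wheel micro-Doppler mentioned above follows from the basic two-way Doppler relation f_d = 2 v_r / lambda: points on a rolling wheel's rim span radial velocities from roughly zero at the contact patch to about twice the vehicle speed at the top, smearing the wheel's Doppler response. A quick illustration (the vehicle speed is a made-up value):

```python
# Doppler smear of a rolling wheel, head-on geometry (illustrative numbers).
def doppler_hz(radial_velocity, wavelength):
    # Two-way Doppler shift for a monostatic radar.
    return 2 * radial_velocity / wavelength

c = 3.0e8
wavelength = c / 76.5e9          # about 3.9 mm at 76.5 GHz

v_vehicle = 20.0                 # m/s, approaching head-on
# Rolling without slipping: rim point speeds range from 0 at the contact
# patch up to 2 * v_vehicle at the top of the wheel.
f_body = doppler_hz(v_vehicle, wavelength)          # vehicle body return
f_wheel_top = doppler_hz(2 * v_vehicle, wavelength) # fastest rim point
spread_hz = f_wheel_top - 0.0    # Doppler extent contributed by the wheel
```

So the wheel return is smeared over a Doppler band twice as wide as the body's single Doppler line, which is the signature the articulated wheel models reproduce.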
In the use case presented in Fig. 6, the main input to the object detection algorithm is the range-Doppler data, which is directly provided by RTR at a frame rate of 40 fps. Several post-processing algorithms execute on the range-Doppler data to create the target hit points output by the object detection algorithm. The hit points, converted into OSI:SD format, are then communicated to CARLA, where the server renders them on the screen. Similarly, these hit points could later be used by planning and actuation algorithms to control the ego vehicle's behavior.
6 Conclusion

The SBR methodology is already well established in providing radar response simulation for large-scale and realistic scene geometry. In this paper, we have demonstrated that by combining algorithmic acceleration with an all-GPU implementation of SBR, it is feasible to generate multi-channel range-Doppler radar responses in real time or faster. Finally, using RTR's open interface, we also demonstrated its integration in a real-time closed-loop simulation.
7 Acknowledgment
The authors thank NVIDIA and Baskar Rajagopalan of NVIDIA for providing GPUs
to support RTR.
References
1. Waymo (2020) "Off road, but not offline: How simulation helps advance our Waymo Driver". [Online]. Available: https://blog.waymo.com/2020/04/off-road-but-not-offline--simulation27.html
2. Ansys (2021) "Ansys HFSS: High Frequency Electromagnetic Field Simulation Software". [Online]. Available: https://www.ansys.com/products/electronics/ansys-hfss
3. NVIDIA (2020) “CUDA C++ Programming Guide”. [Online]. Available:
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html
4. H. Ling, R.-C. Chou and S.-W. Lee, "Shooting and bouncing rays: calculating the RCS of an arbitrarily shaped cavity," IEEE Trans. Antennas Propagat., vol. 37, no. 2, pp. 194-205, Feb. 1989, doi: 10.1109/8.18706.
5. S.-K. Jeng, "Near-field scattering by physical theory of diffraction and shooting and bouncing rays," IEEE Trans. Antennas Propagat., vol. 46, no. 4, pp. 551-558, Apr. 1998.
6. U. Chipengo, A. Sligar and S. Carpenter, "High Fidelity Physics Simulation of 128 Channel MIMO Sensor for 77GHz Automotive Radar," in IEEE Access, vol. 8, pp. 160643-160652, 2020, doi: 10.1109/ACCESS.2020.3021362.
7. S. Parker, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, K. Morley, A. Robison, and M. Stich, "OptiX™: A General Purpose Ray Tracing Engine," ACM Transactions on Graphics, vol. 29, no. 4, July 2010, doi: 10.1145/1778765.1778803.
8. NVIDIA (2020) “cuFFT Library User’s Guide”. [Online]. Available:
https://docs.nvidia.com/cuda/cufft/index.html
9. Hanke, T., Hirsenkorn, N., van-Driesten, C., Garcia-Ramos, P., Schiementz, M., Schneider, S. & Biebl, E. (2017, February 03). A generic interface for the environment perception of automated driving functions in virtual scenarios. Retrieved January 25, 2020, from https://www.hot.ei.tum.de/forschung/automotive-veroeffentlichungen/
10. ISO 23150, https://www.iso.org/standard/74741.html.
11. OpenSimulationInterface, https://github.com/OpenSimulationInterface/open-simulation-interface.
12. Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V.: CARLA: An Open Urban Driving Simulator. PMLR 78:1-16.
13. ZeroMQ, https://zeromq.org/
Validation concept for scenario-based connected test
benches of a highly automated vehicle
Abstract. Highly automated functions and components for vehicle guidance lead to increasing demands, while high reliability and safety are still expected. The partners within the SmartLoad project address these challenges with new validation concepts such as the cross-company or cross-institute connection of test benches. Based on the IPEK-X-in-the-Loop approach applying mixed physical-virtual models, use-case-specific validation environments are modeled, compared, and built. The authors realize the scenario "cornering of a people mover" with a location-distributed connection of a simulation environment, a gearbox test bench, and an engine simulation in a closed-loop setup. All partners involved in the network can implement independent fallback mechanisms and substitute models. The connection is supported by modeling interrelated elements like scenarios in the Systems Modeling Language. Furthermore, the technical implementation is supported by substitute models, a toolchain for deriving concrete scenarios, and the use of an adapted Distributed Co-Simulation Protocol. As a result, the distributed tests show the interdependencies of different components on distributed test benches connected to a total vehicle simulation.
1 Introduction
Highly automated functions and components for vehicle guidance lead to increasing demands, while high reliability and safety are still expected. This increase occurs both for interacting individual components and for the overall vehicle system interacting with the environment. In order to meet these increased demands, redundant components or functions can be used intelligently. Therefore, within the project "SmartLoad" ("New methods for reliability enhancement of highly automated electric vehicles"), the authors exemplarily investigate the interactions of the redundant steering function and components in the validation of highly automated vehicles.
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_8
In this paper, a validation concept for building and testing connected validation environments is presented. The concept includes a model-based approach for describing and creating such a validation environment, as well as a concept based on the Distributed Co-Simulation Protocol standard developed by the Modelica Association (e.g. [2,3]). In addition, activities are derived from a process model for validating connected validation environments. The influence of network-relevant parameters is investigated and integrated into approaches to compensate for related problems (e.g. [4,5,6]). For example, measurements are performed to identify fault conditions at subsystem and overall system level (cf. [7]). The concept includes a (partially) automated coupling of Model-Based Systems Engineering approaches and necessary partial results of the validation.
1.1 Objective
The objective of this paper is to address the challenges of validating highly automated driving with distributed validation environments. The environments depend on scenarios and further elements like requirements, which need to be considered in the validation.
1.2 Questions
The following questions are dealt with:
1. "How can a validation concept with a closed-loop distribution of component test bench and complete vehicle model support the validation of highly automated vehicles? How can issues in the connection of test benches be handled?"
2. "How can Model-Based Systems Engineering support networked validation environments, using the example of a cornering maneuver?"
testing inter-dependencies between different subsystems of the vehicle like the gearbox
and the engine.
The use case is modeled in the Systems Modeling Language (SysML) in the tool Cameo Systems Modeler. The modeling enables traceability from stakeholder needs to maneuvers and validation elements like test parameters. The stakeholder needs for quick, secure transportation and a comfortable ride are considered in the use case. In addition, standards and norms like the European New Car Assessment Programme are linked to the use case in focus. Hence, a change in standards, or new ones, may result in changes in the use case and further elements. Following the classification of scenarios into logical and concrete ones, each use case is related to concrete (product-describing) use cases; for example, the general use case of a trailer transport is related to the automatic docking process between trailer and vehicle. These use cases may contain one or multiple logical and concrete scenarios.
Fig. 1. Exemplary extract of the Relation Map with a trace from standard to parameter
3 Technical implementation
3.1 Toolchain
To derive concrete (testing) scenarios, the authors develop the so-called SmartLoad toolchain. The use case (see number one in Fig. 2) and the scenario catalog can be used to select already existing logical scenarios (see number two in Fig. 2) according to the PGE (product generation engineering) model. PGE describes the development of a new product generation based on a reference system [13]. New generations are derived from reference elements with the three variation types: carryover, attribute, and principle variation [14]. Hence, the scenario catalog with its reference elements like parameters and scenarios is part of the reference system. The selection is based on prioritization and objective setting, e.g. using methods in the area of functional safety like FMEA. The logical scenario with its human-comprehensible parametrization of boundary conditions contains i.a. the determination of the stochastic distribution. Step five shown in Fig. 2 generates the concrete scenario with an x-fold automatic generation of a logical scenario.

Fig. 2. Toolchain concept in the Project "SmartLoad"
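Step five, the x-fold automatic generation of concrete scenarios from a logical scenario's stochastic parameter distributions, can be sketched as follows; the parameter names, bounds, and distributions are illustrative, not the project's actual scenario catalog:

```python
import random

# Illustrative logical scenario for a "cornering of a people mover" maneuver:
# each parameter carries a distribution over its boundary conditions
# (names and bounds are made up for this sketch).
logical_scenario = {
    "entry_speed_mps": ("uniform", 3.0, 8.0),
    "curve_radius_m": ("uniform", 10.0, 25.0),
    "road_friction": ("gauss", 0.9, 0.05),
}

def sample_concrete(logical, rng):
    # One concrete scenario: a single draw from every parameter distribution.
    concrete = {}
    for name, (dist, a, b) in logical.items():
        if dist == "uniform":
            concrete[name] = rng.uniform(a, b)
        elif dist == "gauss":
            concrete[name] = rng.gauss(a, b)
    return concrete

def generate_x_fold(logical, x, seed=0):
    # "x-fold automatic generation" of concrete scenarios from one logical one.
    rng = random.Random(seed)
    return [sample_concrete(logical, rng) for _ in range(x)]

scenarios = generate_x_fold(logical_scenario, x=100)
```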
The result is a simplified finite state machine, which is shown in Fig. 3. It contains all so-called Super-States from the DCP that are necessary for real-time networking. The branch of the non-real-time domain from the DCP, however, was removed: it is not relevant for the test benches, since the test benches must always act synchronously. Furthermore, some intermediate states were neglected.

Fig. 3. State machine derived for the Project "SmartLoad" (left) and examples of the behavior for different commands (right)

The commands to change the state of the finite state machine are also taken from the DCP. This is intended to allow eventual integration of the full standard without having to completely renew all existing implementations.
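A minimal sketch of such a reduced, real-time-only state machine follows; the state and command names are illustrative and do not reproduce the full DCP standard:

```python
# Simplified DCP-style slave state machine (state and command names are
# illustrative, not the full Distributed Co-Simulation Protocol).
TRANSITIONS = {
    ("CONFIGURATION", "STC_configure"): "CONFIGURED",
    ("CONFIGURED", "STC_initialize"): "INITIALIZED",
    ("INITIALIZED", "STC_run"): "RUNNING",
    ("RUNNING", "STC_stop"): "STOPPED",
    ("STOPPED", "STC_deregister"): "CONFIGURATION",
}

class SlaveStateMachine:
    def __init__(self):
        self.state = "CONFIGURATION"

    def handle(self, command):
        # Invalid requests are rejected so the slave can report them back
        # to the master (cf. the status bits in the UDP control word below).
        nxt = TRANSITIONS.get((self.state, command))
        if nxt is None:
            return False
        self.state = nxt
        return True

sm = SlaveStateMachine()
ok = sm.handle("STC_run")   # invalid straight from CONFIGURATION
for cmd in ("STC_configure", "STC_initialize", "STC_run"):
    sm.handle(cmd)          # valid path into RUNNING
```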
Since the speed of data transmission is much more important than data integrity when linking test benches, especially in a closed control loop, UDP messages were used for the linking. These have low reliability but are significantly faster compared to TCP/IP messages. To ensure a certain data plausibility, each test bench involved is responsible for its own plausibility check. For the communication between the master and the slave, an int16-coded value is used. It transmits the various requests from the master to the slave, which can be seen in Fig. 4 on the right side in the first eight bits. Subsequently, another eight bits are reserved to implement various functions. Bits eight and nine allow the slave to inform the master if a request is invalid or if a request cannot be processed at the moment because the slave is busy. The last bit of the int16 value is used as a watchdog toggle bit. Both sides send back the received bit inverted. For example, if the master receives a zero from the slave, it returns a one. The slave in turn sends back the bit received from the master in inverted form. Due to the signal propagation time, this results in a square-wave signal on both sides. Both sides can use this square wave to check if the communication is still active. If a change of the value takes too long, the slave can, for example, shut down and come to a controlled stop, and the master can terminate the whole network. The rest of the UDP message consists of the values to be transmitted, encoded as floats. A representation of two exemplified messages can be seen in Fig. 4.
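The message layout described above can be sketched as follows, assuming little-endian encoding; the exact bit assignments and byte order used in the project may differ:

```python
import struct

def pack_message(request, invalid=False, busy=False, watchdog=0, values=()):
    # int16 control word (sketch of the layout described above):
    #   bits 0-7 : request / answer code
    #   bit 8    : request invalid     bit 9: slave busy
    #   bit 15   : watchdog toggle bit
    # followed by the transmitted values encoded as 32-bit floats.
    word = (request & 0xFF) | (invalid << 8) | (busy << 9) | ((watchdog & 1) << 15)
    if word >= 0x8000:               # reinterpret as signed int16
        word -= 0x10000
    return struct.pack("<h%df" % len(values), word, *values)

def unpack_message(data):
    word = struct.unpack_from("<h", data)[0] & 0xFFFF
    n_floats = (len(data) - 2) // 4
    return {
        "request": word & 0xFF,
        "invalid": bool(word & (1 << 8)),
        "busy": bool(word & (1 << 9)),
        "watchdog": (word >> 15) & 1,
        "values": struct.unpack_from("<%df" % n_floats, data, 2),
    }

# Master sends a request with two floats; the slave answers with the
# master's watchdog bit inverted, producing the square wave described above.
msg = pack_message(request=3, watchdog=1, values=(12.5, 830.0))
parsed = unpack_message(msg)
reply = pack_message(request=3, watchdog=1 - parsed["watchdog"])
```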
Fig. 4. Exemplified UDP message between the master and one slave, showing the commands and answer in bits 0 to 7 and further information in bits 8 to 15, followed by several float values
reality of the test conception. This vehicle model is built within the vehicle dynamics software IPG CarMaker. In contrast to a normal vehicle model, it has no model for the drivetrain, because the drivetrain is set up as a physical component. It is nevertheless necessary to equip the vehicle model with the chassis kinematics, so that the aligning torque can be calculated realistically.
The test vehicle (Fig. 6) used in this project is a 1:1.5 scale vehicle. It is driven by two electric motors at the front axle. It has a double-wishbone suspension concept. The subsystems of the powertrain as well as the steering system feature redundant functionality.

The vehicle model has been validated by real driving tests. The validation consists of two parts. To validate the longitudinal dynamics, a roll-off test was carried out. The lateral dynamics were validated by cornering maneuvers.

Fig. 6. Test vehicle used in the Project "SmartLoad" with setup of the mechatronic system for the longitudinal and transverse guidance of the vehicle (left)
In order to compensate for the increase in torque due to the test-specimen gearbox, an additional test bench gearbox is used. This reduces the torque downstream of the SuI to match the motor power. In addition, torque sensors are located both before and after the test specimen. This allows the torque applied to the test specimen to be used directly for networking. For networking, the output motor (Fig. 7, right) is operated in speed mode and the drive motor (Fig. 7, left) in torque mode.
A real-time Linux system with an EtherCAT fieldbus is used to control the motors. The real-time system is directly integrated into the connected validation environments as a slave. An ADwin system from Jäger Messtechnik is used for data recording.
Fig. 8. Test bench used for the electric motor located at IEW
The System under Investigation (SuI) represents one of the traction motors of the test vehicle and, hence, is torque-controlled. The applied torque T is measured by a sensor and processed by the data recorder, which records T as well as the voltages U and the currents I of the electric machine and the DC link. An optional battery emulator can be used to emulate the behavior of the vehicle's battery. The communication with the other test benches is realized via an interface of the real-time system to the connected systems, allowing it to receive the reference values and to send the measured values for torque and speed, respectively.
An approach for a substitute model that emulates the behavior of the SuI on the test bench is shown in Fig. 9. The model can be used when the test bench is not available, e.g. due to maintenance or connection issues of the network.

Depending on the operating strategy, different currents i_ref^(d/q) need to be applied to the electric machine. Using field-oriented control, two proportional-integral (PI) current controllers calculate the required voltages u_ref^(d/q) that are generated by the power electronics. The model needs to take into account the dynamics of the controller, converter, and electric machine. Both the converter and the electric motor can be approximated as first-order lag systems with time constants τ_pe and τ_em, respectively. While τ_pe depends on the switching frequency of the converter, τ_em depends on the inductance and resistance of the electric machine.
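The cascaded first-order lags can be sketched as follows; the time constants and step size here are made-up illustrative values, not the identified parameters of the project's machine:

```python
import math

def first_order_lag(tau, dt):
    # Discrete first-order lag: y[k] = a*y[k-1] + (1 - a)*u[k], a = exp(-dt/tau)
    a = math.exp(-dt / tau)
    y = [0.0]
    def step(u):
        y[0] = a * y[0] + (1 - a) * u
        return y[0]
    return step

# Illustrative time constants (made up): tau_pe follows from the converter's
# switching frequency, tau_em from the machine's inductance and resistance.
tau_pe, tau_em, dt = 1e-4, 2e-3, 1e-4
converter = first_order_lag(tau_pe, dt)
machine = first_order_lag(tau_em, dt)

# A step in demanded torque passes through both lags in series; the output
# rises smoothly toward the reference instead of jumping instantaneously.
torque = [machine(converter(10.0)) for _ in range(200)]
```

This reproduces the qualitative behavior discussed later: a substitute model without such lags (or dead time) reaches the reference almost instantaneously and therefore yields smoother, but less realistic, reference torques.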
4 Results
Fig. 10. Trajectory of the test track at KIT (a) and in CarMaker (b)
Fig. 11 shows measurement data from a successfully performed maneuver. The top plot
(a) shows the torque of the motor handled by the IEW test bench.
15
(a)
10 Demanded Motor Torque
Torque in Nm
0
0 5 10 15 20 25 30 35 40 45 50 55
200
(b)
Gearbox torqe
Torque in Nm
100
0
0 5 10 15 20 25 30 35 40 45 50 55
200
(c)
Left Wheel Torque
Torque in Nm
100
0
0 5 10 15 20 25 30 35 40 45 50 55
Time in seconds
Fig. 11. Measurement data of a maneuver with the demanded motor torque (a), the gearbox
torque (b) and the corresponding torque at the wheel (c)
The middle plot (b) shows the gearbox input torque, measured via torque sensors directly on the test specimen. The bottom plot (c) shows the torque at the left wheel of the vehicle model. The maneuver shown is used as a reference in the following sections. It shows a track execution without provoked errors or failures of any of the partners.
Fig. 12. Calculated gearbox ratio based on velocity (orange) and torque (blue)
Simulating all of the above influences would take considerable time and would not necessarily be practical. This shows the potential of distributed validation. It also allows an early prototype to be integrated, and thus interactions with the remaining subsystems to be considered, without the prototype having to be deployed at a different location. Thus, integration can take place at the respective competence carrier.
Fig. 13 shows the reference torque T_ref over time for the maneuver described in the previous section, once with a complex motor model and once with a simplified motor model.
Fig. 13. Comparison of a maneuver with simplified motor model and complex motor model
In case of the simplified model, the reference torque is generally reduced and, especially at the start of the maneuver for t < 5 s, considerably smoother. Since no dead time occurs in the motor model, the desired torque is achieved almost instantaneously, improving the response behavior of the controller in CarMaker.
Differences induced by the simplified motor model can also be observed in the
torque and speed measurement of the gearbox. Both are now smoother than during the
standard maneuver. It can be concluded that due to the closed-loop operation, the uti-
lized models of one component have an impact on the behavior of subsequent systems.
The signals sent to the corresponding partner are also continuously sent to a local simulation model. This model represents the transmission behavior of the UDP connection
and the mechanical elements of the test bench. When switching to the simulation model,
the calculated variables of the simulation model are used instead of the received UDP
messages of the respective partner. This model runs continuously in parallel to the re-
spective partner and can thus be used immediately.
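The switching principle described above can be sketched roughly as follows; the class and signal names are illustrative and not the project's actual implementation:

```python
# Rough sketch of the fallback principle (illustrative names, not the actual
# project code): a local surrogate model runs every cycle in parallel; if no
# fresh UDP value from the partner is available, its output is used instead.

STALE_AFTER = 0.01  # s without a UDP message before falling back (assumption)

class FallbackCoupler:
    def __init__(self, local_model):
        self.local_model = local_model   # surrogate of partner + UDP path
        self.last_udp_value = 0.0
        self.last_udp_time = -float("inf")

    def on_udp_message(self, value: float, now: float) -> None:
        """Called whenever a UDP message from the partner arrives."""
        self.last_udp_value = value
        self.last_udp_time = now

    def coupling_value(self, local_inputs, now: float) -> float:
        """Value handed to the local simulation in each cycle."""
        model_value = self.local_model(local_inputs)  # model stays warm
        if now - self.last_udp_time <= STALE_AFTER:
            return self.last_udp_value                # partner alive
        return model_value                            # partner lost
```

Because the surrogate is stepped every cycle regardless of the connection state, it holds a consistent internal state and can take over without a transient at the moment of switching.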
Fig. 14. Measurement data of the IPEK test bench, where the UDP signal of the IEW test
bench was interrupted after approx. 28 seconds.
continue their operation. This gives the CarMaker model a slightly different torque dur-
ing the time of the UDP failure (orange line in subplot a). In addition, when the gearbox
test bench is reconnected, a short high peak in torque can be seen.
Fig. 15. Measurement data from a maneuver during which the UDP signal of the IPEK test bench was interrupted after approx. 28 seconds. The data was recorded at the FAST test bench for (a) and at the IPEK test bench for (b)
4.5 Conclusion
The measurement results are integrated into the SysML model in the tool Cameo Systems Modeler. Based on the shown fallback mechanism, further security measures can be imple-
mented. For example, the fallback mechanisms are linked to the DCPLite state machine
and requirements of the developers. In the event of a failure of several partners, a deci-
sion can be made in the state machine to shut down the entire network. At the same
time, each participant can independently decide which additional measures should be
implemented locally as soon as there is no valid signal available locally. The use of the
models also enables the connected validation environments to be used without individ-
ual partners. If one of the partners is not operational, the corresponding model can be
used and the network remains operational, at least to a limited extent.
5.1 Discussion
The first research question is answered by a proposed toolchain, the DCPLite standard and a process model for the validation of connected test benches. The concepts are applied in the context of the project “SmartLoad” for one specific use case, the cornering of a people mover.
With the support of Model-Based Systems Engineering the elements in the context
of distributed validation can be modeled and linked. Furthermore, the described con-
cepts to support the distributed validation can be modeled and linked to these elements
as well. For instance, the FMEA of the test bench connection or the state machine of
the DCPLite are linked to certain requirements and functions. Hence, MBSE can support the distributed validation by the creation and adaptation of views and models.
However, the research does not cover a broad range of use cases, and it focuses on user-friendly, readily applicable concepts rather than on one integrated concept.
The purpose of distributed validation must be discussed for each use case, along with the resulting connection parameters such as sending frequency or real-time loop. In the project “SmartLoad”, the fastest transfer rate and sending frequency of 1.5 ms (≈ 667 Hz) and the average rate of 4.5 ms (≈ 222 Hz) permit only basic investigations of interrelations. For instance, an analysis of the inverter with a switching frequency of >16 kHz or the whole bandwidth of structure-borne sound (0 to 1.25 MHz) is not yet achievable with real-time testing of connected test benches. The frequency alone, however, is not necessarily the decisive criterion. The dead time of the signal transmission must also be considered and can have a significant influence, especially in the closed loop. In contrast to real-time testing, sequential testing allows simulating models or test benches and sending the measured results to the next model or test bench. This allows testing systems with high frequency; the disadvantage is that inter-dependencies between the different systems are missed.
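A quick Nyquist-style estimate illustrates why these coupling rates are insufficient for the inverter and acoustic phenomena named above; this is a back-of-the-envelope sketch using the cycle times from the text:

```python
# Back-of-the-envelope sketch: the highest signal frequency a given coupling
# cycle time can resolve (Nyquist criterion). Cycle times are the values
# reported in the text.

def max_resolvable_frequency(cycle_time_s: float) -> float:
    """Nyquist limit: f_max = f_sample / 2 = 1 / (2 * cycle_time)."""
    return 0.5 / cycle_time_s

fastest = max_resolvable_frequency(1.5e-3)   # fastest coupling rate, ~333 Hz
average = max_resolvable_frequency(4.5e-3)   # average coupling rate, ~111 Hz
# Both are orders of magnitude below the inverter switching frequency
# (>16 kHz) and the structure-borne sound bandwidth (up to 1.25 MHz).
```

Note that this bounds only the representable bandwidth; as stated above, the transmission dead time must be assessed separately, since it degrades closed-loop behavior even for signals well below the Nyquist limit.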
Acknowledgement
This work has been supported by the Federal Ministry for Education and Research
(BMBF) in the project “New methods to increase the reliability of highly automated
electric vehicles (SmartLoad)” with the funding reference number: 16EMO0363.
Automatic emergency steering interventions
with hands on and off the steering wheel
Abstract. With the continuous automation of the driving task, driving while the
hands are off the steering wheel is progressively becoming reality. This situa-
tion is therefore of interest when evaluating driver reactions to an automatic
emergency steering (AES) system. So far, only the reactions with hands on the
steering wheel were investigated and this study is a first step towards expanding
currently available knowledge about AES to the case of hands off the steering
wheel. A driving simulator study was done where only steering was possible
and three interaction designs (Manual Driving (MD), MD with AES and Auto-
mated Driving (AD) with AES) were investigated. A suddenly appearing obsta-
cle from the side of the road while the vehicle was driving at a velocity of 80
km/h was used as driving scenario and the reaction of the driver was evaluated
objectively and subjectively. Results point in the direction that if an AES intervention happens while driving automatically (hands-off), it is most likely that the steering wheel will be gripped again while the intervention is still in progress. Compared to the reactions while the car was being driven manually prior to the intervention, no tendency towards greater opposition to the intervention or differences in the subjective variables could be found.
1 Introduction
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_9
required for an evasive maneuver becomes lower than the one required for braking
[2]. Currently available CAS for evasion are EVA systems, which support the driver
during the evasion by providing additional torque in the direction of the evasion ([2]-
[6]). A more advanced version of EVA is AES, which does not only support the driv-
er in the evasion, but executes it fully automatically, similar to AEB systems for brak-
ing. The present study focuses on such an AES system and its human-machine inter-
action.
Up until now, CAS were designed under the aspect of supporting or replacing the actions of a human driver. Since the CAS initiates its specific function only if no reaction has occurred by the last possible moment, it does not matter whether the car is being driven by an automatic driving (AD) system or a human driver. However, it is of great importance
for the human driver whether he/she or a system is in control of the car. In the former
case, his/her hands are generally on the steering wheel and he/she has to pay attention
to all the tasks required for driving (e.g. [7]). In the latter case, the driver can, depend-
ing on the level of automation, take his/her hands off the steering wheel and does no
longer need to pay attention to the driving tasks but can engage in non-driving related
tasks (NDRT) such as reading or working on a laptop. Since these two cases (manual
driving vs. automatic driving) are completely different for a human, one can imagine
that his/her response to an AES intervention will also be different.
Although multiple studies can be found which investigate the human-machine interaction with an AES during MD, there are none which evaluate the human reaction when the vehicle was being driven autonomously before an AES intervention happened. There is therefore a need for research, and the goal of this study is to extend the currently available knowledge of human interaction with an AES.
2 Theoretical background
As the name of CAS indicates, the situations of interest are imminent collisions where
the driver needs to react quickly in order to avoid an impact. Without any assistance,
most drivers tend to brake, although braking alone cannot avoid a collision in situations where an evasion still can. This becomes even more relevant at higher velocities, as the available time for a reaction decreases [8]. Equipping the vehicle with an AES system which exe-
cutes an evasion in the driver’s stead could therefore be helpful to avoid collisions.
The characteristics for such an intervention need to be chosen with care. Drivers tend to oppose automated steering by applying a torque in the opposite direction, which in turn reduces the effectiveness of the intervention [9]. This initial opposition
lasts for 200 to 600 ms and is assumed to be the time drivers need to decide whether
to subdue the intervention or start an evasive maneuver [10]. There are also indications that the occurrence of opposition as well as its amount depend on the torque applied by the AES system, and that too high torques lead to more resistance by the driver [11]. A possible countermeasure could be to inform the driver prior to the AES intervention, as an auditory cue before the intervention as well as at the moment of its start showed tendencies towards less driver resistance [9], [12].
Not only is it important that the intervention is not hindered by the driver’s reaction, it should also be subjectively accepted or approved by the driver. One possibility to assess the subjective opinion would be to ask specific questions, for example whether one would be willing to buy a car equipped with a specific CAS system. In the case of an
AES system it was shown that more people answered “yes” after having experienced
such a system than before the experience [13]. Another possibility can be a subjective
scale, preferably standardized and also used by other studies so that results can be
compared across multiple studies by different authors. One such scale was developed
to measure user-satisfaction and usability of Advanced Driver-Assistance Systems
(ADAS). It was tested in six studies, where two of them were about CAS [14]. An-
other scale assesses the criticality of a situation [15]. Although it was initially devel-
oped to investigate driver reactions to errors of a steering system, it was later speci-
fied that the scale could also be used for other disturbances as it allows driver to com-
pare the situations to those of everyday life [16]. In addition to the errors of a steering
system, the scale was also used for a Traffic-Jam-Assistant (TJA) [17], a true positive
activation of an AES intervention [12], as well as a false positive one [9], [18].
Research Question. With the increasing automation of the driving task, there already
exist systems [19], [20] that take control of the vehicle and allow the driver to take
his/her hands off the steering wheel. For the future development of AES systems it
becomes therefore necessary to understand how a driver will react to an AES inter-
vention with his/her hands initially off the steering wheel, for which no existing study
could be found. The goal of the current study is to extend the knowledge of driver reactions to an AES intervention, and the research question is thus as follows:
Hypotheses. The focus was laid on a mental state in which the driver is able to per-
ceive the situation and react to the best of his/her capabilities in case of an emergency.
This is the reason why the driver is not engaged in an NDRT in this study. On the objective side, multiple studies found that drivers tend to counteract an AES intervention because it happens before the driver can react to the situation. For this study, it was assumed that this counteraction will become stronger and a greater amount of steering by the driver will be observed, as the driver is not in direct control of the vehicle at the moment of the obstacle’s appearance during AD. The first hypothesis reads:
Drivers, who are not engaged in an NDRT, will be more likely to steer during an
AES intervention when the vehicle was previously being driven automatically than
when it was being driven manually.
On the subjective side, because the driver is no longer in direct control of the vehicle, it was assumed that the driver would feel more endangered during AD when an obstacle suddenly appeared. The second hypothesis thus reads:
The situation will be perceived as more dangerous when the vehicle was being
driven automatically before the intervention.
4 Methodology
4.1 Participants
4.2 Apparatus
The study was undertaken at the R&D Headquarters of JTEKT in Nara (Japan). A basic, static driving simulator, shown in Fig. 1, equipped with a mass-produced column-type EPS steering system made by JTEKT was used. The monitor was a BenQ
XR3501 with a resolution of 2560x1080 pixels, a diagonal length of 35 inches and a
curvature of 2000 R.
Preliminary tests on the simulator showed that it was much more difficult to drive
in the simulation while controlling both the steering wheel and the accelerator/brakes
than in a real car. Because of this difference, the velocity is controlled by the simula-
tion and only steering is required. The display of the simulator is shown in Fig. 2 and
it shows the speedometer and the status of the driving system. A generic engine sound
was played at a level of 65 dB that changed frequency according to the RPMs of the
simulated engine.
The road and the driving scenario are shown in Fig. 3 and Fig. 4. The parameters used
in [13] were taken as a basis and whenever possible, the same parameters were cho-
sen. The vehicle is driving at a velocity of 80 km/h when the initially hidden obstacle
in the form of a car first becomes visible at TTCOG = 1.8 s (Fig. 3). The obstacle reaches
its final obstruction of 1.5 m (half lane-width) 0.3 s later (Fig. 4). When the obstacle
has reached its final position, a FCW is given and the automatic steering intervention
is triggered, if included in the condition of the experiment.
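From the stated speed and TTC values, the scenario geometry can be estimated; this is a simple illustration using only the figures given above:

```python
# Simple illustration using values stated in the text: distance to the obstacle
# at first visibility and at the warning/intervention trigger.

v = 80 / 3.6          # vehicle speed, 80 km/h in m/s
ttc_visible = 1.8     # s, obstacle first becomes visible (TTC_OG)
ttc_trigger = 1.5     # s, FCW and AES intervention trigger (TTC_OG)

d_visible = v * ttc_visible   # distance at first sight, 40 m
d_trigger = v * ttc_trigger   # distance when the intervention starts, ~33.3 m
```

At 80 km/h, the obstacle thus becomes visible only about 40 m ahead, which makes clear how little distance remains for a human or automatic reaction.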
Three settings of the driving scenario were created in an attempt to counter possible
learning effects. The obstacle stayed the same, but the obstruction, the scenery as well
as the distance travelled before the obstacle appeared were different.
Village: The obstruction was created with a suburban house. The driving time
from start of the setting to the appearance of the obstacle was around 2:11 min. A
truck was parked in a parking spot to the left of the road and was passed 1.7 s be-
fore the obstacle appeared.
Forest: The obstruction was created with a billboard and the driving time was
around 2:11 min. A car driving on the oncoming lane was passed about 3.7 s be-
fore the obstacle appeared.
Rural: The obstruction was created with a rural house and the driving time was
around 3:15 min. A car driving on the oncoming lane was passed about 9.3 s be-
fore the obstacle appeared.
\( y(x) = y_e \left[ \dfrac{x}{x_e} - \dfrac{1}{2\pi} \sin\!\left( 2\pi \dfrac{x}{x_e} \right) \right] \)   (2)
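Equation (2) describes a sine-based lane-change profile; the following minimal sketch implements it, with the evasion width and length chosen as illustrative values rather than the study's parameters:

```python
import math

# Minimal sketch of the evasion trajectory of Eq. (2); the evasion width y_e
# and length x_e below are illustrative values, not the study's parameters.

def evasion_trajectory(x: float, x_e: float, y_e: float) -> float:
    """Lateral offset y(x) = y_e * [x/x_e - sin(2*pi*x/x_e)/(2*pi)]."""
    return y_e * (x / x_e - math.sin(2 * math.pi * x / x_e) / (2 * math.pi))

y_e, x_e = 1.5, 30.0   # 1.5 m lateral offset over 30 m (assumed)
# The profile starts and ends with zero slope and reaches y_e at x = x_e,
# which keeps the lateral motion smooth at both ends of the evasion.
```

This shape is a common choice for automatic evasive maneuvers because its curvature vanishes at the start and end of the lane change.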
MD (control group): The car is driven manually and only a FCW is given at
TTCOG = 1.5 s.
MD & AES (hands on): In addition to the FCW, the AES intervention starts also
at TTCOG = 1.5 s. Once the intervention has finished, full control of the vehicle is
given back to the driver.
AD & AES (hands off): Almost the same as MD and AES but the car is being
driven automatically previously to the AES intervention. The AD system is acti-
vated 7.5 s before the obstacle appears.
participants. This gives the possible combinations shown in Table 1, where one com-
bination represents one whole experiment with one participant.
Table 1. All possible combinations of the interaction design with the driving scenario setting
                 Driving Scenario Setting
Combination   Village      Forest       Rural
1             MD           MD & AES     AD & AES
2             MD           AD & AES     MD & AES
3             MD & AES     AD & AES     MD
4             MD & AES     MD           AD & AES
5             AD & AES     MD           MD & AES
6             AD & AES     MD & AES     MD
The dependent variables can be divided into objective and subjective variables.
Objective
Lateral deviation at TTCOG = 1.5 s: initial lateral deviation y0, where y = 0 represents the lane center and negative values point towards the road center.
Reaction time during MD: a reaction occurs once the column angle and its velocity fall below −1° and −1°/s, respectively.
Hands-on time: the time the driver needed to put his/her hands back on the steering
wheel during AD and AES.
Steering factor fsteering: this factor was designed such that it represents the steering
input of the driver during the AES intervention relative to the time over which
his/her hands are on the steering wheel. It compares the actual column angle θ to
the one which would occur without any driver interference θreference. During the
condition MD and AES, the time window starts at tAES,start, which represents the
point in time when the AES intervention is activated. It ends at tAES,end, which
stands for the point in time when the AES intervention ends and full control is giv-
en back to the driver.
\( f_{steering} = \dfrac{1}{t_{AES,end} - t_{AES,start}} \int_{t_{AES,start}}^{t_{AES,end}} \left( \theta - \theta_{reference} \right)^2 dt \)   (3)
During AD and AES, the driver cannot steer as long as he/she does not put his/her
hands on the steering wheel. The time duration of the integral is therefore adapted
and starts at thands,on.
\( f_{steering} = \dfrac{1}{t_{AES,end} - t_{r,hands,on}} \int_{t_{r,hands,on}}^{t_{AES,end}} \left( \theta - \theta_{reference} \right)^2 dt \)   (4)
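For sampled measurement data, the integrals in Eqs. (3) and (4) can be approximated numerically; the following is a minimal sketch with illustrative signal names, not the study's analysis code:

```python
# Sketch: approximating the steering factor of Eqs. (3) and (4) from sampled
# column-angle signals (illustrative names) with the trapezoidal rule.

def steering_factor(theta, theta_ref, dt: float) -> float:
    """Time-averaged squared deviation of the actual column angle theta from
    the undisturbed reference theta_ref over the evaluation window."""
    sq = [(a - r) ** 2 for a, r in zip(theta, theta_ref)]
    # trapezoidal rule for the integral over the window
    integral = sum((sq[i] + sq[i + 1]) / 2 * dt for i in range(len(sq) - 1))
    duration = (len(sq) - 1) * dt     # t_end - t_start of the window
    return integral / duration
```

With no driver interference, theta equals theta_ref and the factor is zero; any steering input by the driver raises it, which is what makes it a measure of opposition to the intervention.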
Subjective
4.8 Procedure
Each participant was welcomed directly at the driving simulator, where the study and
the different driving systems used were explained and, as opposed to other studies, the
presence of an AES system was also included in the explanation. The participant was
instructed to:
A training session was held before the start of the real experiment. It was done on a
separate training circuit (one round ≈ 4.8 km), where all three scenario variants (with-
out the suddenly appearing obstacle) were placed along the circuit to make them not
stand out during the experiment. About two and a half rounds were done on the training circuit, with multiple activations and deactivations of the AD mode as well as a
demonstration of the AES at the end of the second round. The driving times during
the training session were 7:44 min. in MD mode, 1:44 min. in AD mode with hands
off and 1:29 min. with hands on the steering wheel.
5 Results
The data of all 18 participants is only used in Section 5.1. A safety mechanism to
prevent excessive torques on the steering wheel was activated during the evasion of 4
participants, rendering the data unusable for further analyses and thus only the data of
the remaining 14 participants was used.
Wilcoxon signed-rank tests were done as post-hoc tests and a significance level of
0.0167 according to the Bonferroni correction was employed. The initial deviations of
the village setting were significantly different from the forest setting (W = 4,
p < .001, rb = -0.953 (rank-biserial correlation)), but not from the rural setting
(W = 55, p = .196). Lastly, the forest setting showed no significant change compared
to the rural setting (W = 122, p = .119).
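The post-hoc procedure above can be sketched as follows; this is an illustration with invented data, not the study's analysis code, using a pure-Python W statistic with average ranks for ties:

```python
# Illustrative sketch (invented data, not the study's analysis): Wilcoxon
# signed-rank W statistic and the Bonferroni-corrected significance level
# for three pairwise comparisons.

def wilcoxon_w(x, y):
    """W = sum of the ranks of the positive differences; zero differences are
    dropped and tied absolute differences receive their average rank."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

alpha_corrected = 0.05 / 3             # three pairwise tests -> 0.0167
```

The p-values themselves are then obtained from the exact distribution of W (or its normal approximation) and compared against the corrected level of 0.0167.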
The observed reactions during all interaction designs could be classified into three
categories:
Safe: The evasion happened without collision with the obstacle and the
road was not left.
Collision: The obstacle could not be evaded and a collision happened.
Off Road: Although the obstacle was evaded successfully, the road was
left during the evasive maneuver.
The results according to this classification are shown in Fig. 8.
The average reaction time during MD was 0.79 s (SD = 0.2 s).
During AD and AES the hands-on time could not be analyzed in detail due to the low
number of participants. Nevertheless, an overall classification into three categories
depending on the time when the steering wheel was gripped again was done, which is
shown in Table 2. The first two categories, during AES intervention and after AES
intervention, should need no further explanation but the category “always” signifies
that the hands were not taken off the steering wheel although AD mode had been
enabled. This reaction, where the participants did not let go of the steering wheel, was
only observed during the village setting.
6 Discussion
The aim of this study was to understand drivers’ reactions to an AES intervention
during both manual and automatic driving. Three settings of the same driving scenario
were initially created to counter learning effects but unfortunately, these settings cre-
ated another influence on the reaction of the driver. The lateral position of the vehicle
on the road when the obstacle appears was different across settings and its effect on
the results should therefore always be taken into account when analyzing and inter-
preting the results of this study.
Higher collision rates were expected during the MD interaction design since simi-
lar parameters to the driving scenario in [13] were chosen, where only 7 % of the
participants were able to avoid the obstacle without touching it. This difference is
most likely due to the fact that a reaction time of 0.3 s was required to evade the ob-
stacle safely in [13]. In this study, however, the participants who evaded successfully
had an average reaction time of 0.7 s which indicates that the scenario was less criti-
cal than the one used in [13]. Another possible explanation is the fact that participants
could only steer, as the option of braking was not possible. Therefore the possibility
that collisions occurred because the participants braked although it was not possible to
avoid a collision by braking alone, as was reported by [9] and [10], was eliminated.
Another big difference in driver reactions compared to other studies is that almost
half of the participants left the road during their evasive maneuver during the MD
interaction design, which was never reported in any of the studies reviewed in Section
2. The reason for this difference is believed to be the higher velocity at the start of the
evasion. This means that although the time available for an initial reaction is roughly
the same, subsequent reactions need to be executed much faster because the TTCOG
of the current study was similar to the TTCS of the other studies.
During the interaction design AD and AES with hands off the steering wheel, 8 out
of 14 participants gripped the steering wheel during the AES intervention. Although
the decision whether to grip the steering wheel again also comes down to a question
of trust in the system, gripping the steering wheel during the intervention seems to be
the most probable reaction. Since this kind of emergency normally occurs quite rarely
on the road and the driver has probably forgotten about the AES system by that point,
the real percentage of people who would grip the steering wheel during the AES in-
tervention could probably be even higher.
The hypothesis that drivers would be more likely to steer during AES when the ve-
hicle was being driven automatically could not be verified by the use of the steering
factor. One reason could be that the vehicle was driving in the middle of the lane during AD and AES (since the car was being driven automatically), which was normally
not the case when it was driven manually during MD and AES. The higher amount of
steering for MD and AES could be explained by the characteristics of the AES inter-
vention used in this study, which always aims for a relative lateral deviation of 1.5 m
and does not consider the initial lateral deviation. This means that the intervention
gets less and less optimal the farther the vehicle is from the lane center.
Although a general suppression of AES interventions has also been reported in other studies ([9], [10]), it is not clear in the current study whether the suppression happened because the intervention was judged to be exaggerated or because of an instinctive reaction to the suddenly moving steering wheel. Another possible explanation could
have been different levels of acceptance or perceived usability. However, no such
tendencies could be found as neither acceptance nor usability ratings were different
for the two interaction designs with AES, which is similar to the result in [12].
The second hypothesis that the situation would be perceived as more dangerous
when the car was being driven automatically prior to the evasion could also not be
verified. The criticality showed no significant difference across interaction designs
which is in line with the results of [9] and [12]. This is most likely due to individual
scales for danger. Examples are that a safe evasion was rated 10 (the maximum)
whereas an evasion where the road was left was rated 5. Another reason for these
individual scales of danger could also be the subjects’ occupation, which ranged from
testing engineers for real cars to secretaries.
7 Conclusion
The main goal of this study was to extend currently available knowledge on AES to
the situation where the car is being driven automatically and the hands are off the
steering wheel and a first step in this direction was done. However, there were factors
that influenced the outcome more than anticipated. The driving scenario setting
proved to be such a factor, as a dependence of the initial lateral deviation at TTCOG =
1.5 s on the scenario setting was discovered. No such influence has been reported in the reviewed studies, and it supports the view of [12] that further investigation of the influence of traffic scenario parameters is needed for a better understanding of driver reactions.
Only a low number of participants could be recruited and the driving simulator was
simple. However, even with these limitations, it is believed that a general reaction to
an AES with hands off the steering wheel could be found. It seems that gripping the
steering wheel during the AES intervention while the car is being driven automatical-
ly is quite likely to occur in reality since half of the participants did so in this study. A
detailed analysis of the hands-on time was not possible and a next step should be to
investigate how the point in time of the gripping influences the success and quality of
the automatic evasion maneuver.
Finally, the aspect of a false positive activation of the AES intervention has not
been looked at in any way during this study. Since the hands are off the steering
wheel during AD, the question of controllability becomes quite difficult to answer and
References
1. Fildes, B., Keal, M., Bos, N., Lie, A., Page, Y., Pastor, C., Pennisi, L., Rizzi, M., Thomas, P., Tingvall, C.: Effectiveness of low speed autonomous emergency braking in real-world rear-end crashes. Accident Analysis & Prevention (81), 24-29 (2015).
2. Zegelaar, P., Bosh, H., Allexi, G., Schiebahn, M., Vukovic, E., Notelaes, S.: Ford Evasive
Steering Assist – Steering You Out of Trouble. In: 27th Aachen Colloquium Automobile
and Engine Technology (2018)
3. ZF Homepage, https://www.zf.com/products/en/cars/products_30987.html, last accessed
2021/04/22.
4. Mercedes-Benz Homepage, https://www.mercedes-benz.com/en/innovation/by-far-the-
best-mercedes-benz-assistance-systems1/, last accessed 2021/04/22.
5. Volvo Homepage, https://www.volvocars.com/en-eg/support/topics/use-your-car/car-
functions/collision-avoidance-assistance, last accessed 2021/04/22.
6. Ford Homepage, https://www.ford.com/technology/driver-assist-technology/evasive-
steering-assist/, last accessed 2021/04/22.
7. Geiser, G.: Mensch-Maschine Kommunikation im Kraftfahrzeug. Automobiltechnische
Zeitschrift (87, 2), 77-84 (1985).
8. Lechner, D., Malaterre, G.: Emergency maneuver experimentation using a driving simula-
tor. SAE Technical Paper (1991).
9. Schieben, A., Griesche, S., Hesse, T., Fricke, N., Baumann, M.: Evaluation of three differ-
ent interaction designs for an automatic steering intervention. Transportation Research Part
F: Traffic Psychology and Behaviour (27), 238-251 (2014).
10. Schneider, N., Purucker, C., Neukum, A.: Comparison of Steering Interventions in Time-
critical Scenarios. Procedia Manufacturing (3), 3107-3114 (2015).
11. Iwano, K., Raksincharoensak, P., Nagai, M.: A study on shared control between the driver
and an active steering control system in emergency obstacle avoidance situations. IFAC
Proceedings Volumes (19), 6338-6343 (2014).
12. Sieber, M., Siedersberger, K.-H., Siegel, A., Farber, B.: Automatic Emergency Steering
with Distracted Drivers: Effects of Intervention Design. 18th IEEE Conference on Intelli-
gent Transportation Systems (2015).
13. Bender, E., Landau, K., Bruder, R.: Driver reaction in response to automatic obstacle
avoiding manoeuvres. VDI Berichte (1931), 219-228 (2006).
14. Van Der Laan, J.D., Heino, A., De Waard, D.: A simple procedure for the assessment of
acceptance of advanced transport telematics. Transportation Research Part C: Emerging
Technologies (5,1), 1-10 (1997).
15. Neukum, A., Krüger, H.-P.: Fahrerreaktionen bei Lenksystemstörungen - Untersuchungs-
methoden und Bewertungskriterien. VDI Berichte (1791), 297-318 (2003).
16. Neukum, A., Reinelt, W.: Bewertung der Funktionssicherheit aktiver Lenksysteme: ein
Human Factor Ansatz. VDI Berichte (1919), 161-179 (2005).
17. Naujoks, F., Purucker, C., Neukum, A., Wolter, S., Steiger, R.: Controllability of Partially
Automated Driving functions – Does it matter whether drivers are allowed to take their
hands off the steering wheel? Transportation Research Part F (35), 185-198 (2015).
18. Heesen, M., Dziennus, M., Hesse, T., Schieben, A., Brunken, C., Löper, C., Kelsch, J.,
Baumann, M.: Interaction design of automatic steering for collision avoidance: Challenges
and potentials of driver decoupling. IET Intelligent Transport Systems (9, 1), 95-104
(2015).
19. Cadillac: CT6 Super Cruise™ Convenience & Personalization Guide,
https://www.cadillac.com/content/dam/cadillac/na/us/english/index/ownership/technology/
supercruise/pdfs/2020-cad-ct6-supercruise-personalization.pdf, last accessed 2021/04/22.
20. Mercedes-Benz Research & Development North America. Introducing DRIVE PILOT: An
Automated Driving System for Highways.
https://www.daimler.com/documents/innovation/other/2019-02-20-vssa-mercedes-benz-
drive-pilot-a.pdf, last accessed 2021/04/22.
21. Sadekar, V.K., Grimm, D.K., Litkouhi, B.B.: An Integrated Auditory Warning Approach
for Driver Assistance and Active Safety Systems. SAE Technical Paper (2007).
22. Sledge, N. H. Jr., Marshek, K. M.: Comparison of Ideal Vehicle Lane-Change Trajectories.
SAE Technical Paper (1997).
23. Shoji, N., Yoshida, M., Nakade, T., Fuchs, R.: Haptic Shared Control of Electric Power
Steering: A Key Enabler for Driver-Automation System Cooperation. ATZ 2019 Confer-
ence on Automated Driving (2019).
24. Free Text-to-Speech and Text-to-MP3 for US English, https://ttsmp3.com/, last accessed
2021/04/22.
Hand Over, Move Over, Take Over - What Automotive
Developers Have to Consider Furthermore for Driver’s
Take-Over
Abstract. Autonomous driving, for the first time from a legal point of view, allows
occupants to permanently pursue non-driving related tasks. While during highly automated
driving (SAE Level 3) the driver must be constantly ready to take over, this is no
longer the case in fully automated mode (SAE Level 4). Nevertheless, there will
be situations in which take-over is required. The take-over situations in Level 4
will be more complex, since more activities will be permitted. Automobile man-
ufacturers must ensure a safe take-over process with the aid of appropriate vehi-
cle interior design. With the help of the HoMoTo-approach presented here, take-
over scenarios can be broken down into substeps and fixed time values can be
assigned to the individual movement sequences using the Methods-Time Measurement
technique. Two examples show that the application of this method is
suitable for optimizing the take-over process, however further adjustments to the
procedure are necessary in order to obtain valid results.
1 Introduction
Levels. Levels 0 to 2 represent driver assistance functions that support the driver in
longitudinal as well as lateral directions. In Level 3, vehicle control is highly automated,
which is why the driver is not required to continuously monitor the vehicle. However,
the driver must be permanently able to take over the driving task within a certain time
window (7 to 15 seconds time budget). [1, 2, 3]
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_10
In Level 4 vehicles, the automated driving function continuously performs the complete
driving task during fully automated driving. In certain use cases, these
vehicles reach the destination independently and can do so without returning the driving
task to the driver. [2]
In Level 5, the vehicle reaches the destination autonomously and independently of
the prevailing boundary conditions. The occupants act exclusively as passengers and
have no possibility to intervene in the driving process. [1]
Current State of Development. The market launch of the first Level 3 driving func-
tions in the form of Congestion Assistants up to 60 km/h has been announced for 2021.
A foreseeable further development is the so-called highway pilot, which steers the ve-
hicle up to 130 km/h and carries out overtaking maneuvers independently. [4]
Level 4 and 5 driving functions are currently being tested and further developed in
prototype vehicles by automobile manufacturers, suppliers, entrepreneurs from outside
the industry, service providers and university research groups [5].
Considerable simulation and testing effort is required to develop and validate these
automated functions. Since, from Level 3 onwards, responsibility also passes from the
driver to the vehicle and thus to its manufacturer during the automated journey from a
legal point of view, functional errors and failures not only pose a considerable risk to
road users, but also a significant financial liability risk for the manufacturer [6].
Limitations of Level 4. Due to the permanent and complete take-over of the driving
task by the system, the driver, by definition, does not have to remain ready to take over
[2]. The occupants of the vehicle can actually devote themselves completely and per-
sistently to other activities during the complete journey [6], leaving the driver's seat
unoccupied. Possible non-driving related activities (NDRA) include, in addition to
consuming media, sleeping or working on a laptop.
For practical applications, this means that when a driving request is entered, the ve-
hicle must make a final assessment of whether it can provide a fully automated trip to the
destination before the actual trip begins. According to the current state of develop-
ment, a possible driver take-over from Level 4 to Level 2 during the journey
should be initiated exclusively by the driver and not be required on the vehicle side.
This complete autonomy of the vehicle during fully automated driving in Level 4
will nevertheless be feasible only to a limited extent over a longer introductory phase
depending on the situation (weather, traffic, etc.) and space. Due to the complexity of
the routes (intersections, oncoming traffic, pedestrians, etc.), the first Level 4 journeys
will probably only take place on highways and their feeder routes (first / last mile). In
the medium term, the high costs incurred by manufacturers for protection and approval
will further restrict the widespread use of Level 4. [8]
It is therefore to be expected that a take-over request will also be made to the vehicle
occupants on Level 4 journeys, comparable to the procedure in Level 3, namely
whenever it seems opportune to leave the routes approved by the vehicle manufacturer
for Level 4. An example is bypassing highway congestion via a Level 2 route.
The spatial restrictions on the area of use will mean that a take-over in Level 4 mode
by a human driver will by no means only be made at the start of the journey or in a
stationary vehicle, but also on the road during the journey.
between NDRA and driving, but focus only on the actual driver take-over [16]. In con-
trast, NDRA is not considered in a study investigating the effect of the driver's passive
role and distraction on safe take-over and subsequent driving behavior [17].
Relevance. Current research and development projects are primarily concerned with
the transfer procedure from Level 3 to Level 2. With regard to Level 4, only the actual
take-over phase has been considered so far. However, the take-over of the driving task
in Level 4 is more important in the medium term than previously assumed.
Since the type of activity during passive driving has a considerable influence on the
required take-over time, each activity must be validated separately. Due to the liability-
relevant transfer of responsibility, it can be assumed that the manufacturer may only
allow the occupants to perform activities that it has also validated.
Standardizable, computer-readable and documentable procedures are required for
the simulation-based validation and approval of activities during passive driving and driver
take-over. The procedures must be able to clearly describe activities, actions, postures
and movements of vehicle occupants in interaction with automated driving functions.
In addition, an evaluation based on time requirements, interruptibility, importance, at-
tention as well as cognitive, psychological or emotional stress is necessary.
Since such a procedure does not yet exist, this article is intended to provide an ap-
proach as to how the individual substeps of the driver take-over from Level 4 to Level
2 can be described and evaluated in a uniform and standardized manner from the most
diverse activities and states.
First, the concept HoMoTo is presented, which divides the take-over scenarios into
individual subtasks in order to describe them systematically. The focus is placed on the
action and movement sequences, the interior design is included and the various factors
influencing the take-over time are taken into account. In order to evaluate the take-over
procedure on a granular level and under temporal aspects, the Methods-Time Measurement
(MTM) method, which originates from the labor sciences, is applied. Finally, the modeling
of the take-over procedure and the application of the standardized process language
of the Methods-Time Measurement procedure are evaluated and discussed.
Hand-Over first implies the termination of the activities of passive driving. In this
course, among other things, the objects used during passive driving are handed over to
the vehicle and the objects required for driving are picked up. Move-Over includes the
physical, psychological and emotional adaptation of
the driver, as well as the geometric and informational adaptation of the vehicle in prep-
aration for taking over the driving task. This phase also implies the cognitive and emo-
tional processing or termination of the NDRA. Depending on the activity performed
and the corresponding modification of the interior, Hand-Over and Move-Over can pro-
ceed in parallel or sequentially (see Fig. 1). Take-over consists of the driver creating
situational awareness of the current traffic situation and the vehicle as part of it, and
finally taking over the driving task.
Fig. 1. System-initiated expanded transition model from automated to manual driving, figure
modified and expanded based on [16].
Last but not least, the environment, i.e., current traffic events, the traffic situation,
and other road users affect the duration of the take-over. Despite prediction, vehicle
control may, if necessary, have to be handed over at shorter notice in a critical situation,
and thus within a smaller time frame. In addition, traffic-independent factors such as weather con-
ditions are influencing factors that must be taken into account when designing the take-
over process.
Basic Motion Elements of MTM. MTM is based on 19 basic motion elements that can
be used to describe all human movements. The basic motion elements are divided into
three groups, see Table 2. [19]
MTM Form. A standardized form (see Table 3) is used to describe workflows using
MTM. Left hand (columns 2-4) and right hand (columns 6-8) are considered separately.
In addition to a brief description of the motion sequence (Description), the form also
records any repetitions of the motion (quantity x frequency, QxF) and the associated
Code. As part of the standardized process language, codes describe a motion with the
aid of abbreviations to which information, such as the type of motion, is clearly as-
signed. For most motions, the code consists of the three parts Type of Basic Motion
(such as G for grasping), Length of Motion (such as 20 for 20 cm) or Angle of Rotation
and the Case of Motion. In addition, there are body movements that can be uniquely
described with fewer abbreviations or only with additional abbreviations. [19]
Based on the code and a metric card, the defined target time in the Time Measure-
ment Unit (TMU) is assigned to each element and summed up, where one TMU corre-
sponds to 0.036 seconds. [19]
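Since each code maps to a fixed target time in TMU and one TMU corresponds to 0.036 seconds, the duration of a motion sequence follows from a simple lookup and summation. The sketch below illustrates this; the codes and TMU values in the table are assumed placeholders for illustration, not values from an actual MTM metric card.

```python
TMU_SECONDS = 0.036  # one Time Measurement Unit in seconds, per the MTM method

# Hypothetical excerpt of a metric card (code -> target time in TMU).
# The numeric values are illustrative assumptions only.
METRIC_CARD = {
    "R20A": 8.7,  # reach, 20 cm, case A
    "G1A": 2.0,   # grasp, case 1A
    "M20B": 9.7,  # move, 20 cm, case B
    "RL1": 2.0,   # release, case 1
}

def sequence_duration_seconds(codes):
    """Sum the target times of a list of MTM codes and convert TMU to seconds."""
    return sum(METRIC_CARD[code] for code in codes) * TMU_SECONDS

# Example: a reach-grasp-move-release cycle.
print(round(sequence_duration_seconds(["R20A", "G1A", "M20B", "RL1"]), 3))
```

In a take-over analysis, one such sum would be computed per HoMoTo substep and the substep durations combined according to their parallel or sequential execution.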
Approach. In order to verify to what extent MTM is applicable for the description of
HoMoTo, several analyses are performed. In the first analysis, basic analogies between
the sub-aspects of MTM and vehicle use are elaborated. For this purpose, typical mo-
tion sequences of the driver during car driving in manual mode are considered and the
motion elements of MTM are added. This analysis forms the basis for the further ap-
plication of the MTM method to HoMoTo.
In the second analysis, it is checked whether the basic motion sequence from the
exercise of an NDRA to the assumption of the driver posture can be described by means
of the MTM process language. This analysis is performed using the upper body posture
as an example. The basic postures of the upper body during the execution of NDRA
identified by Fleischer & Chen: forward, neutral, backward, twisted and leaned are
used as starting postures [20]. Since the entire motion sequence towards the ready-to-
take-over position at the driver's workstation is to be analyzed, the sitting direction is
included as another factor. Accordingly, the five basic postures are combined with the
three seat direction categories: in driving direction (0°), against driving direction
(180°), and in between (5° to 175°). Based on these 15 initial postures, the resulting
motion sequence towards driver take-over is described and translated into the MTM
process language.
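The combination of the five basic upper-body postures with the three seat-direction categories can be enumerated directly; a minimal sketch of this step:

```python
from itertools import product

# Five basic upper-body postures (Fleischer & Chen) and the three seat-direction
# categories from the text; their Cartesian product yields the initial postures.
POSTURES = ["forward", "neutral", "backward", "twisted", "leaned"]
SEAT_DIRECTIONS = ["in driving direction (0 deg)",
                   "against driving direction (180 deg)",
                   "in between (5 deg to 175 deg)"]

initial_postures = list(product(POSTURES, SEAT_DIRECTIONS))
print(len(initial_postures))  # 15 initial postures, as stated above
```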
Thirdly, the application of the MTM method to the take-over procedure is demon-
strated exemplarily for two activities, drawing on the motion sequences elaborated
for the upper body postures in the second analysis. For this
purpose, the activities necessary for the take-over of the driving task are first divided
into individual subtasks using the HoMoTo approach presented. Then, the transfer of
the basic MTM elements to the elaborated subtasks is performed using the MTM form
(see Table 3). As examples, reading a book in the driver's seat in the direction of travel
and watching a movie using VR glasses with the body leaning back against the direction
of travel are considered. The two examples differ in terms of their technical complexity
and feasibility in the interior of autonomous vehicles and can therefore highlight the
possible limitations of the MTM approach.
4 Results
vehicle elements are transferred from the driver to the vehicle and vice versa during the
hand-over procedure.
The Move-Over phase is, in a first step, subdivided into vehicle and driver tasks.
On the vehicle side, the second level is subdivided into the interior elements that are
adapted during the move-over. On the driver side, the subcategory "adjusting posture"
includes both the adjustment of the body posture itself and the posture-induced, driver-
side adjustment of interior elements. "Prepare body" refers to the restoration of a ready-
to-drive condition. This includes putting on and taking off clothes or cleaning hands
after eating. In this subcategory, move-over and hand-over are coupled. The sensory
perception subcategory includes shifting focus from the NDRA to the driving task.
The detailed structuring of HoMoTo confirms that primarily the NDRA itself, the
driver, the vehicle characteristics and environmental influences affect the number, the
temporal sequence as well as the execution speed of the individual steps.
4.3 Examples
The sequence of the two examples considered is shown in Fig. 2. Note that this is
only one of several possible sequences.
Fig. 2. Visualization of the start, intermediate and end positions of the examples.
The structuring of the NDRA "reading a book" according to the HoMoTo categories is
shown in Table 4 and the detailed description by means of MTM basic elements is
shown in Table 6. Since movements can occur in parallel, for completeness the codes
are provided with corresponding symbols ([ and |), which are not further discussed here.
It is assumed that the center console contains a large compartment with a roller blind
for objects that is opened automatically by the vehicle, as well as a tray without a lid
for the safe storage of eyeglasses. Since the driver is already in the direction of travel,
it is not necessary to straighten and turn the seat to take over the driving task.
If the driver receives a TOR, he must first close the book he is holding in both hands
(1). He must then place the book in the center console with his right hand (2-4) and
stow his reading glasses (5-8). Finally, to take over, he must place his hands on the
steering wheel (9), look ahead (10) and simultaneously guide his foot to the pedal (11).
As another example, the NDRA "watching a movie with VR glasses" is structured
using the HoMoTo concept in Table 4 and described in the MTM process language in
Table 7. In this example, the driver is reclined and facing the direction of travel, so
appropriate seat adjustments must be made for the driver take-over. To stow the VR
glasses used, it is assumed that a corresponding holder is located on a side interior wall
and the driver's shoes are placed next to the seat.
First, upon receiving a TOR, the driver must remove and stow the VR glasses (1-5).
Then, he must straighten his seat, put himself in an upright position (6-9), and rotate
his seat 90° (10-12) to put on his shoes (13-26). After that, he must again rotate his seat
to the driver position (27-29) and advance (30-32). Finally, as in the previous example,
he must bring his hands to the steering wheel (33), direct his gaze forward (34), while
placing his foot on the pedal (35). The durations of the motor movements of the seat
(straightening, turning, advancing) are based on assumptions only.
5 Discussion
Table 7. NDRA "Watching a movie with VR glasses" described in the MTM process language.
The method forces developers to structure the individual substeps of the driver take-
over at a granular level. This is essential for optimized interior design. Furthermore, the
MTM method focuses on time as the dominant factor and thus makes it possible to
identify potential for optimization in terms of time. The standardized description of the
transfer procedures enables a systematic and objective comparison of different activi-
ties and interior variants. This comparability makes it possible to identify optimization
potential with regard to ergonomic interior design at an early stage of development and
allows the take-over process to be designed more ergonomically, more efficiently and
with greater customer orientation.
When comparing the MTM analyses of the two examples, it becomes apparent that
the type of the NDRA can make the take-over procedures significantly more complex,
thereby significantly increasing the take-over time. Contributing factors include the
items used and the adaptation of the interior for performing the driving task. In addition,
the take-over time is also significantly increased by technical variables, such as the
motorized rotation of the seat. The times in the examples listed are based on assump-
tions only. To calculate the total duration of the HoMoTo phases, the automotive man-
ufacturer must record these times and take them into account in the take-over analysis.
5.3 Limitations
The MTM method is optimized for the field of production ergonomics and not the ve-
hicle interior. The application areas differ to some extent. For example, individual an-
thropometric characteristics of persons have a greater influence on the execution of
movements due to the limited space in the interior. Additionally, the variety of
preferred motion strategies is of greater importance. Individual actions and
movements of the driver during the take-over procedure cannot be regarded as highly
practiced activities in the sense of the MTM principle and are therefore subject to
greater uncertainty in the time calculation. The factors mentioned above influence the
standardized Time Measurement Units and their interindividual scatter.
Furthermore, not all conceivable activities of the vehicle occupants can be mapped
with the existing description catalog. In particular, the movements performed by the
vehicle cannot be described with MTM.
The influence of the driver's body dimensions, postures and forces must be analyzed
and described in a separate simulation system such as the 3D human model RAMSIS.
For the influence of the mental and psychological state of the driver during the take-
over process, the authors are currently not aware of any suitable description and analy-
sis procedure. Overall, an adaptation of the MTM method to the vehicle-specific bound-
ary conditions is necessary.
6 Outlook
Further work on this topic will require a more in-depth analysis, application and exten-
sion of the MTM process for use in vehicles. This initially involves a comprehensive
analysis of today's conventional activities in the vehicle. In addition, the development
of a specialized MTM application module appears to make sense, with which all activ-
ities and processes in the vehicle interior of today's and future vehicles can be analyzed
comprehensively and precisely. The detailed validation of the method and measurement
of vehicle-specific Time Measurement Units as well as the movement times of the ve-
hicle interior parts represents another indispensable work package.
A special challenge for further research work will be the coupling with anthropo-
metric as well as cognitive human models. This is absolutely necessary for a consistent
computer simulation of all driver and passenger scenarios in the vehicle interior.
References
1. VDA, https://www.vda.de/dam/vda/publications/2015/automatisierung.pdf, last accessed
2021/04/12.
2. SAE International, https://www.sae.org/standards/content/j3016_201401/preview/, last ac-
cessed 2021/04/12.
3. Zhang, B., de Winter, J., Varotto, S., Happee, R., Martens, M.: Determinants of Take-Over
Time from Automated Driving: A Meta-Analysis of 129 Studies. Transportation Research
Part F: Traffic Psychology and Behaviour 64, 285-307 (2019).
4. Daimler AG, https://www.daimler.com/innovation/case/autonomous/interview-hafner.html,
last accessed 2021/04/19.
5. ADAC, https://www.adac.de/rund-ums-fahrzeug/ausstattung-technik-zubehoer/autonomes-
fahren/technik-vernetzung/aktuelle-technik/, last accessed 2021/04/12.
6. Gasser, T. M., Seeck, A., Smith, B. W.: Rahmenbedingungen für die Fahrerassistenzent-
wicklung. In: Winner, H., Hakuli, S., Lotz, F., Singer, C. (eds.) Handbuch Fahrerassistenz-
systeme – Grundlagen, Komponenten und Systeme für aktive Sicherheit und Komfort, 3rd
edn., pp. 27–54. Springer Vieweg, Wiesbaden (2015).
Abstract.
Using external map information for automated driving is beneficial, as it effectively
extends the sensor range and enables global navigation to an a priori specified
goal. However, common systems use high definition maps, which are expensive
to construct and hard to maintain. Therefore, the paper at hand proposes a sensor-
independent approach for navigation based on uncertain map data. This work
first builds an environment model and plans a global route based on publicly
available OpenStreetMap-Data. Afterward, it plans a trajectory considering the
uncertainty in the map. Experiments in simulation and on real-world data show
the efficiency of the approach.
1 Introduction
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_11
The remainder of the paper is organized as follows. Section 2 describes related work.
Section 3 introduces the developed navigation approach. Section 4 presents simulations
and real-world experiments with data from our institute’s test vehicle (Figure 1 (a)).
Section 5 concludes the paper and gives a further outlook.
Fig. 1. (a) Test vehicle of the Institute of Control Theory and Systems Engineering. (b) Planning
result based on real-world data. Illustrated are the occupancy grid map in grey, the Open-
StreetMap graph in blue, the planned global route in red, and the determined trajectory in green.
2 Related Work
Various approaches so far have focused on navigation with both high-definition maps
and inaccurate maps in unstructured environments. The authors of [2] use 3D-lidar scans
to localize a robot in an uncertain map. While achieving impressive robustness for global
localization, they rely on the classification of the scans into the classes street and no street.
This approach becomes difficult when using other sensor technologies like radar sen-
sors. Another approach [3] corrects the inaccurate OSM path using a Markov-Chain
Monte-Carlo technique to match the roads from the map with the sensor data. For this,
semantic terrain information generated from the lidar scans is needed. The authors
of [4] construct a path with a Dijkstra-based global planning algorithm using a known
map and follow this path using nonlinear model predictive control. Another publication
[5] proposes a trajectory planning approach for navigation with uncertain GPS coordi-
nates in the context of global outdoor navigation. For this, they use a semantic occu-
pancy grid map as an environment model. Salzmann et al. [8] apply a deep learning-
based method for path planning utilizing camera images and high-level route infor-
mation. The authors in [9] propose an autonomous navigation approach utilizing only
OSM information. For this, they perform a topometric registration utilizing seg-
mented road information from lidar measurements. For an overview of different motion
planning techniques in the context of automated valet parking the interested reader is
referred to [6].
The contribution at hand builds on the work of Fassbender et al. [5]. In contrast, this
work uses a state-of-the-art classical occupancy grid mapping approach [7] and OSM
information. The algorithm constructs the local grid map with 3D-lidar data from a
realistic simulation and from the introduced test vehicle, which is illustrated in Figure
1 (a). However, in contrast to other publications, the proposed approach is sensor-inde-
pendent by using only occupancy information within the environmental model and thus
can also be used with other sensor technologies like radar or ultrasonic where less se-
mantic information can be extracted. Moreover, this work proposes a strategy for
switching between the goals resulting from a route planning module.
This section describes the developed approach for navigation based on uncertain map
information. First, it introduces the problem formulation and the system architecture.
Afterward, it presents the different modules of the navigation approach.
3.2 Perception
Based on sensor measurements, the perception module constructs different environ-
ment representations, which are used by the subsequent planning approach. This work
first builds an occupancy grid map [10]. For this, the environment is discretized into
cells of fixed size, which include occupancy probabilities. The algorithm filters lidar
measurements, which belong to ground reflections, based on the measured height in-
formation. Then this work applies a 2-D lidar SLAM approach [7], whose result is the
occupancy grid map together with an estimation of the relative pose of the ego vehicle
with respect to the map, as visualized in Figure 3 (a). Black color denotes cells with
a high occupancy probability. Light grey cells are likely not occupied. Green-grey color
represents cells whose occupancy state is unknown. Note that this environment repre-
sentation can also be constructed with other sensor technologies like radar sensors [11]
or cameras [12]. Afterward, a binarized grid is generated by marking all cells whose
occupancy probabilities fall below a threshold as free space. Moreover, this work also
considers all unknown cells, resulting from missing lidar reflections, as not occupied to
avoid false-positive obstacles in front of the vehicle. This is common practice in occu-
pancy grid mapping [13]. Note that this may result in an overly optimistic behavior of
the automated vehicle, especially in scenarios with occlusion. However, during motion
planning this work considers the occupancy uncertainty of the cells as described in
Section 4. Figure 3 (b) visualizes the environment model after binarization. Lastly, the
perception module applies the Euclidean distance transformation [14] to the binarized
grid map. Afterward, the product of the grid resolution and the distance values is
stored. The result is a grid map, illustrated in Figure 3 (c), whose cells contain the
distance (in meters) to their closest obstacle.
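The binarization and distance-map steps described above can be sketched as follows. The threshold and grid resolution are assumed values, and the brute-force nearest-obstacle search stands in for an efficient Euclidean distance transform such as the one referenced in [14]:

```python
import math

FREE_THRESHOLD = 0.5   # assumed occupancy-probability threshold
RESOLUTION = 0.1       # assumed grid resolution in meters per cell

# Toy occupancy grid: occupancy probabilities, -1.0 marks unknown cells.
grid = [
    [0.9, 0.1, 0.1],
    [0.1, -1.0, 0.1],
    [0.1, 0.1, 0.9],
]

# Binarization: cells below the threshold count as free; unknown cells (-1.0)
# are thereby also treated as free, as is common practice in grid mapping.
free = [[p < FREE_THRESHOLD for p in row] for row in grid]
occupied = [(r, c) for r, row in enumerate(free)
            for c, is_free in enumerate(row) if not is_free]

def distance_to_obstacle(r, c):
    """Metric distance from cell (r, c) to its closest occupied cell."""
    return RESOLUTION * min(math.hypot(r - ro, c - co) for ro, co in occupied)

print(round(distance_to_obstacle(1, 1), 3))  # -> 0.141
```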
Optimal Trajectory Planning. The following notation is partially adopted from [5]
and [16]. The goal of the trajectory planning module is to find a collision-free, kino-
dynamically feasible trajectory 𝜋(𝑡) from the vehicle’s start state 𝐬s ∈ 𝑆 at time 𝑡 to a
goal region 𝕊g ⊆ 𝑆. The state space 𝑆 is the union of an 𝑛-dimensional real space ℝ𝑛
and non-Euclidean rotation groups 𝑆𝑂(2). The set of all allowed configurations at time
𝑡 ∈ [0, 𝑇] is denoted as 𝕊free ⊆ 𝑆 and guarantees collision avoidance. The predicate
𝐷(𝜋̇, 𝜋̈, 𝜋⃛, …) represents differential constraints and ensures dynamic constraints on the
trajectory and kino-dynamic feasibility. Let 𝐽(𝜋) be a cost functional, whereas Π(𝑆, 𝑇)
denotes the set of all continuous functions [0, 𝑇] → 𝑆. Then the optimal trajectory 𝜋* is
given by

    𝜋* = argmin_{𝜋 ∈ Π(𝑆,𝑇)} 𝐽(𝜋)                                  (1)

    subject to 𝜋(0) = 𝐬s,
               𝜋(𝑇) ∈ 𝕊g,
               𝜋(𝑡) ∈ 𝕊free for all 𝑡 ∈ [0, 𝑇],
               𝐷(𝜋̇, 𝜋̈, 𝜋⃛, …) ≥ 0.
This work describes the state by 𝐬 = [𝑥, 𝑦, 𝜃, 𝑣, 𝑐]ᵀ, where 𝑥 ∈ ℝ and 𝑦 ∈ ℝ denote
the 2-D position in a fixed coordinate system. 𝜃 ∈ 𝑆𝑂(2) is the heading angle be-
tween the longitudinal vehicle axis and the 𝑥-axis of the fixed coordinate system. 𝑣 ∈
ℝ denotes the vehicle’s velocity and 𝑐 ∈ ℝ its curvature.
Sampling-based Solution. This work follows a sampling-based optimization scheme
to find a globally optimal solution for the planned trajectory. In contrast to approaches
based on numerical optimization, sampling-based methods do not get stuck in local
minima and can deal with non-differentiable cost functions. Following the ideas of [5],
this work constructs a search tree by sampling reachable states.
Let 𝒩 be the set of all nodes. A node 𝑁 ∈ 𝒩 describes a clothoid arc defined by a
start pose, an initial curvature, a rate of change of curvature and a terminal velocity.
Based on this information the algorithm can estimate the pose and curvature at the end
of the clothoid arc given a fixed arc length.
Further, let 𝐽R (𝑁) be the running cost of the path from the root node to 𝑁. 𝐽H (𝑁)
denotes a heuristic cost from 𝑁 to the final node. The cost of a node 𝑁 is given by
𝐽(𝑁) = 𝐽R (𝑁) + 𝐽H (𝑁) (2)
This work stores the graph’s leaf nodes in a priority queue, called 𝒩OPEN ⊂ 𝒩. The
priority queue only contains the root node (i.e. the start state) at the beginning of the
planning stage. The elements are sorted in ascending order by their cost values 𝐽(𝑁)
and the collision-free node with the lowest cost is expanded first. The planner stops expanding and returns a solution when a runtime limit is reached.
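The best-first expansion loop described above can be sketched as follows. This is a hypothetical Python illustration: `cost`, `expand` and `is_goal` are caller-supplied functions, and a node-count budget stands in for the runtime limit; the paper's planner operates on clothoid-arc nodes in C++.

```python
import heapq

def best_first_expand(root, cost, expand, is_goal, max_expansions=1000):
    """Best-first search over a sampled tree: leaf nodes live in a priority
    queue N_OPEN sorted in ascending order by J(N) = J_R(N) + J_H(N); the
    cheapest node is expanded first. expand() is assumed to yield only
    collision-free children."""
    counter = 0  # tie-breaker so heapq never has to compare nodes directly
    open_queue = [(cost(root), counter, root)]
    best_goal = None
    for _ in range(max_expansions):
        if not open_queue:
            break
        _, _, node = heapq.heappop(open_queue)
        if is_goal(node):
            if best_goal is None:   # first goal popped is the cheapest so far
                best_goal = node
            continue
        for child in expand(node):
            counter += 1
            heapq.heappush(open_queue, (cost(child), counter, child))
    return best_goal
```

The unique counter in each queue entry is a standard `heapq` idiom: it breaks cost ties deterministically and avoids comparing node objects.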
Fig. 4. Two heuristics guide the expansion of the search tree. Green denotes the Dubins path to
the orange goal pose. Purple visualizes the collision-free path.
Heuristics. Similar to [5], the paper at hand uses two heuristics to estimate the cost 𝐽H(𝑁). The first one, 𝐽H,Dubins(𝑁), describes the length of the Dubins path connecting the start pose 𝐩s = [𝑥s, 𝑦s, 𝜃s]ᵀ and the goal pose 𝐩g = [𝑥g, 𝑦g, 𝜃g]ᵀ. Further, this work estimates the shortest path between the start pose 𝐩s and the goal pose 𝐩g, whereas the vehicle kinematics are approximated by a single integrator and its shape by one circle with radius 𝑟 ∈ ℝ₀⁺. Then, the index 𝑏 ∈ ℕ of the cell on this path that is closest to the sampled pose 𝐩𝑁 = [𝑥𝑁, 𝑦𝑁, 𝜃𝑁]ᵀ is computed. The second heuristic 𝐽H,obs(𝑁) is then given by the sum of the distance from 𝐩𝑁 to the cell with index 𝑏 and the length of the path from this cell to 𝐩g. Figure 4 visualizes the Dubins path and the shortest path. The heuristic cost is then given by

    𝐽H(𝑁) = max(𝐽H,Dubins(𝑁), 𝐽H,obs(𝑁)).    (3)
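Taking the maximum of two lower-bounding heuristics, as in Eq. (3), yields a combined estimate that is still a lower bound. A minimal Python sketch, with one deliberate simplification: the Euclidean distance stands in for the Dubins length (the true Dubins computation also accounts for heading and the minimum turning radius), and `obstacle_detour` is assumed to be the precomputed length of the obstacle-aware shortest path.

```python
import math

def heuristic_cost(p_node, p_goal, obstacle_detour):
    """J_H(N) = max(J_H_Dubins(N), J_H_obs(N)) in the spirit of Eq. (3).
    euclid is a crude stand-in lower bound for the Dubins path length;
    obstacle_detour is the (assumed precomputed) collision-free path
    length. The max of two lower bounds is the tighter lower bound."""
    euclid = math.hypot(p_goal[0] - p_node[0], p_goal[1] - p_node[1])
    return max(euclid, obstacle_detour)
```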
Clothoid Construction. The planner expands its leaf nodes by constructing clothoid arcs with fixed arc length 𝐿 ∈ ℝ₀⁺ while considering the non-holonomic constraints as in [15]. This work follows the method described in [5]. First, a set 𝒱 of potential terminal velocities for child nodes of 𝑁 is estimated. This set contains both forward and backward velocities. Assuming a constant acceleration 𝑎 ∈ ℝ with |𝑎| < 𝑎max ∈ ℝ₀⁺, a set 𝒦 of curvature changes is computed using the maximum steering rate 𝛿̇max ∈ ℝ₀⁺, the maximum steering angle 𝛿max ∈ ℝ₀⁺, the maximum curvature 𝑐max ∈ ℝ₀⁺ and the sampled velocity 𝑣s ∈ 𝒱. For a more detailed description, the interested reader is referred to [5]. The variable 𝑘𝑁 ∈ 𝒦 describes a change of curvature between the current and subsequent leaf node 𝑁l ∈ 𝒩. Then, following [17], this work estimates the end pose of the clothoid arc by the parametric expressions

    𝑥(𝐿) = 𝑥𝑁 + ∫₀ᴸ cos(𝜃(𝜉)) d𝜉,    (4)
    𝑦(𝐿) = 𝑦𝑁 + ∫₀ᴸ sin(𝜃(𝜉)) d𝜉,    (5)
    𝜃(𝐿) = 𝜃𝑁 + 𝑐𝑁 𝐿 + ½ 𝑘𝑁 𝐿².    (6)
Here, 𝑐𝑁 is the curvature of node 𝑁. 𝑥(𝐿), 𝑦(𝐿) and 𝜃(𝐿) describe the pose at the end of the clothoid. This pose is also the new start pose 𝐩𝑁l = [𝑥𝑁l, 𝑦𝑁l, 𝜃𝑁l]ᵀ of the subsequent leaf node. Solving the integrals in equations (4) and (5) in closed form is not possible, whereas numerical approaches come with a high computational burden. Therefore, the algorithm approximates the sine and cosine terms by Taylor series expansion as described in [18]:

    sin(𝜆) = Σₙ₌₀^∞ (−1)ⁿ 𝜆²ⁿ⁺¹/(2𝑛+1)! = 𝜆/1! − 𝜆³/3! + 𝜆⁵/5! − 𝜆⁷/7! ± …,    (7)
    cos(𝜆) = Σₙ₌₀^∞ (−1)ⁿ 𝜆²ⁿ/(2𝑛)! = 𝜆⁰/0! − 𝜆²/2! + 𝜆⁴/4! − 𝜆⁶/6! ± ….    (8)
Using this approximation, the integral only needs to be solved once before planning
and the solution is generated at runtime by inserting the parameters.
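The effect of the series substitution can be illustrated numerically. A Python sketch under my own assumptions (six series terms, a midpoint rule with 200 sub-intervals): the heading grows according to Eq. (6), and the position integrals of Eqs. (4) and (5) are evaluated with the truncated series of Eqs. (7) and (8) substituted for sine and cosine. The paper instead integrates the series once in closed form offline and inserts parameters at runtime.

```python
import math

def taylor_sin(x, terms=6):
    # sin(x) ~ sum_{n < terms} (-1)^n x^(2n+1) / (2n+1)!  (Eq. 7, truncated)
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def taylor_cos(x, terms=6):
    # cos(x) ~ sum_{n < terms} (-1)^n x^(2n) / (2n)!  (Eq. 8, truncated)
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

def clothoid_end_pose(x0, y0, th0, c0, k, L, steps=200):
    """End pose of a clothoid arc per Eqs. (4)-(6): the heading along the
    arc is theta(s) = th0 + c0*s + 0.5*k*s^2, and x, y integrate its
    cosine/sine, here via a midpoint rule with the truncated Taylor
    series standing in for sin/cos."""
    ds = L / steps
    x, y = x0, y0
    for i in range(steps):
        s = (i + 0.5) * ds                    # midpoint of the sub-interval
        th = th0 + c0 * s + 0.5 * k * s * s   # Eq. (6) evaluated at s
        x += taylor_cos(th) * ds
        y += taylor_sin(th) * ds
    # Final heading follows Eq. (6) analytically.
    return x, y, th0 + c0 * L + 0.5 * k * L * L
```

For a circular arc (constant curvature, 𝑘 = 0) the result matches the analytic circle to well below a millimeter, which indicates how accurate the truncated series is for moderate heading changes.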
Collision Avoidance. After constructing new nodes, the algorithm checks whether these are collision-free. Only if this condition is fulfilled are the newly sampled nodes added to the search tree. Therefore, the approach subsamples the arc with a sample distance of 𝐿s ∈ ℝ₀⁺ and checks whether every subsampled pose is collision-free. For this, the vehicle shape is approximated by three circles of equal radius 𝑟 ∈ ℝ₀⁺, as illustrated in Figure 5. The center of the first circle lies in the center of gravity of the vehicle. The centers of the other two circles lie in the middle of the front and rear axle. The radius is defined by 𝑟 = ½𝑤 + 𝑑safe, whereby 𝑤 ∈ ℝ₀⁺ is the width of the vehicle and 𝑑safe ∈ ℝ₀⁺ is a safety distance. The algorithm then determines the grid cells which contain the centers of the circles and looks up the corresponding values in the distance map. If such a value is smaller than the radius 𝑟, the pose is subject to a collision. Hence, the algorithm uses at most three queries per pose, resulting in a fast collision check. If all poses per clothoid arc are collision-free, the newly sampled leaf node 𝑁l is added to the queue 𝒩OPEN.
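The three-circle check reduces to three distance-map lookups per pose. A hypothetical Python sketch (the map layout, indexing convention and axle offsets are my assumptions): a pose is rejected as soon as one circle center sits closer to an obstacle than the circle radius 𝑟 = ½𝑤 + 𝑑safe.

```python
import math

def pose_collision_free(pose, dist_map, resolution, axle_offset, radius):
    """Checks one subsampled pose against the distance map: the vehicle is
    covered by three circles (center of gravity, front axle, rear axle).
    dist_map is assumed to hold distances in meters, indexed [row][col]
    with the map origin at (0, 0). At most three lookups per pose."""
    x, y, theta = pose
    dx, dy = axle_offset * math.cos(theta), axle_offset * math.sin(theta)
    centers = [(x, y), (x + dx, y + dy), (x - dx, y - dy)]
    for cx, cy in centers:
        row, col = int(cy / resolution), int(cx / resolution)
        if not (0 <= row < len(dist_map) and 0 <= col < len(dist_map[0])):
            return False              # outside the map: treat as unsafe
        if dist_map[row][col] < radius:
            return False              # circle overlaps an obstacle
    return True
```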
Fig. 5. (a) Vehicle shape approximation by three circles for collision checking (b) Circles over-
laid on the distance map.
Goal Region. If the newly generated leaf node 𝑁l satisfies the terminal condition, it is not inserted into the queue 𝒩OPEN, but into another queue called 𝒩CLOSED ⊂ 𝒩. This queue is also sorted in ascending order by its cost values. The target manifold consists of two parts.
Let 𝑔term be the orthogonal line running through the goal pose 𝐩g. The minimum Euclidean distance to this orthogonal line is the first part of the target manifold. The second part is the angular difference ∆𝜃𝑁l,g between the target pose and the final pose of the node. The node lies within the target manifold if the pose of 𝑁l is less than 𝑑𝑁l,g,max ∈ ℝ₀⁺ away from the target line and ∆𝜃𝑁l,g < ∆𝜃𝑁l,g,max ∈ ℝ₀⁺.
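The two-part terminal condition can be written compactly. A minimal Python sketch, assuming the target line 𝑔term is orthogonal to the goal heading: the point-to-line distance then equals the projection of the offset vector onto the goal heading direction, and the angular difference is wrapped before comparison.

```python
import math

def in_goal_region(pose, goal_pose, d_max, dtheta_max):
    """Terminal condition: the node's final pose must lie within d_max of
    the line through the goal pose orthogonal to the goal heading, and its
    heading must deviate from the goal heading by less than dtheta_max."""
    x, y, theta = pose
    xg, yg, thg = goal_pose
    # Distance of (x, y) to the orthogonal line = |projection of the offset
    # vector onto the goal heading direction|.
    d_line = abs((x - xg) * math.cos(thg) + (y - yg) * math.sin(thg))
    # Wrap the heading difference into (-pi, pi] before taking its magnitude.
    dtheta = abs(math.atan2(math.sin(theta - thg), math.cos(theta - thg)))
    return d_line <= d_max and dtheta < dtheta_max
```

Note that a pose far to the side of the goal but on the target line still qualifies; the cost terms, not the terminal condition, penalize such detours.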
Here 𝑤0 , 𝑤1 , … , 𝑤5 denote manually tuned weights. 𝐿s,g (𝑁l ) is the total length of the
path from the root node to the respective node 𝑁l . The maximum curvature of the path
is denoted by 𝑐max (𝑁l ) and 𝑑obs (𝑁l ) is the smallest distance of the path to an obstacle.
𝑜(𝑁l ) is the number of occupancy grid cells within a square with edge length 𝑑edge
around the node whose occupancies are unknown. 𝑘𝑁l and 𝑣𝑁l are the curvature change
and the velocity of the node 𝑁l .
Further, this work applies the pruning approach of [5], which filters out nodes whose final states are almost identical. Hence, it prevents the search tree from growing too large and limits the required computing time. After reaching its time limit, the planner returns the least-cost
trajectory corresponding to the node at the top of the queue 𝒩CLOSED . Note that [5]
additionally computes a velocity profile such that the final state of the trajectory has a
velocity of zero.
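One common way to realize such pruning is to discretize the final states into buckets and keep only the cheapest node per bucket. A Python sketch in the spirit of the pruning in [5]; the bucket resolutions and the tuple-based node representation are illustrative choices of mine.

```python
def prune(nodes, xy_res=0.25, theta_res=0.1):
    """Nodes whose final states fall into the same discretized
    (x, y, theta) bucket are treated as near-identical; only the cheapest
    one per bucket survives, bounding the size of the search tree.
    Each node is (x, y, theta, cost)."""
    best = {}
    for node in nodes:
        x, y, theta, cost = node
        key = (round(x / xy_res), round(y / theta_res if False else y / xy_res),
               round(theta / theta_res))
        if key not in best or cost < best[key][3]:
            best[key] = node
    return list(best.values())
```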
4 Experimental Results
This section evaluates the developed approach in a realistic simulation and on real-
world data. The proposed approach is implemented with C++ in ROS using an AMD
Ryzen 5 3600 processor with 3.6 GHz base clock speed. The parameter configuration is given in Table 1.
The goal of our experiments is to answer the following questions: (Q1) Is navigation
with the approach possible using different levels of map and localization accuracies?
(Q2) How efficient is the planning strategy compared to other methods?
Fig. 6. Example scenario of a goal region switching. The vehicle approaches the goal region 𝕊g and fulfills the condition for the distance 𝑑s,g to the target line and the heading difference ∆𝜃s,r. Hence it switches its goal pose from 𝐩1g to 𝐩2g. Blue color denotes the OSM graph. Red and green colors illustrate the planned route and trajectory, respectively.
Fig. 7. The urban simulation scenario. The green rectangle denotes the ego vehicle. The red lines
denote the planned route.
Figure 9 shows the open-loop planning results in green. The vehicle should follow the red route. In both time steps, the SSSP plans a much shorter path, and it is not able to plan a path around the corner in the first time step. This is due to the fixed grid size in the case of sampling in the state space. Note that this behavior would not occur if the discretization were refined; however, this would also increase the computation time. In contrast, the proposed trajectory planner uses an iterative sampling scheme, which is informed by the two described heuristics. Therefore, this planner is more sample-efficient, as Table 2 also demonstrates. This experiment runs both planners in parallel and measures the average number of collision-free nodes 𝑉c ∈ ℕ over the whole scenario, which consists of 494 planning iterations. The proposed planner based on [5] generates more collision-free nodes than the SSSP and explores the free space more efficiently.
Fig. 8. Comparison of the trajectory planning result using different map accuracy levels (a) Plan-
ning result using accurate map data. Here the route to follow lies approximately in the middle of
the actual road. (b) Planning performance using inaccurate route information. The route to follow
is not collision-free.
Fig. 9. Comparison of the planning performance in two different time steps. (a) Proposed trajec-
tory planner based on [5]. (b) SSSP.
Results. The data was collected from a manually driven run in an automated valet parking scenario. Figure 10 (a) shows a satellite image of the scene and Figure 10 (b) visualizes the OSM graph in blue and the driven route in red. Further, it illustrates the constructed global occupancy grid map in grey. Note that some parts of the OSM graph run very close to obstacles due to map and localization errors; a strict path-following rule would lead to a collision here. Figure 10 (c) shows the successful open-loop trajectory planning results over multiple time steps. This demonstrates the applicability of the approach in real-world scenarios.
The approach was first evaluated in a realistic simulation environment. Further, this work compared it to another sampling-based path planning approach.
Fig. 10. Evaluation of the navigation approach on real-world data. (a) Satellite image of the
automated valet-parking scenario (image source: Google Maps). (b) OSM graph (blue) and
driven route (red). (c) Trajectory planning results in three time steps. Additionally illustrated are
the goal pose (orange arrow), target line (orange line), and constructed occupancy grid (grey).
This comparison demonstrated the sample efficiency of the proposed trajectory planner.
A second experiment showed the planning results on real-world data from the test ve-
hicle. This work also described two localization strategies which can be used in com-
bination with the proposed planning approach.
Future work will couple the method with a model predictive control approach based
on numerical optimization [4] and test the method in a closed-loop experiment. Further,
we achieved first results using the proposed environment model in conjunction with
radar sensors. Lastly, we plan to apply topometric registration techniques using a clas-
sical occupancy grid map to make our approach more robust to high localization and
map errors.
Acknowledgment
We thank Christoph Wunsch for his comprehensive contribution to the research pre-
sented in the article at hand.
References
1. OpenStreetMap contributors: OpenStreetMap, https://www.openstreetmap.de/, last accessed 2021/04/12.
2. Ruchti, P., Steder, B., Ruhnke, M., Burgard, W.: Localization on OpenStreetMap data using a 3D laser scanner. In: 2015 IEEE International Conference on Robotics and Automation, Seattle, WA, USA, pp. 5260–5265 (2015).
3. Suger, B., Burgard, W.: Global outer-urban navigation with OpenStreetMap. In: 2017 IEEE International Conference on Robotics and Automation, Singapore, pp. 1417–1422 (2017).
4. Rösmann, C., Makarow, A., Bertram, T.: Online Motion Planning based on Nonlinear Model Predictive Control with Non-Euclidean Rotation Groups. In: European Control Conference (accepted) (2021).
5. Fassbender, D., Mueller, A., Wuensche, H.J.: Trajectory Planning for Car-Like Robots in Unknown, Unstructured Environments. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Chicago, IL, USA, pp. 3630–3635 (2014).
6. Banzhaf, H., Nienhüser, D., Knoop, S., Zöllner, J.M.: The future of parking: A survey on automated valet parking with an outlook on high density parking. In: 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, pp. 1827–1834 (2017).
7. Hess, W., Kohler, D., Rapp, H., Andor, D.: Real-time loop closure in 2D LIDAR SLAM. In: 2016 IEEE International Conference on Robotics and Automation, Stockholm, Sweden, pp. 1271–1278 (2016).
8. Salzmann, T., Thomas, J., Kühbeck, T., Sung, J., Wagner, S., Knoll, A.: Online Path Generation from Sensor Data for Highly Automated Driving Functions. In: IEEE Intelligent Transportation Systems Conference, Auckland, New Zealand, pp. 1807–1812 (2019).
9. Ort, T., et al.: MapLite: Autonomous Intersection Navigation Without a Detailed Prior Map. IEEE Robotics and Automation Letters 5(2), 556–563 (2020).
10. Elfes, A.: Using occupancy grids for mobile robot perception and navigation. Computer 22(6), 46–57 (1989).
11. Diehl, C., Feicho, E., Schwambach, A., Dammeier, T., Mares, E., Bertram, T.: Radar-based Dynamic Occupancy Grid Mapping and Object Detection. In: IEEE 23rd International Conference on Intelligent Transportation Systems, Rhodes, Greece, pp. 1–6 (2020).
Alexander Frings
1 Motivation
In 2019, the Federal Statistical Office recorded 2.6 million accidents, 384,000 injuries
and 3046 fatalities on public roads in Germany [1]. In the European Union as a whole,
the number of road fatalities is around 22,800 [2]. Even though the numbers are slightly declining, driver assistance systems and autonomous driving functions should nevertheless contribute to reducing them further.
New virtual development methods are needed to fully validate these complex systems.
Because vehicle safety must be ensured in all imaginable situations, it is crucial to test
a very large number of scenarios. Since there will always be corner cases that are not known and unexpected, it will never be possible to test all relevant scenarios, which means that complete test coverage cannot be achieved.
(© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021. T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings, https://doi.org/10.1007/978-3-658-34754-3_12)
The so-called scenario-based testing using virtual test driving, combined with data-
driven development methods for the identification of scenarios can contribute to reduc-
ing this limitation as much as possible.
In real testing, there are many test parameters that are difficult or impossible to handle.
Virtual test driving offers the possibility of being able to handle some of these aspects,
such as traffic situations that are difficult to reproduce as well as rarely occurring criti-
cal situations, by creating corresponding scenarios. The scenarios are a fundamental
component of virtual test driving. A scenario contains the time-based representation of
dynamic traffic situations in a static environment model, which consists of the road
layout, topology, road markings, etc., as well as infrastructure like stationary traffic
signs, buildings or temporary changes such as construction sites. Scenarios also include
the definition of all dynamic road users and objects, for example other vehicles and
pedestrians.
An extremely large number of driven or simulated kilometers is necessary to validate
autonomous driving functions. However, it is imperative to have the right scenarios
available. The more situations are analyzed and transformed into virtual test runs, the
more critical scenarios can be tested. A high number of scenarios helps to challenge
developed driving functions.
There are several options to recreate the real environment in the virtual world. To re-
duce ambiguity in virtual validation, it is necessary to take advantage of all available
test scenario sources. Some examples are test cases derived from specified system requirements, accident databases, standardized and normed tests, or records from field tests.
This paper will mainly focus on the application of accident databases for simulation.
These are an essential source for scenarios because they classify real types of accidents
and their severity. They show situations in which a driving maneuver has not been car-
ried out as intended in the past, resulting in a collision - either with another road user
or surrounding objects. It is neither known nor relevant which advanced driver assis-
tance systems (ADAS) were already installed in the vehicle and whether and how they
contributed to the accident.
Accident databases usually contain speed profiles and trajectories of all involved
road users for about five seconds prior to the accident. The dimensions of those in-
volved in the accident are provided, as well as the object class to which they belong. In
addition, the road in the environment is roughly described by indicating lane markings,
signs, walls and the lane boundaries. In some cases, a longitude and latitude are also
given to determine the position, which can later be used to extract further information.
Likewise, the estimated vehicle state can be supplemented by other signals such as ac-
celeration with the help of simulation environments.
There are different user groups of accident databases, for which it can be very ben-
eficial to connect these databases to a simulation environment like the open integration
and test platform CarMaker (see Fig. 1). These groups and their different use cases are
described in more detail below: First, there are accident researchers looking at the con-
sequences of an accident. For them it is crucial that the recorded event of the accident
is rendered faithfully. They commonly work in the field of passive safety, but also in
the field of active safety, with the task of reducing the influence of the impact point on
the severity of the accident. For their task, the recorded accident event must be exactly
reproduced. This is complicated because assumptions have already been made when
generating the trajectories and velocity profiles for the databases. On the one hand, the
velocity plot has to be accurate. On the other hand, both the point of collision and the
angle at which the vehicles collide have to match precisely. To this end, the position of
the ego vehicle is prescribed externally, which relocates the vehicle. In this case, the simulation is no longer a closed-loop vehicle dynamics simulation.
Fig. 1. Different user groups and use cases with crash data
2.2 Requirements for the use of accident databases for real-time full vehicle
simulation
Today, systems such as automatic emergency braking (AEB), adaptive cruise control
(ACC), forward collision warning (FCW), lane keeping assist (LKA), evasive steering
assist and highway pilot are often tested and validated with accident databases. Here,
the integration of driving functions plays an important role to determine what would
(not) have happened if a new function had been implemented. In this use case, instead
of driving the actual trajectory from the accident database, the user might need to drive
an intended trajectory. The intended trajectory describes the expected trajectory if the
real driver or an integrated ADAS had not reacted, i.e. without braking by the driver or
an AEB and without human avoidance. For example, in many data sets, a "wrenching"
of the steering wheel can be observed shortly before a collision. Such an intervention
is usually recognizable in the accident data, so that the expected uncontrolled behavior
can be derived.
In other words, the vehicle state from the database is used until a determined state is
reached. Then the input is replaced by an estimated intended course. This allows for
observation of how the system – namely vehicle, sensors and functions – reacts to the
unaltered situation.
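The switch from replayed data to an intended course can be sketched as follows. This is a hypothetical Python illustration, not IPG's tooling: `switch_idx` marks the assumed trigger state (e.g. the moment the real driver wrenched the steering wheel), and the intended course is a simple constant-velocity extrapolation; real reconstruction would fit a smoother model.

```python
def build_intended_run(recorded, switch_idx, dt, horizon_steps):
    """Replays recorded vehicle states up to switch_idx, then replaces the
    input with an estimated intended course: here a constant-velocity,
    constant-heading extrapolation, i.e. no braking and no human
    avoidance. Each state is (x, y, vx, vy)."""
    run = list(recorded[:switch_idx + 1])
    x, y, vx, vy = run[-1]
    for _ in range(horizon_steps):
        x += vx * dt   # continue undisturbed along the last velocity vector
        y += vy * dt
        run.append((x, y, vx, vy))
    return run
```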
In both cases (trajectories from database and intended course), it may also be necessary
to extend the data of the accident database to obtain a more comprehensive, longer
scenario: As already mentioned, it can generally be assumed that the five seconds prior
to the collision are stored in the database. However, this data is not sufficient to enable
correct testing with sensor models and driving functions. Therefore, the scenario is ex-
tended towards the front, i.e. before the time of the first data point in the database (see
Fig. 2). This way, a start from standstill (relevant for HIL applications) but above all a
start of the scenario with the relevant objects outside the sensor range of the VuT is
made possible. Furthermore, the time after collision can also be extended in the sce-
nario. This way, for example, the vehicle behavior after the avoided collision can be
examined to see if the vehicle avoided the pedestrian and drove into oncoming traffic
or left the road instead.
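Extending the scenario "towards the front" amounts to prepending synthetic samples before the first stored data point. A minimal Python sketch under a simple assumption of mine: the vehicle is extrapolated backwards at the constant speed of the first recorded sample, so that the scenario can start with the relevant objects outside the sensor range of the vehicle under test.

```python
def extend_backwards(profile, dt, extra_time):
    """Prepends samples before the first stored data point of a recorded
    profile. Entries are (t, s, v): time, driven distance and speed.
    Backward extrapolation assumes constant speed v0 of the first
    sample; a start from standstill would need a speed ramp instead."""
    t0, s0, v0 = profile[0]
    steps = int(extra_time / dt)
    prefix = []
    for i in range(steps, 0, -1):
        prefix.append((t0 - i * dt, s0 - v0 * i * dt, v0))
    return prefix + list(profile)
```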
Fig. 2. Starting point of the standard scenario (top) and the extended scenario (bottom)
Since in many cases it cannot be clearly determined which road user is responsible
for the accident, it is crucial to be able to adopt different ego perspectives for the sim-
ulation of vehicle-to-vehicle collisions. This means that the overall vehicle simulation
is carried out from the perspective of each vehicle involved in the accident. Often, a
change in the behavior of one of the involved parties already leads to a reduction in
severity of the accident.
For the aforementioned user groups, but also for those responsible in the area of
vehicle safety and validation, it is also necessary at this point to vary the parameters of
the scenario based on the requirements shown so that the test coverage can be as high
as possible. This results in a very high number of possible parameters and values which,
due to the sheer scope, can only be tested with the help of simulation. The type of
sensors used, the requirements of the system under test (SuT) and the operational design
domain (ODD) all play a role in the choice of parameter variation. In the first step, a
variation of the dynamic behavior of road users is often required, for example, to answer
the question of whether the SuT would also have reacted correctly if the pedestrian had
walked onto the road at a different speed or if the tested vehicle had turned at a higher
speed.
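Such parameter variations form a combinatorial grid over the base scenario. A minimal Python sketch (scenario representation and parameter names are hypothetical, not CarMaker's format): every combination of the supplied value ranges yields one concrete scenario variant.

```python
from itertools import product

def parameter_variations(base_scenario, **ranges):
    """Expands a scenario into a full-factorial set of concrete variants,
    e.g. varying the pedestrian walking speed and the ego turning speed.
    base_scenario is a plain dict; each keyword argument supplies the
    candidate values for one parameter."""
    names = list(ranges)
    variants = []
    for values in product(*(ranges[n] for n in names)):
        variant = dict(base_scenario)          # copy, keep the base intact
        variant.update(zip(names, values))     # overlay this combination
        variants.append(variant)
    return variants
```

The full-factorial expansion grows multiplicatively with each added parameter, which is precisely why the text notes that such variation spaces can only be covered in simulation.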
In addition, it is a requirement to vary the environmental conditions such as weather
or sun position in order to be able to investigate their influence on camera-based sys-
tems and a possible fallback level in the sensor fusion.
Moreover, it is possible to increase the variance in testing perception by varying the
representation of the 3D objects involved (with the help of an extensive database of 3D
objects), for example by replacing an adult pedestrian with a child or a black SUV with
a differently colored compact car. At this point, it should be mentioned that accident
databases usually only provide approximate dimensions that are standardized for many
objects. Furthermore, it can be of interest to use the imported scenario description as a
starting point for variations of the country-specific features, e.g. to vary the parameters
of the road markings such as color and width, but also to vary the traffic signs.
In a final step, it may now be necessary to create a completely new functional sce-
nario [3] by adding more road users to the imported scenario in order to further chal-
lenge the driving function (in the context of ODD) and increase the test coverage. It
may for example be relevant to add vehicles that lead to a temporary occlusion of other
participants or limit the decision space of ADAS.
In general, and especially due to the options presented for expanding the source data,
automating the import of an entire accident database while retaining existing structures
also plays an important role.
With its different sensor model classes (ideal ground truth, high fidelity and raw
signal interface) for the typical sensor technologies camera, radar, lidar and ultrasonic,
the CarMaker product family with its derivatives TruckMaker and MotorcycleMaker
offers a suitable starting point for testing ADAS.
Fig. 3. Principle of closed-loop scenario-based testing with CarMaker and its sensor models
By using CarMaker, accident scenarios can be created from the perspective of pas-
senger cars. For this purpose, traffic objects of all classes can be used, from pedestrians
to e-scooter drivers and commercial vehicles. In addition, multi-axle trucks and com-
mercial vehicles can be simulated by using TruckMaker or powered two-wheelers can
be selected as VuT in MotorcycleMaker. In order to meet the requirement of integrating
sensor models and control systems in multiple vehicles of a simulation, CarMaker of-
fers the feature SimNet which allows the networking of several vehicles under test in a
shared simulation environment. This way, the influence on the accident can also be
investigated if several vehicles have advanced safety systems for vehicle control. The
models of the driving functions can be integrated via the Functional Mock-up Interface
(FMI), for example. Likewise, the data exchange between the simulation environment,
sensor models and the control function can be carried out via standardized interfaces
such as Open Simulation Interface (OSI) (see Fig. 3).
Currently, this solution supports, among others, the GIDAS (German In-Depth Ac-
cident Study) [4] in the PCM format of version 4 and 5, CIDAS (China In-Depth Acci-
dent Study), and other small, local or proprietary accident databases. The trajectories
and speed profiles contained in the accident data are used as target specifications for
the scenarios. A distinction can be made as to whether the original trajectory or the
intended course is to be used. The options described above for extending the scenario
in both directions are also available. Further information, which is often generated with
the help of simulation tools, such as signals from the brake light switch or target accelerations, is intentionally not used as input data for the simulation: The speed profile and
the trajectory are followed by the virtual prototype as accurately as possible. Since the
virtual prototypes differ from the vehicles that were originally involved in the accident
and sensor models and integrated functions are used to test systems, small deviations
usually do not play a significant role.
The existing description of the road is also used to automatically create a road net-
work including the road furniture. For each object type of the road elements defined in
the accident database specification, a 3D object from the object library is defined as
default representation. They are selected based on the underlying dimensions. The var-
iation of the 3D objects of vehicles, as well as the change of the object class of each
traffic participant can be carried out in a straightforward manner.
Nevertheless, experience has shown that the descriptions of the road in databases are rather insufficient for a representative simulation, so that suppliers of accident databases also offer road networks in standardized formats for the simulation such as ROAD5 or OpenDRIVE. For this reason, the ScenarioRRR toolbox was extended to allow trajectories from the data (in this case the accident database) to be placed on a referenced road network. This means that recorded trajectories can also be linked to road networks that were exported from (high-precision) map data using the given GPS coordinates.
The aim of the maneuver extraction is to obtain logical scenarios in which the be-
havior of the ego vehicle and all road users is described by several mini-maneuvers, i.e.
partial descriptions, which indicate target velocities (or accelerations) or phases of
steady-state driving. This allows for easy variation of the parameters in order to be able
to vary not only the longitudinal speed but also the lateral descriptions of the trajectory
in a simple and targeted manner. For example, the description available up to this point
in form of a velocity profile is broken down into several mini-maneuvers. If the thresh-
old values for maneuver identification are chosen appropriately, the initial scenario and
the reconstructed scenario differ only slightly after maneuver extraction. In this case, only the description form of the maneuver has changed, but not its visualization compared to replay scenarios (see Fig. 5). In order to be able to make a statement on the deviation, a corresponding calculation of the deviation has already been added to the toolbox. With the help of the maneuver extraction, it is also possible to classify and
evaluate what has happened. This is particularly important when processing sensor data
as a data source, whereas this type of information for classifying scenarios in accident
data was already provided during data collection. For example, lane changes and over-
taking maneuvers can be detected automatically and relevant parameters, such as dis-
tances between approaching vehicles or relative speeds, can be determined. Such an
evaluation makes it possible to interpret the recorded data in the following. Using a
genuine road network, it is now also possible to add further traffic objects, i.e. to create
a data augmentation of the original scenario.
The requirements for handling accident databases for use in closed-loop simulation
applications were presented and corresponding solutions for automated scenario gener-
ation for CarMaker were introduced.
The ScenarioRRR toolbox makes it possible to transform measurement data into sce-
narios and can be combined with accident databases and other sources to provide the
relevant elements for virtual test driving. The captured trajectories of dynamic road
users can be placed on road definitions to significantly increase test diversity and cov-
erage. The toolbox also provides solutions for various use cases to enable the user to
create logical scenarios and parameter variations.
IPG Automotive continues to work on expanding its toolbox for further use cases, for
example for easy variation of inner-city junction scenarios. Furthermore, it is possible
to carry out customer-specific adaptations for other input data formats. To increase the
degree of automation, integration into customer-specific solutions for requirements and
test management of scenarios is recommended.
References
1. Destatis Homepage, https://www.destatis.de/EN/Home, last accessed 2021/04/20.
2. European Commission Homepage, https://ec.europa.eu/commission/presscorner/de-
tail/de/qanda_20_1004, last accessed 2021/04/20.
3. Pegasus-Projekt Homepage, https://www.pegasusprojekt.de/files/tmpl/PDF-
Symposium/04_Scenario-Description.pdf, last accessed 2021/04/20.
4. GIDAS-Projekt Homepage, https://www.gidas-projekt.de/index-11.html, last accessed 2021/04/20.
5. HERE HD Live Map Homepage, https://www.here.com/platform/automotive-services/hd-
maps, last accessed 2021/04/20.
6. OpenStreetMap Homepage, https://www.openstreetmap.de/, last accessed 2021/04/20.
A modular test strategy for Highly Autonomous Driving
based on Autonomous Parking Pilot and Highway Pilot
1 Introduction
On the way to highly automated and autonomous driving systems, the autonomous vehicle (AV) needs to be able to handle a vast number of known and unknown situations [1]. This large number of different scenarios results from the combination of different parameters as they are structured in the 6-Layer Model and used in the Pegasus Project [3]. Especially the transfer of responsibility for the driving task from the human to the AV leads to fast-increasing complexity within the AV system. Caused by the tremendous variation in scenarios of the environment, which was previously handled by the driver, the AV system needs an adapted system architecture, much more software and hardware functionality, and additional safety and security mechanisms. All this functionality needs to be developed within the same time frame as before, and to ensure that all these modules work together, faster release cycles are needed. Validating the whole functionality of one release cycle by driving on the road is no longer feasible, because a huge number of miles would be needed [2]. Also, it is essential to cover the critical scenarios, but the critical scenarios are typically the dangerous ones for the test driver.
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_13
Therefore, strategies are needed to reduce complexity and to accelerate test execution.
One strategy is functional decomposition, also called "slicing the elephant" [5].
Graab et al. [6] use a five-layer decomposition, whereas Burton [7] divides the
functionality into four functional components. The information processing (also
called Perception) consists of Layer 1: Information reception and Layer 2:
Information processing [6]. Slicing the Highly Automated and Autonomous
Driving (HAD) functionality into several components typically leads to a separate
test strategy for each discipline, and exactly this results in incompatible strategies
that no longer fit together. Using methods like search-based techniques to optimize
these technical components, or parts of them, yields an optimized technical component
but not an optimized HAD functionality. Typically, such individually optimized components
do not lead to an optimized product; they can even lead to an unsafe system. This
can happen if one component relies on prerequisites that the previous one cannot
fulfill. In this paper, an overall HAD test strategy is presented that is highly
interlocked across its different technical components and avoids such unsafe situations. At
the end, the two domain functionalities "Autonomous Parking Pilot" (APP) and
"Highway Pilot" (HWP) are introduced to provide a practical basis for proving our
strategy and as examples for further activities such as defining a toolchain and testing
HAD functionalities.
ing of the real-world environment and its infrastructure is a complex and difficult
task. Even though the AV has the advantage of being able to look in several directions
at the same time, which a human being cannot, the risk of misinterpreting the situation
will be hard to reduce. The basis for managing these challenges is a complex data
management process with the underlying techniques and tools, comprehensive quality
metrics and measurements, new concepts for validation in a virtual world, and a new
management mindset. For processing this huge amount of data in a simulation environment,
it is highly recommended to validate the simulation and its results against the real world.
1.3 Challenges
Testing HAD functionality brings up many additional challenges compared to the classical
verification and validation strategies for vehicles within SAE levels 0-2. These
challenges can be categorized into three areas: environment, vehicle, and testing.
Complex situations arise in the environment from road works, different weather
conditions, and other road users, especially from combinations of these. Scenarios
such as conflicting crossing lane markings or the need to cross a solid line are examples
of unusual situations associated with road works. Weather conditions such as snow, rain,
fog, and wind can force a full stop or at least a speed reduction; at the same time,
these conditions are not easy for the AV to detect and quantify. All these influences,
combined with the non-deterministic behavior of other road users, are a daily struggle
even for human drivers. Another environmental aspect is the perception of objects such
as pedestrians, children, animals, and other dynamic objects (like a ball). These objects
are sometimes very hard to detect because of reflections, poster advertising, and many
other situations, as described in [11]. Additionally, the infrastructure in parking
garages, on highways, and in cities is not really prepared for and optimized to the
needs of autonomous driving.
The huge amount of data needed for training and testing must be handled, and it must be
analyzed whether the data quality is good enough. The data must be analyzed to extract
ground truth information and to identify and categorize the objects within a scene. In
this mass of data and different scenarios, one must be able to find exactly those
scenarios that match the attributes one is searching for. Defining the structure of the
labels, labeling the data, and storing it is a big effort, especially with respect to data
privacy. Therefore, new concepts and technologies are needed.
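The search for scenarios matching given attributes can be illustrated with a small sketch. The attribute names and the in-memory scenario pool below are purely hypothetical stand-ins for a real labeled scenario database:

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioRecord:
    """One labeled recording in the (hypothetical) scenario pool."""
    scenario_id: str
    weather: str          # e.g. "rain", "clear"
    road_type: str        # e.g. "highway", "urban"
    objects: set = field(default_factory=set)

def find_scenarios(pool, **required):
    """Return all scenarios whose labeled attributes match the requested values."""
    hits = []
    for rec in pool:
        if all(getattr(rec, key) == value for key, value in required.items()):
            hits.append(rec)
    return hits

pool = [
    ScenarioRecord("S1", "rain", "highway", {"truck"}),
    ScenarioRecord("S2", "clear", "urban", {"pedestrian", "ball"}),
    ScenarioRecord("S3", "rain", "urban", {"pedestrian"}),
]
rainy_urban = find_scenarios(pool, weather="rain", road_type="urban")
# rainy_urban contains only the record "S3"
```

In a production data pool the same query would run against an indexed label store rather than a Python list, but the principle of attribute-based retrieval is the same.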
The vehicle itself, with its increasing number of sensors, the necessity of redundancy
in terms of safety, and its complex end-to-end infrastructure, is a very large
enhancement compared to human-driven vehicles. Internal communication has
increased in order to get all the information from the sensors to the ECUs handling the
driving functionality, and the external connection to the environment and to the cloud
for additional information such as HD maps also leads to a bigger effort in development
and testing.
The testing activities themselves become more complex and difficult because of the
complexity of the use cases that must be handled, the huge number of test executions
needed for every release, the new technologies that must be dealt with, and the need
to develop reliable simulations. Tests of these complex use cases can normally not be
reproduced on public streets and, due to their complexity, not even on special test
tracks. Some of these test cases are critical and unsafe. For special weather
conditions or special phenomena, real-world drive testing is not feasible. To reproduce
such situations, testing must be relocated from the vehicle to simulation. This can be
done by re-simulating recorded data, but what if the vehicle acts in a direction where
no data was recorded? For this reason, and to simulate safety-critical scenarios, a
digital world is needed. This virtualization representing the world must be good enough
to deliver trustworthy data and must be representative. In general, all of this is a
new field of testing in which little experience exists. With respect to the development
process, these tests must be repeated for each release, and a higher number of test
cases must be executed per release.
At the beginning of the development of the HAD test strategy, we built a reference
architecture for the autonomous driving function. The architecture consists of the
four building blocks shown in Fig. 1.
The sensors in the Perception block are categorized into two sensor clusters: system
sensors and surround detection sensors. The system sensor cluster includes, for
example, inertial measurement units (IMU), steering angle, drive torque, brake
pressure, temperature, and odometry, but also connections such as V2X, HD maps, and
GNSS (global navigation satellite system). A special position is held by the sensor of
the driver monitoring system, which detects the driver's attention, especially at
levels below SAE level 5 (fully self-driving), for hand-over activities. The cluster
of surround detection sensors (described in [13]) consists of cameras, lidars, radars,
ultrasonic sensors, and microphones. The Perception of this cluster comprises
sub-functions such as the conversion of physical signals into electronic signals, a
preprocessing part, sometimes a 2D-to-3D transformation, object and feature detection
and classification, and sensor fusion.
Fig. 1. Reference Architecture of HAD V&V strategy with functional building blocks.
Depending on the detected driving domain (DD) and the planned maneuvers, the
corresponding motion planner is selected. For example, in parking situations a
different planner is used than for highway (high-speed) scenarios. The motion planner
calculates a trajectory within the safety regions and the maneuvers given by Thinking.
The calculation of the trajectory is influenced by parameters regarding the road
surface as well as safety and comfort aspects.
The trajectory represents the nominal value for the Acting block with its parts such as
velocity, steering, and stability control.
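The selection of a domain-specific motion planner can be sketched as a simple dispatch over the detected driving domain. The planner names, the returned parameters, and the registry below are illustrative assumptions, not part of any described implementation:

```python
# Hypothetical sketch: selecting a motion planner based on the detected
# driving domain (DD). Domains and planner outputs are illustrative only.
def parking_planner(state):
    # Low-speed trajectory with tight maneuvering constraints.
    return {"planner": "parking", "max_speed_mps": 2.0}

def highway_planner(state):
    # High-speed trajectory with comfort and lane-keeping constraints.
    return {"planner": "highway", "max_speed_mps": 36.0}

PLANNERS = {
    "parking": parking_planner,
    "highway": highway_planner,
}

def plan_trajectory(driving_domain, state):
    """Dispatch to the planner registered for the current driving domain."""
    try:
        planner = PLANNERS[driving_domain]
    except KeyError:
        raise ValueError(f"no planner registered for domain {driving_domain!r}")
    return planner(state)
```

The explicit registry makes the coupling between domain detection and planner selection testable in isolation: a wrong or unknown domain fails loudly instead of silently using an unsuitable planner.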
Each building block of an AD functionality has its own test strategy derived from the
overall test strategy. Every test strategy has its requirements for the global data struc-
ture and process.
Fig. 2. Building blocks of Perception and its reference to the OSI Interfaces
The whole test process shall be included in a continuous testing framework (continuous
integration/development/testing). Especially for the testing part, a regression
test strategy must be developed. The regression test strategy shall be based on the
specific functionality and the progress in development (see quality gates and the
maturity model of the functionality).
For all sensors and the parts mentioned above, a standardized interface description is
highly recommended (see Fig. 2). The most widely used standard in the field of perception
is the ASAM Open Simulation Interface (OSI) standard [12], which is currently being
extended for current and future needs.
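The benefit of such a standardized interface can be sketched in a format-agnostic way. The real ASAM OSI interface is defined as Protocol Buffers messages; the dataclass and field names below are simplified stand-ins chosen for this sketch only:

```python
from dataclasses import dataclass

# Simplified stand-in for a standardized perception output message.
# The actual ASAM OSI messages (e.g. detected objects in SensorData)
# carry many more fields; these are illustrative.
@dataclass(frozen=True)
class DetectedObject:
    object_id: int
    classification: str       # e.g. "car", "pedestrian"
    x_m: float                # position in a common vehicle frame
    y_m: float
    existence_probability: float

def fuse(tracks_a, tracks_b):
    """Trivial fusion step: because both sensor pipelines emit the same
    message type, they can be merged without per-sensor adapters.
    Keeps the track with the higher existence probability per object id."""
    merged = {t.object_id: t for t in tracks_a}
    for t in tracks_b:
        prev = merged.get(t.object_id)
        if prev is None or t.existence_probability > prev.existence_probability:
            merged[t.object_id] = t
    return list(merged.values())
```

The point of the sketch is architectural: once every perception component speaks the same message schema, pre-integrations and fusion steps compose without bespoke conversion code at each interface.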
Test Process
All parts of the Perception should be verified against their broken-down requirements
before integration starts. Typically, each part uses behavior models at its interfaces
for verification and validation activities. Several pre-integrations are possible
within the preprocessing, the fusion of overlapping sensor views, the object detection
and classification, and especially the different steps of fusion. After integration,
it must be validated that all parts work together properly regarding performance,
stability, and fallback. Thus, all parts are tested with the common ground truth data
adapted to their own needs and, as much as possible, in virtual simulation.
For the building block Thinking, no standard exists for the interfaces between
the different functionalities, as the ASAM Open Simulation Interface does for
Perception. Only the input to Thinking, as the output of Perception, is standardized within
OSI. This makes a general test strategy difficult, and a standardization would be very
helpful. The behavior prediction of traffic participants can be tested very easily with
respect to the interface format given by the object list and the additional environment
information, but actually predicting the behavior of other drivers can be very hard.
Some companies focus on models of drivers and other traffic participants, such as
pedestrians, that should be used to simulate the behavior. On the other hand, analyses
of field data, accident datasets, and behavior datasets such as [15] can be the basis
of test cases and of the simulation of traffic participant behavior.
Determining the driving domain becomes fundamental in order to understand the scenario
and to conclude the valid traffic rules. Neither the description of the ODD nor the
description of the DD is standardized and formalized. Currently, the ASAM OpenODD
concept project [16] is working on such a standardized description. A description in a
machine-readable format is necessary for use in development and testing. Typical
properties of the given scenario, the behavior of the traffic participants, and the map
information lead to the current driving domain.
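The derivation of the current driving domain from scenario properties and map information can be sketched as a small rule set. The attribute names, domain labels, and the speed threshold below are assumptions for illustration; they are not taken from any standard:

```python
# Hedged sketch: deriving the current driving domain (DD) from a few
# scene and map attributes. Attribute names and thresholds are
# illustrative assumptions, not part of ASAM OpenODD or any standard.
def classify_driving_domain(scene):
    """Map scene properties to a driving domain label."""
    road_class = scene.get("road_class")
    if road_class == "parking_garage":
        return "parking"
    if road_class == "motorway" and scene.get("speed_limit_kph", 0) >= 80:
        return "highway"
    # Everything else is treated as the (most restrictive) urban domain.
    return "urban"
```

A machine-readable ODD/DD description, as targeted by ASAM OpenODD, would replace such hand-written rules with a formally specified, exchangeable artifact.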
When the AV knows the DD, the environmental features, and the behavior of the
traffic participants, it can identify the safe areas over time. With this safety envelope
and the route information from the navigation, the AV can plan its mission for the
next stage of the trip. Therefore, planning and decision making are necessary for the
maneuver that is needed to reach the next stage.
Test Process
All parts of Thinking should be verified against their broken-down requirements
before integration starts (similar to Perception). Typically, each part uses behavior
models at its interfaces for verification and validation activities. After integration,
it must be validated that all parts work together properly regarding performance,
stability, and fallback. Thus, all parts are tested with the common ground truth data
adapted to their own needs and, as much as possible, in virtual simulation.
Test Process
The Planning component should first be validated against requirements at its interfaces
with the help of the common data pool. The test cases represent different scenarios
with variations of the lanelets derived from traffic participant prediction,
variations in vehicle state and obstacle positions, different maneuver plannings, and
changing drivable zones. To validate the functionality of the motion planning, it is
necessary to use additional methods such as search-based testing to identify the
critical cases by searching for corner and edge cases. After these cases have been
identified, the functionality can be optimized. The information about such corner and
edge cases should be mirrored back to the global tooling and test data pool to build up
corresponding test cases for the system tests.
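A minimal form of such search-based testing can be sketched as a random search over scenario parameters that minimizes a criticality measure. The parameter ranges and the time-to-collision fitness function below are assumptions for the sketch; in practice the fitness would come from a closed-loop simulation:

```python
import random

def time_to_collision(params):
    """Hypothetical fitness function: smaller TTC = more critical scenario.
    In a real setup this value would be computed by simulating the scenario."""
    ego_speed, lead_speed, gap_m = params
    closing_speed = ego_speed - lead_speed
    return gap_m / closing_speed if closing_speed > 0 else float("inf")

def search_critical_scenario(iterations=2000, seed=42):
    """Random search over scenario parameters for the most critical case."""
    rng = random.Random(seed)
    best_params, best_ttc = None, float("inf")
    for _ in range(iterations):
        params = (rng.uniform(20, 40),   # ego speed [m/s]
                  rng.uniform(0, 30),    # lead vehicle speed [m/s]
                  rng.uniform(5, 100))   # initial gap [m]
        ttc = time_to_collision(params)
        if ttc < best_ttc:
            best_params, best_ttc = params, ttc
    return best_params, best_ttc
```

Real search-based testing would replace the random sampler with a guided search (e.g. an evolutionary algorithm) over the logical scenario space, but the loop structure, sample, evaluate fitness, keep the most critical candidate, is the same.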
Test Process
Functional tests should be performed as far as possible in a virtual simulation right
from the beginning. The influences of redundancy and of the drop-out of one path
shall be tested in the virtual world with physical models of the environment and the
surroundings. These models shall be developed and delivered in a format that fulfills
the requirements for reuse on the system level for testing, but also as plant or
system models for other component tests.
Hardware dependencies of the Acting component must be tested on the test track. A
specially trained driver is needed, and the vehicle must be prepared with equipment to
force these redundancy-check test cases.
For an efficient and effective HAD test strategy, it is necessary to tightly interconnect
the test strategies of the different functional components. This needs to be done in
the process on the one hand, and in the tooling and data structure on the other.
The basic pillars of testing a HAD functionality are:
These pillars lead to a harmonized and synchronized toolchain with a central data
pool and a minimum of tool breaks. The data structure consists of several layers,
similar to the layers defined by PEGASUS [4] and the formats defined by ASAM [12],
[16]. However, there is still a gap for logical scenarios and test cases that should be
closed for future needs and current projects. Therefore, the need for standardization
and harmonization is obvious.
Our strategy uses a risk-based testing approach, because testing all scenarios is not
possible. The strategy therefore focuses on the scenarios with the highest risk. The key
to identifying these scenarios is to analyze the system in terms of required
functionality, architecture, and environmental usage (ODD analysis). Additional risks
typically result from system changes during development. Tracking the errors and the
probability of errors is necessary to identify the regions of failures and to estimate
risks. The use of automated risk identification methods such as search-based testing
helps to find the corner cases of the developed algorithms. The results of previous
test campaigns for identifying regions of high risk, as well as the involvement of all
stakeholders, are additional sources. After the start of sales, a concept for field
data acquisition, replay, and analysis is essential to improve the functionality and
reduce risk. All these activities lead to a better and more intelligent selection of
the test cases.
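Risk-based selection of test cases can be illustrated with a simple prioritization sketch. The scoring scheme (probability of failure times severity) and the test-case data are assumptions chosen for illustration:

```python
# Illustrative risk score: probability of failure times severity.
# The scoring scheme and test-case attributes are assumptions for this sketch.
def prioritize(test_cases):
    """Order test cases by descending risk so that, under a limited test
    budget, the riskiest scenarios are executed first."""
    return sorted(test_cases,
                  key=lambda tc: tc["failure_probability"] * tc["severity"],
                  reverse=True)

cases = [
    {"name": "lane_keep_clear_day", "failure_probability": 0.01, "severity": 2},
    {"name": "cut_in_dense_fog",    "failure_probability": 0.20, "severity": 9},
    {"name": "exit_takeover",       "failure_probability": 0.05, "severity": 7},
]
ordered = prioritize(cases)
# ordered[0] is the cut-in scenario (risk score 1.8)
```

In practice, the failure probabilities would be updated continuously from previous test campaigns and field data, so the prioritization itself becomes part of the data loop.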
The high number of test cases and the methods of automated risk identification
require a test infrastructure that can execute a huge number of test cases within a
minimum amount of time. Driving millions of kilometers in the real world is no longer
feasible, so the tests must be performed in simulation and with highly parallelized
execution. Virtualization also helps to close the integration gap between the test
levels of software testing and system integration testing. Extending the approach of
continuous integration and continuous development to the needs of testing, in
combination with closing this gap, makes the process much more efficient. Additional
concepts and a well-fitting strategy for using the same models across the integration
levels and the different functional components result in a very effective verification
and validation.
A seamless process with a high degree of automation is essential for testing HAD
functions. This refers to the areas of tooling, integration, data loop, and data
format. In particular, a high-performance data loop covering real data, labeling, the
identification of new scenarios, and the balance between real and synthetic data is the
key factor for a successful validation of self-driving cars.
Fig. 3. Example sensor architecture with field of view for Autonomous Parking Pilot
APP_UC1: Drop-off
APP_UC2: Driving between drop-off/pick-up and parking area
APP_UC3: Search for a free parking lot
APP_UC4: Park into the parking lot
APP_UC5: Drive to pick-up
HWP_UC3: The vehicle passes a construction site where lanes are temporarily changed
and some lanes cross others. There can also be special road signs, and workers'
vehicles may enter the highway in an unusual way.
HWP_UC4: The car must overtake a slower car. Various scenarios are possible,
for example a faster car approaching from behind in the ego lane or in a neighboring
lane, the car in front accelerating, or the car in front also changing lanes.
HWP_UC5: A cut-in means that another car changes into the ego lane between the ego
vehicle and the car in front; a cut-out means that the car in front suddenly changes
lanes and a static obstacle (or a much slower vehicle) is perceived.
HWP_UC6: The planned route requires several lane changes, also during a traffic jam.
HWP_UC7: A take-over request is issued because of internal failures of the car or an
internal safety alert.
HWP_UC8: The planned exit is reached, and the driver must take over.
HWP_UC9: The end of a traffic jam is reached at high speed.
How to find the best test cases for such a functionality is described in [21].
6 Conclusion
In this work we compared classical test strategies with the new challenges that test
strategies for HAD have to deal with. We described where the challenges of automated
and autonomous driving come from and how they lead to a tremendous complexity
never seen before in the field of driving. Handling this complexity, along with the
safety requirements and the need for validation, calls for new strategies to manage the
testing effort. Typical strategies are to search for the critical test cases and to
reduce complexity by dividing the functionality into smaller functional components.
The disadvantage of separate test strategies, which can be observed very often, is that
they can lead to functional insufficiencies and to an inefficient and ineffective
validation. With our HAD strategy we showed why and how all these strategies can be
interconnected and become highly efficient. We introduced a typical architecture as a
reference for the system under test and described the functionality of the building
blocks of a decomposed test strategy. The specific test strategy for each building
block and its individual properties and methods were described. These methods, tools,
and processes are highly interconnected and synchronized, derived from the pillars of
the holistic test strategy. To prepare an evaluation, two functionalities of an AV, the
"Automated Parking Pilot" and the "Highway Pilot", were described. These functionalities
and the defined use cases are the basis for a proof of concept and for the definition
of the data structure and validation toolchain.
References
1. Koopman, P., Wagner, M.: Challenges in autonomous vehicle testing and validation. SAE
International Journal of Transportation Safety 4(1), 15-24 (2016)
2. Wachenfeld, W., Winner, H.: The release of autonomous vehicles. In: Autonomous
Driving, pp. 425–449. Springer (2016)
3. Scholtes, M., Westhofen, L., Turner, L. R., Lotto, K., Schuldes, M., Weber, H.,
Wagener, N., Neurohr, C., Bollmann, M., Körtke, F., et al.: 6-Layer Model for a
Structured Description and Categorization of Urban Traffic and Environment. arXiv
preprint arXiv:2012.06319 (2020)
4. “Project Pegasus” [Online]. Available: https://www.pegasusprojekt.de/en/about-
PEGASUS.
5. Amersbach, Chr., Winner, H.: Functional Decomposition: An Approach to Reduce the
Approval Effort for Highly Automated Driving, 8. Tagung Fahrerassistenz, Lehrstuhl für
Fahrzeugtechnik mit TÜV SÜD Akademie, München (2017)
6. B. Graab, E. Donner, U. Chiellino, and M. Hoppe, “Analyse von Verkehrsunfällen hin-
sichtlich unterschiedlicher Fahrerpopulationen und daraus ableitbarer Ergebnisse für die
Entwicklung adaptiver Fahrerassistenzsysteme,” in TU München & TÜV Süd Akademie
GmbH (Eds.), Conference: Active Safety Through Driver Assistance. München, 2008.
7. S. Burton, R. Hawkins, Assuring the safety of highly automated driving: state-of-the-art
and research perspectives, University of York (April 2020)
8. SAE J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Au-
tomated Driving Systems. Online: https://www.sae.org/standards/content/J3016_201806,
(2018)
9. ISO 26262:2018: Road vehicles – Functional safety (2018)
10. P. Koopman, M. Wagner, “Toward a Framework for Highly Automated Vehicle Safety
Validation”, WCX World Congress Experience (April 2018)
11. P. Koopman, “Ensuring Autonomous Vehicle Perception Safety”, Autonomous Think
Tank Gothenburg, Sweden (April 2019)
12. ASAM Open Simulation Interface, https://www.asam.net/standards/detail/osi/ , (April
2021)
13. E. Yurtsever (Member, IEEE), J. Lambert, A. Carballo (Member, IEEE), K. Takeda (Sen-
ior Member, IEEE), “A Survey of Autonomous Driving: Common Practices and Emerging
Technologies”, https://arxiv.org/pdf/1906.05113.pdf (April 2020)
14. Thatcham Research, “Defining Safe Automated Driving - Insurer Requirements for High-
way Automation” (September 2019)
15. A. Rasouli,I. Kotseruba, J. K. Tsotsos, “Are they going to cross? A benchmark dataset and
baseline for pedestrian crosswalk behavior”, Proceedings of the IEEE International Con-
ference on Computer Vision Workshops, pages 206-213 (2017)
16. ASAM OpenODD, https://www.asam.net/project-detail/asam-openodd/ , (26.10.2020)
17. Bosch Mobility Solutions, “Automated Valet Parking”, https://www.bosch-mobility-
solutions.com/en/products-and-services/passenger-cars-and-light-commercial-
vehicles/automated-parking/automated-valet-parking/, retrieved 22nd April 2021.
18. Uber ATG, "Safety Case Framework", https://www.uber.com/us/en/atg/safety/safety-case-
framework/, retrieved 3rd September 2020.
19. UL 4600, "Proposed First Edition of the Standard for Safety for the Evaluation of
Autonomous Products" (13th December 2019)
20. S. Shalev-Shwartz, S. Shammah, A. Shashua, “On a Formal Model of Safe and Scalable
Self-driving Cars”, arXiv:1708.06374v6 [cs.RO] (27 October 2018)
21. F. Hauer, A. Pretschner, B. Holzmüller, "Fitness Functions for Testing Automated and
Autonomous Driving Systems" (2019)
Mastering the Data Pipeline for Autonomous Driving
Abstract.
Autonomous driving is at hand, for some at least. Others are still struggling to
produce basic ADAS functions efficiently. What is the difference between the
two? It is how the data is treated and used. The companies on the front line realized
long ago that data plays a key and central role in progress and that development
processes must be adapted accordingly. Those companies that have not
adapted their processes are still struggling to catch up and are wasting time and
resources.
This article discusses the key aspects and stages of data-driven development
and points out the most common bottlenecks. It does not make sense to focus
on just one part of the data-driven development pipeline and neglect the others.
Only harmonized improvements along the entire pipeline will allow for faster
progress. Inconsistencies in formats and interfaces are the most common source
of project delays. Therefore, we provide a perspective from the start of the data
pipeline to the application of the selected data in the training and validation
processes and on to the new start of the cycle. We address all parts of the data
pipeline including data logging, ingestion, management, analysis, augmenta-
tion, training, and validation using open-loop methods.
1 Introduction
Big data is not what is standing in the way of the automotive industry's attempt to
automate driving. It is the way the data is used and harnessed that is hindering
progress toward driverless mobility. It is easy to collect a lot of data, it is easy to
store this data, and it is easy to retrieve the data. However, it is not easy to do this
efficiently. Efficient means not wasting resources and time. Data is great and it is
considered "the new oil," but too much oil can choke the engine and even cause the
engine to stall. Therefore, it is not the data itself but the process and the
appropriate tools that advance progress.
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_14
A huge amount of data is being collected by the vehicle sensors, adding up to
dozens of terabytes per vehicle per day. And while the total data lake can reach
hundreds of petabytes (e.g., BMW's D3 data center features more than 200 PB of storage
[1]), current estimates say that only 5 % of the data is actually accessed and used
productively.
Reliable autonomous operation is an ambitious goal due to the complexity of the
problem to be solved. It is not a well-defined problem within a limited scope that would
allow traditional problem-solving approaches to succeed. The complexity of the
environment, the real world, in which autonomous vehicles must operate productively
is enormous. Even an artificial limitation of the relevant conditions, such as a
restricted area or restricted weather conditions (often defined in an ODD, the
operational design domain), does not make it possible to describe the intended
operation explicitly in the form of if-then-else. That is why a completely different
approach to solving this problem has been adopted: a general heuristic approach that
does not intend to solve the problem with the highest accuracy and completeness, but
rather aims to solve it pragmatically in an efficient manner. It has been proven
that a more general code framework, configured and tuned with specific examples of the
problem to be solved, results in significant advances in autonomous driving. This can
be called programming by examples or, as Andrej Karpathy, the director of AI
development at Tesla, calls it, SW 2.0 [2].
The clear separation in naming (SW 2.0 vs. classic software) reflects the completely
new programming paradigm behind the problem-solving method. It is no longer the written
code that defines whether the program successfully solves the task. Rather, the
parametrization of the code has the most significant impact.
In conventional software development, a programmer writes instructions line by
line to define what is executed automatically. In the new approach, only a few lines of
code are written that define a model, and the parameters of the model are set
afterward. The success or failure of the program thus depends to a very large extent on
the configuration of the parameters. The parameters are not set manually: there are
thousands or even millions of parameters that have to be adjusted. Fortunately, this is
done automatically by providing the right examples of how the output should relate to
an input. In other words, the model is fed input data and its output is compared with
the expected output. The difference is back-propagated through the model, adapting the
parameters to achieve a smaller difference in the next run. After this optimization
task and many iterations, the model parameters are set so that the model delivers an
output that is "sufficiently close" to the expected result. The model is then adjusted
to the provided examples (data) and returns the expected results. This is good because
the model automatically returns the expected output for the provided input without
every step in the process having to be defined explicitly. Only the model is
hard-coded, while the parameters are defined automatically. The issue is that the model
will only work as long as the provided input corresponds to the examples given to the
model during the parametrization (training of the model). If the model has not seen a
similar input, it will most likely not deliver the expected output. The consequence is
that the success of the development is determined by the data the model sees during the
training process.
Due to this fact, the roles in the development process are split between programmers,
who write the code to implement the model, and data engineers, who prepare the data for
the training phase. After the initial phase, the progress of the project is measured
based on how much the training data, and to a lesser extent the code, has improved.
This means that the focus of the development optimization changes completely. The focus
is no longer on code advances, code versioning, or integration with all the tools that
help streamline the code development and testing process. Completely new requirements
for supporting tools have emerged. The development process must adopt this fundamental
change of focus and place data at the center of attention. The processes must be
wrapped around data and allow for the efficient use of the data in various development
steps. The environment must be able to handle large amounts of data, as the trained
model must not be surprised by new situations (input) on the road.
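The "programming by examples" idea described above can be condensed into a minimal sketch: the code only defines a model and the parameter is then fitted automatically from input/output examples. The toy model and learning rate are of course illustrative, not an autonomous-driving network:

```python
# Minimal illustration of "programming by examples" (SW 2.0): the code
# only defines the model y = w * x; the parameter w is set automatically
# by comparing outputs with expected outputs and propagating the error back.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output)

w = 0.0                      # single model parameter, initially untrained
learning_rate = 0.05
for _ in range(200):         # optimization iterations over the examples
    for x, y_expected in examples:
        y = w * x                        # forward pass
        error = y - y_expected           # difference to the expected output
        w -= learning_rate * error * x   # gradient step on the squared error

# After training, w is close to 2.0, so the model reproduces the examples
# and generalizes to inputs similar to those it has seen.
```

The same mechanics scale up to millions of parameters in a deep network, which is exactly why the training data, rather than the handful of hard-coded model lines, determines the success of the development.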
Fig. 1. Data pipeline and the cycle of data-driven development in the automotive industry
The structure of the paper is aligned with the main domains of the data pipeline and
the data cycle supported by dSPACE (as depicted in Fig. 1). Each chapter below is
dedicated to one key topic. Nevertheless, since the main narrative of the data pipeline
is "seamlessness," the interconnections and mutual influences are highlighted in the
individual chapters.
The journey of the data through the data pipeline starts at the vehicle. The vehicle is
the first hub and the source of the data to be used to advance autonomous driving
functions. One vehicle can be seen as a single source of data when looking at it from
the perspective of a data collection campaign involving several vehicles. At the same
time, the vehicle itself represents a data collection network: at the vehicle level,
the data sources are all sensors, buses, and networks that transmit information from
internal vehicle sensors and ECUs. Simple recording merely means shuffling the data
from the I/O interfaces to data storage as fast as possible. More advanced logging
introduces functions that are applied to the data between their reception and their
storage. This is a data pipeline at the vehicle level. Both perspectives have their own
challenges and aspects, and thus they can be considered separately.
Acquisition of data at the vehicle level has become yet another challenging topic in
the general pursuit of autonomy. Unlike legacy acquisition technologies, data logging
systems for ADAS/AD must cope with new interfaces, protocols, high data rates, and
the potentially huge data volumes produced by environmental sensors.
The vehicle and the data collection system are also the first place where the data
pipeline can be optimized. The internal vehicle bus and network architecture can be
quite complex, and failures or inconsistencies can occur when acquiring data. This is
not infrequent, and such failures must be identified early on in order to avoid wasting
time (collecting corrupted data) and storage space.
The most common practice currently is for in-vehicle data logging systems to col-
lect all of the data and for the quality of the data to be checked while ingesting the
data or in the data center. This method wastes resources in the vehicle. Most obvi-
ously, the size of the in-vehicle storage must compensate for the inefficiency. In addi-
tion, the test drives must be interrupted more frequently than necessary just to swap
storage drives or upload the data from the vehicle to an ingestion station.
This inefficient process has been identified, and the emerging trend is to equip ve-
hicles with an additional computer to analyze the incoming data while driving. This is
an intermediate step as it is not optimal due to the increased complexity of the vehicle
measurement equipment. An additional computer requires additional power, space,
mounting, a high bandwidth connection, and precise synchronization with the rest of
the system. The final goal is for the data logger itself to be able to analyze the incom-
ing traffic and decide what is to be recorded and what is to be thrown away. This
poses additional requirements for the data logger to supply enough computational
power for the analysis. In addition, the logger has to offer efficient means to set up
such a data analysis in the software framework or logging application itself.
Several levels of detail can be distinguished when curating and analyzing data.
First, the most basic requirement is to detect any missing data streams, which would
indicate a problem with an in-vehicle component. This might happen if some of the
sensors or ECUs do not boot properly after restarting the system. Additionally, the
data rate of the data streams can be checked to indicate configuration issues. Analyz-
ing the headers and content of the received packets and messages offers far-reaching
possibilities for understanding the conditions and surroundings of the ego vehicle.
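As an illustration, such a stream-level health check can be sketched as follows; the expected streams, nominal rates, and tolerance below are illustrative assumptions, not part of any specific logging product:

```python
# Sketch of a basic in-vehicle stream health check. The stream names and
# nominal rates are hypothetical examples.
EXPECTED_STREAMS = {
    "front_camera": 30.0,    # expected messages per second
    "lidar_top": 10.0,
    "can_powertrain": 100.0,
}
TOLERANCE = 0.2  # accept +/- 20 % deviation from the nominal rate

def check_streams(observed_rates: dict) -> list:
    """Return a list of (stream, problem) findings."""
    findings = []
    for stream, nominal in EXPECTED_STREAMS.items():
        rate = observed_rates.get(stream)
        if rate is None or rate == 0.0:
            # a sensor or ECU may not have booted properly after a restart
            findings.append((stream, "missing"))
        elif abs(rate - nominal) / nominal > TOLERANCE:
            # off-nominal rate indicates a configuration issue
            findings.append((stream, f"rate {rate:.1f} Hz, expected {nominal:.1f} Hz"))
    return findings

# Example: lidar stream absent, CAN rate far off -> two findings
print(check_streams({"front_camera": 29.7, "can_powertrain": 55.0}))
```

Running such checks directly on the logger allows corrupted campaigns to be aborted before storage is wasted.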
Such a content-based analysis not only checks the data quality but also optimizes
the entire data cycle. If the data logger is situation-aware, relatively simple rules can
be defined to improve the data flow. Such rules can trigger the tagging of the data for
later use, define filters on the data logging pipeline, or enable a triggered recording to
store only data of a defined interest.
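A minimal sketch of such situation-aware rules, with hypothetical situation flags and actions (not a real dSPACE API):

```python
# Illustrative situation-aware logging rules: depending on the detected
# situation, the logger tags data, enables triggered recording, or filters
# streams. All flag and action names are assumptions for the sketch.
def evaluate_rules(situation: dict) -> dict:
    actions = {"tags": [], "record": False, "filters": []}
    if situation.get("rain"):
        actions["tags"].append("rain")            # tag the data for later use
    if situation.get("pedestrian_detected") and situation.get("ego_speed_kmh", 0) > 30:
        actions["record"] = True                  # triggered recording of interest
        actions["tags"].append("vru_encounter")
    if situation.get("highway"):
        # drop high-rate parking-camera streams that are irrelevant on the highway
        actions["filters"].append("drop:parking_cameras")
    return actions

print(evaluate_rules({"rain": True, "highway": True}))
```

Because the rules operate on interpreted content rather than raw bytes, they can be adapted per campaign without touching the logging hardware.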
The triggered recording can be enhanced with an over-the-air transmission feature.
In this case, the data is not only stored in the vehicle storage but immediately sent out
to a data center to be available for data scientists as soon as possible. Such a data flow
shortens the entire data cycle as much as possible and allows for the accelerated im-
provement of the neural networks employed.
Achieving in-depth insight into the data from the data logger has several prerequi-
sites. First, the data logging has to be centralized to have a comprehensive view of the
data from all devices. Then, it is necessary to provide support for all interfaces (CAN,
FlexRay, Ethernet, GMSL, etc.) and understand the communication protocols to ac-
cess the signal and data level. And potentially, it must be possible to interpret the
signals and data themselves.
Interpreting signals is quite trivial (assuming it is possible to read the communica-
tion matrix), while interpreting data from complex environmental sensors is the chal-
lenging part. This is exactly the task ADAS/AD ECUs are designed for, with the
difference that the reliability requirements for logging are much lower: much lower
levels of perception are accepted. The detection is only for informative purposes and
does not have any safety implications. False positives might just mean a bit more data
is recorded. False negatives at greater distances are not much of a worry as buffering
is in place anyway. Once the objects come closer, the probability of detection increases,
and a specified interval preceding the detection is recorded from the buffer.
plete situation. Only certain aspects of the situation, certain objects, or certain maneu-
vers are of interest in the particular phase of development and testing. This simplifies
the logging configuration and the in-vehicle data pipeline. In addition, it can be dy-
namically adapted based on the actual needs.
Fig. 2. Logging data during road testing involves a specific logging method that incorporates a
minimal level of delay and jitter to not affect the system under test
The road testing phase provides another option to optimize the data logging pro-
cess. The combination of a performant computing platform and a data logger in one
device makes it easy to deploy a perception algorithm together with the logging pipe-
line. Certain KPIs for the algorithm under test (e.g., uncertainty level) can act as a
trigger to store the relevant time interval in the recording and even to send it over-the-
air to the data center to speed up the feedback loop.
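The principle of such KPI-triggered recording can be sketched with a ring buffer; the buffer length and uncertainty threshold below are illustrative:

```python
from collections import deque

# Sketch of KPI-triggered snapshot recording: incoming samples are kept in a
# ring buffer and persisted only when the algorithm's uncertainty KPI exceeds
# a threshold, so the interval *before* the trigger is captured as well.
PRE_TRIGGER_SAMPLES = 5
UNCERTAINTY_THRESHOLD = 0.8

def run(stream):
    buffer = deque(maxlen=PRE_TRIGGER_SAMPLES)
    snapshots = []
    for sample in stream:
        buffer.append(sample)
        if sample["uncertainty"] > UNCERTAINTY_THRESHOLD:
            snapshots.append(list(buffer))  # persist the pre-trigger interval
    return snapshots

stream = [{"t": i, "uncertainty": u} for i, u in enumerate([0.1, 0.2, 0.9, 0.1])]
print(len(run(stream)))  # one snapshot, triggered at t=2
```

In a real setup the persisted snapshot could additionally be queued for over-the-air upload to shorten the feedback loop.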
When an ADAS/AD computer (or its prototype) is under test in the vehicle, the
computational and logging platforms are separated but the same synergy effect can be
achieved (see the data logging architecture in Fig. 2). KPIs for algorithms under test
can be a trigger to the recording. In this case, not only the sensor and bus data are
saved but also the state of the ECU under test can be saved consistently. This allows
for detailed and efficient debugging with a complete data snapshot of the situation.
Furthermore, it is possible not only to log externally exposed function parameters but
also to store the memory dump.
3 Ingestion Pipeline
Ever improving environmental sensors with increasing resolution produce ever in-
creasing amounts of data. Cameras in particular are the drivers for data bandwidth and
storage requirements. Parking cameras might feature 1.2 or 2 Mpx of resolution, but
the imagers in the more common cameras found in SAE L2+ vehicles provide 5 to 8 Mpx
of resolution. Recently, the company Ambarella announced that it would be launching
an intelligent AI vision processor capable of processing the output of 8K-resolution
cameras at up to 60 frames per second with low power consumption [3]. This will make it
possible to introduce cameras with an 8K resolution in the automotive domain as well
(unlike the 4K cameras seen today). If data from such cameras were to be logged in
raw form (as is currently done), a single 8K camera running at 30 frames per second
would require a recording bandwidth of at least 11.5 Gbit/s. Currently, typical
aggregated data rates range from a few gigabits per second to 40-50 Gbit/s, making it
possible to fill even large storage drives very quickly.
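The quoted bandwidth can be verified with a back-of-the-envelope calculation, assuming a 7680 x 4320 imager with 12-bit raw samples (the exact bit depth is an assumption):

```python
# Rough check of the 8K raw-logging figure quoted above.
width, height, bits_per_px, fps = 7680, 4320, 12, 30
gbit_per_s = width * height * bits_per_px * fps / 1e9
print(f"{gbit_per_s:.1f} Gbit/s")  # ~11.9 Gbit/s, consistent with "at least 11.5"
```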
When in-vehicle storage (usually SSD or HDD types) is full, data must be trans-
ferred to a storage location that can be accessed by developers and test engineers. If
we ignore the over-the-air transmission of selected sequences for now, there are two
basic means of transferring collected data. One option is to ingest data from in-vehicle
storage to a data center directly. Directly means that the drives (SSDs or HDDs) are
transported to the data center where the data is finally stored. The data is copied from
the in-vehicle storage to the server storage, while the drives are erased and returned to
the drivers. The second option is to not transport the storage drives themselves but
rather to transmit the recorded data. If all available storage drives in the vehicle are
full, the driver drives to a place with a good connection, and the data is transmitted
via the Internet (usually via dedicated links with high bandwidth such as ExpressRoute
Direct from Microsoft [4]). Both approaches have their pros and cons, and the
goal is to optimize the cost/transfer time ratio.
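To illustrate this trade-off, a rough comparison with assumed numbers (drive capacity, link rate, and courier time are all illustrative):

```python
# Rough cost/time comparison of the two ingestion options. All figures are
# illustrative assumptions, not measured values.
drive_tb = 30                      # in-vehicle storage to be emptied
link_gbit_s = 10                   # dedicated ingest link, ExpressRoute-class
upload_hours = drive_tb * 8e12 / (link_gbit_s * 1e9) / 3600
courier_hours = 24                 # overnight transport of the physical drives
print(f"upload: {upload_hours:.1f} h, courier: {courier_hours} h")
```

With these numbers the network upload wins, but the balance shifts with drive capacity, link cost, and the distance between test region and data center.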
Whatever approach is selected, the ingestion pipeline starts by reading the data
from the vehicle drives. As a rule, data that cannot (or will not) be used should not be
ingested. This is especially true for corrupted or incomplete data. Such data would
only take up space without any added benefit. If the quality check is not performed in
the vehicle, the ingestion pipeline is another possibility for maximizing storage and
thus saving costs.
In addition to the bandwidth requirements, implementing an ingestion pipeline can
be computationally demanding. Therefore, the processing takes place in the data cen-
ter itself or in what is referred to as a colocation center. Such facilities offer much
cheaper computational power than is available in the vehicle and have fewer re-
strictions in terms of space and power supply.
The ingestion pipeline is an abstraction of the hardware and location where it is
run. It represents logical steps that the data goes through to become available for us-
ers.
Quality control is the first stage in the ingestion pipeline. It is a step that pays off:
low-quality data will never be used, so it does not have to be ingested and can be
deleted right away. Low quality refers to incomplete data, corrupted data, inconsistent
data, or data with an error flag (from the data source).
Automated annotations are another stage. The primary purpose of annotation in
this step is to provide a possibility to analyze and search for data early on. This allows
for more informed decision-making concerning further data processing. At this stage,
annotation functions are executed to derive basic information about the data. Usually,
the functions are not power hungry as they process all data (instead of only preselect-
ed data). There might be different strategies, but easily derived annotations conducted
in this phase include map-based annotations, weather and day-time annotations, and
annotations derived directly from bus data.
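A sketch of such lightweight annotation functions, with illustrative signal names and thresholds (the wiper-based rain heuristic is an assumption for the example):

```python
import datetime

# Sketch of "cheap" automated annotations executed on every recording during
# ingestion. Helper names, signals, and thresholds are illustrative.
def annotate(recording: dict) -> dict:
    tags = {}
    # day-time annotation derived from the recording timestamp
    hour = datetime.datetime.fromtimestamp(
        recording["start_ts"], datetime.timezone.utc).hour
    tags["daytime"] = "day" if 6 <= hour < 20 else "night"
    # annotation derived directly from bus data (wiper activity -> rain)
    tags["rain"] = recording["bus"]["wiper_active_ratio"] > 0.5
    # map-based annotation (road type looked up from the GPS trace beforehand)
    tags["road_type"] = recording.get("map_road_type", "unknown")
    return tags

rec = {"start_ts": 1_600_000_000, "bus": {"wiper_active_ratio": 0.7},
       "map_road_type": "highway"}
print(annotate(rec))
```

Because these functions run over all data, they are deliberately cheap; expensive perception-based annotation is deferred to preselected data.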
After this stage, users can analyze statistical information based on the data: for ex-
ample, the distribution of road types, road infrastructure, or weather conditions such
as rain, fog, and storms. In addition, information about traffic signs is extracted from
the map data to provide further insight on road sections of interest.
Fig. 3. Data ingestion pipeline ensuring only quality data are ingested and automatically prepar-
ing data for searches and analyses in compliance with GDPR
Another step involves the preparation of data for visualization. Users expect detailed
visualization options that will allow them to see the vehicle route on a map and to
preview data from different sensors. Therefore, highly compressed previews opti-
mized for web visualization are generated for every recording. In addition, the pre-
views can be anonymized (faces and license plates are blurred or artificially generat-
ed) to comply with GDPR regulations and to simplify the organizational measures.
Ingestion can be executed fully automatically as a part of the overall data pipeline.
It all starts with the preconfigured upload station. At the push of a button, the data is
retrieved from the vehicle SSD (or hard drive) and processed by the ingestion pipeline
functions to prepare it without any human interaction. At the end of the pipeline, the
test or data engineers are able to access, preview, and search in the data to accomplish
their tasks.
• Analyzing data
Without annotation, a great deal of important metadata would be missing. By
annotating the data, it is possible to gain insight into the distribution of different
aspects of interest such as highway versus rural roads, the number of merge points
on the highway, or the number of surrounding vehicles (see example graphic in
Fig. 4).
Annotations for data analysis are also necessary to evaluate whether set goals are
achieved in data logging campaigns. A predefined number of recordings for a specific
situation is usually the goal that determines the length of the campaign.
• AI training
AI training requires high-quality annotations (also called labels). The more pre-
cise the labels, the better the training result and the better the performance of the
final ADAS/AD function [7]. For perception functions, both static and dynamic
objects need to be annotated with their size, position, orientation, and other aspects
to provide sufficient ground-truth information.
• Validation
Validation always needs a reference. When validating perception and fusion
functions, annotations serve as this reference (i.e., ground truth). As the name im-
plies, the ground truth is a precise description of reality that is captured in the sen-
sor data.
There are several types of annotations that are extracted and used in the process of
data-driven development. Often the term “tag” is used for situational annotations de-
scribing general aspects of the situation such as rainy or pedestrian crossing and the
term “label” is used to identify objects in the data precisely. However, there are
overlaps, and such differentiated terminology is not always clear-cut.
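The tag/label distinction can be captured in a simple data model; the field names below are illustrative, not a standardized schema:

```python
from dataclasses import dataclass, field
from typing import List

# Minimal data model reflecting the tag/label distinction described above.
@dataclass
class Tag:                      # situational annotation, e.g. "rainy"
    name: str
    start_s: float              # time span the situation covers
    end_s: float

@dataclass
class Label:                    # precise object annotation for one frame
    frame: int
    category: str               # e.g. "pedestrian"
    bbox: tuple                 # (x, y, width, height) in pixels

@dataclass
class Recording:
    tags: List[Tag] = field(default_factory=list)
    labels: List[Label] = field(default_factory=list)

rec = Recording()
rec.tags.append(Tag("rainy", 0.0, 42.5))
rec.labels.append(Label(frame=120, category="pedestrian", bbox=(410, 220, 35, 90)))
print(len(rec.tags), len(rec.labels))  # 1 1
```

Tags describe intervals of a situation, labels pin down individual objects; a real schema would typically allow both to coexist on the same recording, as here.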
It is also worth mentioning that annotations are not free and entail certain costs. If
the annotations are extracted automatically, then the cost of computational resources
is considered. If a human annotator performs the job, then the personnel costs are
considered. For high-quality annotations used for AI training and validation, the
combination of automated processing and human quality control is often the preferred
method as it optimizes the costs.
5 Data Replay
Autonomous vehicles can only gain the trust of consumers if the technology is proven
to work in all cases, emphasizing the role of automated testing. Key technologies for
autonomous driving such as environment perception and sensor fusion require a dif-
ferent approach to testing. Data replay, a relatively new testing strategy, has estab-
lished itself as an efficient and cost-effective testing and validation method for these
technologies.
Various methods can be used to validate an autonomous driving system (see Fig.
5). Test drives with test vehicles and AD prototypes can be used to test all vehicle
components under realistic conditions. However, test drives are cost intensive. More-
over, critical traffic situations are rarely observed on the real road. Another possibility
for validation is environment simulation. With synthetically generated data from high-
fidelity simulators (e.g., dSPACE ASM), hazardous situations can easily be simulated
and manipulated. For example, it is possible to change the weather and the time of
day of a test for the same operational design domain. Although the quality of sensor
simulations is steadily increasing, synthetic data will never truly match the real world
due to the simulation paradox.
Fig. 5. Applicability analysis of different tests according to “safety first” for automated driving.
Therefore, replaying recorded sensor data, known as data replay (i.e., data repro-
cessing), has established itself as another key test methodology. Data that was previ-
ously recorded during test drives is used for validation. When replaying the data, the
recording conditions must be recreated exactly as recorded, thus allowing for street
conditions to be replicated in the laboratory. When using real-time data replay, the
environment perception algorithms are supplied with recorded data around the clock,
the system responses are measured, and the object detection quality is evaluated con-
tinuously. This ensures that the new software version of the AD stack is tested proper-
ly against real-world data and scenarios.
When testing with recorded and/or simulated data, it is possible to distinguish be-
tween two basic test methods. Software-based validation such as software replay fo-
cuses on validating an algorithm without considering time constraints or connected
bus systems. Testing can be performed before a hardware prototype is available. The
second level is hardware-based validation such as hardware replay, which involves
deploying the central computer or sensor ECU, which is connected to the test system
and supplied with synthetic and/or real data. This makes it possible to test all of the
hardware as well as the electrical interfaces and the software in the conditions closest
to on-road testing.
An example architecture (see Fig. 6) can be used to better understand the challeng-
es of data replay. In this architecture, the sensors of the autonomous vehicle generate
a data volume of 70 Gbit/s. The data replay test station must stream this 70 Gbit/s
around the clock while ensuring that the device under test does not recognize that it is
not installed in a real vehicle and driving on the road. The challenge lies in the nature
of the streamed bits. It should be noted that raw camera data, radar data, lidar data,
and especially data from CAN and Ethernet (SOME/IP) packets are extremely differ-
ent. Nevertheless, all these data streams must be played back in a synchronized man-
ner within microseconds to ensure the correct test conditions. Taking the software
replay scenario into account, an additional layer of requirements is added: the correct
simulation of the virtual image of the device under test. Not only does this need to be
accurate, but it must also contain the accurate simulation of the virtual buses attached
to this virtual device under test. In addition, the whole software replay test has to run
as fast as possible, ideally faster than in real-time.
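The core idea of timestamp-synchronized replay can be sketched in a few lines; a real test station performs this pacing in hardware with microsecond accuracy, whereas the loop below only illustrates the principle:

```python
import time

# Sketch of timestamp-paced replay: heterogeneous packets (camera, CAN, ...)
# are merged into one time-ordered sequence and emitted with their original
# inter-arrival spacing, optionally faster than real time (software replay).
def replay(packets, emit, speedup=1.0):
    packets = sorted(packets, key=lambda p: p["ts"])
    t0 = packets[0]["ts"]
    wall0 = time.monotonic()
    for p in packets:
        # wait until the packet's relative timestamp is due
        due = wall0 + (p["ts"] - t0) / speedup
        delay = due - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        emit(p)

out = []
replay([{"ts": 0.00, "src": "cam"}, {"ts": 0.01, "src": "can"}],
       emit=out.append, speedup=100.0)   # faster than real time
print([p["src"] for p in out])
```

Hardware replay additionally has to reproduce the electrical interfaces of each stream; the pacing logic, however, is the same.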
Data replay methodology can be enhanced even further. Introducing failures and
manipulating the streamed data makes multiple test variants possible. The real-
time manipulation of the streamed bus/network data makes it possible to validate the
bus security stack of the device under test. Moreover, manipulating sensor data and
introducing failures can test the reaction of the perception algorithm to vandal-
ism attacks.
Nevertheless, the data replay test station, be it software or hardware, is part of a
bigger data and software delivery pipeline. Continuous testing and over-the-air soft-
ware updates mean that the faster a test campaign is carried out, the faster car manu-
facturers can monetize new features. A crucial factor for this is the transfer of data from the
data lake to the data replay test station. An optimal approach here is to bring the data
replay test stations close to the measurement data, thus saving time and money. This
is more challenging for hardware data replay systems than for pure software data
replay systems with regard to colocation with the data storage.
With such an approach, the environment perception components can be tested 24/7
against thousands of driven kilometers. Being infrastructure agnostic is automatically
required to pursue this endeavor as data could be stored in a native public cloud, on-
site, or in hybrid cloud/edge constellations. Especially for hybrid cloud constellations,
there is a great need for a single data access point to manage and allow for data replay
test campaigns as it simplifies the testing process significantly. Intempora Validation
Suite (IVS) abstracts these different data infrastructures and provides a single web
interface to control both software and hardware data replay test stations.
Data replay is becoming increasingly established as a centerpiece of autonomous
driving homologation as it guarantees a high level of confidence in the vehicle’s per-
ception stack. Nevertheless, data replay testing poses challenges regarding the data
replay test bench due to the large number of sensor and bus interfaces and with regard
to the infrastructure and efficient data storage [8].
A smooth transition between software and hardware data replay testing accelerates
the entire validation process. However, the increasingly common use of public cloud
services complicates the situation. The public cloud infrastructure is inaccessible to
test engineers, and the data must be co-located with the device under test. A new
level of cooperation is needed among automotive OEMs, tooling providers, and the IT
companies providing the infrastructure (Fig. 7).
Fig. 7. Scalable data replay from dSPACE in the cloud and colocation center
Validating ADAS or AD systems with recorded data from real traffic situations is
not enough. The real world is very complex, traffic situations are varied, and the vali-
dation timeline is limited. The vehicles must be prepared for situations that were not
encountered during the data collection or testing phase. This is where simulation
comes into play. The realistic simulation of sensors and the optimization of the total
test cases are major challenges.
Regardless of the challenges, the need for synthetic data is evident. Therefore,
well-defined data pipelines are able to seamlessly employ both real and synthetic data
during the development process. As synthetic data is becoming more and more realis-
tic, both data sources can be used for training and validation. Data management sys-
tems must be able to distinguish between these two types of data and maintain the
connection if synthetic data is derived directly or indirectly from the collected data.
There is also a new approach to how to expand the data sources for the training and
validation used in data management and the data pipeline. This is referred to as data
augmentation and it is a combination of real collected data and synthetically generat-
ed data. There are two ways to augment a dataset. One is to replicate real data in the
simulated environment with additional traffic participants. This becomes a fully simu-
lated output for the system under test. The second option is to integrate a simulated
object in the collected data. Most of the output is still collected data (the exact data
from the sensors recorded), but that data is overlaid with artificially created objects or
effects.
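The second option can be sketched with toy frames; a real augmentation pipeline would insert rendered, sensor-realistic objects rather than a flat patch:

```python
# Sketch of augmentation by overlay: a synthetic object patch is placed on a
# recorded camera frame (frames as nested lists of pixel values here).
def overlay(frame, patch, top, left):
    out = [row[:] for row in frame]           # keep the recorded data intact
    for i, patch_row in enumerate(patch):
        for j, value in enumerate(patch_row):
            out[top + i][left + j] = value    # replace pixels with the synthetic object
    return out

frame = [[0] * 4 for _ in range(4)]           # recorded (dummy) frame
patch = [[9, 9], [9, 9]]                      # synthetic object
augmented = overlay(frame, patch, top=1, left=1)
print(augmented[1])  # [0, 9, 9, 0]
```

Most of the augmented output remains the original recorded data, which is exactly what distinguishes this option from fully simulated replication.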
It is indisputable that data from real traffic situations plays a key role in the devel-
opment of new cars and will continue to do so in the years to come. ADAS functions
are mandated by regulatory authorities and there is no way around this. Even with
basic ADAS functions, machine learning models are employed. With the complexity
of the functions increasing and vehicle autonomy growing, more and more neural
network models and their parameters are being integrated into the car software. Work-
ing efficiently with data and not just with code increments is a challenging task for
companies that are not prepared, meaning they do not have proper infrastructure and
have not adopted adequate methods to progress as expected by the market. Many
companies have designed their internal tools to support data-driven development.
However, it is evident that such approaches are difficult to scale, in terms of the data
volumes, team size, and new function development. Internal tool development binds
resources and keeps companies from achieving their main goal. Fortunately, commer-
cial solutions have emerged to help build an automated, flexible, and scalable pipeline
with stable features and secure maintenance to keep the pipeline running during the
critical phase of development projects.
References
1. BMW Group. [Online]. Available: https://www.press.bmwgroup.com/global/article/detail/T0293764EN/the-new-bmw-group-high-performance-d3-platform-data-driven-development-for-autonomous-driving?language=en. [Accessed: April 19, 2021].
2. A. Karpathy, "Software 2.0," November 11, 2017. [Online]. Available: https://karpathy.medium.com/software-2-0-a64152b37c35. [Accessed: April 8, 2021].
3. Ambarella, "Ambarella introduces CV5 high performance AI vision processor for single 8K and multi-imager AI cameras," January 11, 2021. [Online]. Available: https://www.ambarella.com/news/ambarella-introduces-cv5-high-performance-ai-vision-processor-for-single-8k-and-multi-imager-ai-cameras/.
4. Microsoft, "ExpressRoute Direct," Microsoft Docs. [Online]. Available: https://docs.microsoft.com/en-us/azure/expressroute/expressroute-erdirect-about. [Accessed: May 9, 2021].
5. P. Moravek, "Smart Data Logging – Part I: Reducing Redundancy," dSPACE, February 15, 2021. [Online]. Available: https://www.dspace.com/en/inc/home/news/engineers-insights/smart-data-logging-redundancy.cfm. [Accessed: May 5, 2021].
6. D. Hansenklever, "Dataset Enrichment Leveraging Contrastive Learning," December 7, 2020. [Online]. Available: https://towardsdatascience.com/dataset-enrichment-leveraging-contrastive-learning-ea399901f24. [Accessed: May 2, 2021].
7. M. Mengler, "Quality — The Next Frontier for Training and Validation Data," May 17, 2018. [Online]. Available: https://understand.ai/blog/annotation/machine-learning/autonomous-driving/2018/05/17/quality-training-and-validation-data.html. [Accessed: May 13, 2021].
8. dSPACE, "Validating ADAS/AD components using recorded real-world data," [Online]. Available: https://www.dspace.com/en/inc/home/applicationfields/our_solutions_for/driver_assistanc
Abstract
A new method is described for the over-the-air analysis of fleet data on a
backend server in order to optimize driving routes and to evaluate the perfor-
mance fulfillment level of ADAS. With this method, insufficient coverage of
maneuver diversity, for example, can be corrected immediately and new driving
routes adjusted to avoid inefficiently wasted kilometers during the real road
validation of assistance features in the fleet. Thus, one of the most relevant
demands from vehicle manufacturers is addressed, namely a significant reduction
of the validation effort without compromising the safety, quality, and reliability
of ADAS functions.
The new over-the-air method consists of four core elements: driver guidance,
secure data transmission, automated maneuver evaluation, and web-based data
enrichment including a statistical analyzer on a backend server. Useful applica-
tions with practical examples and figures are highlighted in the following chap-
ters, and advantages over conventional approaches are discussed. Finally, an
outlook on potential extensions is given.
© Springer Fachmedien Wiesbaden GmbH, ein Teil von Springer Nature 2021
T. Bertram (Hrsg.), Automatisiertes Fahren 2021, Proceedings,
https://doi.org/10.1007/978-3-658-34754-3_15
1 Introduction
1.1 Objective
Before driver assistance functions (ADAS) and automated driving functions (AD)
are launched to the market, vehicles undergo comprehensive fleet test campaigns under
real-world environment and traffic conditions. The target is, on the one hand, to validate
robust, safe, and reliable operation under all practical circumstances and, on the other
hand, to evaluate the features' performance against the high expectations and the
acceptance criteria of end consumers.
A fleet test program in Europe, for example, usually covers driving routes through 28
countries with high diversity across urban, rural, expressway, highway, straight, curved,
and mountain road types, including different weather, daylight, and traffic conditions.
As an example, for the validation of 15 basic ADAS L0 to L2 features, 10,000 to
20,000 km per country are tested. Thus, the driving range rises to about 300,000 to
500,000 km. Furthermore, in the case of first-time introduced ADAS components, such
as sensors or perception functions, these test distances can easily double.
Hence, real road fleet validation is a considerable time- and cost-consuming effort
and yet essential to ensure safe and reliable operation.
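The quoted total follows directly from the per-country range, assuming all 28 countries are driven:

```python
# Quick check of the total mileage quoted above.
countries = 28
low_km, high_km = 10_000, 20_000
print(countries * low_km, countries * high_km)  # 280000 560000, i.e. roughly 300,000-500,000 km
```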
Fig. 1. Distribution of road types (motorways, main or national roads, secondary or regional roads) across European countries
A common approach for route planning is to define statistical distribution targets
for road types, weather and light conditions, and driving maneuvers or scenarios with
respect to a total validation distance (see Table 1) instead of just accumulating
kilometers in each country.
Once the fleet validation program has started, it is therefore not clear whether the
statistical target ranges will be met by the end of the test campaign. The evaluation of
test coverage through fleet data analysis usually takes weeks, and daily route adapta-
tions are not feasible. As a result, gaps in the test coverage can occur. For multiple test
scenarios, this means the validation may not be completed as planned.
In the best case, based on online fleet monitoring, a test program with daily opti-
mization would finish earlier, providing the same test coverage while reducing the
effort in time and cost.
Data acquisition with high data quality is the key to efficient data analytics. In the
specific case of fleet monitoring, real-time over-the-air data collection enables a com-
pletely new way of tracking and operating the fleet. It is essential to combine measure-
ment data from the vehicle with metadata from the environment or the driver in
a centralized data storage. The integration and automation of data acquisition and data
transport are key to data quality as the central foundation for later analytical assessment.
Centralizing the data is a prerequisite for all the objectives and benefits mentioned
above.
A key capability of AVL's data analytics pipeline is the central persistence of all data
analytics results, e.g., the coverage of attributes as described in chapter 1.2. For this
purpose, the scripts previously used for evaluation in the distributed test vehicles (cur-
rently the predominant approach in the industry for analyzing fleet data) are moved to
a powerful and scalable analytics platform to enable automatic processing. Centralizing
the analytics scripts enables reuse as well as clean and proper versioning and govern-
ance of code and results.
This accessibility of the measurement data allows the decoupling of data from the
original test request: all measurement data is evaluated for suitability by constantly
refined analytics scripts. This leads to a maximally synergetic use of each mile driven.
Hence, with more test miles as well as further test programs, and therefore more evalu-
ations of the measurement data, every trip contributes to a higher coverage for each of
the attributes.
Fig. 2. Instant Analytics and insights on the fleet test show potential overlaps of the testing pro-
gram (route planning) and enable synergies to reduce testing effort.
Fig. 3 describes the high-level topology of our solution architecture. All components
have well-defined interfaces and communicate either via the API ecosystem (REST and
gRPC) or via a messaging bus (Apache Kafka and AMQP) for the event-driven parts of
our logic.
Some of the core capabilities in the scope of target fulfillment for fleet operation
are:
A completely automated analytics pipeline from data ingress (via message arrival at
FASER, originating from the data acquisition system in the vehicle) to the persistence
of the coverage attributes in the result database (see chapter 2.2).
Within our processing engine, our measurement data is connected with metadata.
Examples are:
─ Map matching of GPS points onto drivable road segments
─ Fetching metadata from map providers (e.g., the OpenStreetMap Overpass API) to
enrich the detected events or even enable event detection (e.g., road types)
─ Weather/environment data enrichment of the collected data during data transfor-
mation
─ Traffic information extraction (e.g., average speed per road segment) or enhance-
ment (via third-party traffic APIs)
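As a toy example of the first enrichment step, nearest-segment map matching of a single GPS point; the segment data is made up, and production systems use a road-network index and provider APIs instead of a linear scan:

```python
import math

# Illustrative nearest-segment map matching for one GPS point. The segment
# list is a made-up stand-in for a real road-network database.
SEGMENTS = [
    {"id": "A1", "road_type": "motorway", "center": (48.20, 16.37)},
    {"id": "B17", "road_type": "main", "center": (48.15, 16.30)},
]

def match(lat, lon):
    def dist(seg):
        dlat, dlon = seg["center"][0] - lat, seg["center"][1] - lon
        return math.hypot(dlat, dlon)   # acceptable approximation for short distances
    return min(SEGMENTS, key=dist)

seg = match(48.19, 16.36)
print(seg["id"], seg["road_type"])  # A1 motorway
```

Once a point is matched to a segment, the segment's attributes (road type, speed limit) become annotations on the measurement data.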
With the help of a connected mobile digital logbook running on mobile handheld
devices such as mobile phones or tablets, additional label information is collected
during test execution. For example, the driver is able to tag situations on the road that
might be hard to extract through automated perception (odd vehicles on the road,
special types of vulnerable objects on the road, maintenance work and configuration
updates on the ego vehicle, …). Again, this metadata is connected to the events in
the central result database (see chapter 2.2).
The event detection and characteristic value calculation algorithms are defined independently of the actually executed route/test case.
Any event detection is applied to any measurement data as long as the necessary
signal information is available.
All necessary metadata (e.g. software version, calibration version) are connected to
the analytics results (event and characteristic value).
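The map-matching and enrichment step described above could be sketched as follows; the segment data, field names, and the nearest-segment matching strategy are simplified assumptions for illustration, not the actual implementation:

```python
import math

# Hypothetical road-segment metadata, as might be fetched from a map
# provider such as the OpenStreetMap Overpass API (values are invented).
SEGMENTS = [
    {"id": "seg-1", "road_type": "motorway", "center": (48.20, 16.37)},
    {"id": "seg-2", "road_type": "residential", "center": (48.21, 16.36)},
]

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def enrich(gps_point):
    """Attach the nearest segment's metadata (e.g. road type) to a raw
    GPS sample, enabling road-type-dependent event detection."""
    nearest = min(SEGMENTS, key=lambda s: haversine_m(gps_point, s["center"]))
    return {"position": gps_point, "segment": nearest["id"],
            "road_type": nearest["road_type"]}

sample = enrich((48.201, 16.371))
```

Production map matching would additionally consider heading and the road graph topology rather than plain point distance.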
Finally, this requires a common data model for all results within the Result component. It is achieved with only four major business objects:
─ Events have a certain EventType (EventTypes are built as a tree of dependent EventTypes)
─ SessionMetadata (e.g. measurement files, with all necessary metadata to be contained in Result) contains a list of all Events which belong to a session
─ Characteristic values are, in the end, Aggregates which belong to an Event
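A minimal sketch of such a data model, assuming Python dataclasses and illustrative field names (the actual schema is not published here):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EventType:
    """Node in the tree of dependent event types."""
    name: str
    parent: Optional["EventType"] = None

@dataclass
class Event:
    event_type: EventType
    start_s: float  # offset into the measurement file, seconds
    end_s: float

@dataclass
class Aggregate:
    """Characteristic value attached to one event (e.g. minimum TTC)."""
    name: str
    value: float
    event: Event

@dataclass
class SessionMetadata:
    """One measurement file plus the metadata needed in Result."""
    file_name: str
    software_version: str
    events: List[Event] = field(default_factory=list)

# Hypothetical usage: a cut-in event inside one measurement session.
lane_change = EventType("LaneChange")
cut_in = EventType("CutIn", parent=lane_change)
ev = Event(cut_in, start_s=12.0, end_s=15.5)
agg = Aggregate("min_ttc_s", 1.8, ev)
session = SessionMetadata("trip_001.mf4", "sw-1.4.2", events=[ev])
```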
On top of this general result representation, the Result component serves API endpoints (e.g. histogram, heatmap, violin, categorical, …) that present the values for a given filter condition in a way that is highly compatible with the visualization component, which creates the charts in the frontend. It is the Result component and the visualization on top which enable the engineers to perform completely interactive and explorative investigations on the data collected across the complete fleet of vehicles and across all measurement trips.
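The core of such a filterable histogram endpoint can be sketched in a few lines; the aggregate records and field names below are invented for illustration:

```python
def histogram(aggregates, name, bin_width, where=lambda a: True):
    """Bucket characteristic values that match a filter condition
    into histogram bins keyed by the bin's lower edge."""
    counts = {}
    for a in aggregates:
        if a["name"] == name and where(a):
            b = int(a["value"] // bin_width) * bin_width
            counts[b] = counts.get(b, 0) + 1
    return counts

# Hypothetical aggregates as the Result component might store them.
data = [
    {"name": "min_ttc_s", "value": 1.4, "road_type": "motorway"},
    {"name": "min_ttc_s", "value": 1.9, "road_type": "motorway"},
    {"name": "min_ttc_s", "value": 3.2, "road_type": "urban"},
]
hist = histogram(data, "min_ttc_s", bin_width=1.0,
                 where=lambda a: a["road_type"] == "motorway")
```

The real endpoint would evaluate the filter condition in the database query rather than in application code, but the contract (filter in, chart-ready buckets out) is the same.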
Since testing and validation under real-world conditions is very time consuming, any effort to reduce the duration to a minimum is worthwhile. At the same time, the data diversity reached and the test cases and scenarios covered should be maximized. Therefore, AVL's approach of route optimization is very important to increase efficiency in the validation program. An example of requirements (or boundary conditions) for reaching a certain data diversity is shown in Table 1 in chapter 1. In order to plan accordingly, it is necessary to know right from the start whether certain routes fulfill the mandatory requirements. The route optimization proceeds from a high level of abstraction to more and more detail, until specific routes are defined. The process is shown in Fig. 4.
At first, the boundary conditions like countries, road-type distribution and vehicle availability must be considered. This leads to an initial route planning with a time sequence of countries and a concrete time plan.
Fig. 5 shows an example of an initial route planning across Europe. The color of each country represents its priority and therefore how much time should be spent and mileage covered there. Dark blue countries represent a very high priority, blue countries a medium priority and light blue countries a lower priority. The numbers together with the arrows show the order in which the countries will be visited by the vehicle. Obviously, this example just gives an overview for one vehicle. For a whole fleet, the initial planning is performed for every single vehicle or group of vehicles, respectively.
With the initial route planning done, the detailed route planning follows. The dedicated routes for each vehicle must be defined in order to fulfill the required road statistics as well as time-of-day distributions. AVL's toolchain supports planning and comparing different specific routes in terms of those coverage KPIs and deciding on the route which best fits the boundary conditions. An example of the comparison of three different routes from Graz to Gothenburg is illustrated in Fig. 6.
Fig. 6. Example for comparison of three different routes from Graz to Gothenburg
The drop-down box of the selected Route 3 shows only some of the calculated KPIs. Route 1, in light blue, and Route 2, in orange, are shown on the map and can be easily compared with each other. In this example the KPIs for each route are as described in Table 2. Depending on the specific target requirements, the route which fits best will be chosen.
Table 2. Example of the KPIs for three different routes from Graz to Gothenburg
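The route decision can be sketched as a simple scoring against the target distribution; the KPI values and road-type shares below are invented placeholders, not the figures from Table 2:

```python
def score(route_kpis, target):
    """Sum of absolute deviations between a route's KPI shares and the
    target distribution; lower is better."""
    return sum(abs(route_kpis.get(k, 0.0) - v) for k, v in target.items())

# Hypothetical road-type shares per route (the real KPI set is richer).
target = {"motorway": 0.6, "rural": 0.3, "urban": 0.1}
routes = {
    "route_1": {"motorway": 0.80, "rural": 0.15, "urban": 0.05},
    "route_2": {"motorway": 0.55, "rural": 0.35, "urban": 0.10},
    "route_3": {"motorway": 0.40, "rural": 0.40, "urban": 0.20},
}
best = min(routes, key=lambda r: score(routes[r], target))
```

In practice the deviations would be weighted by the priority of each boundary condition instead of counting equally.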
A well-prepared route planning is the basis for a successful and efficient real-world fleet validation. The strengths and advantages of an automated data evaluation will be described in the following chapter.
During a fleet validation, the vehicle with the ADAS functions is tested under real-world conditions as the end customers will encounter them during their day-to-day usage. The evaluation takes place in all dedicated target markets. To ensure a good coverage of the predefined ODDs (Operational Design Domains) and DDTs (Dynamic Driving Tasks), an accurate route planning and optimization is key for efficient testing, as discussed before. Not only the driven mileage is an indicator for robustness but also the fulfillment of certain coverage KPIs like:
─ road conditions and types (highway, rural, city, etc.) and static POIs (Points of Interest) like tunnels, roundabouts or bridges;
─ the environment in terms of light and brightness and the weather conditions, like light, medium or heavy rain, a low-standing sun and so on;
─ the traffic conditions as well as the participants;
─ driving tasks like different dynamic maneuvers; and
─ physical parameters like vehicle speed, acceleration and the position in the lane, to name just a few.
Key for the performance evaluation of the ADAS features is the automated maneuver detection based on certain signals from the environmental perception. Those signals include information about the vehicle under test (e.g. speed and acceleration) and information about other traffic participants, like other vehicles and VRUs (Vulnerable Road Users), as well as the relation to those (e.g. distances to other traffic participants, relative speeds, distance to lane borders and time to collision). Based on those signals, which are transferred over the air to the fleet monitoring backend, ADAS-relevant scenarios (see Fig. 7) are automatically detected within AVL's fleet monitoring framework.
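The principle of such a signal-based detector can be sketched for the simplest case, a follow maneuver; the thresholds, sample rate and signal values are made up for illustration and are not AVL's calibration:

```python
def detect_follow_events(dist_m, dt_s=0.1, max_dist=30.0, min_dur=1.0):
    """Return (start_s, end_s) windows where the gap to the target object
    stays below max_dist for at least min_dur seconds."""
    events, start = [], None
    for i, d in enumerate(dist_m + [float("inf")]):  # sentinel closes last run
        if d < max_dist and start is None:
            start = i                       # window opens
        elif d >= max_dist and start is not None:
            if (i - start) * dt_s >= min_dur:
                events.append((start * dt_s, i * dt_s))
            start = None                    # window closes
    return events

# Hypothetical gap signal: 1.5 s below 30 m embedded in free driving.
gap = [50.0] * 5 + [25.0] * 15 + [60.0] * 5
follow = detect_follow_events(gap)
```

Real detectors combine several signals (relative speed, lane assignment, TTC) and hysteresis, but the window-over-threshold structure is the same.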
Fig. 7. Examples for dynamic driving scenarios which are automatically detected
The scenarios range from simple follow maneuvers, as encountered in stop-and-go traffic, to more complex maneuvers where a lane change is performed, either by the ego vehicle (in blue) or the TOF (Target Object Front, in red), to even more complex maneuvers where three or more vehicles are involved. All those maneuvers are detected automatically and certain relevant KPIs (Key Performance Indicators) are calculated. Most of them represent an extremum, like the maximum or minimum of a certain signal such as speed, acceleration or distance to another vehicle. Based on those parameters, the system requirements and performance targets can be verified.
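Once an event window is known, extracting such extremum KPIs is a straightforward reduction over the signal inside the window; the event window and signal below are invented for illustration:

```python
def event_kpis(signal, events, dt_s=0.1):
    """Extremum KPIs per detected event window, e.g. the minimum and
    maximum of a gap signal within each (start_s, end_s) window."""
    kpis = []
    for start_s, end_s in events:
        i0, i1 = round(start_s / dt_s), round(end_s / dt_s)
        window = signal[i0:i1]
        kpis.append({"min": min(window), "max": max(window)})
    return kpis

# Gap signal with a brief closing-in phase inside the event window.
gap = [50.0] * 5 + [25.0, 24.0, 23.5] + [25.0] * 12 + [60.0] * 5
kpis = event_kpis(gap, [(0.5, 2.0)])  # hypothetical follow-event window
```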
For the validation of the ADAS features in unforeseen but still relevant scenarios, the input from the test driver is essential. This input is logged with a digital tagging device in the vehicle, which can be either a smartphone or a tablet. The driver has the possibility to tag every situation that is noticeable, either due to certain environmental conditions or due to an unexpected or wrong reaction of the system. With those false positives and negatives, together with the general coverage statistics as well as the activation and availability statistics of the systems, analyses of the feature performance can be done easily. The analysis enables a clear statement whether the dedicated ADAS feature can be considered mature and therefore validated under real-world conditions.
The whole validation process of ADAS features is shown in Fig. 8. Over-the-air transfer of the needed vehicle signals is the precondition for the following analysis. The scenario detection and KPI calculation are fully automated. With the in-vehicle evaluation by the driver regarding false reactions of the ADAS feature, together with a general subjective evaluation of the automated driving performance, the validation can be performed without the need for co-drivers. The enabler for public road testing is the objective interpretation of system behavior against the system requirements and the corresponding acceptance criteria of end consumers.
The coverage statistics show whether the system has been tested under all relevant conditions and therefore can be considered mature or not. If certain boundary conditions or corner cases are missing, further testing might be necessary in order to verify and validate the system under all conditions. That is especially important if a poor performance in one or more environments or critical situations was identified.
5 Summary of Advantages
To sum up, AVL's approach offers a high level of automation and therefore a high potential of increasing efficiency in real-world testing, saving time but also significant cost, because the validations are done without co-drivers. The clustering of data according to certain dynamic driving scenarios, together with the data enrichment with environmental conditions, grants the possibility of filtering and understanding big amounts of data. The relevant events and weaknesses can be identified, strengths highlighted, and necessary improvements shown. Thus, fleet monitoring and automated performance evaluation offer a valuable solution in the ADAS validation process.
6 Outlook
The shown methodology already provides a high level of maturity. However, more improvements are in the pipeline. One new point is data reduction on the edge (in the vehicle) via similarity measures for scenarios. This means that high-volume data is only collected for relevant scenarios. Relevant in this context means either new, in the sense of situations unseen in the database as determined via scenario cluster density analysis, or critical, in the sense of either poor perception performance or critical ADAS/AD function behavior.
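A simple stand-in for this novelty criterion is a distance check of a scenario descriptor against everything already stored; the descriptor dimensions, values, and threshold below are illustrative assumptions, not the actual cluster density analysis:

```python
import math

def is_novel(descriptor, stored, threshold=1.0):
    """Keep high-volume data only when the scenario descriptor is far
    from every descriptor already in the database."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return all(dist(descriptor, s) > threshold for s in stored)

# Descriptors could encode e.g. normalized ego speed, min TTC, lateral
# offset (invented values).
database = [(0.8, 0.4, 0.1), (0.3, 0.9, 0.0)]
seen_again = is_novel((0.81, 0.41, 0.1), database)   # close to a known one
never_seen = is_novel((0.0, 0.0, 0.9), database)     # far from all
```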
Another future topic concerns data lifecycle management in the backend. The same criteria and methods as used for the selective collection of data in the vehicle can also be utilized in the backend to balance the density of scenario data. The aim is to maximize the diversity in the scenario database while, at the same time, minimizing the storage demand. This optimized real-world scenario set should be utilized to extract input for virtual testing in simulation, e.g. by defining corresponding OpenSCENARIO® and OpenDRIVE® files.
To further increase test efficiency, an automatic control loop will be realized. The current coverage KPIs from test execution will be compared with the target distribution. Based on the delta, the route planning will be automatically adjusted and updated routes sent to the fleet for execution. Furthermore, intelligent maneuver proposals can be provided (according to the degrees of freedom the driver still has in public traffic) to speed up the process of achieving maximum test coverage. This enables the combination of verification, typically performed on a proving ground, and real-world validation.
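The delta computation at the heart of such a control loop can be sketched as follows; the bucket names and mileage targets are invented example values:

```python
def coverage_delta(target_km, driven_km):
    """Remaining mileage per coverage bucket; the route planner would
    prioritise buckets with the largest positive delta."""
    return {k: max(t - driven_km.get(k, 0.0), 0.0)
            for k, t in target_km.items()}

# Hypothetical target distribution vs. mileage driven so far.
target = {"motorway": 20000, "rural": 10000, "urban": 5000}
driven = {"motorway": 18000, "rural": 4000, "urban": 5500}
delta = coverage_delta(target, driven)
```

Overfulfilled buckets clamp to zero, so the planner only steers towards what is still missing.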
Finally, enhancing the ability of in-vehicle labeling to get as much driver perception and label information as possible will increase the value of the collected data. Speech-to-text with follow-up NLP (natural language processing) to extract as much semantics as possible is key for that purpose.