
NORTH SOUTH UNIVERSITY

DEPARTMENT OF CIVIL AND ENVIRONMENTAL ENGINEERING

CEE 490B TERM PAPER

SUBMITTED TO: DR. JAVED BARI

GROUP 07
DATE OF SUBMISSION: 02.04.19
PREPARED BY:
ROMANA YASMEEN
SHAHRUKH HAMID (153 0373 625)
NAILA NABILA ORCHY (153 0413 625)

ABSTRACT

Autonomous vehicles are an emerging technology in the automotive industry that is likely to bring significant changes to transportation systems. However, driverless cars are still at an early stage, particularly with respect to the sensors used for environment perception and localization. This paper presents an overall study of these sensors, considering the existing advancements, from the perspective of road safety.
The study mainly considers the key sensors of Autonomous Vehicles (AVs): the camera system, RADAR, and LiDAR. Developing these sensors so that they are prepared for safe AV operation is a main concern of both researchers and industry. These sensors are used primarily to perceive the surroundings, which is the first and foremost requirement for automated driving. Despite significant improvements in sensor efficiency, the sensors are still not prepared for all road and traffic conditions and cannot yet assure safety. Hence, a 'Safe and Sustainable System' still needs to be designed for the era of transportation automation.

TABLE OF CONTENTS
CHAPTER 1 INTRODUCTION .................................................................................................................. 4
CHAPTER 2 BACKGROUND .................................................................................................................... 6
2.1 COMPONENTS OF AUTONOMOUS VEHICLES .......................................................................... 6
2.1.1 CAMERA .................................................................................................................................... 7
2.1.2 RADAR........................................................................................................................................ 8
2.1.3 LIDAR ......................................................................................................................................... 9
CHAPTER 3 FINDINGS ............................................................................................................................ 12
3.1 SENSORS ASSESSMENT IN TERMS OF RISKS AND HAZARDS ........................................... 12
3.2 COMPARATIVE ASSESSMENT OF SENSORS IN SENSING ROAD ENVIRONMENT ........... 14
CHAPTER 4 CONCLUSION..................................................................................................................... 17
REFERENCES ........................................................................................................................................... 19

LIST OF FIGURES
Figure 1: A typical AV system overview (Pendleton, et al., 2017) .............................................................. 5
Figure 2: Typical components of Autonomous vehicles (Google image, 2018) .......................................... 6
Figure 3: Sensors of AVs .............................................................................................................................. 7
Figure 4: Roadmap to automation - Driver Driven to Driverless………………………………………..….11
Figure 5: Diagnostic Reasoning: Potential causes of pedestrian injury……………………………………14

LIST OF TABLES
Table 1: Levels of Automation……………………………………………………………………………..4
Table 2: Severity Level description………………………………………………………………………..12
Table 3: Hazards Identification…………………………………………………………………………….13
Table 4: Predictive and inter-casual reasoning: impact of the evidence on pedestrian safety……………..13
Table 5: Assessment of various driving-related sensing systems…………………………………………..15
Table 6: Assessment of sensor performance across various driving tasks…………………………………16

CHAPTER 1
INTRODUCTION
The continuing evolution of automotive technology aims to deliver ever greater safety benefits and automated driving systems (ADS) that can one day handle the whole task of driving when we do not want to, or cannot, do it ourselves. Fully automated cars and trucks that drive us, instead of us driving them, will become a reality. These self-driving vehicles will ultimately be integrated onto roadways by progressing through six levels of driver assistance technology in the coming years, as follows:

Table 1: Levels of Automation (Road Safety Factsheet: Autonomous vehicles, 2018)

(For each level, the entries give: execution of steering and acceleration; monitoring of the driving environment; fallback performance of the dynamic driving task; system capability in terms of driving modes.)

Human driver monitors the driving environment

Level 0: No automation
Definition: All aspects of the dynamic driving task are performed by the human driver, even when enhanced by warning systems.
Execution: Human driver; Monitoring: Human driver; Fallback: Human driver; System capability: n/a

Level 1: Driver assistance
Definition: The driving mode-specific execution by a driver assistance system of either steering or acceleration/braking, using information about the driving environment. The human driver performs the rest of the dynamic driving task.
Execution: Human driver and system; Monitoring: Human driver; Fallback: Human driver; System capability: Some driving modes

Level 2: Partial automation
Definition: The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/braking, using information about the driving environment. The human driver performs the rest of the dynamic driving task.
Execution: System; Monitoring: Human driver; Fallback: Human driver; System capability: Some driving modes

Automated driving system monitors the driving environment

Level 3: Conditional automation / semi-automated
Definition: The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task. The human driver is expected to respond to a request to intervene.
Execution: System; Monitoring: System; Fallback: Human driver; System capability: Some driving modes

Level 4: High automation
Definition: The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if the human driver does not respond to a request to intervene.
Execution: System; Monitoring: System; Fallback: System; System capability: Some driving modes

Level 5: Full automation
Definition: The full-time performance by an automated driving system of all aspects of the dynamic driving task, under all roadway and environmental conditions that can be managed by a human driver.
Execution: System; Monitoring: System; Fallback: System; System capability: All driving modes

As the level of automation increases, road safety becomes increasingly dependent on the vehicle's systems. The first challenging task for an AV is to sense its environment accurately and then act accordingly. The software that provides the core competencies of an AV can be divided into three categories: perception, planning, and control.
Perception deals with the aspects that help the autonomous vehicle gather the required information and relevant knowledge from its surroundings (Pendleton, et al., 2017).
Environmental perception is the fundamental function that enables a vehicle to extract knowledge from its environment, such as the driving environment, the locations and velocities of obstacles, and predictions of their future states. This is performed using several sensors; typically LiDARs, cameras, RADARs, and some ultrasonic sensors are used. This paper studies these particular sensors and their performance in terms of safety. The following flow chart gives an overview of the working system of an AV:

[Flow chart: hardware components (environment-sensing sensors, V2V communication, actuators) feed the software stack, which consists of Perception (environment perception, localization), Planning (mission planning, behavioral planning, motion planning), and Control (path tracking, trajectory tracking).]

Figure 1: A typical AV system overview (Pendleton, et al., 2017)
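To make the perception, planning, and control split concrete, the following minimal sketch shows how the three software layers could be chained in a single driving cycle. All names, thresholds, and data structures are hypothetical and are not taken from the cited paper.

```python
# Minimal sketch of the perception -> planning -> control loop described above.
# All names and values are hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacle_ahead: bool
    distance_m: float

def perceive(sensor_frame: dict) -> WorldModel:
    """Perception: turn raw sensor readings into knowledge about the environment."""
    return WorldModel(obstacle_ahead=sensor_frame["lidar_min_range_m"] < 30.0,
                      distance_m=sensor_frame["lidar_min_range_m"])

def plan(world: WorldModel) -> str:
    """Planning: decide on a behavior given the perceived environment."""
    return "brake" if world.obstacle_ahead else "keep_lane"

def control(behavior: str) -> dict:
    """Control: translate the planned behavior into actuator commands."""
    return {"throttle": 0.0, "brake": 0.8} if behavior == "brake" else {"throttle": 0.3, "brake": 0.0}

# One iteration of the driving loop with a fabricated sensor frame.
frame = {"lidar_min_range_m": 22.5}
print(control(plan(perceive(frame))))   # -> {'throttle': 0.0, 'brake': 0.8}
```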

CHAPTER 2
BACKGROUND

2.1 COMPONENTS OF AUTONOMOUS VEHICLES


To navigate roads autonomously and spot signs of danger, these vehicles employ a wide range of sensors.
Sensors are an integral part of autonomous vehicles. Without them, it would not be possible for a vehicle to navigate its environment, spot dangers, and drive safely along a road. LiDAR, RADAR, and cameras are the primary navigation technologies and represent the typical sensor suite of autonomous vehicles. AVs may also be equipped with other navigation technologies such as an Inertial Measurement Unit (IMU) or a Global Positioning System (GPS) receiver.

Figure 2: Typical components of Autonomous vehicles (Google image, 2018)

Sensors not only help to determine the actual environment and the dangers present, they also help the vehicle to provide an appropriate response. These responses can range from accelerating or decelerating to turning, emergency stopping, and evasive maneuvers. Whilst the responses are determined by a central software component, it is the sensors that provide the data on which such actions are based. In this paper, the camera system, RADAR, and LiDAR sensors are studied in detail.

Figure 3: Sensors of AVs

2.1.1 CAMERA

Cameras are already commonplace on modern cars. Since 2018, all new vehicles in the US have been required to fit reversing cameras as standard. Almost all development vehicles today feature some sort of visible-light camera for detecting road markings. Autonomous vehicles are no different; an autonomous vehicle will feature multiple cameras, fitted at all angles, to build a 360-degree view of its surroundings. They mainly detect traffic lights, read road signs, and keep track of other vehicles, while also looking out for pedestrians and other obstacles (James Quinn, 2017). A variety of camera technologies are employed:
Stereo Cameras
With stereo cameras, two digital cameras work together. Similar to the stereoscopic vision of a pair of eyes, their images enable depth perception of the surrounding area, providing information on aspects including the position, distance, and speed of objects. The cameras capture the same scene from two different viewpoints. Using triangulation based on the offset of corresponding pixels, software compares both images and determines the depth information required for a 3D image. The result becomes even more precise when structured light is added to the stereo solution: geometric brightness patterns are projected onto the scene by a light source, and the distortion of this pattern by three-dimensional forms allows depth information to be determined on this basis as well. (Future Markets Magazine)
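As a rough illustration of the triangulation principle just described, the following sketch computes depth from the pixel disparity between the two images. The focal length and baseline values are assumptions chosen only for the example, not parameters of any particular stereo camera.

```python
# Minimal sketch of stereo triangulation:
# depth = focal_length * baseline / disparity, where disparity is the horizontal
# pixel offset of the same point between the left and right images.

def stereo_depth(disparity_px: float,
                 focal_length_px: float = 700.0,    # assumed focal length in pixels
                 baseline_m: float = 0.12) -> float:  # assumed spacing between the two cameras
    """Return the estimated distance (metres) to a point seen in both images."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive; zero disparity means infinite range.")
    return focal_length_px * baseline_m / disparity_px

# Example: a point shifted by 8 pixels between the two views lies
# roughly 700 * 0.12 / 8 = 10.5 m ahead of the camera pair.
print(f"Estimated depth: {stereo_depth(8.0):.1f} m")
```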

Time-of-Flight (ToF) Cameras
ToF is another method that determines distance based on the transit time of individual light points. ToF
technology is highly effective in obtaining depth data and measuring distances. A ToF camera provides two
types of information on each pixel: the intensity value – given as grey value – and the distance of the object
from the camera, known as the depth of field. Modern ToF cameras are equipped with an image chip with
several thousand receiving elements. This means that a scene can be captured in its entirety and with a high
degree of detail in a single shot. (Future Markets Magazine)
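A minimal sketch of the underlying range calculation, assuming only that the camera reports the round-trip travel time of its light pulse:

```python
# Minimal time-of-flight range calculation (example value chosen for illustration).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the object: the light travels out and back, so divide by two."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after about 66.7 nanoseconds corresponds to an object ~10 m away.
print(f"{tof_distance(66.7e-9):.1f} m")
```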

Cameras work much like human eyes. They can pick up a great deal from lane markings and road signs. This gives cameras an advantage over other technologies employed in self-driving cars, because the ability to read signs and see colors allows cars to navigate modern roads without driver input. Another advantage of cameras is their price: compared to other technologies, cameras these days are quite cheap (James Quinn, 2017). However, cameras have their own set of disadvantages:
- Just like human eyes, their 'vision' can be hampered by adverse weather conditions, and they may fail to classify objects; for example, the shadow cast by a light post in the dark could be mistaken for a discarded pipe. (Tero Heinonen, 2017)
- It is difficult to produce a 3D image using a 2D camera, and 3D cameras that need to function in varying lighting conditions are hindered by large pixels and therefore lower resolution.
- Extracting information from camera output requires complex and heavy computation. Edge detection, color segmentation, morphological analysis, feature detection and description, and object detection are performed by computer vision algorithms that need a large amount of processing (Vivacqua, Vassallo, & Martins, 2017). This could lead to accidents if the data is not processed quickly enough (a minimal edge-detection sketch is given after this list).
- Cameras are vulnerable to unauthorized access. A situation could arise in which the cameras have been tampered with and provide no usable information during an attack, so autonomous vehicles could be vulnerable to such attacks. (Tero Heinonen, 2017)
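The sketch below illustrates the kind of per-frame processing referred to in the third point, using OpenCV's standard grayscale conversion and Canny edge detector. The file name and threshold values are placeholders chosen only for illustration.

```python
import cv2  # OpenCV

# Load one camera frame (hypothetical file name) and convert it to grayscale.
frame = cv2.imread("camera_frame.png")
if frame is None:
    raise FileNotFoundError("camera_frame.png is a placeholder; supply a real image.")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Canny edge detection: the two thresholds control which gradients count as edges.
edges = cv2.Canny(gray, 100, 200)

# Even this single step touches every pixel; a full pipeline (segmentation,
# feature description, object detection) multiplies the per-frame cost, which is
# why camera-based perception is computationally heavy.
print("Edge pixels detected:", int((edges > 0).sum()))
```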
There could be further problems and uncertainties associated with cameras in autonomous vehicles. One solution could be to develop software that fuses 3D camera images with those of a high-resolution 2D camera. This would yield high-resolution 3D data that can then be further processed with the help of artificial intelligence: using the high-resolution images, detected objects can be classified, so that a person standing in the street is not treated as a fixed object and a shadow remains just a shadow. (Future Markets Magazine)

2.1.2 RADAR

Ever since radio waves were discovered and their ability to reflect off objects was understood, engineers have been finding new applications based on their detection and ranging properties. The technique, called radar (radio detection and ranging), works on the principle of a source transmitting a radio wave, which is reflected by a surface and then received and processed by a receiver system (Ansys, 2018).
The radar system emits radio waves and intercepts the reflected returns. The time taken for the radio wave to return gives the distance between the obstacle and the car. In the radar instrument, the antenna doubles as both receiver and transmitter. Radio waves are absorbed less than the light waves used in LiDAR when they strike objects, so radar can work over a relatively long distance. The most well-known use of radar technology is for military purposes.
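As a rough illustration of these two principles (round-trip timing for range, and the Doppler shift for relative speed), the following sketch uses assumed example values; the 77 GHz carrier is only a commonly cited automotive radar band, not a parameter of any specific system.

```python
# Minimal sketch of the two basic radar measurements (illustrative values only).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def radar_range(round_trip_time_s: float) -> float:
    """Distance to the target from the round-trip time of the radio pulse."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def doppler_speed(freq_shift_hz: float, carrier_freq_hz: float = 77e9) -> float:
    """Relative (radial) speed of the target from the Doppler frequency shift.
    77 GHz is assumed here only as a typical automotive radar band."""
    return freq_shift_hz * SPEED_OF_LIGHT / (2.0 * carrier_freq_hz)

# A return after 1 microsecond is ~150 m away; a 5.1 kHz shift is ~10 m/s closing speed.
print(f"range = {radar_range(1e-6):.0f} m")
print(f"speed = {doppler_speed(5.1e3):.1f} m/s")
```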
There are a few shortcomings in the use of radar-based systems in cars:
- Detecting small objects with radar is relatively difficult at shorter wavelengths.
- Detecting a static object whose relative velocity is exactly that of the moving vehicle is quite difficult and requires proprietary algorithms.
- Existing radar-based ADAS systems do not work well in closed environments such as tunnels and usually go into standby mode.
- Radar-based systems have limitations when it comes to recognizing and classifying objects.
- Interference from other radar systems causes accuracy problems.
These shortcomings are being investigated, and vehicles are being equipped with advanced algorithms that differentiate between obstacles. Interference from other vehicles is minimized by each car's own radar-based system. There is still room for improvement in radar systems, but radar remains attractive because it is relatively low in cost and offers a wide detection range.
It is known that radar works on the principle of transmitting radio waves and receiving them after reflection. While other sensors such as cameras, LiDAR, and ultrasonic systems are also in widespread use, radar-based systems have some inherent advantages:
- Radar is touted as an all-weather solution. Real-world operating conditions such as temperature and humidity do not affect the functioning of radar-based systems, and one of its key advantages is that it works seamlessly under varying lighting conditions, night or day.
- Long-range radar systems can see very far, comfortably handling ranges of roughly 30 to 250 meters.
- Materials that are generally considered insulators, such as rubber, do not affect radar-based systems.
- It is relatively easy to accurately measure the velocity, distance, and position of an object using radio waves.
- Radar can easily differentiate between stationary and moving objects, which is one of the major shortcomings of proximity-sensor-based systems.
- Radar can detect multiple objects simultaneously, which proximity-based sensors cannot.
- When used in conjunction with existing camera-based systems, a 3D image can be created by angle detection of the object and sensor fusion with the camera data.

2.1.3 LIDAR

LiDAR measures the shape and contour of the ground and the environment. It bounces a laser pulse off a target and then measures the time (and hence the distance) each pulse traveled. The science is based on the physics of light and optics, with travel times measured in nanoseconds.
In a LiDAR system, light is emitted by a rapidly firing laser. LiDAR gathers its information by sending out laser light and collecting the light each laser beam returns. The laser light travels and reflects off points in the environment, such as buildings and tree branches, and the reflected light energy returns to the LiDAR sensor where it is recorded.
There are three basic methods developers have employed to build LiDAR technology, all based on firing one or more laser beams into the environment and, through different methods, determining the distance to the points they illuminate and that reflect the light back.

The distinct approaches to LiDAR technology are
1) Solid-state hybrid technology, invented in 2007,
2) MEMS technology (in development stage) and
3) Mechanical mechanism technology.
Solid-State Hybrid LiDAR (SH LiDAR) describes the LiDAR used by Google over the past ten years. Mechanical mechanism LiDAR has been around for decades and describes the technology of Leica, Riegl, and Sick, companies that operate in the high-end industrial LiDAR market (Frost & Sullivan, 2016).

Solid-State hybrid LiDAR


Solid State Hybrid LiDAR is a combination of a solid-state detector system within a spinning system taking
advantage of a 360 degree view in order to gather data at a very fast speed.
Using multiple laser beams, 16 to 64 lasers scan the environment at 1.2M points per second.
The hybrid design was breakthrough technology in 2007 because it vastly simplified what was previously a complex mechanical part into one solid-state part. Without this advance, systems of 16 to 64 simultaneous lasers would be complicated and unmanageable.
SH LiDAR is a point-by-point measurement of distance and reflectivity in a horizontal field of view of
360° and a vertical field of view of up to 40°, updated up to 20 times per second. It’s a Time of Flight (ToF)
measurement, where a pulse of light is emitted and the return trip to an object is measured and by using the
speed of light, the exact distance can be determined. In the case of SH Lidar, multiple pulses of light from
multiple lasers are simultaneously pulsing and measuring distances in nanoseconds. The calculation for how far a returning light photon has travelled to and from an object (the round trip) is:
Distance = Speed of Light × Time of Flight
so the range to the object itself is half of this round-trip distance.
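The sketch below illustrates how one such measurement, combined with the firing angles of the laser, can be turned into a 3D point. The angle convention and values are assumptions for illustration, not those of any particular SH LiDAR unit.

```python
# Minimal sketch: one LiDAR return becomes a 3D point (assumed geometry:
# azimuth measured in the horizontal plane, elevation above it).
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def lidar_point(time_of_flight_s: float, azimuth_deg: float, elevation_deg: float):
    """Convert one pulse's round-trip time and firing angles into an (x, y, z) point."""
    rng = SPEED_OF_LIGHT * time_of_flight_s / 2.0   # object range: half the round trip
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = rng * math.cos(el) * math.cos(az)
    y = rng * math.cos(el) * math.sin(az)
    z = rng * math.sin(el)
    return x, y, z

# A return after 200 ns fired at 30 deg azimuth, 2 deg elevation lands ~30 m away.
print(lidar_point(200e-9, 30.0, 2.0))
```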
Multiple Points of Viewing (MPV) was invented with the development of SH Lidar. The intrinsic
technology of the SH Lidar provides MPV, a radical 3D visualization feature which allows the computer-
brain of the system to have greater “visibility” of the environment, seeing from multiple points of view
simultaneously due to its 360 degree coverage.
The Lidar brain and control center is able to take advantage of a million points of view, scanning the
environment in nanoseconds. The 360 degree rotating view of the environment is able to assess the
environment, “seeing” faster and more completely than the human brain, excelling beyond what the eyes,
ears, and human brain can perceive from the driver’s seat of a moving vehicle. It can “see” emergencies in
some cases before a human driver can because the system has the advantage of a million different points of
view, directly centered on the moving vehicle, using the view from the driver’s seat, but not confined to
this as a single point of view.
The data capability of the SH LiDAR sets it apart from other sensor technologies, making it the solution
for ADAS with the higher level requirements of safety, including levels 3, 4, and 5 autonomous vehicles.
Solid-State Hybrid LiDAR produces optimal data with following details:
1) Abundant data for up-close, mid-range, and far range viewing.
2) Accurate data results without false positives or false negatives. The LiDAR is not blinded by
sunlight and can "see" perfectly at night.
3) SH Lidar is durable and reliable in all kinds of weather, including snow, fog and rain.
4) It can “see” in 360 degrees.
5) It “sees” in measurements, so it is highly accurate and not confused by optical illusions, like
cameras. (Frost & Sullivan, 2016)

Within its statement, NHTSA published a chart of safety levels for ADAS-automated vehicles. It covered the complete range of vehicle automation, from vehicles with none of their control systems automated (level 0) through fully automated vehicles (level 4).
Frost & Sullivan (2016) segmented vehicle automation into these five levels to clarify the degrees of automation. The distinct levels give car makers and developers of sensors and systems for ADAS a common language and a way to categorize new safety systems and the degree and features of driver assistance they provide.

Figure 4: Roadmap to automation - Driver Driven to Driverless (Frost & Sullivan, 2016)

ADAS (Advanced Driver Assistance Systems) covers a wide variety of capabilities that make for easier and safer driving. Car makers are currently using an array of less expensive sensors to provide safety features for which minimal data, 2D information, and low-resolution data are sufficient.

CHAPTER 3
FINDINGS
Automation can prevent crashes, limit injuries, reduce risky behavior, and provide support to high-risk groups in high-risk situations. As fully autonomous vehicles will reduce or eliminate the element of human error, it is likely that road crashes and the number of casualties will be significantly reduced (Road Safety Factsheet: Autonomous vehicles, 2018).
A study by the University of Michigan Transportation Research Institute (Shoettle & Sivak, 2015) and a study by the Virginia Tech Transportation Institute (Blanco, et al., 2016) found that automated driving systems were characterized by much lower crash rates than conventionally driven cars. However, the small number of crashes involving automated driving systems, none of them fatal (although this has changed since the time these studies were published), makes the results statistically insignificant. For this to change, automated driving miles travelled and incidents reported would need to be scaled up considerably. It must also be noted that the majority of test kilometers are generally driven in clear, sunny conditions on wide, uncomplicated roads, not necessarily representative of the average road, and the systems would be unable to function in the extreme conditions often faced by human drivers. This is a sensible approach for testing the technology, but such results could bias safety assessments (Road Safety Factsheet: Autonomous vehicles, 2018).
Despite the many benefits of autonomous vehicles, there are also a number of risks, particularly during the transition period in which conventional, semi-autonomous, highly autonomous, and fully autonomous vehicles will share the road.

3.1 SENSORS ASSESSMENT IN TERMS OF RISKS AND HAZARDS


Incorrect operation of an AV may result in mishaps of various severity levels. Risk can be defined as a measure of the potential consequence of a hazard, representing both the likelihood and the severity of something undesired happening. During the hazard identification stage, hazards can be classified according to their risks. A Preliminary Hazard Analysis (PHA) is the starting point for classifying these hazards. As with most safety-critical systems, AV system hazards can be classified qualitatively, using predefined categories known as risk classes computed as the product of severity and the likelihood of occurrence. For the AV system, these classes are: negligible (RV < 1), marginal (1 < RV < 10), critical (10 < RV < 100), and catastrophic (RV > 100); a minimal numerical sketch of this classification is given after the severity table below. The severity levels are described below:
Table 2: Severity Level description (Duran, Zalewski, Robinson, & Kornecki, 2013)

Severity level Description


1 No loss of any kind
2 Minor property loss (low cost hardware parts)
3 Major property loss, damage to the environment
4 Loss of critical hardware, human injuries, major damage to the environment
5 Catastrophic loss of life, loss of the entire AV system, serious environment damage
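Based on the risk-class thresholds above, the following minimal sketch computes the risk value as the product of severity and likelihood and bins it into the four classes; the example numbers are assumptions for illustration only, not values from the cited study.

```python
# Minimal sketch of the qualitative risk-class computation described above.

def risk_class(severity: float, likelihood: float) -> str:
    """Classify a hazard from its severity level and assumed likelihood score."""
    rv = severity * likelihood
    if rv < 1:
        return "negligible"
    elif rv < 10:
        return "marginal"
    elif rv < 100:
        return "critical"
    return "catastrophic"

# e.g. a severity-4 hazard (loss of critical hardware, human injuries) with an
# assumed likelihood score of 5 gives RV = 20, falling in the 'critical' class.
print(risk_class(severity=4, likelihood=5))   # -> critical
```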

Since an AV system depends on its LiDAR, RADAR, and camera systems to assure proper navigation and obstacle avoidance, any hazard of these subsystems also constitutes a hazard for the AV system. From this perspective, the LiDAR, RADAR, and camera systems are safety-critical: any failure of these subsystems could propagate and lead to a disaster with severe consequences. A list of hazard types with severity levels is given below:

Table 3: Hazards Identification (Duran, Zalewski, Robinson, & Kornecki, 2013)

Item | Sub-item | Fault condition / hazard | Severity level
LiDAR | Position encoder | Fails to read position data | 4
LiDAR | Electrical | Short circuit | 4
LiDAR | Electrical | Overvoltage | 4
LiDAR | Optical receiver | Misalignment | 3
LiDAR | Optical filter | Damaged | 3
LiDAR | Mirror motor | Malfunction (of either the laser or the LiDAR) | 4
Camera system | Camera | Misalignment | 3
Camera system | IR filter | Missing | 3
Camera system | Lens | Damaged | 3
Camera system | Camera | Improper lighting | 3

A Bayesian Belief Network model was developed for predictive reasoning about camera misalignment and inter-causal reasoning about LiDAR failure. A detailed analysis of these predictive and inter-causal reasoning models was also prepared to identify the impact of the evidence on pedestrian injury (Duran, Zalewski, Robinson, & Kornecki, 2013); a toy numerical illustration of this kind of predictive reasoning is given after the table below.

Table 4: Predictive and inter-causal reasoning: impact of the evidence on pedestrian safety (Duran, Zalewski, Robinson, & Kornecki, 2013)

Sensor | Evidence | Pedestrian injury likelihood (%)
Camera | Improper lighting | 9.07
 | Severely damaged camera lens | 9.76
 | Camera misalignment | 14.70
LiDAR | Optical receiver misalignment | 16.50
 | Damaged optical filter | 19.50
 | Mirror motor malfunction | 31.00
 | Overvoltage | 31.40
 | LiDAR failure | 34.40
 | Computer Vision (CV) failure | 35.00
 | CV and LiDAR failure | 47.70
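The following toy calculation illustrates the predictive-reasoning step on a single camera-misalignment node: the probability of pedestrian injury is recomputed once evidence of misalignment is observed. The probability values are made-up placeholders, not the figures from the Duran et al. (2013) model.

```python
# Minimal sketch of predictive reasoning in a two-node belief network.
# All probability values below are illustrative placeholders.

p_misaligned = 0.05                 # prior probability of camera misalignment
p_injury_given_misaligned = 0.147   # P(injury | misaligned), placeholder
p_injury_given_ok = 0.02            # P(injury | aligned), placeholder

# Marginal probability with no evidence: average over the camera states.
p_injury_prior = (p_injury_given_misaligned * p_misaligned
                  + p_injury_given_ok * (1 - p_misaligned))

# Predictive reasoning: evidence that the camera IS misaligned.
p_injury_with_evidence = p_injury_given_misaligned

print(f"P(injury), no evidence:        {p_injury_prior:.3f}")
print(f"P(injury | camera misaligned): {p_injury_with_evidence:.3f}")
```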

The following chart presents another partial result of the modeling. The columns show the likelihood of the model events in two scenarios: when there is no evidence and when there is evidence of pedestrian injury. LiDAR and CV failure appear as the leading causes of potential pedestrian injury.

Figure 5: Diagnostic Reasoning: Potential causes of pedestrian injury (Duran, Zalewski, Robinson, & Kornecki, 2013)

In accordance with the hazard table, the Bayesian Network (BN) analysis shows that in the case of a total state-estimator and thus navigation failure, improper lighting carries a significant probability of being the cause, with CV and LiDAR failures clearly at the top. The risk values correspond equally well to the LiDAR failures in the BN, with laser malfunction as the primary cause (Duran, Zalewski, Robinson, & Kornecki, 2013).

3.2 COMPARATIVE ASSESSMENT OF SENSORS IN SENSING ROAD ENVIRONMENT


At the core of all levels of automation is the ability of automated vehicles to perceive or sense their environment, process this information to determine what is relevant to safety, carry out the allocated driving task, decide on a course of action, and successfully carry out this action (Parasuraman, Sheridan, & Wickens, 2000). Until recently, automated systems' ability to carry out the combined sensing, processing, deciding, acting, and assessment cycle has fallen short of what humans can achieve, both in scope and in latency.
Technology is now reaching the point where the fusion of different sensors, processing systems, and actuators can replicate and in some cases improve on human driving performance. Although this convergence is at the heart of the incipient revolution promised by highly and fully automated driving, it is not complete, and there remain key areas where automated driving systems still lag behind the capabilities of the average human driver (Safer Roads with Automated Vehicles?, 2018).

Table 5: Assessment of various driving-related sensing systems (Safer Roads with Automated Vehicles?, 2018)

Eyes (human drivers):
1. Color, stereo vision with depth perception.
2. Large dynamic range.
3. Wide field of view, movable both horizontally and vertically.
4. Range: no specific distance limit; a realistic daytime limit of at least 1000 meters and a realistic night limit of 75 meters under low-beam headlamp illumination.

RADAR (automated vehicles):
1. Accurate distance assessment.
2. Relatively long range.
3. Robust in most weather conditions.
4. Immune to effects of illumination or darkness.
5. Fixed aim and field of view (deploying multiple radar sensors can compensate for this).
6. Field of view (horizontal): around 15 to 90 degrees.
7. Range: around 250 meters.
8. Resolution: 0.5 to 5 degrees.

LiDAR (automated vehicles):
1. Accurate distance and size information.
2. Able to discern a high level of detail (shape, size, etc.), especially for nearby objects and lane markings.
3. Useful for both object detection and roadway mapping.
4. Immune to effects of illumination or darkness.
5. Fixed aim and field of view, but multiple LiDAR sensors can be employed as needed (and some LiDAR systems are capable of 360-degree coverage within a single piece of equipment).
6. Field of view (horizontal): up to 360 degrees.
7. Range: around 200 meters.
8. Resolution: around 0.1 degree.

Camera systems (automated vehicles):
1. Color vision possible (important for sign and traffic signal recognition).
2. Stereo vision when using a stereo, 3D, or time-of-flight (ToF) camera system.
3. Fixed aim and field of view, but multiple cameras can be employed as needed.
4. Field of view (horizontal): 45 to 90 degrees.
5. Range: no specific distance limit (mainly limited by an object's contrast, projected size on the camera sensor, and camera focal length), but realistic operating ranges of 150 m for monocular systems and 100 m (or less) for stereo systems are reasonable approximations.
6. Resolution: large differences across different camera types and applications.

The table describes and assesses the principal performance aspects and relative advantages of various sensor technologies in comparison to human eyesight. Each of these human or machine sensing platforms must be able, singly or in combination with other sensors, to adequately replace or improve on human vision and perception if they are to improve safety outcomes. Their capacity to do so, however, will be challenged in a number of situations (Shoettle, Sensor fusion: A comparison of sensing capabilities of human drivers and highly automated vehicles, 2017):
- Extreme weather or other degraded environmental conditions (such as heavy rain, snow, or fog): these phenomena reduce maximum range and signal quality for human vision and optical sensors (cameras, LiDAR).
- Excessive dirt or physical obstructions such as snow or ice on the sensor surface or the vehicle surface: these reduce maximum range and signal quality for human vision, cameras, LiDAR, and RADAR.
- Darkness, low illumination, or glare: these reduce maximum range and signal quality for human vision and cameras.
- Large physical obstructions (buildings, terrain, heavy vegetation, etc.): these interfere with line of sight for human vision and all basic AV sensors (cameras, RADAR, LiDAR).
- Dense traffic: interferes with or reduces line of sight for human vision and all basic AV sensors (cameras, RADAR, LiDAR).
Table 6: Assessment of sensor performance across various driving tasks (Shoettle, Sensor fusion: A comparison of sensing capabilities of human drivers and highly automated vehicles, 2017)

Performance aspect | Human eyes | RADAR (AV) | LiDAR (AV) | Camera (AV)
Object detection | Good | Good | Good | Fair
Object classification | Good | Poor | Fair | Good
Distance estimation | Fair | Good | Good | Fair
Edge detection | Good | Poor | Good | Good
Lane tracking | Good | Poor | Poor | Good
Visibility range | Good | Good | Fair | Fair
Poor weather performance | Fair | Good | Fair | Poor
Dark or low illumination performance | Poor | Good | Good | Fair
Ability to communicate with other traffic or infrastructure | Poor | n/a | n/a | n/a

The table summarizes how different sensor strategies handle individual driving tasks, based on a comprehensive review of human and machine sensing capabilities applied to various driving and exemplar pre-crash scenarios.
Humans still retain an advantage over single-sensor automated systems when it comes to reasoning, anticipation, perception, and sensing while driving. Overcoming this gap (and only in certain conditions) requires multi-sensor fusion on the part of the automated system, a strategy commonly employed on the various vehicle test-beds deployed in current trials. Even with multi-sensor fusion, human capabilities still outperform automated systems in certain problematic and complex contexts. Some common traffic scenarios still confound automated driving systems, including straight crossing paths and left turns across opposite-direction traffic, which are consistently difficult for automated systems to assess and avoid (Safer Roads with Automated Vehicles?, 2018).
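As a minimal sketch of the multi-sensor fusion idea (not any production algorithm), the following example fuses per-sensor detections by a confidence-weighted vote; all weights and readings are assumed values for illustration.

```python
# Minimal sketch of late sensor fusion by weighted agreement: each sensor reports
# whether it detects an obstacle ahead together with a confidence value, and the
# fused decision is a confidence-weighted vote.

def fuse_detections(detections: dict[str, tuple[bool, float]]) -> bool:
    """detections maps sensor name -> (obstacle_detected, confidence in [0, 1])."""
    score = sum(conf if detected else -conf
                for detected, conf in detections.values())
    return score > 0.0   # positive net confidence => treat as an obstacle

# Example: the camera is unsure in low light, but RADAR and LiDAR agree.
readings = {
    "camera": (False, 0.30),
    "radar":  (True,  0.80),
    "lidar":  (True,  0.90),
}
print("Brake for obstacle:", fuse_detections(readings))   # -> True
```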

CHAPTER 4
CONCLUSION

The automation of vehicles is set to become more widespread, with the emergence of advanced sensing
devices and new on-board processing capabilities. This will mean that less and less input is needed from
the ‘human driver’ during the driving task. This could also lead to shifts in the way that cars and other
vehicles are used and owned. In fact, the results of some advanced trials suggest that some automated
vehicles are already able to operate reliably in some contexts, but variable performance in other conditions
means that these technologies will need to be further developed before autonomous vehicles become a
common sight on the roadways. For developing countries, however, those days still seem far away.
In line with the objectives of automation, it should prevent crashes, limit injuries, reduce risky behavior, and provide support to high-risk groups in high-risk situations. As fully autonomous vehicles will reduce or eliminate the element of human error, it is likely that road crashes and the number of casualties will be significantly reduced. However, a long transition period is expected before the day when only fully autonomous vehicles are on the road.
Human error is considered a contributory factor in around 90% of all fatal road crashes in Britain (Road Safety Factsheet: Autonomous vehicles, 2018). As automated vehicles are not subject to impaired driving, driving while texting, or other forms of distraction such as fatigue, it is likely that there will be a reduction in collisions. However, it should not be assumed that human error has always been correctly identified as a contributory factor, or that all such crashes could have been avoided by addressing that error. Many crashes that involve human error also involve other factors that may have contributed to the crash even if the human had not made a mistake, and errors linked to poor roadway design or faulty vehicle design are often attributed to human factors when they are in fact design errors.
One way in which automated vehicles could reduce the number of road casualties is through speed limit
compliance. Studies indicate that in Britain, on 30 mph roads, just over half of drivers travel above the speed limit and 20% travel at more than 35 mph (Road Safety Factsheet: Autonomous vehicles, 2018). In future, however, driving speeds will be controlled by the system. Evidence suggests that the risk of fatal injury to a pedestrian hit by a car travelling at 30-40 mph is 350-500% greater than for vehicles travelling below 30 mph
(Road Safety Factsheet: Autonomous vehicles, 2018). Therefore, suburban accident rates could be reduced
dramatically by this change alone. Sensors that are expected to be installed in automated vehicles are likely
to be much faster and more reliable at detecting and avoiding vulnerable road users than most drivers today.
This could provide large reductions in road casualties for vulnerable groups such as children, the elderly
and cyclists.
There is an expectation that driverless cars and autonomous systems will deliver a ‘near zero’ harm solution
for everyone, including vehicle occupants and those termed ‘vulnerable road users’ such as pedestrians and
cyclists. However, ‘near zero’ does not mean absolutely zero, as there could be times where the driverless
vehicle will be forced to choose between options where there is no outcome that avoids harm to all road
users.
Therefore, a full application of the ‘Safe and Sustainable System’ approach is still recommended, taking
into account the possibility of technology failures of autonomous vehicles, acting as a fallback solution.
The system should be built to tolerate human and machine errors, preventing death and serious injury in the
event of a collision. Safe and Sustainable System measures include central and nearside barriers that prevent
vehicles striking one another head-on and dedicated facilities for vulnerable road users that provide
separation such as cycle lanes.

As there will be a transitional period in which conventional vehicles, semi-autonomous, highly autonomous
and fully autonomous vehicles will share the road, drivers will need to have an understanding of the various
types of vehicles. The public will also need to be aware of the performance abilities and limitations of these
vehicles.
So far, it can be concluded that automated vehicle technology has mainly focused on the detection and
recognition of pedestrians and cyclists by the vehicle and although good progress has been made, many
difficulties are yet to be overcome. For example, this technology is not yet operational in adverse weather
conditions. Technology that reliably predicts the intentions and behavior of cyclists and pedestrians, so that the automated vehicle can adjust its own behavior, is crucial for safe interaction between these vehicles and pedestrians and cyclists. However, this is not straightforward, as it can be very difficult for an automated system to predict the behavioral intentions of pedestrians and cyclists. The possibility that pedestrians and cyclists will respond differently to partly automated vehicles also cannot be ignored. The few studies that have examined the behavior of pedestrians and cyclists interacting with automated vehicles found that they were fairly cautious and not automatically confident in the vehicle's 'skills'. Pedestrians and cyclists were found to appreciate messages and/or signals from the vehicle indicating whether it had detected them and what it intends to do (Road Safety Factsheet: Autonomous vehicles, 2018).
As vehicles become increasingly autonomous it will be essential that drivers understand the technology in
their vehicles, what it does, how to use it safely and the potential risks of misuse. Drivers should receive
vehicle familiarization training when they receive new vehicles, including the safe use of technology,
particularly if their previous vehicle did not have it. Drivers need to be alert and ready to take control of
their vehicle at any time and therefore must not engage in other tasks such as making phone calls or writing
texts or emails during driving time, as they are still in charge of the vehicle.
If used properly, autonomous vehicles have enormous potential to reduce crashes and casualties, but if they are not used properly, they can also increase risk, especially if drivers over-rely on the technology.

REFERENCES
Blanco, M., Atwood, J., Russell, S., Trimble, T., McClafferty, J., & Perez, M. (2016). Automated Vehicle
crash rate comparison using naturalistic data. Virginia Tech Transportation Institute. Retrieved
March 31, 2019, from
https://vtechworks.lib.vt.edu/bitstream/handle/10919/64420/Automated%20Vehicle%20Crash%2
0Rate%20Comparison%20Using%20Naturalistic%20Data_Final%20Report_20160107.pdf?sequ
ence=1&isAllowed=y
Duran, D. R., Zalewski, J., Robinson, E., & Kornecki, A. J. (2013). Safety Analysis of Autonomous Ground
Vehicle Optical Systems: Bayesian Belief Networks Approach. Federated Conference on
Computer Science and Information Systems (pp. 1407-1413). Florida: IEEE.
Frost & Sullivan. (2016). LiDAR: Driving the Future of Autonomous Navigation. San Jose: Velodyne LiDAR.
Retrieved March 15, 2019, from https://velodynelidar.com/docs/papers/FROST-ON-LiDAR.pdf
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human
interaction with automation. IEEE Trans. Syst., Man, and Cybernetics - Part A, 30.
Pendleton, S. D., Andersen, H., Du, X., Shen, X., Meghjani, M., Eng, Y. H., . . . Ang Jr., M. H. (2017).
Perception, Planning, Control, and Coordination for Autonomous Vehicles. (R. Parkin, Ed.)
Machines, 5(1), 6. doi:10.3390/machines5010006
(2018). Road Safety Factsheet: Autonomous vehicles. ROSPA.
(2018). Safer Roads with Automated Vehicles? International Transport Forum.
Shoettle, B. (2017). Sensor fusion: A comparison of sensing capabilities of human drivers and highly
automated vehicles. Ann Arbor.
Shoettle, B., & Sivak, M. (2015). A preliminary analysis of real world crashes involving self-driving
vehicles. University of Michigan. Ann Arbor: Transportation Research Institute.
Xiao, L., Dai, B., Liu, D. H., & Wu, T. (2015). CRF based Road Detection with Multi-Sensor Fusion. IEEE
Intelligent Vehicles Symposium (IV). 4, pp. 192-198. Seoul: IEEE.
3D Cameras in Autonomous Vehicles. (n.d.). Future Markets Magazine.
Dawkins, T. (2019). Autonomous Cars 101: What Sensors Are Used in Autonomous Vehicles?
Heinonen, T. (2017). All that is wrong with autonomous car sensors and how do we fix the problems.
Quinn, J. (2017). Cameras: The Eyes of Autonomous Vehicles. Self Driving Cars.
Pathfinder, (July 27, 2018). Introduction to Radar. Understanding Radar for automotive (ADAS) solutions.
Retrieved from https://www.pathpartnertech.com/understanding-radar-for-automotive-adas-solutions/

Carpenter, S. (2018). Autonomous Vehicle Radar: Improving Radar Performance with Simulation.
Volume XII (1). Retrieved from https://www.ansys.com/about-ansys/advantage-magazine/volume-xii-
issue-1-2018/autonomous-vehicle-radar

Neal, P. (April 24, 2018). Sensors insights. LiDAR vs. RADAR. Retrieved from
https://www.sensorsmag.com/components/lidar-vs-radar

