
Riyush Thakur

IR 2/ English 10 GT
1/22/19
Turn in #14

Annotated Source List

Coeckelbergh, M., Responsibility and the Moral Phenomenology of Using Self-Driving
Cars, Applied Artificial Intelligence 30(8) (2016), 748–757.

First, the researcher asserts two separate criteria needed to assign liability: according to
Aristotle’s Nicomachean Ethics, the perpetrator must be in control and have prior knowledge of
any risk factors that may arise. However, responsibility can be characterized by an individual’s
relation to others rather than to himself. On the road, drivers are primarily concerned with themselves
and their passengers, but they have an implicit responsibility to other independent drivers. The
drivers have an understood relation toward each other, meaning that one could be held liable for
the damages of the other if the perpetrator has control. The empirical application to self-driving
cars is that drivers have a relation to the automated car; therefore, they have a direct liability
even under fully automatic circumstances. However, in a moral framework, no individual has a
legally recognized responsibility to others. For instance, the case Ruiz v. Foreman upheld the
precedent that a moral dilemma supersedes a safety issue: Ruiz voluntarily crashed his vehicle
into private property in order to escape a predictably worse crash, resulting in his acquittal.
Essentially, the capabilities of a car bear no weight in determining liability; what matters is
identifying the relations between independent bodies on the road, whether human drivers or self-driving cars.

The researcher tends to oversimplify his claims with seemingly rare hypothetical scenarios.
Instead, his arguments would flow more smoothly without the unnecessary clarifications
between claims. As a result, the paper switches tone from thought-provoking and scholarly to
monotonous. Once the paper gets trimmed down, the researcher conveys more clarity and
substance. Regardless, the paper uses a multifarious selection of citations and case studies to
develop an ethical argument contingent on a self-driving car’s relation to other functioning bodies.
Secondly, the author employs richer vocabulary and more varied sentence structure than any
previously read paper. Finally, the researcher humbly aggregates all of his points into one concise
commentary on the anticipated obstacles likely to prevent his thoroughly researched
conclusions from becoming reality.

LeValley, Dylan. “Autonomous Vehicle Liability—Application of Common Carrier
Liability.”
http://seattle.lawreviewnetwork.com/files/2013/02/SUpraLeValley1.pdf

This testimony provides insight into the history of self-driving cars and proposes regulation
to address the issue of liability. First, the document discusses self-piloting airplanes to see whether
any of their liability laws can be applied to cars. Unfortunately, the cases are very different. Airplanes
on autopilot require constant human interaction, and they usually aren’t designed to adapt to a
changing environment. In other words, a plane’s path is planned in advance because there is little
air traffic compared to a road where other cars are constantly driving together. These key differences
mean that liability laws for planes and cars can’t be transferred from one to the other. Under the law,
self-driving cars could be categorized as common carriers, that is, automatic transportation vehicles,
or they could be treated like manual cars. One problem with the common carrier approach is that
manufacturers are held liable for design defects, yet defects are inherent in current designs because
no level 5 car exists. Unequivocally, a driver has to demonstrate reasonable focus for liability to fall
on a manufacturer. The article concludes that since self-driving cars still require human input, they
should be treated as normal cars.

This document outlined some basic standards for liability laws and then surgically
defended its claims. However, the testimony seemed biased, as it was stricter toward faulty
driving than faulty designs. It also assumed that the driver is not paying attention, which
narrows the focus to accidents where the driver is always liable. Ultimately, self-driving car
accidents should put less liability on the drivers, since only so much control is in the hands of the
drivers. Self-driving cars should not be classified as manual cars, because a “design defect” can be
interpreted differently between self-driving cars and manual cars. A Tesla Model X could fail to
follow its algorithm, and complete liability might still go to the driver when it should be shared
with the manufacturer. In an ideal scenario, if a driver is paying attention, the manufacturer should
be completely liable for the accident. Overall, the laws should assign partial liability to separate
stakeholders according to the degree of their role in the car accident.

Fleetwood, Janet. “Public Health, Ethics, and Autonomous Vehicles.” American Journal
of Public Health, vol. 107, no. 4, Apr. 2017, pp. 532–37.

All nations are in a fierce race to develop the first fully functioning self-driving car.
However, key distinctions must be made in order to cogently analyze the prevalent risk factors of
the technology. In a recent public health publication, key experts addressed the public interest and
ethics questions surrounding further development of these cars. First, the publication makes an
intriguing statement: “Self driving cars have the potential of saving about thirty seven thousand
lives per year.” However, this goal should not preempt the need for legal regulations to ensure a
steady path towards full autonomy. More specifically, the publication discusses “forced choice
algorithms.” These are the choices separating a partially autonomous car from a truly self-driving
car: the ability to react in abrupt scenarios. If another car swerves over, a fully self-driving car is
expected to swerve defensively to avoid a collision, but no self-driving car or imperfect human is
completely capable of this. As a result, the question becomes where the line should be drawn
between self-driving and partially automatic in order to decide the point at which a car manufacturer
is held liable for a self-driving car crash. Common sentiment and this publication agree that the line
occurs when one integrated system presides over all self-driving cars. This is a clear and consistent
line to draw, but it has a gray area: a self-driving car interface could be infinitely close to
autonomous, and yet the manufacturer wouldn’t be held liable.
This article concisely combined all past supporting viewpoints into one big coalition of
support. To expand on the gray area: if a manufacturer gets cheeky and tries to create a car with
capabilities identical to another self-driving car without utilizing an integrated system, it could
be declared completely innocent of any car crash on the basis that it lacks a fully integrated
system. However, competition between different corporations would likely force out less
capable designs. Even in the worst-case scenarios, a competent judge could justly place liability
on the manufacturer for being infinitely close to the self-driving standard. To sum up, the most
consistent line to draw between driver liability and manufacturer liability is when the car has a
fully integrated self-driving software system presiding over all self-driving cars.

Gerdes, J. C., and S. M. Thornton, Implementable Ethics for Autonomous Vehicles. In
Autonomous Driving, M. Maurer, J. C. Gerdes, B. Lenz and H. Winner (Eds.), Springer
Verlag, Berlin, Heidelberg, 2016, 87–102.

A study, tracking the performance of self-driving cars developed solely by Google,
found over 3,450 fatalities in self-driving car accidents, while only one was reported as the fault of
Google. This disparity raises the question: were all of the cases correctly decided, or did Google
benefit from underdeveloped competition and safety laws? In order to answer this question, the
study discusses the drawbacks of manual cars. The current traffic system relies on multiple
independent human decisions, while one integrated system of self-driving cars is arguably safer
and more efficient than human systems. For instance, a report published by the MIT Media Lab
claims that, “In major cities, 40% of gasoline is used looking for parking.” Also, ninety-three
percent of car crashes are the result of human error. These potential benefits outweigh certain
concerns, such as: why give up control over one’s own life to a machine? Moreover, a recent
demographic shift from baby boomers, a generation where receiving a driver’s license was
considered a rite of passage, to millennials demonstrates a shift in attitude in favor of self-driving
cars. However, self-driving cars have one severe limitation in their current design: while most
research is focused on the creation of sensor systems, self-driving cars also need to communicate
with other cars to create a truly autonomous system.

The study dives into the social, political, and economic changes that will inevitably surface
as a result of new technology. While the study wasn’t particularly focused on regulations, it
covered many relevant and interesting developments to consider when researching self driving
cars. Holistically, researching social implications and legal aspects could create an interesting
aggregate research topic.

Glancy, Dorothy J. “Autonomous and Automated and Connected Cars - First Generation
Autonomous Cars in the Legal Ecosystem.” Minnesota Journal of Law, Science and
Technology, vol. 16, 2015, p. 619,
https://conservancy.umn.edu/bitstream/handle/11299/174406/619%20Glancy.pdf;sequence=1

Autonomous cars can be driven in either controlled environments or unknown terrain. Most
designs include a pre-mapped image of all local road systems, so a self-driving car doesn’t have
to adapt and make decisions at the same time. However, this system places the emphasis of
liability on the manufacturer, since the expectation for these cars is preplanned driving rather than
adaptive driving. Adaptive driving is always subject to error, and drivers are expected to assist the
autopilot. The roles of the drivers in the two modes are significantly different, but they aren’t
precisely or legally defined. In order to justify a claim like “the driver should only pay 25% of the
accident damages,” a court standard must declare the manufacturer liable for the remaining 75%.
Such a standard would be most consistent with preplanned driving, where the driver’s role isn’t
constantly changing throughout the drive. Adaptive driving requires a driver to make decisions at
unpredictable times when the car might drive erratically, hence the role of the driver is unclear.
However, it is possible to declare the driver liable within a preset range. For example, a level 3
autonomous car could be held responsible for twenty to thirty percent of a car crash; the range
would set an absolute minimum and maximum.

This scholarly journal provides a new perspective on judging liability. By arguing for
preset ranges of liability, a court case could limit error and expedite the verdict. However, this
system is somewhat flawed, since the overwhelming majority of self-driving car crashes are
caused by human negligence; if liability is limited to a range, outlier cases could be mishandled.
Unfortunately, the journal makes its arguments by anticipating the development of self-driving
cars in a very specific niche, so it’s possible that research takes a different direction than the
journal expects, nullifying its insights. Ultimately, this approach would expedite the majority of
self-driving car accident cases while requiring a judge to reasonably approach the outliers.

Goodall, N. J., Ethical Decision Making During Automated Vehicle Crashes, Transportation
Research Record: Journal of the Transportation Research Board 2424 (2014), 58–65.

Goodall comprehensively analyzes the algorithms needed to effectively manufacture
safe self-driving cars. He conclusively claims that a 100% crash avoidance rate is impossible no
matter the design, that pre-crash behavior incorporates a degree of moral behavior, and that human
ethics can’t be encoded into computers. The researcher alludes to a study indicating that self-driving
cars would need to travel at least 725,000 miles free of human influence in order to claim
complete automation at the 99% confidence level. However, the study seems exaggerated
because it frames the conclusion in terms of absolute automation. In fact, many other
studies, conducted by the Department of Transportation and other organizations, definitively claim
that self-driving cars are likely to prevent about 90% of future crashes. Goodall’s data isn’t
necessarily wrong, but it’s phrased in a way that supports his argument. Secondly, Goodall refers to
a novel concept: Asimov’s laws, a set of rules illustrating that a machine can’t fully adopt a rational
mindset because automated systems interpret instructions too literally. The researcher then
skillfully juxtaposes Asimov’s laws with an alternative: consequentialism. This concept dictates
that the automated system must choose the least harmful option in an irrevocable scenario
like an impending car crash. However, the self-driving car might not be able to properly
consider the likely outcomes of an imminent car crash.

The researcher then advocates for the implementation of a two-phase deployment
process for future crash-anticipation systems. First, the researcher proposes a unanimous
foundational system of ethics built on basic ideas, such as that injuries are preferable to death and
property damage is preferable to injury. However, a rational code can’t cover every possible
dangerous scenario on the road, creating the need for an artificial-intelligence-based learning
component: the car performs with its foundational code in mind, but any experience it can’t
understand gets fed back into the system in order to change its future behavior. In theory the
approach seems extremely methodical and plausible, but in practice, such an approach seems
infeasible today considering time, money, and manpower constraints. However, Goodall’s paper
is intended as a roadmap for the future, making it a strategy worth considering in the current
experimentation with any self-driving car.

Healey, Jennifer. If Cars Could Talk, Accidents Might Be Avoidable.
https://www.ted.com/talks/jennifer_healey_if_cars_could_talk_accidents_might_be_avoid
able. Accessed 22 Oct. 2018.

The speaker of this TED Talk, researcher Jennifer Healey, discusses the positive
implications of her research on self-driving cars. First, she evokes the surprising statistic that
car accidents are the number one cause of death for people aged sixteen to nineteen.
Empirically, people must prioritize their attention: when switching lanes, they must take their
eyes off the road ahead of them and pay attention to the side lanes. People can’t perfectly
micromanage these situations, but an integrated database of sensors and interconnected
self-driving systems can make safe decisions without the pressure and distractions that humans
face. Secondly, she delves into a discussion of the legal issues that must address self-driving car
accidents. Like every source before this TED Talk, she believes that cars with partial levels of
autonomy should be treated like manual cars, since drivers have a personal responsibility and the
ability to ensure that an accident never occurs; the unanimity of past sources confirms this belief.
However, the speaker concludes that her proposed integrated self-driving systems should transfer
all liability to the manufacturer of the self-driving technology.

In defense of this claim, the speaker declared, “Drivers should expect their car to do all the
driving for them, so anything short of this expectation should fall on the manufacturer.” This
source was perfect for finding a connection between loosely connected subtopics of self-driving
cars. Holistically, this TED Talk provided a convincing defense for why self-driving cars should
operate under strict liability laws based on autonomous technology. This approach should be
thoroughly researched to see if it could provide effective liability laws for self-driving cars.
Finally, the speaker is a prestigious researcher who could serve as an insightful advisor.

Hevelke, A., and J. Nida-Rümelin, Responsibility for Crashes of Autonomous Vehicles: An
Ethical Analysis, Science and Engineering Ethics 21(3) (2015), 619–630.

While most authors scrutinize the responsibilities of individual drivers, Hevelke’s paper
encompasses those of manufacturers as well. The deceivingly simple solution is to hold practically
all manufacturers liable because they are “ultimately responsible for the final product.” However,
such blanket legislative liability disincentivizes manufacturers from improving the quality of
their cars, since better designs would not reduce their exposure. The polar opposite would be to
direct all liability to the driver on the basis that the driver is the primary overseer of the entire
vehicle. However, each extreme allows one stakeholder to avoid legal ramifications in certain
scenarios. The truth is that the driver and the self-driving car each maintain at least a partial role
in navigating the road. Accordingly, the researcher advocates for a simple judgement that excludes
the excessively complex and subjective field of ethics. The researcher presents a hypothetical
scenario in which self-driving cars prevent 50% of all car crashes, noting that the National
Transportation Safety Board predicts that self-driving cars could potentially prevent 93% of all
future accidents. However, the distribution of lives saved changes: a few lives are potentially
sacrificed to save many more. In ethics this otherwise auspicious scenario creates a controversy
that the researcher believes is unnecessary.

Ultimately, Hevelke’s paper provided a direct approach to adjudicating liability on a simply
pragmatic level. Avoiding ethics as a significant factor in liability allows courts to
focus on the cold facts: the capabilities of the car, the actions of the driver, and the established
precedents. Unfortunately, Hevelke’s promising approach is extremely costly and inefficient.
Even with manual cars, complex liability cases tend to be drawn-out legal battles lasting
months, and the advancement of self-driving cars presents more opportunities for unnecessarily
prolonged court cases. However, objective precedents have a very high chance of delivering justice.
Essentially, Hevelke challenges the preconceived notions of liability within ethics, supporting the
minority position: ethics should not be a relevant factor in liability cases.

Nebbia v. New York | LII / Legal Information Institute.
https://www.law.cornell.edu/supremecourt/text/291/502/. Accessed 25 Sept. 2018.

Nebbia v. New York is a landmark Supreme Court case that went drastically differently
from Standard Oil Co. of New Jersey v. United States. The article first provides context: as a
response to the Great Depression, New York set a minimum price for milk at nine cents. At first,
it seemed like an unprecedented action to restrict businesses. Nebbia, a store owner, sold milk at
a cheaper price, and the case eventually went to the courts. However, in this case the court ruled
that government-instituted price fixing was constitutional. This decision seemed to contradict
the case in which Standard Oil was broken up precisely for price fixing; the only difference was
that one scheme was run by a corporation and the other by the government. However, the majority
opinion by Justice Owen Roberts explains the verdict.

He claimed that the price controls were not “arbitrary, discriminatory, or demonstrably
irrelevant.” Since New York’s motive for price fixing was not to crush the competition, it was
allowed. The motive for Standard Oil was illegally securing a profit, whereas the government
was trying to recover its state from the Great Depression. However, motive is still a contentious
gray area in many antitrust cases. The conclusion declares that stakeholders, motives, and context
all play a role in these extremely subjective, complicated court cases: while courts can enact
foundational regulations, they must closely scrutinize each and every case.

Preliminary Report Released for Crash Involving Pedestrian, Uber Technologies, Inc., Test
Vehicle. https://www.ntsb.gov/news/press-releases/Pages/NR20180524.aspx. Accessed
18 Nov. 2018.

On March 18, 2018, a self-driving Uber vehicle under the supervision of one human operator
struck a pedestrian; the operator was uninjured. As discerned in the police report, the
pedestrian was pushing a bicycle across a dimly illuminated street instead of the designated
crosswalk, and toxicology tests showed the presence of marijuana in her system. On the operator’s
side of the story, the car registered the “unknown object” six seconds before impact and determined
the need for emergency braking 1.3 seconds before collision. Uber made it very clear that
automated emergency maneuvers are not enabled, in order to prevent erratic vehicle behavior;
the operator should have handled the emergency. The preliminary report skews, like the
vast majority of self-driving car accident reports, in favor of the operator. The most likely verdict
based on the preliminary report and past verdicts would place liability on the victim for poor
choices leading up to the crash, and perhaps on the operator for failing to intervene. The car
technology, however, would be declared innocent, since a plethora of human decisions could have
prevented the accident. The rebuttal would claim that this reasoning would inevitably declare all
manufacturers innocent of all car accidents regardless of the capabilities of the car; the operator,
or any flawed human who could have prevented the crash, would be liable even when the
self-driving car had a significant role in the crash. Above all, third-party opinions agree on the
need for a car to have one master in charge of all major functions, rather than vaguely dividing the
roles between a driver and a self-driving car, both to prevent avoidable crashes and to easily
evaluate liability.

The objectivity of the police report allows the formation of dissenting views on who should
be liable. This anecdotal report firmly supports the assertion that accidents involving self-driving
cars in preliminary stages of development should be assessed as if they involved manual cars.
Universally, such cars clearly aren’t expected to make every decision, while humans can prevent
the majority of these crashes from happening in the first place. Finally, this incident can be used
later in an argumentative paper to elucidate the causes behind the fact that the vast majority of
car crashes are caused by human negligence.

Santoni de Sio, F., Killing by Autonomous Vehicles and the Legal Doctrine of Necessity,
Ethical Theory and Moral Practice 20(2) (2017), 411–429.

Santoni seeks to establish a moral framework in order to analyze multiple ways of
assessing legal liability. Santoni begins his paper on a foundation of “the doctrine of necessity.”
Essentially, self-driving cars might be forced into a scenario where they can’t save all lives and
property, but they can save some parties at the expense of others. In manual car crashes, drivers
are forced to make quick value judgements before an inevitable car crash, and the doctrine of
necessity is a legal precedent protecting stakeholders forced into such irreconcilable scenarios.
Moreover, lawyers distinguish between two types of defenses: justifications and excuses.
Defendants pleading necessity, the claim that a prohibited action like the vandalism of
private property was unavoidable, can be acquitted due to the extreme circumstances of their
case regardless of the damage they caused; the precedent of necessity is therefore a justification.
Excuses are instances where the defendant willingly does something illegal without culpability,
for example, a bank clerk coerced at gunpoint into giving a sum of money to a criminal.
Justifications permit an unlawful action, but excuses imply a premise of human fallibility, a
quality that’s not applicable to self-driving cars.

Santoni goes on to develop his arguments on the basis of philosophically consistent
landmark court cases like R v. Dudley and Stephens and Ruiz v. Foreman. The cases present
differing views on liability based on a universal ethical code, but Santoni never takes a clear
stance for any particular option. Instead, he summarizes the benefits and drawbacks of as many
options as possible. As a result, his paper does not conclusively address the issue of liability.
However, the author examines very specific stakeholders and precedents of past cases that could
be blended and synthesized into a cohesive research paper. Finally, the paper’s substantially high
reading level required a more extensive analysis than the other annotations, providing valuable
technical knowledge on the subject of ethical liability standards.

Stanford Law School. “Uber Self-Driving Cars, Liability, and Regulation.” Stanford Law
School, https://law.stanford.edu/2018/03/20/uber-self-driving-cars-liability-regulation/.
Accessed 1 Oct. 2018.

This article provides some basic criteria for evaluating self-driving car crashes. The
scenario incorporates an Uber driver, so liability is determined differently. Under conventional
laws, if the driver is careless, Uber is responsible for the driver’s negligence. If the automatic
systems of the car fail to detect a pedestrian, the manufacturer is liable; however, Uber might
share some liability because it provides the service. Finally, if the victim is careless or
obscured by the dark, he or she is at partial fault. The article also advocates for consistent
accident laws in states with minimal regulation. The article glosses over some other stakeholders,
like the manufacturer of the tires or the insurance company, and it was vague overall, but it
started to identify some of the relevant stakeholders in the situation.

This case provided pertinent information on the meticulous details that must be addressed
in a self-driving car accident. While it’s impossible to study laws that encompass every accident
case out there, the popularity of self-driving cars presents an opportunity to conduct research.
Ultimately, there were more questions than answers, but other resources, like the FTC
website, act as standards for comparison. Also, this article can be presented to potential advisors
in order to resolve confusion and demonstrate interest in a specific topic.

“Self-Driving Cars Explained.” Union of Concerned Scientists, www.ucsusa.org/clean-
vehicles/how-self-driving-cars-work#.W7s7g2hKiM9.

Currently, there are no legally operating, fully autonomous cars, but many models have
different degrees of partial autonomy, ranging up to highly independent prototypes. As a result, the
liability for a car crash should correspond to the level of autonomy in the car. There is already a
broad scale that categorizes levels of autonomy; level 5 is defined as “completely capable of self-
driving in every situation.” Based on the rapid growth of and competition in the self-driving car
industry, level 4 cars, fully autonomous in some driving scenarios, could be built and sold in the
next several years. Most cars in development map out their surroundings using complex arrays
of sensors and lasers, like a bat. Next, the car follows an algorithm that gives it instructions on
where to drive based on the mapped terrain. These algorithms attempt to prepare the car for any
situation, but there are too many possible situations for a computer to cover. Also, the sensors
are sometimes uncertain about the actual environment, so the car needs partial human assistance.

This article started to connect the broader concepts of liability laws to how self-driving cars
work. If the cars can be categorized into levels of autonomy, then why shouldn’t liability also be
categorized by the control of the driver versus the control of the car? It turns out that most crashes
result from human carelessness; no cars are level 5, so there is no excuse to rely solely on the
algorithms of the car. Evidently enough, most car crashes happen because people overestimate the
capabilities of their car. Ultimately, laws need to consider both the driver’s carelessness and the
capabilities of the car. The article also indicated some of the current research going on. Most states
have already enacted laws and required more transparency from the car companies. For example,
California has required more data from car companies in order to optimize their designs. Currently,
there needs to be more research on how to calibrate regulations in order to minimize these car
crashes.

“Sit, Stay, Drive: The Future of Autonomous Cars.” Science and Technology Law Review
| Vol 16 | No. 3. https://scholar.smu.edu/scitech/vol16/iss3/. Accessed 30 Oct. 2018.
Every state has imposed regulations that serve as precedent for manual cars. However, the
nature of self-driving cars breaks precedent and requires different factors to consider. Contrary to
other scholarly articles, Dr. Duffy argues that, “If an automobile behaved erratically due to a
glitch, all liability should be held on the owner.” This interpretation is reasonable given the
implied condition that the car is fully autonomous: the owner’s expectation of the car’s
capabilities doesn’t match the performance of the car. These cases would be categorized as
uncontrollable, where no diligent action by the driver would have salvaged the situation. Secondly,
most situations involve a combination of driver negligence and a car that isn’t fully
autonomous. Dr. Duffy agrees with the majority of state legislatures that the driver should be liable
for these accidents: people will be incentivized to rely on their own assiduity rather than depend
on the car, thus reducing the chance of an accident. This doctrine is most effective when cars aren’t
expected to cover the vast majority of the driving. However, current research is directed at making
cars completely independent of human interference, so the question becomes where the line should
be drawn for a manufacturer to receive one hundred percent of the liability. This article declares a
completely autonomous car as the only case where a manufacturer is completely liable, on the basis
that every other case can be resolved by human interference.

The writer of the article corroborated its opinion with many other sources while
elucidating its views with logic. However, the article could have used statistics to support its broad
statements. The article does tackle an obscure factor: the legal definition and prosecution of
faulty technology. The autonomous car is intricately integrated with a computer; however, the law
doesn’t recognize the computer as a legal entity. In State Farm Mutual Automobile
Insurance Co. v. Bockhorst, the court declared that an insurance company must “honor a policy
that the company's computer system erroneously reinstated, even though the policy was ineligible
by the company's standards.” In other words, a company was held responsible for actions of its
technology that it was never aware of in the first place. Simply declaring “the car” liable doesn’t
identify the party specifically responsible for a car crash. Ultimately, this article provided an
objective baseline argument for its policies and conditions for liability in a self-driving car crash.

“Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911).” Justia Law,
https://supreme.justia.com/cases/federal/us/221/1/. Accessed 24 Sept. 2018.

The foundation of antitrust precedent was changed in this Supreme Court decision. First,
the article addresses context: Standard Oil became a monopoly that dominated the oil industry.
More importantly, it executed “price fixing tactics” to nullify any attempt to compete. The
Standard Oil trust, with the leverage of 17 other oil companies, could drop its prices when there
was any competition, and it could raise them when there was none. These tactics were
open-ended to different interpretations: some could argue that Standard Oil was simply competing
aggressively, while others could claim that it was using its huge power to crush the competition
and restrict the rights of consumers and competitors. At the time, judges were reluctant to break
up Standard Oil because it would establish the precedent that the government could become more
involved in free markets. However, strong public opposition forced the courts to concede on the
basis that Standard Oil was rigging the competition and restricting the rights of consumers.

This article explained a specific case in which antitrust laws were violated. The case was
the first major example of a court taking a stand against a monopoly. Standard Oil argued that the
court wanted it to compete but not to beat its competition. However, the article clarified that the
Supreme Court wasn't trying to restrict competition. Chief Justice Edward White wrote that
Standard Oil participated in "restraint to trade and commerce." After cross-referencing the
testimony with the FTC website, it became clear that Standard Oil was actually using its power to
completely restrict competition through "fixed market pricing" rather than fairly winning the
competition.

Bellon, Tina. “Liability and Legal Questions Follow Uber Autonomous Car Fatal Accident.”
Insurance Journal, 21 Mar. 2018,
https://www.insurancejournal.com/news/national/2018/03/20/483981.htm.

As a follow-up to a previous annotation on a police report, this journal delves into the legal
ramifications, whereas the police report details the self-driving car crash itself. While the police
department claimed the crash was "difficult to avoid as a result of dark area and toxicity of
pedestrian," Sergei Lemberg, an attorney who has pushed several lawsuits against
manufacturers, argues that the operator behind the vehicle could be held liable in court on the
grounds that the operator failed to prevent or even mitigate the effects of the crash as the car
traveled over forty miles per hour. Interestingly, Volvo, the manufacturer of the car, confirmed
that the software controlling it came from another party. While Volvo has agreed to accept liability
for its vehicles mounted with foreign software, other manufacturers could use this fact to transfer
liability to another party. Also, the National Transportation Safety Board released a press statement
responding to the incident, claiming that it will "adapt standards appropriately." The press release
was understandably vague given the inconclusive nature of the court case, yet the board will likely
adjust its next Automated Vehicle Safety Guide.

Though framed as objective, this journal seemed somewhat biased; it gathered opinions
from experts on opposing sides while discussing their merit. The source also used some of the
complex key terms prevalent in other articles, making it a sophisticated read. In context, each legal
expert representing a different party used different terminology and claims to systematically shift
blame. For example, Lemberg mentioned negligence claims, while a third-party lawyer mentioned
indemnification agreements, cases where a third party such as the software designer implicitly
assumes liability for a corporation. On the other hand, the author seemed to shy away from offering
her own analysis. If she had given her own analysis in a separate section, she could have explained
her position while unpacking the complicated technical issues underlying the case.

The Future of Self-Driving Cars. Richmond Journal of Law and Technology,
https://jolt.richmond.edu/2017/04/10/the-future-of-self-driving-cars/. Accessed 7 Nov.
2018.

This journal outlines its opinion on how liability laws should change. In early level 1 and
2 phases, self-driving cars will primarily be judged by the same laws that apply to conventional
vehicles. The next wave of self-driving cars must be designed to drive in environments with
manual cars. For example, the next generation might yield adaptive self-driving cars connected
under a single integrated unit. If this is the case, the journal argues for placing one hundred percent
liability on the owner. Another factor to consider is expanded oversight from the federal
government. If every state had conflicting regulations, the federal government might need to
handle disputes between states, inevitably complicating the issue further. This system would
prioritize federal legislation as a baseline safety standard, and states could incorporate the federal
laws as their own to provide consistency. The DOT, the federal organization governing
transportation, has already planned to take this approach, publicly claiming the need to "illustrate
the government's embrace of car safety technology after years of hesitation."

The main topic of the journal was current legislation and progress. It also directed my
research toward certain unexpected developments to consider. However, the journal's purpose
was to advocate for change, hindering its neutral credibility. Ultimately, it elucidated the need for
federal intervention for a clear reason: the states might pass conflicting legislation. These premises
support a compelling argument for one hundred percent liability on the manufacturer: the risk of
deceptive advertising misleading drivers, and the failure of governments to compromise.
Ironically, a journal is meant to be objective, while this one read more like an opinion piece.
However, it went against the opinions of my past sources and will provide a starting point for
researching new dissenting articles.

“The State of Self-Driving Car Laws across the U.S.” Brookings,
https://www.brookings.edu/blog/techtank/2018/05/01/the-state-of-self-driving-car-laws-
across-the-u-s/. Accessed 14 Oct. 2018.

While self-driving cars could advance the efficiency and affordability of transportation, as
supported by the last study, self-driving car research has to address certain drawbacks in current
design. Sensor technology cannot make decisions and adapt to its environment as effectively as
humans can. Humans have the unique ability to proactively anticipate driving scenarios, while a
computer can only react to its environment. For example, if a human sees a ball roll onto the road,
they can anticipate that someone is going to retrieve it, while a computer can't anticipate that
scenario. Also, the initial cost of self-driving technology can discourage consumers from buying
the product. A commonly used LIDAR system in Google cars costs about seventy thousand dollars
alone, without factoring in the costs of designing all the other exterior systems. A simple manual
car today costs about one fourth as much as an autonomous self-driving car. However, these
drawbacks are temporary, as research is in the process of overcoming these challenges.
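The cost gap described above can be sketched with back-of-the-envelope arithmetic. Only the seventy-thousand-dollar LIDAR figure and the one-fourth price ratio come from the article; the $30,000 manual-car price below is an illustrative assumption.

```python
# Rough cost comparison using the article's figures.
# Assumption (not from the article): a manual car costs ~$30,000.
MANUAL_CAR_COST = 30_000
LIDAR_COST = 70_000  # cited cost of the LIDAR unit alone

# The article states a manual car costs about one fourth of an autonomous car.
autonomous_car_cost = MANUAL_CAR_COST * 4

# The LIDAR sensor alone accounts for most of the price premium.
premium = autonomous_car_cost - MANUAL_CAR_COST
lidar_share_of_premium = LIDAR_COST / premium

print(autonomous_car_cost)               # 120000
print(round(lidar_share_of_premium, 2))  # 0.78
```

Under these assumptions, a single sensor explains the bulk of the markup, which is why the article treats the cost drawback as temporary: sensor prices fall faster than vehicle prices.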

This article gives a direction for research. The liabilities of self-driving cars can be
mitigated by researching specific aspects of their design. The sensor systems would work much
better if they were ubiquitous and universally connected to other cars, which would require legal
mandates or cooperation among different car manufacturers. Another important research topic
could be accident prevention: since ninety-three percent of all accidents are attributed to human
error, self-driving cars can potentially reduce the total number of accidents by reducing human
error. Finally, today's self-driving cars are independent of one another, so research can be focused
on the coexistence of self-driving cars in the traffic environment.
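The accident-reduction claim above can be made concrete with a simple estimate. Only the ninety-three percent human-error share comes from the article; the effectiveness values below are illustrative assumptions about how many human-error crashes automation actually prevents.

```python
# Rough estimate of accident reduction from automation.
# From the article: 93% of accidents are attributed to human error.
HUMAN_ERROR_SHARE = 0.93

def remaining_accident_rate(effectiveness: float) -> float:
    """Fraction of accidents left if automation eliminates the given
    share of human-error crashes (effectiveness between 0 and 1)."""
    return 1.0 - HUMAN_ERROR_SHARE * effectiveness

# Illustrative assumption: automation prevents half of human-error crashes.
print(round(remaining_accident_rate(0.5), 3))  # 0.535
# Upper bound: all human-error crashes prevented.
print(round(remaining_accident_rate(1.0), 2))  # 0.07
```

Even under the optimistic upper bound, about seven percent of accidents remain, which is why research also targets non-human factors such as sensor failure.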

“Who Is Responsible When a Self-Driving Car Has an Accident?” 29 Jan. 2018,
https://futurism.com/automation-replace-staggering-number-workers-major-cities.

The research topic "antitrust laws" is too broad, so focus has been diverted to a fairly new
subsection of business law: self-driving cars. This article addressed a new case in which a
self-driving car got into an accident. First, it gave context: when a manual car crashes, the driver
is liable because he is in control, assuming that the manufacturer and the tire company didn't make
a mistake. However, in this case a self-driving Tesla Model S crashed into a fire truck. Since the
driver isn't in control, he can argue that Tesla is liable, since Tesla integrated the autonomous
driving systems into the car. This case raises the question: who is responsible for the crash? The
vehicle definitely requires human assistance, but others argue that the label "Autopilot" misleads
customers about the capabilities of the car. The uncertainty of the case requires more research.

An article focused on one self-driving car accident raised a specific research question.
Right now the question has evolved into: who is responsible for a self-driving car crash? In order
to answer it, different subtopics, such as the mechanics of the car, must be researched. This topic
also builds on ideas from the previous articles. These cases aren't simple because of the different
stakeholders, motives, and liabilities involved; they are decided by who controlled the vehicle in
the specific situation. Sometimes there is faulty manufacturing or faulty tires, or the car is only
semi-automatic, making the case even more complicated. Finally, the article highlights that current
self-driving car accidents are judged exactly like manual-car accidents, even though the car
maintains some degree of autonomy that could arguably be held separately liable for a crash.
