
Modeling Legal Argument

Dr. Thomas F. Gordon


Fraunhofer FOKUS, Berlin
April, 2008

Outline

Introduction to Argumentation Theory

Argument from Ontologies

Argument from Rules

Argument from Cases

Schedule

Monday, May 16, 15:00-18:00 (3 hr)


Introduction to Argumentation Theory (1.5 hr)
Practice Session: Argument Reconstruction and Modeling (1.5 hr)

Tuesday Morning, May 17, 10:00-12:30 (2.5 hr)


Argument from Ontologies (1.25 hr)
Argument from Rules (1.25 hr)

Tuesday Afternoon, May 17, 14:00-16:00 (2 hr)


Practice Session: Modeling Legislation (2 hr)

Wednesday, May 21, 16:00-18:00 (2 hr)


Argument from Cases (1 hr)
Practice Session: Case-Based Reasoning (1 hr)

Introduction to Argumentation Theory

Dr. Thomas F. Gordon

Fraunhofer FOKUS, Berlin


April, 2008

Examples of Practical Problems, Small and Large

Deciding where to go for dinner.

Designing a legal knowledge representation language.

Deciding whether to republish Danish cartoons depicting Mohammed.

Deciding where to build a new airport.

Deciding how best to end the war in Iraq.

Deciding how to reduce global warming.

Legal Problems are Practical Problems

Legal Assessment
Deciding whether a citizen is entitled to social benefits
Determining tax obligations
Deciding a criminal case

Legal Planning
Estate planning
Tax planning
Drafting contracts
Legislative policy development
Legislative drafting

Conditions Typical During The Making of Practical Decisions

Both too much and not enough information available.

Resources are limited, e.g. time, persons, money.

Expected value of the outcome is not high enough to warrant the development of a computer program or knowledge base (ad hoc problems).

Opinions differ about the truth, relevance or value of available information.

Arguments can be made both pro and con any proposed solution.

Reasoning is defeasible: further information can cause some alternative to become preferable.

Value judgments are at least as important as facts or knowledge (ethical, legal, political, business, aesthetic issues). What makes one solution better than another?

Many persons are affected. Conflicts of interest are inevitable. Negotiation required.

Argumentation Research

Interdisciplinary field: philosophy, communications studies, computer science, artificial intelligence

The modern study of argumentation began with Stephen Toulmin's The Uses of Argument in 1958

Aims to provide a comprehensive, normative theory of logic, dialectic and rhetoric for
practical reasoning

Also known as Informal Logic


Not because it does not use formal or computational models.
But because the acceptability of some proposition at issue does not depend only on its logical
form
Rather, argumentation is contextual; acceptability depends on specific reasoning conventions
of the application field or domain.

Philosophical Roots of Argumentation Theory

The ancient Greeks recognized and studied several normative sciences:
Logic - the study of inference relations
Rhetoric - the study of effective communication
Dialectic - the study of the norms and methods for resolving conflicting views, ideas and opinions

Logic was understood broadly, including what we now call defeasible, nonmonotonic or presumptive reasoning, as well as deductive and inductive forms of inference.

The study of presumptive inference and dialectic has been largely neglected since then. In the first half of the twentieth century, especially, the field of formal, mathematical logic focused on deductive inference relations.

Carneades
(c. 213 - c. 128 B.C.)

Argumentation Tasks

[Figure: UML use-case style diagram of argumentation tasks, with roles (participant, moderator, authority) and <uses> dependencies among tasks. Rhetorical layer: select moves, present/visualize arguments, moderate dialogues, apply protocols, decide issues, manage commitments. Dialectical layer: reconstruct arguments, construct arguments, evaluate & compare arguments, apply schemas. Logical layer: manage knowledge (KBS).]

A Dictionary Definition of Dialectic


1. the art of investigating or discussing the truth of opinions.
2. inquiry into metaphysical contradictions and their solutions.
   (also) the existence or action of opposing social forces, concepts, etc.


The ancient Greeks used the term dialectic to refer to various methods of reasoning
and discussion in order to discover the truth. More recently, Kant applied the term to
the criticism of the contradictions that arise from supposing knowledge of objects
beyond the limits of experience, e.g., the soul. Hegel applied the term to the process
of thought by which apparent contradictions (which he termed thesis and antithesis)
are seen to be part of a higher truth (synthesis).
source: The New Oxford Dictionary (emphasis added)

Core Dialectical Idea: Opposition (Contradiction) and its Resolution

Opposing arguments (pro vs con)

Opposing interests (proponent vs. opponent)

Opposing ideas (e.g. thesis vs. antithesis, resolved by synthesis)

What is an Argument? (Informally)

Arguments link a set of premises to a conclusion.


The conclusion and each premise are declarative statements.
The premises (are intended to) support the conclusion; provide reasons for accepting or
believing the conclusion.

Examples:
Socrates is a man, therefore Socrates is mortal.
John is 75 years old, therefore John is old.

Premises may be of different kinds and play different roles. The classical theory of the syllogism, e.g., distinguished major and minor premises:
major premise - a generalization, e.g. all men are mortal.
minor premise - a specific fact, e.g. Socrates is a man.

Argument vs. Proof

The premises of an argument provide reasons to accept the conclusion, but the
conclusion need not be a necessary logical consequence of the premises.

Arguments can be defeasible. The conclusion of an argument is not necessarily true, but may be only presumptively true. Adding premises to an argument can cause it to fail to support the conclusion (cf. nonmonotonicity). Example:
In "John is 75, therefore John is old", suppose we add the premise "John is a tortoise".

Premises required to make the argument deductively valid may be missing or implicit. Example:
In "Socrates is a man, therefore Socrates is mortal", the major premise "All men are mortal" is implicit.

Further Examples

Inductive Arguments. It rained yesterday and today, therefore it will rain tomorrow.

Abductive Arguments. It is wet outside this morning, therefore it rained last night.

General Rules with Exceptions. The train leaves every day at 9:15, therefore it will leave today at 9:15. But today is a holiday.

Open-Textured Concepts. Vehicles are not allowed in the park, therefore baby carriages are not allowed. But are baby carriages to be considered vehicles in this context?

None of these arguments is deductively valid, but they may be good arguments nonetheless.

Argumentation Schemes

Generalize the concept of an inference rule to cover presumptive as well as deductive and inductive forms of argument.

Are conventional patterns of argument.

Come with a set of critical questions for evaluating and challenging arguments.

Useful for several purposes:


Recognizing, classifying or identifying an argument as an instance of some scheme;
Critically evaluating an argument, using critical questions of the scheme;
Methods for constructing, generating or inventing new arguments.

Many schemes are field dependent (domain specific).
Legal argumentation is legal because of its special purpose, legal argumentation schemes and procedures.

Catalog of Argumentation Schemes

About 60 argumentation schemes have been identified by Douglas Walton and his
colleagues.

Work on classifying schemes is ongoing research. (Taxonomy or ontology)

Examples
Argument from Expert Opinion
Argument from Popular Opinion
Argument from Analogy
Argument from Correlation to Cause
Argument from Consequences
Argument from Sign
Argument from Verbal Classification

Theories of Validity

Relational Theories. The relationship between the premises and conclusion of an argument is sufficient for determining whether or not the argument is valid. Examples:
Classical Logic: The argument (inference) is deductively valid iff the conclusion is a necessary (logical) consequence of the premises.
Nonmonotonic Logics: The argument is valid iff the conclusion is a defeasible consequence of the premises. (Nonmonotonic logics vary in how they define the defeasible consequence relation.)

Dialectical Theories. An argument is valid only if it furthers the goals of the dialogue in which it is used. Validity can depend on how the argument is used in a process.

Doug Walton's New Dialectic

Validity can depend on the context of the argument in a dialog:
Who made the argument?
In what role?
When? (which move)
In what kind of dialog?

Reference: Walton, The New Dialectic, 1998

Reconstruction of Deductively Valid Arguments


Defenders of classical logic note that non-deductive arguments can often be reconstructed as deductively valid arguments.
But many questions remain, such as:
1. May only deductively valid arguments be asserted in dialogue? If not, what norms do apply?
2. Which party has the burden to reveal hidden premises? When must this be done? Does this depend on the dialog type?

Walton's Dialogue Typology

Some Legal Dialog Types

Courts
Pleading
Trial
Appellate Argument

Dialogs between attorneys and their clients
Tax and estate planning
Drafting contracts

Public Administration
Claims processing
Citizen consultation and participation in legislative processes

Legislature
Policy development
Legislative drafting

The Process of Reconstructing and Evaluating Arguments


1. Select natural language texts to analyze.
2. Create a key list of statements in the texts.
3. Identify arguments in the text, associating their premises and conclusions with statements in the key list.
4. Classify arguments, using argumentation schemes as patterns.
5. Use these schemes to help reveal implicit premises in the arguments.
6. Assign burden of proof and proof standards to the statements at issue.
7. Accept or reject statements which are assumed to be true or false, respectively, and not at issue.
8. Evaluate the acceptability of statements at issue, using the proof assignments and standards.

Legal Knowledge Interchange Format (LKIF)

XML formats for representing and interchanging legal knowledge

Developed in the European ESTRELLA project <http://www.estrellaproject.org>

Covers
Terminology (ontologies)
Rules
Precedent cases
Arguments

Statements

Form: <subject> <predicate> <object/value>

Examples:
Sally is a woman.
Sally has husband Joe.
Joe is a man.
Joe has wife Sally.

A set of such statements can be visualized as a directed graph (see figure).

[Figure: the statements above shown as a directed graph, with nodes Sally, Joe, woman and man, and edges labeled "is a", "husband" and "wife".]

LKIF XML Grammar for Statements (Simplified)

Grammar
Statement = element s {
attribute id { xsd:ID }?,
attribute summary { xsd:string }?
}

Example key list of statements in this format


<s id="s1" summary="Sally is a woman."/>
<s id="s2" summary="John is a man."/>
<s id="s3" summary="Sally has husband John."/>
<s id="s4" summary="John has wife Sally."/>
<s id="s5" summary="John has age 35."/>

LKIF XML Grammar for Arguments (simplified)

Argument = element argument {


attribute id { xsd:ID },
attribute direction { "pro" | "con" }?,
Premise*, Conclusion
}

Premise = element premise {


attribute polarity { "positive" | "negative" }?,
attribute role { text }?,
attribute statement { xsd:IDREF }
}

Conclusion = element conclusion { attribute statement { xsd:IDREF } }

Example Argument in LKIF XML Format


<?xml version="1.0" encoding="UTF-8"?>
<?oxygen RNGSchema="file:LKIF.rnc" type="compact"?>
<lkif>
<s id="s1" summary="All men are mortal."/>
<s id="s2" summary="Socrates is a man."/>
<s id="s3" summary="Socrates is mortal."/>
<argument-graph>
<argument id="arg1" direction="pro">
<premise role="major" statement="s1"/>
<premise role="minor" statement="s2"/>
<conclusion statement="s3"/>
</argument>
</argument-graph>
</lkif>

Practice Session 1. Argument Reconstruction and Modeling

Bring additional
dialogs to use, in case
we have time left.
Legal examples?

Reconstruct the arguments of the following dialog* and represent them in LKIF XML
Helen (1): A problem with tipping is that sometimes it is very difficult to know how much to tip.
Bob (1): It's not so difficult. If you've got excellent service, give a tip. Otherwise don't tip.
Helen (2): But how much should one tip?
Bob (2): Just use your common sense.
Helen (3): Common sense is often wrong, isn't it? What kind of criterion is that?
Bob (3): Like most things in life, if you want to do something good, you have to use common sense.
Helen (4): With tipping, common sense leaves too much open to uncertainty. Because of this uncertainty, both the tipper and the receiver can be uncomfortable. If the tip is too low, the receiver is uncomfortable. If the tip is too high, the tipper is uncomfortable.
Bob (4): A lot of students depend on tips to help pay for their college education. A college education is a good thing. Discontinuing tipping would mean fewer students could afford college.
Helen (5): That's no problem. All we need to do is raise the minimum wage.
Bob (5): That might put a lot of restaurants out of business, resulting in job losses for students and others.

* source: Douglas Walton, Fundamentals of Critical Argumentation, 2006, pg. 3
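
One possible (partial) reconstruction, for orientation only: Bob (4) can be read as an argument from consequences for the conclusion that tipping should be continued. The statement ids and summaries below are one analyst's choices, not the official solution to the exercise.

<lkif>
<s id="s1" summary="Tipping should be continued."/>
<s id="s2" summary="Many students depend on tips to help pay for their college education."/>
<s id="s3" summary="A college education is a good thing."/>
<s id="s4" summary="Discontinuing tipping would mean fewer students could afford college."/>
<argument-graph>
<argument id="bob4" direction="pro">
<premise statement="s2"/>
<premise role="value" statement="s3"/>
<premise role="consequences" statement="s4"/>
<conclusion statement="s1"/>
</argument>
</argument-graph>
</lkif>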

Argument from Ontologies

Dr. Thomas F. Gordon


Fraunhofer FOKUS, Berlin
April, 2008

What are Ontologies?

In philosophy, ontology is the study of conceptions of reality and being. That is, ontology, like biology, is a research field. Some questions addressed:
What is existence?
What is an object?
How do objects retain their identity as they change?

In computer science, an ontology is an advanced kind of entity-relationship data model.
Ontologies in CS are formal models of concepts and relations, including a set of logical formulas, called terminological axioms.
Ontologies are used to standardize the semantics of data models, to facilitate the interchange of data among programs, abstracting away syntactic and other details.

Example Ontology: LKIF Core Legal Concepts

Utility of Ontologies

Legislation, regulations, precedent cases and other sources of norms are expressed in
natural language, making use of legal (and nonlegal) terms.

Terms in natural language are overloaded: one term may be used to mean different
concepts (or relations) in different contexts. Conversely, several terms may be used
for the same concept (synonyms).

Formal ontologies provide a precise way to model concepts and relations and to
associate natural language terms to their intended meanings.

[Figure: an ontology systematizes terms of natural language (e.g. agent, boat, contract, house, offer, time, triangle). These terms denote abstract ideas and concepts (e.g. number, geometry, contract, freedom, corporation) and things in the real world; the ontology models the former on the rationalist view and the latter on the positivist view.]

Relation of Ontologies to Theories

Ontologies are part of a theory of some domain (e.g. law of contracts)

A theory of a domain is a (possibly infinite) set of statements about the domain.

An axiomatization of a theory consists of:
A formal language (L): a set of function and predicate symbols and syntactic rules for forming sentences from these symbols.
A finite set of axioms (A): a finite set of statements from which the entire theory can be derived using some set of inference rules.

Types of Axioms (Examples)

Terminological axioms (T-Box): structures concepts and relations (ontology)

Assertional axioms (A-Box) : assert facts about instances of concepts.

Normative axioms: classify instances of concepts using legal concepts (permitted or obligatory actions, whether an agreement is a contract, etc.)

Interpretation axioms: map concepts from one ontology to another, e.g. nonlegal concepts to legal concepts (cf. subsumption). Example: Is a skateboard a "vehicle" in the sense of traffic law?

Note: It is not always easy to classify axioms. Many ontologies include axioms which arguably are not terminological.

[Figure: nested circles showing that the ontology is part of the axioms, which in turn generate the (possibly infinite) theory.]

Points to Remember

An ontology is but one part of the axiomatization of a theory (knowledge base)

Like all theories, ontologies can be critically evaluated or challenged:


Consistent?
Validity, with respect to the concepts of the relevant community?
Coherence?
Practical utility?

Example Ontology Argumentation Scheme


Scheme for Argument from Verbal Classification [Walton 2006, pg. 129]

Premises
Individual Premise. x has property F
Classification Premise. For all y, if y has property F, then y has property G

Conclusion
x has property G

Critical Questions
Is the classification premise based on a definition that is subject to doubt?

Argument from Ontology as Kind of Argument from Theory

Argument from ontology is a special kind of argument from theory, using only the
terminological axioms of the theory

Deductive Inference
The deduction of a proposition p from the axioms of a theory T. Denoted: T ⊢ p.
The deduction is valid if and only if p is necessarily true when all of the axioms in T are true (logical entailment). Denoted: T ⊨ p.

Argument from theory
derivability premise: T ⊢ p
theory premise: all the propositions in T are true in the intended domain.
conclusion: p

Critical Questions
Even though p is necessarily true if T is true, the argument can be challenged by questioning the theory premise: is the theory T true, coherent, etc.? Thus the conclusion of a deductive argument is, like the conclusion of any argument, only conditionally and presumptively true.

Relation of Ontologies to Rules

Recall: ontologies are the terminological axioms of some theory.

Theories are axiomatized using logic.

There are many logics to choose from: propositional logic, predicate logic, deontic
logic, etc.

The language of most logics has some kind of conditional (if ... then ...) statement. In propositional and predicate logic, e.g., the material conditional, denoted A → B.

Such conditionals are often called rules. Rules of this kind can be used to formalize
ontologies.

But laws and regulations are rules of another kind. (more on this later)

Formalizing Ontologies

Any logic can be used, but ...

Since only the terminological axioms of a theory need to be represented, special-purpose description logics have been developed for this purpose.

Possible Advantages
Decidability
Efficiency (more formally, in some cases tractability)
Ease of use
Similar to familiar data modeling methods in software engineering: object-oriented programming, Entity-Relationship models, Unified Modeling Language (UML).
Can be visualized graphically

Description Logic Overview (Simplified)


Each entry below gives the description logic axiom, its predicate logic counterpart, its meaning, and an example.

C ⊑ D   (where C and D are classes/concepts)
Predicate logic: D(x) ← C(x)
Meaning: Cs are Ds. C is a subclass of D.
Example: Penguins are birds.

Q ⊑ P   (where Q and P are properties/roles)
Predicate logic: P(x,y) ← Q(x,y)
Meaning: Qs of x are Ps of x. Q is a subproperty of P.
Example: The mother of a person is a parent of the person.

⊤ ⊑ ∀R.C
Predicate logic: C(y) ← R(x,y)
Meaning: Every R of x is a C. The range of R is C.
Example: The mother of a person is a woman.

∃R.D ⊑ C
Predicate logic: C(x) ← R(x,y) ∧ D(y)
Meaning: Objects which have an R which is a D are Cs.
Example: Persons who own a home in Bel Air are wealthy.

C ⊓ D ⊑ E
Predicate logic: E(x) ← C(x) ∧ D(x)
Meaning: Instances of both C and D are also instances of E.
Example: Anything which is male and human is a man.

C ⊑ D ⊓ E
Predicate logic: D(x) ← C(x) and E(x) ← C(x)
Meaning: Instances of C are also instances of both D and E.
Example: Every woman is human and female.

Services Provided by Description Logic Reasoners

Satisfiability
Can any object be an instance of some concept C? Is the concept consistent? Is it logically possible for some object to be
an instance of this concept?

Subsumption
Is every C necessarily a D?
Not the same as subsumption in the law, which asks whether the facts of a case can be subsumed under some legal term. Example: Is a baby carriage a vehicle in the sense of the traffic code?

Instance Checking
Is some object an instance of a given concept? Is x a C?

Retrieval
Find all objects which are instances of some given concept. What are the members of C?

Realization
For a given object, find all concepts it instantiates. What is x?

LKIF Core Ontology of Legal Concepts


Module Dependencies

[Figure: module dependency graph of the LKIF core ontology, with modules Top, Mereology, Place, Time, Process, Expression, Action, Role, Norm, LegalAction, LegalRole, TimeModification, Rules and Core.]

http://www.estrellaproject.org/lkif-core/lkif-core.owl

Protégé Demo

Argument from Rules

Dr. Thomas F. Gordon


Fraunhofer FOKUS, Berlin
April, 2008

Many Kinds of Rules

Law & Ethics
Regulations, Statutes
Principles
Morals, Conventions

Logic
Material implications. For example: ∀x. man(x) → mortal(x)
Inference rules. For example, modus ponens: from P → Q and P, infer Q.

Computer Science

Production rules

Rewrite rules

Grammar rules

Here, we mean rules in the legal sense.

Features of Rules

Rules are reified objects with properties, e.g. date of enactment.

Rules are subject to exceptions.

Rules can conflict.

Some conflicts can be resolved using rules about rule priorities, e.g. lex superior.

Rules can be excluded from being applicable by other rules

Rules can be invalid. Deleting invalid rules from the KB is not an option.

There is much consensus in AI and Law about these features [Gordon 1993; Hage
1993; Prakken & Sartor, 1996]

Scheme for Arguments from Rules

Premises
r is a legal rule with conditions a1, ..., an and conclusion c.
Each ai in a1, ..., an is presumably true.

Conclusion
c is presumably true.

Critical Questions
Does some exception to r apply?
Is some assumption of r not met?
Is r a valid legal rule?
Does some rule excluding r apply in this case?
Does some conflicting rule of higher priority apply in this case?

LKIF XML Grammar for Statements (Full Version)

Grammar
Statement = element s {
attribute id { xsd:ID }?,
attribute summary { xsd:string }?,
attribute src { xsd:anyURI }?,
((text* & Statement*)*)? }

Examples
<s summary="?x is movable.">movable ?x</s>
<s>goods ?x</s>

Example of a rule with an exception


§ 9-105(h) states that movable things are goods, except for money.
In LKIF XML:
<rule id="s9-105h">
<body>
<s>movable ?x</s>
<unless><s>money ?x</s></unless>
</body>
<head>
<s>goods ?x</s>
</head>
</rule>
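
Applying this rule to a concrete item yields an instance of the scheme for arguments from rules, which could be recorded with the argument grammar shown earlier. The following sketch is illustrative (the printing press and the statement summaries are invented); the unless condition is recorded here as a premise with negative polarity:

<lkif>
<s id="s1" summary="The printing press is movable."/>
<s id="s2" summary="The printing press is money."/>
<s id="s3" summary="The printing press is goods."/>
<argument-graph>
<argument id="a1" direction="pro">
<premise statement="s1"/>
<premise polarity="negative" statement="s2"/>
<conclusion statement="s3"/>
</argument>
</argument-graph>
</lkif>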

Rule Grammar (LKIF XML)


Rule = element rule {
attribute id { xsd:ID }?,
attribute strict { "no" | "yes" }?,
(Literal+ | Implies)
}
Literal = Statement | Not
Not = element not { Statement }
Implies = (Head, Body) | (Body, Head)
Head = element head { Literal+ }
Body = element body { Or | Condition+ }
Or = element or { (Condition | And)+ }
And = element and { Condition+ }
Condition = Literal
| element if { attribute role { text }?, Literal }
| element unless { attribute role { text }?, Literal }
| element assuming { attribute role { text }?, Literal }

Statements about Statements

The obligation of X to support Y is excluded from § 1601 BGB:
<s>excluded s1601-BGB <s>obligated-to-support ?x ?y</s></s>

§ 1601 BGB applies to the obligation of John to support Susan:
<s>applies s1601-BGB <s>obligated-to-support John Susan</s></s>

Exclusionary Rules
§ 9-105(h)(i) excludes money from the definition of goods in § 9-105(h):
<rule id="s9-105-h-i">
<body>
<s>money ?x</s>
</body>
<head>
<s>excluded s9-105h <s>goods ?x</s></s>
</head>
</rule>

Example of negation and reasoning about validity


A repealed rule is not valid.
<rule id="repeal">
<body>
<s>repealed ?r1</s>
</body>
<head>
<not><s>valid ?r1</s></not>
</head>
</rule>

Resolving rule conflicts with priority rules


The principle of lex posterior states that later rules have priority over earlier rules.
<rule id="lex-posterior">
<body>
<s>enacted ?r1 ?d1</s>
<s>enacted ?r2 ?d2</s>
<s>later ?d1 ?d2</s>
</body>
<head>
<s>prior ?r1 ?r2</s>
</head>
</rule>

Practice Session 2. Modeling Legislation

Model the following family relations in OWL, using Protégé:
father, mother, parent, grandparent, ancestor, descendent, relative, brother, sister, sibling

Model the following rules, roughly based on German family law, in LKIF XML (a partial sketch of §§ 1601 and 1589 follows the list).
§ 1601 BGB (Support Obligations) Relatives in direct lineage are obligated to support each other.
§ 1589 BGB (Direct Lineage) A relative is in direct lineage if he is a descendent or ancestor. For example, parents, grandparents and great grandparents are in direct lineage.
§ 1741 BGB (Adoption) For the purpose of determining support obligations, an adopted child is a descendent of the adopting parents.
§ 1590 BGB (Relatives by Marriage) There is no obligation to support the relatives of a spouse (husband or wife), such as a mother-in-law or father-in-law.
§ 1602 BGB (Neediness) Only needy persons are entitled to support by family members. A person is needy only if unable to support himself.
§ 1603 BGB (Capacity to Provide Support) A person is not obligated to support relatives if he does not have the capacity to support others, taking into consideration his income and assets as well as his own reasonable living expenses.
§ 1611a BGB (Neediness Caused by Own Immoral Behavior) A needy person is not entitled to support from family members if his neediness was caused by his own immoral behavior, such as gambling, alcoholism, drug abuse or an aversion to work.
§ 91 BSHG (Undue Hardship) A person is not entitled to support from a relative if this would cause the relative undue hardship.
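
Purely as a starting point, §§ 1601 and 1589 might be sketched in LKIF XML as follows; the predicate names (direct-lineage, descendent, ancestor, obligated-to-support) are illustrative choices, not the official solution, and the remaining sections, exceptions and priorities are left as part of the exercise:

<!-- § 1601 BGB: relatives in direct lineage are obligated to support each other -->
<rule id="s1601-BGB">
<body>
<s>direct-lineage ?x ?y</s>
</body>
<head>
<s>obligated-to-support ?x ?y</s>
</head>
</rule>

<!-- § 1589 BGB: descendents and ancestors are relatives in direct lineage -->
<rule id="s1589-BGB">
<body>
<or>
<s>descendent ?x ?y</s>
<s>ancestor ?x ?y</s>
</or>
</body>
<head>
<s>direct-lineage ?x ?y</s>
</head>
</rule>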

Argument from Cases

Dr. Thomas F. Gordon


Fraunhofer FOKUS, Berlin
May, 2008

Arguments from Cases

TAXMAN II [McCarty & Sridharan 1981] First to model argument from theories, using
prototypes and deformations of concepts in cases.

HYPO [Ashley & Rissland, 1990] Modeled arguments from analogy with factor
comparison

CABARET [Skalak & Rissland, 1991] Used cases to broaden and narrow the
interpretation of rules

GREBE [Branting 1991] - Used rules to match cases and cases to satisfy open-textured
concepts in rules.

CATO [Aleven & Ashley 1997] - Introduced factor hierarchies to support arguments from
downplaying and emphasizing case distinctions.

Bench-Capon & Sartor [2003] used social values to construct theories of cases.

HYPO [Ashley & Rissland 1990]

Represented cases as sets of factors and dimensions.


Factor: boolean property
Dimension: scalar property (e.g. Degree to which a trade secret has been revealed.)
Dimensions were dropped in later models based on HYPO, e.g. CATO

Modeled 3-Ply Arguments:


Move 1. Proponent.
Argument 1. Cite analogous case.

Move 2. Respondent.
Argument 2. Distinguish cited case.
Argument 3. Cite more on point counterexample case.

Move 3. Proponent
Argument 4. Distinguish the counterexample.
Kevin Ashley and Edwina Rissland

HYPO Argumentation Schemes

Cite Analogous Case


premise. The precedent case C1 and the current case C2 have factors
favoring party P in common.
premise. C1 was decided in favor of party P.
conclusion. C2 should be decided in favor of party P.

Distinguish Analogous Case (Example)


premise. F, a factor favoring P in the precedent case C1, is not in the current case C2.
premise. C1 was decided in favor of party P
conclusion. The precedent case C1 does not apply to the current case C2
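
As a rough illustration of how an instance of the first scheme might be recorded in the LKIF argument format from the first session (the statement summaries are invented and the case names generic):

<lkif>
<s id="s1" summary="The precedent case C1 and the current case C2 have factors favoring the plaintiff in common."/>
<s id="s2" summary="C1 was decided in favor of the plaintiff."/>
<s id="s3" summary="C2 should be decided in favor of the plaintiff."/>
<argument-graph>
<argument id="cite-c1" direction="pro">
<premise role="shared-factors" statement="s1"/>
<premise role="outcome" statement="s2"/>
<conclusion statement="s3"/>
</argument>
</argument-graph>
</lkif>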

HYPO Trade Secrets Example

CASE Yokana
F7 Brought-Tools
F10 Secrets-Disclosed-Outsiders
F16 Info-Reverse-Engineerable

CASE American Precision
F7 Brought-Tools
F16 Info-Reverse-Engineerable
F21 Knew-Info-Confidential

CASE Mason (CFS, Undecided)
F1 Disclosure-in-Negotiations
F6 Security-Measures
F15 Unique-Product
F16 Info-Reverse-Engineerable
F21 Knew-Info-Confidential

[Figure: claim lattice relating the undecided Mason case to the Yokana and American Precision precedents via their shared factors; legend: π = plaintiff, δ = defendant.]

Moves:
1. Cite Yokana
2. Distinguish Yokana, cite American Precision
3. Distinguish American Precision
HYPO Preference Order on Arguments On Pointedness

A precedent case C1 is more on point than a precedent case C2 if and only if C1 has more factors in common with the current case than C2.

Let F1 be the factors of C1, F2 be the factors of C2, and F3 be the factors of the current case.

Then C1 is more on point than C2 iff F1 ∩ F3 ⊃ F2 ∩ F3.

HYPO Claim Lattice

CABARET [Skalak & Rissland, 1991]

Uses cases to reinterpret rules.

Broadening Scheme. Broadens a rule by removing an antecedent.
premise. Rule R states "If S1, S2, ..., Sn then S".
premise. In case C the antecedents S2, ..., Sn, but not S1, were held to be true.
premise. S was held to be true in case C.
conclusion. Rule R is (actually) "If S2, ..., Sn then S".

Narrowing Scheme. Narrows a rule by adding an antecedent.
premise. Rule R states "If S2, ..., Sn then S".
premise. In case C all of S2, ..., Sn were held to be true.
premise. But S was held not to be true in case C.
premise. S1 is not true in case C.
conclusion. Rule R is "If S1, S2, ..., Sn then S".

David Skalak

GREBE [Branting 1991]

Uses rules to match cases and cases to satisfy open-textured concepts in rules. Modeled
cases using semantic networks. (cf. ontologies, description logic).

The rationales for case decisions (ratio decidendi) were represented as reduction graphs. Only the arguments pro the decision of a case were modeled.

GREBE could construct arguments from these rationales, as well as from parts of rationales.

Karl Branting

GREBE Argumentation Schemes

Case Elaboration. (Uses a rule to increase the number of matching cases.)
premise. Rule R1 states "If S1 then S2".
premise. S1 is true in the current case C1 (but perhaps not S2).
premise. S2 is true in the precedent case C2.
conclusion. S2 is true in both C1 and C2. (partial match)

Term Reformulation. (Uses a precedent case to prove a rule condition.)
premise. Rule R1 states "If S1 then S2". (S1 uses an open-textured predicate.)
premise. S1 is true in the precedent case C1.
premise. C1 is similar to the current case C2. (by matching semantic networks)
conclusion. S1 is true in C2 ... and thus also S2, using R1.

CATO [Aleven & Ashley, 1997]

Used factor hierarchies to model facts of cases.

Factor hierarchies enabled HYPO to be extended with models of two additional argumentation schemes:
Downplaying a distinction
Emphasizing a distinction

Vincent Aleven

Example: Downplaying a Distinction

Distinguishing an Analogous Case (Reminder)


premise. F, a factor favoring P in the precedent case C1, is not in the current case.
premise. C1 was decided in favor of party P.
conclusion. C1 does not apply to the current case.

Downplaying a Distinction
premise. F1, a factor favoring P in the precedent case C1, is not in the current case.
premise. C1 was decided in favor of party P.
premise. F2 is a factor in the current case.
premise. F1 and F2 both have parent F3, favoring P, in the factor hierarchy.
conclusion. C1 does apply to the current case.

Example: C1 involved bribery. C2 involved deception. Both involved illegal means.

Emphasizing a Distinction

Basic idea: not only does the current case have a factor not in the precedent, but this
factor can be shown, using the factor hierarchy, to provide a reason for not following
the precedent.

Emphasizing a Distinction
premise. C1 was decided in favor of party P.
premise. F1 is a factor in the current case but not C1.
premise. F1 favors the opponent of P.
conclusion. The current case should be decided in favor of the opponent of P.

Arguments from Rationales


Rationale - The reasons or arguments grounding a decision, such as a decision to
enact a legal rule or decide a legal case in particular way.

Representing and Reusing Explanations and Legal Precedents [Branting, 1989]

Rationales and Argument Moves [Loui & Norman, 1995]

Case-Based Reasoning in the Law: A Formal Analysis of Reasoning by Case Comparison [Roth, 2003]

Rationales and Argument Moves [Loui & Norman, 1995]

Idea
Use rationales of rules and case decisions to expose implicit assumptions and open these
assumptions up to challenge.

Example
Let R1 be the rule: if vehicle then not allowed in park
If the rationale of R1 is:
[if vehicle then (normally) privately used vehicle and
if privately used vehicle then not allowed in park]
and we know if tank then not privately used vehicle
then conclude R1 does not apply to tanks. (undercutter)
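
A hedged sketch of how this rationale and the undercutter might be written down with the LKIF rule grammar from the previous session; the rule ids and predicate names are invented for this illustration:

<!-- first step of the rationale: vehicles are (normally) privately used vehicles -->
<rule id="r1a">
<body>
<s>vehicle ?x</s>
</body>
<head>
<s>privately-used-vehicle ?x</s>
</head>
</rule>

<!-- second step of the rationale: privately used vehicles are not allowed in the park -->
<rule id="r1b">
<body>
<s>privately-used-vehicle ?x</s>
</body>
<head>
<not><s>allowed-in-park ?x</s></not>
</head>
</rule>

<!-- tanks are not privately used vehicles, undercutting the first step -->
<rule id="tanks">
<body>
<s>tank ?x</s>
</body>
<head>
<not><s>privately-used-vehicle ?x</s></not>
</head>
</rule>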

Case-Based Reasoning in the Law


A Formal Analysis of Reasoning by Case Comparison [Roth 2003]

Modeled the rationale of case decisions as argument graphs (pros and cons).

Branting's GREBE model, by comparison, modeled only the reasons pro the decision of a case in its reduction graphs.

Like GREBE, Roth's system modeled argumentation schemes from case rationales, as well as parts of case rationales.

But by including both pro and con arguments in the rationales, Roth was able to identify and model several additional case-based argumentation schemes.

Roth's model generalizes CATO [Aleven 1996] by supporting arguments about
whether a factor is present in the current case and
whether or not some factor is a reason pro or con some other factor.
Both of these are fixed in the CATO model and not subject to debate.

Argument From Theories

A theory is a set of generalizations which explains some set of cases.

A case is a pair <F, D>, where F is a set of propositions representing the facts of the case and D is a set of propositions representing conclusions or decisions of the case to be explained by the theory.

A theory T explains a case <F, D> if for every d ∈ D it is the case that T ∪ F ⊢ d.

Scheme for Argument from Theory
premise. T is a theory of a set of precedent cases.
premise. F are the facts of the current case.
premise. T ∪ F ⊢ p
conclusion. p

If there are multiple, competing theories, prefer the most coherent theory.

TAXMAN II [McCarty 1981]


Thorne McCarty

Modeled the arguments of the majority and dissenting opinions of a famous US Supreme Court case, Eisner v. Macomber, about whether a stock dividend is taxable income.

Since concepts are open-textured, they are not defined, but modeled by a prototype, i.e. a typical example, and deformations, i.e. changes to the prototype.

Arguments were generated using theory construction, as follows:
Given a set of precedent cases, C, construct a sequence of deformations of the prototype, through the positive examples in the precedent cases, to the facts of the current case.
Note: The theory of the precedent cases is represented by the prototype and this sequence of deformations, not by generalizations or rules.

Prototypes and Deformations Model of Theory Construction

[Figure: a sequence of deformations from the prototype case C1 through the precedent cases C2 and C3 to the current case C4.]

Goal: to find a theory, represented as a sequence of deformations, from the facts of a prototype case C1, in which some goal proposition p was decided to hold, to the facts of the current case C4.
The respondent will try to construct a competing theory for ¬p, by deforming a prototype case in which ¬p was held.

Using Social Values to Construct Theories [Bench-Capon & Sartor 2003]

Giovanni Sartor and


Trevor Bench-Capon,
with Carole Hafner

Structure of Theories in Bench-Capon & Sartors Model


A theory is a six-tuple <Cfds, Fds, R, Rpref, V, Vpref>, where:

Cfds is a set of case models, where each case is modeled as a set of factors (as in
HYPO),

Fds is the set of factors used to model the cases.

R is a set of rules.

Rpref is a partial order on rules (modeling rule preferences).

V is a set of values.

Vpref is a partial order on sets of values (modeling value preferences).

Theory Construction Operators Examples

Include a case

Include a factor

Merge factors

Broaden a rule

Rule preferences from cases

Rule preferences from value preferences

Ordering Theories by their Coherence

Idea: When multiple theories explain some body of cases, prefer the most coherent.

Coherence factors
Consistency with precedents
Explanatory power (completeness of coverage of precedents)
Simplicity (cf. Occam's razor)
Yields acceptable results, given value preferences
Lack of arbitrariness (e.g. unjustified preferences)
Difficulty of application and administration (cf. preference for bright-line rules)

Few computational models of coherence exist yet, but see:
Bench-Capon & Sartor: A Quantitative Approach to Theory Coherence, 2001.

Argument Schemes for Legal Case-Based Reasoning

Adam Z. Wyner and Trevor J.M. Bench-Capon
Department of Computer Science
University of Liverpool

Overview

Case Comparison
Partitioning factors with respect to a pair of cases

Argument schemes for reasoning from a precedent


Schemes
Assumptions
Counter examples

Comparing Cases by Partitioning Factors

[Figure: diagram of the pro-plaintiff and pro-defendant factors of the new case and the precedent, partitioned into regions P1-P6.]

Partitions (CC = current case, PC = precedent case):

P1. Common P factors
P2. Common D factors
P3. P factors in CC but not in PC
P4. D factors in PC but not in CC
P5. D factors in CC but not in PC
P6. P factors in PC but not in CC

Using Partitions to Analogize, Distinguish and Downplay

[Figure: the same partition diagram, annotated with how each region of factors is used.]

P1 and P2 factors are used to match cases and argue by analogy.

P5 and P6 factors are used to distinguish the PC from the CC, and weaken the argument by analogy.

P3 and P4 factors are used to downplay distinctions based on P5 and P6 factors.

[Figure: diagram connecting the argument schemes. AS1 concludes "Find for P" from the P1 and P2 factor premises and a preference for P1 over P2; AS2 establishes that preference from the precedent's outcome and its P1 and P2 factors; the CC Weaker and PC Stronger exceptions are supported by AS4 (using P5 factors) and AS3 (using P6 factors), each of which has substituting and cancelling exceptions based on the P3 and P4 factors; AS5 and AS6 bring in the P3 and P4 factors to strengthen the case for P.]

AS1: Claiming a Preference

P Factors Premise: P1 are reasons for P. (factors in the current case)
D Factors Premise: P2 are reasons for D. (factors in the current case)
Factors Preference Premise: P1 was preferred to P2 in PCi. (shown with AS2)
CC Weaker Exception: The priority in PCi does not decide CC. (one type of distinction: an extra D factor in CC, shown with AS4)
Claim: Decide CC for P.

AS2: Citing a Precedent

P Factors Premise: P1 are reasons for P. (factors in the past case)
D Factors Premise: P2 are reasons for D. (factors in the past case)
Outcome Premise: PCi was decided for P. (the past case favors P)
PC Stronger Exception: The preference is not established in PCi. (another type of distinction: an extra P factor in PC, shown with AS3)
Claim: P1 was preferred to P2 in PCi.

AS3: Distinguishing P Factors

P6 Factors Premise: P factors in PC not in CC.
Substituting Factors Exception: P factors in CC with the same parent. (downplaying)
Cancelling Factors Exception: D factors in PC with the same parent. (downplaying)
Claim: The preference is not established in PCi.

AS4: Distinguishing D Factors

P5 Factors Premise: D factors in CC not in PC.
Substituting Factors Exception: D factors in PC with the same parent. (downplaying)
Cancelling Factors Exception: P factors in CC with the same parent. (downplaying)
Claim: The priority in PC does not decide CC.

AS5 and AS6 Arguments from Factors Strengthening the Analogy


Factors for P in the CC but not the PC (partition P3) and factors for D in the PC but not the CC (partition P4) strengthen, a fortiori, the argument from the analogy to the PC.
AS5: unused P factors in CC but not PC
AS6: unused D factors in PC but not CC

Assumptions

We have shown the schemes only with premises and exceptions. But there are also
some assumptions that are being made.

Applicability That the precedent is applicable to the current case. This may depend
on jurisdiction, level of court etc.

Equal Strength of Factors This is required to justify substitution and cancellation.

Counter Examples

A counterexample is a different precedent which argues for the defendant. There are two types:
1. The same P and D factors are in common, but the outcome is different. Argument con the preference: attacks the claim of AS2 that the P factors of the CC outweigh its D factors.
2. Different P and D factors are in common and the outcome is different. Argument con the decision: attacks the claim of AS1 that the case should be decided for P.

Additional plaintiff precedents may provide counterexamples to these counterexamples.

Carneades Demo

Practice Session 3. Case-Based Reasoning


Given the P and D factors of a set of cases (following)
1. Construct the partitions (P1 to P6) for some selected pairs of cases.
2. Select some case from the set to be the current case, e.g. Gardner. Consider the remaining
cases to be precedents.
3. Use the CBR argumentation schemes (AS1 to AS6) to construct arguments from the
precedents for the plaintiff and the defendant in the current case.

Table of Case Factors


Factor Id | Factor Name | Side | Parent
F1 | Disclosure in Negotiations | D | Efforts to Maintain Secrecy
F2 | Bribed Employee | P | Questionable Means
F10 | Secrets Disclosed to Outsiders | D | Info Known and Available
F12 | Outsider Disclosures Restricted | P | Info Known and Available
F15 | Unique Product | P | Valuable Product
F25 | Information Reverse Engineered | D | Questionable Means
F26 | Used Deception | P | Questionable Means
F27 | Disclosure in Public Forum | D | Info Known and Available

Table of Cases

Case Name | P Factors | D Factors
Gardner | F15 | F1
Hafner | F2, F15 | F1
McCarty | F15, F26 | F1
Verheij | F15 | F1, F10
Prakken | F12, F15 | F1, F10
Ashley | F2, F15 | F1, F25
Bench-Capon | F15 | F1, F25
Wyner | F15 | F1, F27

Task 1. Construct partitions by completing this table

CC/PCi | P1 | P2 | P3 | P4 | P5 | P6
Hafner/Gardner | F15 | F1 | F2 | - | - | -
Gardner/Bench-Capon | | | | | |
Verheij/Gardner | | | | | |
Gardner/Hafner | | | | | |
McCarty/Hafner | | | | | |
Gardner/Hafner2 | | | | | |
Prakken/Gardner | | | | | |
Verheij/Wyner | | | | | |
