High Performance Switching and Routing
Telecom Center Workshop: Sept 4, 1997
Nick McKeown
Balaji Prabhakar
Tutorial Outline
Introduction:
What is a Packet Switch?
Switching Fabrics:
How does the packet get there?
Output Scheduling:
When should the packet leave?
Introduction
What is a Packet Switch?
Basic Architectural Components
Some Example Packet Switches
The Evolution of IP Routers
Admission Control
Congestion Control
Routing Switching
Reservation
Control
Policing
Output Scheduling
Datapath:
per-packet processing
[Figure: line cards, each with a forwarding table and forwarding decision; 3. Output Scheduling at the outputs.]
Copyright 1999. All Rights Reserved 5
Edge Router
Introduction
What is a Packet Switch?
Basic Architectural Components
Some Example Packet Switches
The Evolution of IP Routers
ATM Switch
Lookup cell VCI/VPI in VC table. Replace old VCI/VPI with new. Forward cell to outgoing interface. Transmit cell onto link.
Ethernet Switch
Lookup frame DA in forwarding table.
If known, forward to correct port. If unknown, broadcast to all ports.
Learn SA of incoming frame. Forward frame to outgoing interface. Transmit frame onto link.
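The lookup/learn/forward steps above can be sketched in a few lines (a toy model with hypothetical addresses and port numbers; a real switch also ages entries out of the table):

```python
# Sketch of the Ethernet switch behavior described above.
class LearningBridge:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.table = {}          # destination address -> outgoing port

    def receive(self, frame_sa, frame_da, in_port):
        # Learn: remember which port the source address lives on.
        self.table[frame_sa] = in_port
        # Lookup: forward to the known port, else flood all other ports.
        if frame_da in self.table:
            return [self.table[frame_da]]
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(4)
print(bridge.receive("aa", "bb", 0))   # "bb" unknown: flood to [1, 2, 3]
print(bridge.receive("bb", "aa", 2))   # "aa" was learned on port 0: [0]
```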
IP Router
Lookup packet DA in forwarding table.
If known, forward to correct port. If unknown, drop packet.
Decrement TTL, update header Cksum. Forward packet to outgoing interface. Transmit packet onto link.
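The TTL decrement and checksum update can be illustrated as follows (a minimal sketch that recomputes the full one's-complement checksum; production routers typically use the incremental update of RFC 1624):

```python
import struct

def ones_complement_sum(words):
    s = sum(words)
    while s >> 16:                     # fold carries back in
        s = (s & 0xFFFF) + (s >> 16)
    return s

def checksum(header: bytes) -> int:
    words = struct.unpack("!%dH" % (len(header) // 2), header)
    return ~ones_complement_sum(words) & 0xFFFF

def forward(header: bytearray) -> bytearray:
    header[8] -= 1                     # TTL is byte 8 of the IPv4 header
    header[10:12] = b"\x00\x00"        # zero the checksum field
    header[10:12] = struct.pack("!H", checksum(bytes(header)))
    return header
```

A header with a valid checksum sums (including the checksum field) to zero, which makes the update easy to verify.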
10
Introduction
What is a Packet Switch?
Basic Architectural Components
Some Example Packet Switches
The Evolution of IP Routers
11
First-Generation IP Routers
Shared Backplane
CPU
Buffer Memory
DMA
DMA
DMA
Line Interface
MAC
Line Interface
MAC
Line Interface
MAC
12
Second-Generation IP Routers
CPU
Buffer Memory
DMA
DMA
DMA
13
Third-Generation Switches/Routers
Switched Backplane
[Diagram: multiple line interfaces connected by a switched backplane; CPU card with memory.]
CPU Card
14
Fourth-Generation Switches/Routers
Clustering and Multistage
[Diagram: a 32-port router built by interconnecting smaller switches in multiple stages.]
15
Packet Switches
References
J. Giacopelli, M. Littlewood, W.D. Sincoskie, "Sunshine: A high performance self-routing broadband packet switch architecture," ISS 1990.
J. S. Turner, "Design of a Broadcast packet switching network," IEEE Trans. Comm., June 1988, pp. 734-743.
C. Partridge et al., "A Fifty Gigabit per second IP Router," IEEE Trans. Networking, 1998.
N. McKeown, M. Izzard, A. Mekkittikul, W. Ellersick, M. Horowitz, "The Tiny Tera: A Packet Switch Core," IEEE Micro Magazine, Jan-Feb 1997.
Tutorial Outline
Introduction:
What is a Packet Switch?
Switching Fabrics:
How does the packet get there?
Output Scheduling:
When should the packet leave?
17
[Figure: line cards, each with a forwarding table and forwarding decision; 3. Output Scheduling at the outputs.]
Forwarding Decisions
ATM and MPLS switches Bridges and Ethernet switches
Associative Lookup
Hashing
Trees and tries
Caching
CIDR
Patricia trees/tries
Other methods
Direct Lookup
IP Routers
Packet Classification
19
VCI
Memory
(Port, VCI)
Address
Data
20
Forwarding Decisions
ATM and MPLS switches Bridges and Ethernet switches
Associative Lookup
Hashing
Trees and tries
Caching
CIDR
Patricia trees/tries
Other methods
Direct Lookup
IP Routers
Packet Classification
21
Advantages:
Simple
Search Data
48
Disadvantages
Slow, high power, small capacity, expensive
Hit?
log2N
Address
22
Address
48
Hashing Function
16
Data
Search Data
Memory
Hit?
log2N
Address
23
[Diagram: 48-bit address hashed (CRC-16) to a 16-bit index into memory; colliding entries #1-#4 stored on linked lists with associated data; Hit? and log2 N address out.]
24
ER = 1 + (1/2) · (M/N)

Where:
ER = Expected number of memory references
M = Number of memory addresses in table
N = Number of linked lists
M/N = Average linked-list length
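A quick simulation (assuming uniform hashing; the parameters are arbitrary) agrees with the formula:

```python
import random

def avg_references(M, N, seed=1):
    """Insert M keys into N linked lists; return the average number of
    list nodes visited to find a randomly chosen stored key."""
    rng = random.Random(seed)
    lists = [[] for _ in range(N)]
    for key in range(M):
        lists[rng.randrange(N)].append(key)
    # A key at position i in its list costs i+1 references to reach.
    total = sum(i + 1 for chain in lists for i in range(len(chain)))
    return total / M

print(avg_references(M=100_000, N=100_000))  # close to 1 + 0.5*(M/N) = 1.5
```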
25
Disadvantages
Non-deterministic lookup time Inefficient use of memory
26
log2N
N entries
010
111
27
1111, ptr
0000, 0 1111, ptr
000011110000
111111111111
28
E[nodes at level i] = D^i · (1 − (1 − D^(−i))^N)

Where:
D = Degree of tree
L = Number of layers/references
N = Number of entries in table
En = Expected number of nodes (summed over levels)
Ew = Expected amount of wasted memory
Degree of Tree | # Mem References | # Nodes (x10^6) | Total Memory (Mbytes) | Fraction Wasted (%)
2              | 48               | 1.09            | 4.3                   | 49
4              | 24               | 0.53            | 4.3                   | 73
8              | 16               | 0.35            | 5.6                   | 86
16             | 12               | 0.25            | 8.3                   | 93
64             | 8                | 0.17            | 21                    | 98
256            | 6                | 0.12            | 64                    | 99.5
29
Forwarding Decisions
ATM and MPLS switches Bridges and Ethernet switches
Associative Lookup
Hashing
Trees and tries
Caching
CIDR
Patricia trees/tries
Other methods
Direct Lookup
IP Routers
Packet Classification
30
Caching Addresses
Slow Path
CPU
Buffer Memory
Fast Path
DMA
DMA
DMA
31
Caching Addresses
LAN: Average flow < 40 packets Huge Number of flows
WAN:
IP Routers
Class-based addresses
IP Address Space
Class A Class B Class C D
212.17.9.4
33
IP Routers
CIDR
Class-based: classes A through D span the address space 0 to 2^32 - 1.

Classless (CIDR): variable-length prefixes across the same range, e.g.
65/8
128.9/16 (starts at 128.9.0.0, a block of 2^16 addresses containing 128.9.16.14)
142.12/19
IP Routers
CIDR
Prefixes can nest: 128.9.19/24 and 128.9.25/24 lie inside 128.9.16/20; 128.9.16/20 and 128.9.176/20 lie inside 128.9/16; all within 0 to 2^32 - 1.
IP Routers
Metrics for Lookups
Lookup address: 128.9.16.14

Prefix       | Port
65/8         | 3
128.9/16     | 5
128.9.16/20  | 2
128.9.19/24  | 7
128.9.25/24  | 10
128.9.176/20 | 1
142.12/19    | 3
36
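The lookup above can be sketched as a linear scan over the prefix table (real routers use tries or TCAMs for speed; the entries below are the slide's prefixes expanded to full dotted-quad form):

```python
import ipaddress

TABLE = [
    ("65.0.0.0/8", 3), ("128.9.0.0/16", 5), ("128.9.16.0/20", 2),
    ("128.9.19.0/24", 7), ("128.9.25.0/24", 10),
    ("128.9.176.0/20", 1), ("142.12.0.0/19", 3),
]

def lookup(addr):
    """Return the port of the longest matching prefix, or None."""
    ip = ipaddress.ip_address(addr)
    best = None
    for prefix, port in TABLE:
        net = ipaddress.ip_network(prefix)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else None

print(lookup("128.9.16.14"))   # -> 2 (longest match is 128.9.16/20)
```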
IP Router
Lookup
H E A D E R
Dstn Addr
Forwarding Engine: Next Hop Computation against a Forwarding Table of (Destination, Next Hop) entries.
Next Hop
Incoming Packet
DVMRP:
Incoming Interface Check followed by (S,G) lookup
IPv6
128-bit destination address field Exact address architecture not yet known
38
OC192 10 Gbps
Source: http://www.telstra.net/ops/bgptable.html
Ternary CAMs
Associative Memory:

Value    | Mask            | Next Hop
10.0.0.0 | 255.0.0.0       | R1
10.1.0.0 | 255.255.0.0     | R2
10.1.1.0 | 255.255.255.0   | R3
10.1.3.0 | 255.255.255.0   | R4
10.1.3.1 | 255.255.255.255 | R4
Next Hop
Priority Encoder
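In hardware every TCAM entry compares in parallel and a priority encoder selects the lowest-index match; a software sketch (with entries ordered longest-prefix first so that index order equals priority) might look like:

```python
def ip(s):
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

# (value, mask, next hop), longest prefix first = highest priority
TCAM = [
    (ip("10.1.3.1"), ip("255.255.255.255"), "R4"),
    (ip("10.1.3.0"), ip("255.255.255.0"),   "R4"),
    (ip("10.1.1.0"), ip("255.255.255.0"),   "R3"),
    (ip("10.1.0.0"), ip("255.255.0.0"),     "R2"),
    (ip("10.0.0.0"), ip("255.0.0.0"),       "R1"),
]

def tcam_lookup(addr):
    for value, mask, nexthop in TCAM:   # priority encoder: first match wins
        if addr & mask == value:
            return nexthop
    return None

print(tcam_lookup(ip("10.1.3.1")))   # -> R4 (host route wins)
print(tcam_lookup(ip("10.2.0.9")))   # -> R1 (only 10/8 matches)
```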
41
Binary Tries
Example Prefixes:
a) 00001
b) 00010
c) 00011
d) 001
e) 0101
f) 011
g) 100
h) 1010
i) 1100
j) 11110000
42
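A sketch of this binary trie, built from the example prefixes: lookup walks the bits of the destination and remembers the last prefix node it passed.

```python
PREFIXES = {"00001": "a", "00010": "b", "00011": "c", "001": "d",
            "0101": "e", "011": "f", "100": "g", "1010": "h",
            "1100": "i", "11110000": "j"}

def build(prefixes):
    root = {}
    for bits, name in prefixes.items():
        node = root
        for b in bits:
            node = node.setdefault(b, {})
        node["prefix"] = name       # mark this node as holding a prefix
    return root

def lpm(root, bits):
    node, best = root, None
    for b in bits:
        if b not in node:
            break
        node = node[b]
        best = node.get("prefix", best)   # remember deepest prefix seen
    return best

trie = build(PREFIXES)
print(lpm(trie, "01010110"))   # -> e (longest match is 0101)
print(lpm(trie, "00111100"))   # -> d (longest match is 001)
```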
Patricia Tree
Example Prefixes (as above), with one-way branches compressed and skip counts stored, e.g. Skip=5 on the branch to j) 11110000.
43
Patricia Tree
Disadvantages Many memory accesses May need backtracking Pointers take up a lot of space Advantages General Solution Extensible to wider fields
Avoid backtracking by storing the intermediate best-matched prefix (Dynamic Prefix Tries). 40K entries: 2MB data structure at 0.3-0.5 Mpps [O(W)].
Level 29
45
33K entries: 1.4MB data structure with 1.2-2.2 Mpps [O(log W)]
Multi-bit Tries
16-ary Search Trie 0000, ptr
0000, 0 1111, ptr
1111, ptr
0000, 0 1111, ptr
000011110000
111111111111
51
Compressed Tries
Only 3 memory accesses
L8
L16 L24
52
Number
Prefix length
Next Hop
142.19.6 14
142.19.6.14
24
54
Next Hop
128.3.72
128.3.72.44
24
base
Pointer
offset
44
55
(i
)entries
0 N
N+M
56
Various compression schemes can be employed to decrease the storage requirements: e.g., carefully chosen variable-length strides, bitmap compression, etc.
IP Router Lookups
References
A. Brodnik, S. Carlsson, M. Degermark, S. Pink, "Small Forwarding Tables for Fast Routing Lookups," Sigcomm 1997, pp. 3-14.
B. Lampson, V. Srinivasan, G. Varghese, "IP lookups using multiway and multicolumn search," Infocom 1998, pp. 1248-56, vol. 3.
M. Waldvogel, G. Varghese, J. Turner, B. Plattner, "Scalable high speed IP routing lookups," Sigcomm 1997, pp. 25-36.
P. Gupta, S. Lin, N. McKeown, "Routing lookups in hardware at memory access speeds," Infocom 1998, pp. 1241-1248, vol. 3.
S. Nilsson, G. Karlsson, "Fast address lookup for Internet routers," IFIP Intl. Conf. on Broadband Communications, Stuttgart, Germany, April 1-3, 1998.
V. Srinivasan, G. Varghese, "Fast IP lookups using controlled prefix expansion," Sigmetrics, June 1998.
58
Forwarding Decisions
ATM and MPLS switches Bridges and Ethernet switches
Associative Lookup
Hashing
Trees and tries
Caching
CIDR
Patricia trees/tries
Other methods
Direct Lookup
IP Routers
Packet Classification
59
Policy-based Routing
Route all voice traffic through the ATM network
60
Packet Classification
H E A D E R
Forwarding Engine: Packet Classification against a Classifier (Policy Database) of (Predicate, Action) rules.
Action
Incoming Packet
61
[Example classifier: each rule specifies k header fields and an action, e.g. address prefixes such as 152.163.0.0/16, 152.168.0.0/16, 152.0.0.0/8, a protocol field (UDP, TCP, or ANY), and actions A1, A2, ...]
Given a classifier with N rules, find the action associated with the highest priority rule matching an incoming packet.
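The simplest scheme, sequential evaluation, scans the rules in priority order; the rules below are illustrative, loosely based on the example classifier above:

```python
import ipaddress

RULES = [  # (source prefix, protocol or None for ANY, action) in priority order
    ("152.163.0.0/16", "UDP", "A1"),
    ("152.0.0.0/8",    "TCP", "A2"),
    ("0.0.0.0/0",      None,  "deny"),   # default: match anything
]

def classify(src, proto):
    """Return the action of the highest-priority matching rule."""
    ip = ipaddress.ip_address(src)
    for prefix, rule_proto, action in RULES:
        if ip in ipaddress.ip_network(prefix) and rule_proto in (None, proto):
            return action
    return None

print(classify("152.163.4.5", "UDP"))   # -> A1
print(classify("152.99.0.1", "TCP"))    # -> A2
print(classify("9.9.9.9", "TCP"))       # -> deny
```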
Geometric Interpretation in 2D
Field #1 Field #2
R7 R6
Data
P2
Field #2
R3 R5 R4
P1
Field #1
63
Proposed Schemes
Scheme                                        | Pros                                                                                          | Cons
Sequential Evaluation                         | Small storage, scales well with number of fields                                              | Slow classification rates
Ternary CAMs                                  | Single-cycle classification                                                                   | Cost, density, power consumption
Grid of Tries (Srinivasan et al [Sigcomm 98]) | Small storage requirements and fast lookup rates for two fields; suitable for big classifiers | Not easily extendible to more than two fields
64
Cons (continued):
- Large memory requirements; without caching, suitable only for classifiers with fewer than 50 rules.
- Large memory bandwidth required; comparatively slow lookup rate; hardware only.

Bit-level Parallelism (Lakshman and Stiliadis [Sigcomm 98]): suitable for multiple fields.
65
Cons (continued):
- Large preprocessing time.

Hierarchical Intelligent Cuttings (Gupta and McKeown [HotI 99])
Tuple Space Search (Srinivasan et al [Sigcomm 99])
Grid of Tries
0 0 1 0
Dimension 1
0 0 1 1 0 1 0
R4
0 0 0 1
R1 R2
0 1
Dimension 2 R7
67
R3
R5
R6
Grid of Tries
Disadvantages Static solution Not easy to extend to higher dimensions Advantages Good solution for two dimensions
20K entries: 2MB data structure with 9 memory accesses [at most 2W]
1 1 0 0
R4 R3 R1 R2
69
512 rules: 1 Mpps with a single FPGA and five 128KB SRAM chips.
[Recursive classification: map the 2^S = 2^128 possible header values down in stages (2^64, 2^24) to 2^T = 2^12 actions, via memory lookups over fields F1 ... Fn.]
71
Packet Classification
References
T.V. Lakshman, D. Stiliadis, "High speed policy based packet forwarding using efficient multi-dimensional range matching," Sigcomm 1998, pp. 191-202.
V. Srinivasan, S. Suri, G. Varghese, M. Waldvogel, "Fast and scalable layer 4 switching," Sigcomm 1998, pp. 203-214.
V. Srinivasan, G. Varghese, S. Suri, "Fast packet classification using tuple space search," Sigcomm 1999.
P. Gupta, N. McKeown, "Packet classification using hierarchical intelligent cuttings," Hot Interconnects VII, 1999.
P. Gupta, N. McKeown, "Packet classification on multiple fields," Sigcomm 1999.
72
Tutorial Outline
Introduction:
What is a Packet Switch?
Switching Fabrics:
How does the packet get there?
Output Scheduling:
When should the packet leave?
73
Switching Fabrics
Output Queueing
Input Queueing
Scheduling algorithms
Combining input and output queues
Other non-blocking fabrics
Multicast traffic
74
[Figure: line cards, each with a forwarding table and forwarding decision; 3. Output Scheduling at the outputs.]
Interconnects
Two basic techniques
Input Queueing Output Queueing
Interconnects
Output Queueing
Individual output queues per port (1 ... N).
Memory b/w = (N+1)·R
77
Output Queueing
The ideal
78
Output Queueing
How fast can we make centralized shared memory?
5ns SRAM Shared Memory
1 2 N
200-byte wide bus
5ns per memory operation Two memory operations per packet Therefore, up to 160Gb/s In practice, closer to 80Gb/s
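The arithmetic behind these numbers:

```python
# A 200-byte-wide bus into 5 ns SRAM, with two memory operations per
# packet (one write on arrival, one read on departure).
bus_bits = 200 * 8            # 1600 bits transferred per memory operation
op_time = 5e-9                # 5 ns per memory operation
per_packet = 2 * op_time      # write + read

throughput = bus_bits / per_packet
print(throughput / 1e9)       # -> 160.0 Gb/s aggregate, in theory
```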
79
Switching Fabrics
Output Queueing
Input Queueing
Scheduling algorithms
Other non-blocking fabrics
Combining input and output queues
Multicast traffic
80
Interconnects
Input Queueing with Crossbar
Memory b/w = 2R
Scheduler
Data In
configuration
Data Out
81
Input Queueing
Head of Line Blocking
[Plot: delay vs. offered load — throughput saturates at 58.6% of line rate.]
82
83
84
85
Input Queueing
Virtual output queues
86
Input Queues
Virtual Output Queues
[Plot: delay vs. offered load — with virtual output queues, throughput approaches 100%.]
87
Input Queueing
Memory b/w = 2R
Scheduler
88
Input Queueing
Scheduling
Input 1: arrivals A1(t) into VOQs Q(1,1) ... Q(1,n); Input m: arrivals Am(t) into Q(m,1) ... Q(m,n).
A matching M connects inputs to outputs; departures D1(t) at Output 1, ..., Dn(t) at Output n.
89
Input Queueing
[Request graph: bipartite graph between inputs 1-4 and outputs 1-4; edge weights (e.g. 7, 2, 4, 2, 5, 2) are queue occupancies.]
Input Queueing
Scheduling
Maximum Size
Maximizes instantaneous throughput Does it maximize long-term throughput?
Maximum Weight
Can clear most backlogged queues But does it sacrifice long-term throughput?
91
Input Queueing
Scheduling
[Example: 2x2 request graphs and the resulting matchings.]
92
Input Queueing
Longest Queue First or Oldest Cell First
[Example: weights of 1 and 10 on competing queues; the maximum-weight matching serves the heavily loaded queues and sustains 100% throughput.]
93
Input Queueing
Why is serving long/old queues better than serving maximum number of queues?
When traffic is uniformly distributed, servicing the maximum number of queues leads to 100% throughput. When traffic is non-uniform, some queues become longer than others. A good algorithm keeps the queue lengths matched, and services a large number of queues.
[Plots: average occupancy per VOQ # under uniform traffic and under non-uniform traffic.]
94
Input Queueing
Practical Algorithms Maximal Size Algorithms
Wave Front Arbiter (WFA) Parallel Iterative Matching (PIM) iSLIP
95
[Diagrams: Wave Front Arbiter — the 4x4 request matrix is processed in diagonal waves to produce a maximal match.]
Input Queueing
Practical Algorithms Maximal Size Algorithms
Wave Front Arbiter (WFA) Parallel Iterative Matching (PIM) iSLIP
100
[Diagrams: Parallel Iterative Matching over a 4x4 switch — iteration #1: requests, random grants, random accepts; iteration #2 matches ports left unmatched.]
103
104
105
106
Input Queueing
Practical Algorithms Maximal Size Algorithms
Wave Front Arbiter (WFA) Parallel Iterative Matching (PIM) iSLIP
107
iSLIP
[Diagrams: iSLIP over a 4x4 switch — iteration #1: requests, round-robin grants, round-robin accepts; iteration #2 matches ports left unmatched.]
iSLIP
Properties
- Random under low load
- TDM under high load
- Lowest priority to MRU (most recently used)
- 1 iteration: fair to outputs
- Converges in at most N iterations; on average <= log2 N
- Implementation: N priority encoders
- Up to 100% throughput for uniform traffic
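Each grant and accept arbiter is essentially a round-robin priority encoder; a sketch (the pointer-update rule shown is the iSLIP one: advance only past an accepted grant):

```python
def rr_arbiter(requests, pointer):
    """Round-robin priority encoder: grant the first request found at or
    after the pointer position, wrapping around."""
    n = len(requests)
    for i in range(n):
        idx = (pointer + i) % n
        if requests[idx]:
            return idx
    return None

grant = rr_arbiter([False, True, True, False], pointer=2)
print(grant)            # -> 2
# On accept, the pointer moves one beyond the granted position, which is
# what desynchronizes the arbiters and gives 100% throughput.
pointer = (grant + 1) % 4
print(pointer)          # -> 3
```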
109
iSLIP
110
iSLIP
111
iSLIP
Implementation
[Implementation: N grant arbiters and N accept arbiters, each a programmable priority encoder holding log2 N bits of state, exchanging grant and accept decisions.]
112
Switching Fabrics
Output Queueing
Input Queueing
Scheduling algorithms
Other non-blocking fabrics
Combining input and output queues
Multicast traffic
114
115
116
117
Self-Routing Network
000 001 010 011 100 101 110 111
Switching Fabrics
Output Queueing
Input Queueing
Scheduling algorithms
Other non-blocking fabrics
Combining input and output queues
Multicast traffic
119
Speedup
Context
input-queued switches output-queued switches the speedup problem
120
Speedup: Context
A generic switch: memory at the inputs and memory at the outputs.
Output-queued switches
Main problem
- Requires high fabric speedup (S = N)
Input-queued switches
Big advantage
- Speedup of one is sufficient
Main problem
- Can't guarantee delay due to input contention
A Comparison
Memory speeds for a 32x32 switch:

Line Rate | OQ Memory BW | OQ Access Time/cell | IQ Memory BW | IQ Access Time/cell
100 Mb/s  | 3.3 Gb/s     | 128 ns              | 200 Mb/s     | 2.12 us
1 Gb/s    | 33 Gb/s      | 12.8 ns             | 2 Gb/s       | 212 ns
2.5 Gb/s  | 82.5 Gb/s    | 5.12 ns             | 5 Gb/s       | 84.8 ns
10 Gb/s   | 330 Gb/s     | 1.28 ns             | 20 Gb/s      | 21.2 ns
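These figures follow from (N+1)R vs. 2R memory bandwidth and a 53-byte (424-bit) cell; a quick check:

```python
# Output queueing needs (N+1)R of memory bandwidth (N writes + 1 read
# per cell time); input queueing needs only 2R. Access time is one cell
# time at that bandwidth.
N, CELL_BITS = 32, 53 * 8

for rate in [100e6, 1e9, 2.5e9, 10e9]:
    oq_bw, iq_bw = (N + 1) * rate, 2 * rate
    print(f"{rate/1e9:5.1f} Gb/s line rate: "
          f"OQ {oq_bw/1e9:6.1f} Gb/s ({CELL_BITS/oq_bw*1e9:7.2f} ns/cell), "
          f"IQ {iq_bw/1e9:5.1f} Gb/s ({CELL_BITS/iq_bw*1e9:8.1f} ns/cell)")
```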
124
125
Numerical Methods
- use actual and simulated traffic traces - run different algorithms - set the speedup dial at various values
126
The findings
Very tantalizing ...
- under different settings (traffic, loading, algorithm, etc) - and even for varying switch sizes
127
Using Speedup
1 2
1 2
128
Intuition
Bernoulli IID inputs, Speedup = 1: fabric throughput = 0.58.
Bernoulli IID inputs, Speedup = 2: fabric throughput = 1.16; input efficiency = 1/1.16; average input queue = 6.25.
Intuition (continued)
Bernoulli IID inputs, Speedup = 3: fabric throughput = 1.74; input efficiency = 1/1.74; average input queue = 1.35.
Bernoulli IID inputs, Speedup = 4: fabric throughput = 2.32; input efficiency = 1/2.32; average input queue = 0.75.
130
Issues
Need hard guarantees
- exact, not average
Robustness
- realistic, even adversarial, traffic; not friendly Bernoulli IID
131
?
Speedup << N
132
133
Algorithm - MUCF
134
MUCF
The algorithm
- Outputs try to get their most urgent packets
- Inputs grant to the output whose packet is most urgent, ties broken by port number
- Losing outputs try for their next most urgent packet
- The algorithm terminates when no more matchings are possible
135
[Stable-marriage style matching example with participants Bill, John, Pedro, Monica, Maria.]
136
An example
Observation: there are only two reasons a packet doesn't get to its output: input contention and output contention. This is why a speedup of 2 works!
138
Other results
To exactly emulate an NxN OQ switch
- A speedup of 2 - 1/N is necessary and sufficient (hence a speedup of 2 is sufficient for all N)
- Input traffic patterns can be absolutely arbitrary
- The emulated OQ switch may use any monotone scheduling policy, e.g. FIFO, LIFO, strict priority, WFQ, etc.
139
What gives?
Complexity of the algorithms
- Extra hardware for processing - Extra run time (time complexity)
Implementation (contd)
Matching process
- A variant of the stable marriage problem
- Worst-case number of iterations for SMP = N^2
- Worst-case number of iterations in switching = N
- With high probability, and on average, approximately log(N)
142
Other Work
Relax stringent requirement of exact emulation
- Least Occupied Output First Algorithm (LOOFA): keeps outputs busy whenever packets are present; by time-stamping packets, it can also exactly mimic an OQ switch
- Disallow arbitrary inputs (e.g. leaky-bucket constrained) and obtain worst-case delay bounds
143
Switching Fabrics
Output Queueing
Input Queueing
Scheduling algorithms
Other non-blocking fabrics
Combining input and output queues
Multicast traffic
145
Multicast Switching
The problem Switching with crossbar fabrics Switching with other fabrics
146
Multicasting
2
147
Copy networks
Method 2
Use copying properties of crossbar fabric
No fanout-splitting: Easy, but low throughput
149
Performance of an 8x8 switch with and without fanout-splitting under uniform IID traffic
Placement of residue
Key question: How should outputs grant requests? (and hence decide placement of residue)
151
152
Residue
1 2 3 4 5 Output ports
153
Residue Concentrated
1 2 3 4 5 Output ports
154
Replication by recycling
Main idea: Make two copies at a time using a binary tree with input at root and all possible destination outputs at the leaves.
155
Recycle
Scalable to large fanouts, but needs resequencing at outputs and introduces variable delays.
156
157
Tutorial Outline
Introduction:
What is a Packet Switch?
Switching Fabrics:
How does the packet get there?
Output Scheduling:
When should the packet leave?
158
Output Scheduling
What is output scheduling? How is it done? Practical Considerations
159
Output Scheduling
Allocating output bandwidth Controlling packet delay
scheduler
160
Output Scheduling
FIFO
Fair Queueing
161
Motivation
FIFO is natural but gives poor QoS
bursty flows increase delays for others; hence delays cannot be guaranteed
162
163
164
Delay guarantees
Theorem
If flows are leaky bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.
165
Practical considerations
For every packet, the scheduler needs to:
- classify it into the right flow queue (maintaining a linked list per flow)
- schedule it for departure
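One practical way to schedule the flow queues is Deficit Round Robin (Shreedhar and Varghese): each queue receives a quantum of credit per round and sends packets while its deficit counter covers them. A sketch:

```python
from collections import deque

def drr(flows, quantum):
    """flows: dict of name -> list of packet sizes, in arrival order.
    Returns the order in which packets are served, by flow name."""
    queues = {name: deque(sizes) for name, sizes in flows.items()}
    deficit = {name: 0 for name in flows}
    order = []
    while any(queues.values()):
        for name, q in queues.items():
            if not q:
                continue
            deficit[name] += quantum            # credit for this round
            while q and q[0] <= deficit[name]:  # send while credit lasts
                deficit[name] -= q.popleft()
                order.append(name)
            if not q:
                deficit[name] = 0               # idle queues lose credit
    return order

print(drr({"A": [300, 300], "B": [600]}, quantum=300))  # -> ['A', 'A', 'B']
```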
167
But...
WFQ is still very hard to implement
- classification is a problem
- needs to maintain too much state information
- doesn't scale well
168
169
Diff Serv
A framework for providing differentiated QoS
- set Type of Service (ToS) bits in packet headers; this classifies packets into classes
- routers maintain per-class queues
- condition traffic at network edges to conform to class requirements

May still need queue management inside the network.
References
- A. Parekh, R. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: the single node case," IEEE Trans. on Networking, June 1993.
- A. Parekh, R. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: the multiple node case," IEEE Trans. on Networking, August 1993.
- M. Shreedhar, G. Varghese, "Efficient Fair Queueing using Deficit Round Robin."
172
173
174
Global Synchronization
175
176
177
178
179
[Plot: instantaneous queue length q and average qavg, with thresholds minth and maxth.]

if qavg < minth: admit every packet
else if qavg <= maxth: drop an incoming packet with probability p = (qavg - minth) / (maxth - minth)
else (qavg > maxth): drop every incoming packet
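The drop rule can be written as a function (the threshold values here are arbitrary examples):

```python
def red_drop_probability(qavg, minth=5.0, maxth=15.0):
    """RED drop probability as a function of the average queue length."""
    if qavg < minth:
        return 0.0                                  # admit everything
    if qavg <= maxth:
        return (qavg - minth) / (maxth - minth)     # drop with prob. p
    return 1.0                                      # drop everything

print(red_drop_probability(4.0))    # -> 0.0
print(red_drop_probability(10.0))   # -> 0.5
print(red_drop_probability(20.0))   # -> 1.0
```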
180
181
182
183
184
185
186
Tutorial Outline
Introduction:
What is a Packet Switch?
Switching Fabrics:
How does the packet get there?
Output Scheduling:
When should the packet leave?
187
Admission Control
Congestion Control
Routing Switching
Reservation
Control
Policing
Output Scheduling
Datapath:
per-packet processing
188
[Figure: line cards, each with a forwarding table and forwarding decision; 3. Output Scheduling at the outputs.]