
STORAGE ARCHITECTURE / GETTING STARTED:
SAN SCHOOL 101

Marc Farley
President, Building Storage, Inc.
Author, Building Storage Networks
Agenda
Lesson 1: Basics of SANs
Lesson 2: The I/O path
Lesson 3: Storage subsystems
Lesson 4: RAID, volume management
and virtualization
Lesson 5: SAN network technology
Lesson 6: File systems
Lesson #1

Basics of storage networking


Connecting

[Diagram: a computer system with an HBA or NIC connected through a network of switches/hubs, routers, bridges, a concentrator, a dish link and a VPN]
Connecting

• Networking or bus technology
• Cables + connectors
• System adapters + network device drivers
• Network devices such as hubs, switches, routers
• Virtual networking
• Flow control
• Network security
Storing
[Diagram: the storing path — host software (mirroring software, volume manager, device drivers) issues a storage command and transfer protocol over the wiring (network transmission frames) to storage devices: tape drives, disk drives and a RAID subsystem]
Storing

• Device (target) command and control
  • Drives, subsystems, device emulation
• Block storage address space manipulation (partition management)
  • Mirroring
  • RAID
  • Striping
  • Virtualization
  • Concatenation
Filing
[Diagram: filing presents user/application views (C:\directory\file, database objects) and maps them to logical blocks in storage]
Filing

• Namespace presents data to end users and applications as files and directories (folders)
• Manages use of storage address spaces
• Metadata for identifying data (see the sketch below):
  • file name
  • owner
  • dates
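To make the filing function concrete, here is a minimal Python sketch (all class and field names are invented for illustration): a namespace maps file names to metadata and to a list of block addresses allocated from a flat storage address space.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Sketch only: FileEntry, Filing and their fields are invented names.
@dataclass
class FileEntry:
    owner: str
    created: datetime
    blocks: list[int] = field(default_factory=list)  # storage addresses used

class Filing:
    def __init__(self, total_blocks: int):
        self.free = list(range(total_blocks))        # flat address space
        self.namespace: dict[str, FileEntry] = {}    # name -> metadata

    def create(self, path: str, owner: str, nblocks: int) -> FileEntry:
        # Manage the address space: allocate blocks, then record metadata.
        blocks = [self.free.pop() for _ in range(nblocks)]
        entry = FileEntry(owner, datetime.now(), blocks)
        self.namespace[path] = entry
        return entry

fs = Filing(total_blocks=90)
print(fs.create(r"C:\directory\file", owner="marc", nblocks=3))
```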
Connecting, storing and filing as a
complete storage system

[Diagram: a computer system (with the storing function in an HBA driver) connects through an HBA or NIC, cables and a network switch/hub to a disk drive; the filing, storing, wiring and connecting functions span the whole path]
NAS and SAN analysis

• NAS is filing over a network
• SAN is storing over a network
• NAS and SAN are independent technologies
  • They can be implemented independently
  • They can co-exist in the same environment
  • They can both operate and provide services to the same users/applications
Protocol analysis for NAS and SAN

[Diagram: protocol layer comparison — NAS implements filing over the network's wiring/connecting layers; SAN implements storing over the same wiring/connecting layers]
Integrated SAN/NAS environment
[Diagram: a client system runs filing to a NAS server or "NAS head" (NAS server + SAN initiator); the NAS head in turn runs storing to SAN target storage, each hop over its own connecting/wiring layer]
Common wiring with NAS and SAN
[Diagram: the same client system, NAS head and SAN target storage, now sharing one common wiring/connecting infrastructure]
Lesson #2

The I/O path


Host hardware path components

Processor → memory bus → memory → system I/O bus → storage adapter (HBA)
Host software path components

Application → operating system → filing system → cache manager → volume manager → multi-pathing → device driver
Network hardware path components

• Cabling: fiber optic, copper
• Switches, hubs, routers, bridges, gateways
• Port buffers, processors
• Backplane, bus, crossbar, mesh, memory
Network software path components

Access and security → fabric services → routing → flow control → virtual networking
Subsystem path components

Network ports → access and security → cache manager → resource manager → internal bus or network
Device and media path components

Disk drives, tape drives, tape media, solid state devices


The end-to-end I/O path picture

[Diagram: the full path — in the host, the application, operating system, filing system, cache manager, volume manager, multi-pathing and device driver running over processor, memory bus, memory, system I/O bus and storage adapter (HBA); then cabling and network systems providing access and security, fabric services, routing, flow control and virtual networking; then the subsystem's network ports, access and security, cache manager, resource manager and internal bus or network; ending at disk drives and tape drives]
Lesson #3

Storage subsystems
Generic storage subsystem model

[Diagram: a controller (logic + processors) with network ports, access control and a resource manager, connected over an internal bus or network to cache memory, storage resources and power]
Redundancy for high availability

• Multiple hot-swappable power supplies
• Hot-swappable cooling fans
• Data redundancy via RAID
• Multi-path support
  • Network ports to storage resources
Physical and virtual storage

[Diagram: physical storage devices inside the subsystem are mapped by the subsystem controller's resource manager (RAID, mirroring, etc.) to exported storage; a hot spare device stands by]
SCSI communications architectures
determine SAN operations

• SCSI communications are independent of connectivity
• SCSI initiators (HBAs) generate I/O activity; they communicate with targets
  • Targets have communications addresses
  • Targets can have many storage resources
  • Each resource is a single SCSI logical unit (LU) with a universally unique ID (UUID), sometimes referred to as a serial number
  • An LU can be represented by multiple logical unit numbers (LUNs)
  • Provisioning associates LUNs with LUs & subsystem ports
• A storage resource is not a LUN, it's an LU (see the sketch below)
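A minimal sketch of that distinction, with invented class names: the LU is the actual storage resource, identified by its UUID, while a LUN is only the number a subsystem port exports it under.

```python
import uuid

# Sketch only: Subsystem and its methods are invented names.
class Subsystem:
    def __init__(self):
        self.lus = {}           # LU UUID -> description of physical storage
        self.provisioning = {}  # (port, LUN) -> LU UUID

    def create_lu(self, description: str):
        lu_id = uuid.uuid4()    # the LU's universally unique ID
        self.lus[lu_id] = description
        return lu_id

    def provision(self, port: str, lun: int, lu_id) -> None:
        # Provisioning associates a LUN on a subsystem port with an LU.
        self.provisioning[(port, lun)] = lu_id

sub = Subsystem()
lu_a = sub.create_lu("RAID array, 500 GB")
sub.provision("S1", 0, lu_a)   # the same LU can appear under
sub.provision("S2", 1, lu_a)   # different LUNs on different ports
```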


Provisioning storage

[Diagram: subsystem ports S1-S4 export LUNs 0-3; each LUN maps to a SCSI LU (UUIDs A-D) backed by physical storage devices, and the same LU can appear under different LUN numbers on different ports]
Controller functions: Multipathing

[Diagram: one SCSI LU (UUID A) reached over two paths; the multipathing software (MP SW) recognizes the two LUN X instances as the same LU and presents a single device to the host]
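A hedged sketch of the idea (hypothetical classes, not a real multipath driver): the multipathing software verifies that both paths lead to the same LU UUID, then retries a failed I/O on the surviving path.

```python
# Hypothetical classes illustrating path failover.
class Path:
    def __init__(self, name: str, lu_uuid: str, healthy: bool = True):
        self.name, self.lu_uuid, self.healthy = name, lu_uuid, healthy

    def submit(self, io: str) -> str:
        if not self.healthy:
            raise IOError(f"{self.name} failed")
        return f"{io} completed via {self.name}"

class Multipath:
    def __init__(self, paths):
        # Paths belong together only if they lead to the same LU (same UUID).
        assert len({p.lu_uuid for p in paths}) == 1
        self.paths = paths

    def submit(self, io: str) -> str:
        for path in self.paths:      # try each path in turn: failover
            try:
                return path.submit(io)
            except IOError:
                continue
        raise IOError("all paths down")

mp = Multipath([Path("path1", "UUID-A", healthy=False), Path("path2", "UUID-A")])
print(mp.submit("read LBA 42"))      # completes via path2
```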
Caching

[Diagram: the controller cache manager sits between the exported volumes and the subsystem's storage]

Read caches:
1. Recently used
2. Read ahead

Write caches:
1. Write-through (to disk)
2. Write-back (from cache)
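The two write policies can be sketched in a few lines of Python (a hypothetical controller API, not real firmware): write-through puts data on disk before acknowledging, while write-back acknowledges from cache and flushes later.

```python
# Sketch only: CachingController and its methods are invented names.
class CachingController:
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = {}     # address -> data
        self.dirty = set()  # cached writes not yet on disk
        self.disk = {}

    def write(self, addr: int, data: bytes) -> str:
        self.cache[addr] = data
        if self.write_back:
            self.dirty.add(addr)    # acknowledge now, flush to disk later
        else:
            self.disk[addr] = data  # write through to disk before the ack
        return "ack"

    def flush(self) -> None:
        for addr in self.dirty:
            self.disk[addr] = self.cache[addr]
        self.dirty.clear()

wt, wb = CachingController(write_back=False), CachingController(write_back=True)
wt.write(0, b"data"); wb.write(0, b"data")
print(wt.disk, wb.disk)   # write-through is on disk; write-back is not yet
wb.flush()                # now the write-back copy reaches disk
```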
Tape subsystems

[Diagram: a tape subsystem controller fronting four tape drives, plus tape slots and a robot]
Subsystem management

[Diagram: a management station, now browser-based with network management software, speaks SMI-S over Ethernet/TCP/IP to an out-of-band management port; in-band management travels the storage path to the subsystem's exported storage resources]
Data redundancy

• Duplication: 2n (a complete second copy)
• Parity: n+1 (one parity element protects n data elements)
• Difference: n−1, a delta d(x) = f(x) − f(x−1) kept along with the prior state f(x−1)
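A tiny numeric illustration of the duplication and difference forms, with invented values; the parity (XOR) form is worked through in Lesson 4.

```python
# Invented values, for illustration only.
f_prev, f_curr = 100, 130

# Duplication (2n): keep a complete second copy of the current state.
copy = f_curr
assert copy == f_curr

# Difference: keep the prior state f(x-1) and the delta d(x) = f(x) - f(x-1);
# the current state is regenerated as f(x) = f(x-1) + d(x).
d = f_curr - f_prev
assert f_prev + d == f_curr
```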


Duplication redundancy with mirroring

• Implemented host-based (in the I/O path) or within a subsystem
• A mirroring operator terminates the incoming I/O and regenerates new I/Os, one per mirror side (I/O path A and I/O path B)
• Performs error recovery/notification
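A minimal sketch of the mirroring operator described above (hypothetical classes, not a real driver): it terminates the incoming write, regenerates one write per mirror side, and reports errors.

```python
# Hypothetical classes illustrating the mirroring operator.
class Disk(dict):
    def write(self, addr: int, data: bytes) -> None:
        self[addr] = data

class MirrorOperator:
    def __init__(self, side_a, side_b):
        self.sides = [side_a, side_b]    # e.g. I/O path A and I/O path B

    def write(self, addr: int, data: bytes) -> None:
        failures = []
        for side in self.sides:          # regenerate one I/O per mirror side
            try:
                side.write(addr, data)
            except IOError as err:
                failures.append(err)     # error recovery/notification
        if failures:
            print(f"mirror degraded: {failures}")

mirror = MirrorOperator(Disk(), Disk())
mirror.write(7, b"payload")              # lands on both sides
```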
Duplication redundancy with remote copy

[Diagram: a host writes to subsystem A; the writes (only) are copied uni-directionally from A to a remote subsystem B]
Point-in-time snapshot

[Diagram: a host works against subsystem volume A while point-in-time snapshots B and C are maintained in the subsystem]
Lesson #4

RAID, volume management and virtualization
RAID = parity redundancy

• Duplication: 2n
• Parity: n+1 (the form RAID uses)
• Difference: n−1, d(x) = f(x) − f(x−1)


History of RAID

• Late-1980s R&D project at UC Berkeley
  • David Patterson
  • Garth Gibson
• Redundant array of inexpensive (later: independent) disks
  • Striping without redundancy (RAID 0) was not defined
• Original goals were to reduce the cost and increase the capacity of large disk storage
Benefits of RAID

● Capacity scaling
  ● Combine multiple address spaces as a single virtual address space
● Performance through parallelism
  ● Spread I/Os over multiple disk spindles
● Reliability/availability with redundancy
  ● Disk mirroring (striping to 2 disks)
  ● Parity RAID (striping to more than 2 disks)
Capacity scaling

[Diagram: a RAID controller (resource manager) combines twelve storage extents into a single exported RAID disk volume with one address space spanning extents 1-12]
Performance

[Diagram: a RAID controller (microsecond performance) stripes blocks 1-6 across six disk drives (millisecond performance, from rotational latency and seek time), so the member I/Os proceed in parallel]
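A short sketch of the striping arithmetic behind that parallelism: a logical block address maps to a member disk and an offset, so consecutive blocks land on different spindles. The helper name is invented.

```python
# Invented helper illustrating the block-to-spindle arithmetic of striping.
def stripe_map(lba: int, members: int) -> tuple[int, int]:
    """Map a logical block address to (member disk, offset on that disk)."""
    return lba % members, lba // members

# Blocks 0-5 land on six different drives, so the reads can run in parallel.
for lba in range(6):
    disk, offset = stripe_map(lba, members=6)
    print(f"logical block {lba} -> disk {disk}, offset {offset}")
```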
Parity redundancy

• RAID arrays use XOR for calculating parity:

  Operand 1   Operand 2   XOR result
  False       False       False
  False       True        True
  True        False       True
  True        True        False

• XOR is the inverse of itself
  • Apply XOR in the table above from right to left
  • Apply XOR to any two columns to get the third (see the sketch below)
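The same property in runnable form — a minimal XOR-parity sketch over byte strings: parity is the XOR of all members, and XOR of the parity with the surviving members regenerates a lost member.

```python
from functools import reduce

# Minimal XOR-parity sketch over equal-length byte strings.
def xor_blocks(*blocks: bytes) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

m1, m2, m3 = b"\x0f\xf0", b"\x33\x55", b"\xa5\x5a"   # three members
parity = xor_blocks(m1, m2, m3)                       # the P member

# Lose m2, then regenerate it from the survivors plus parity:
rebuilt = xor_blocks(m1, m3, parity)
assert rebuilt == m2
```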
Reduced mode operations

• When a member is missing, data that is accessed must be reconstructed with XOR: XOR{M1 & M2 & M3 & P}
• An array that is reconstructing data is said to be operating in reduced mode
• System performance during reduced mode operations can be significantly reduced
Parity rebuild

• The process of recreating data on a replacement member is called a parity rebuild: XOR{M1 & M2 & M3 & P}
• Parity rebuilds are often scheduled for non-production hours because performance disruptions can be so severe
RAID 0+1, 10

Hybrid RAID 0+1

[Diagram: a RAID controller striping across five mirrored pairs of disk drives (pairs 1-5) — mirrored pairs of striped members]
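A hedged sketch of the layout as drawn (the mapping function is invented): each logical block is striped across the five pairs, and every block is written to both drives of its mirrored pair.

```python
# Hypothetical mapping for the hybrid layout drawn above: striping across
# five mirrored pairs, with every block mirrored inside its pair.
def hybrid_map(lba: int, pairs: int = 5) -> tuple[int, tuple[int, int]]:
    """Return (offset within the pair, the two physical drives written)."""
    pair = lba % pairs                        # striping across the pairs
    offset = lba // pairs
    return offset, (2 * pair, 2 * pair + 1)  # both drives of the mirror

for lba in range(5):
    offset, drives = hybrid_map(lba)
    print(f"logical block {lba} -> offset {offset} on drives {drives}")
```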
Volume management and virtualization

• Storing-level functions
  • Provide RAID-like functionality in host systems and SAN network systems
• Aggregation of storage resources for:
  • scalability
  • availability
  • cost / efficiency
  • manageability
Volume management

• RAID & partition management
• A device driver layer between the kernel and storage I/O drivers

[Diagram: server system software stack — OS kernel, file system, volume manager, HBA drivers, HBAs]
Virtual storage

• Volume managers can use all available connections and resources, and can span multiple SANs as well as both SCSI and SAN resources

[Diagram: a volume manager over HBA drivers reaching a SCSI disk resource through a SCSI HBA and SCSI bus, and SAN disk resources through a SAN HBA, SAN cable and SAN switch]
SAN storage virtualization

• RAID and partition management in SAN systems
• Two architectures:
  • In-band virtualization (synchronous)
  • Out-of-band virtualization (asynchronous)
In-band virtualization

[Diagram: a SAN virtualization system — a system, switch or router — sits in the I/O path between hosts and disk subsystems and exports virtual storage]
Out-of-band virtualization

• Distributed volume management
• Virtualization agents are managed from a central system in the SAN

[Diagram: a virtualization management system coordinates virtualization agents on the hosts, which address the disk subsystems directly]
Lesson #5

SAN networks
Fibre Channel

• The first major SAN networking technology
• Very low latency
• High reliability
• Fiber optic cables
• Copper cables
• Extended distance
• 1, 2 or 4 Gb transmission speeds
• Strongly typed
Fibre Channel fabrics

A Fibre Channel fabric presents a consistent interface and set of services across all switches in a network; hosts and subsystems all 'see' the same resources.

[Diagram: several target subsystems and SAN storage presented identically across all switches in the fabric]
Fibre Channel port definitions

● FC ports are defined by their network role (a sketch of valid pairings follows)
  ● N-ports: end node ports connecting to fabrics
  ● L-ports: end node ports connecting to loops
  ● NL-ports: end node ports connecting to fabrics or loops
  ● F-ports: switch ports connecting to N-ports
  ● FL-ports: switch ports connecting to N-ports or NL-ports in a loop
  ● E-ports: switch ports connecting to other switch ports
  ● G-ports: generic switch ports that can be F-, FL- or E-ports
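Those roles imply which port types can form a link. Here is a simplified compatibility check (the table is a sketch; G-ports are treated as resolving to F, FL or E before a link comes up):

```python
# Sketch only: a simplified compatibility table for FC link formation.
VALID_LINKS = {
    frozenset({"N", "F"}),    # end node to fabric switch
    frozenset({"NL", "FL"}),  # loop-capable node to switch on a loop
    frozenset({"NL", "L"}),   # loop-capable node on a private loop
    frozenset({"L"}),         # two loop end nodes
    frozenset({"E"}),         # switch-to-switch (inter-switch link)
}

def can_link(a: str, b: str) -> bool:
    """True if ports of type a and b can form a link (simplified)."""
    return frozenset({a, b}) in VALID_LINKS

print(can_link("N", "F"))   # True: node port to fabric port
print(can_link("N", "E"))   # False: node ports don't join switch links
```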
Ethernet / TCP/IP SAN technologies

• Leveraging the installed base of Ethernet and TCP/IP networks
  • iSCSI: native SAN over IP
  • FC/IP: FC SAN extensions over IP
iSCSI

• Native storage I/O over TCP/IP
• New industry standard
• Locally over Gigabit Ethernet
• Remotely over ATM, SONET, 10 Gb Ethernet

Protocol stack (top to bottom): iSCSI / TCP / IP / MAC / PHY
iSCSI equipment

• Storage NICs (HBAs)
  • SCSI drivers
• Cables
  • Copper and fiber
• Network systems
  • Switches/routers
  • Firewalls
FC/IP

• Extending FC SANs over TCP/IP networks
• FCIP gateways operate as virtual E-port connections
• FCIP creates a single fabric where all resources appear to be local

[Diagram: two FC SANs joined into one fabric by FCIP gateways, E-port to E-port, across a TCP/IP LAN, MAN or WAN]
SAN switching & fabrics

• High-end SAN switches have latencies of 1-3 µsec
  • Transaction processing requires the lowest latency; most other applications do not
• Transaction processing requires non-blocking switches
  • No internal delays preventing data transfers
Switches and directors

• Switches
  • 8-48 ports
  • Redundant power supplies
  • Single system supervisor
• Directors
  • 64+ ports
  • HA redundancy
  • Dual system supervisors
  • Live software upgrades
SAN topologies

• Star
  • Simplest
  • Single hop
• Dual star
  • Simple network + redundancy
  • Single hop
  • Independent or integrated fabric(s)
SAN topologies

• N-wide star
  • Scalable
  • Single hop
  • Independent or integrated fabric(s)
• Core-edge
  • Scalable
  • 1-3 hops
  • Integrated fabric
SAN topologies

• Ring
  • Scalable
  • Integrated fabric
  • 1 to N÷2 hops
• Ring + star
  • Scalable
  • Integrated fabric
  • 1 to 3 hops
Lesson #6

File systems
File system functions

• Name space
• Access control
• Metadata
• Locking
• Address space management

[Diagram: filing layered on top of storing]
Think of the storage address space as a sequence of storage locations (a flat address space)

[Diagram: a flat address space drawn as a grid of block addresses 1-90]
Superblocks

• Superblocks are known addresses used to find file system roots (and mount the file system)

[Diagram: superblock (SB) copies at fixed, known locations in the address space]

• File systems must have a known and dependable address space (a minimal mount sketch follows)
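A minimal mount sketch, assuming invented offsets and a made-up magic value: because superblock copies live at known addresses, mount code can probe fixed offsets and fall back to a backup copy.

```python
import io

# Sketch only: the offsets and magic value are invented for illustration.
SUPERBLOCK_OFFSETS = [0, 8192]   # primary and backup superblock copies
MAGIC = b"FS01"

def mount(device) -> int:
    for offset in SUPERBLOCK_OFFSETS:     # probe the known addresses
        device.seek(offset)
        if device.read(4096)[:4] == MAGIC:
            return offset                 # found a usable file system root
    raise ValueError("no valid superblock found")

# A fake disk where only the backup superblock survived:
disk = io.BytesIO(bytes(8192) + MAGIC + bytes(4092))
print(mount(disk))   # 8192
```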
Filing and Scaling

• The fine print in scalability: how does the filing function know about the new storing address space?

[Diagram: the filing function's view of one flat storing address space, and of a second, differently sized address space after the storage grows]
