Agenda
- Exadata Software Architecture
- Exadata Storage Layout
- Exadata Storage Scale-Out Architecture
Exadata Architecture
[Diagram: a RAC database spans two DB servers, each running a DB instance (with SGA), DBRM, and ASM on OEL; the servers connect over the InfiniBand switch/network fabric to Exadata cells running CELLSRV, with Enterprise Manager providing management.]
[Diagram: on the database host, the IO client issues requests through the IO layer and ASM layer via libcell; data and metadata travel over the network fabric to cellsrv on the storage cell.]
libcell, linked with the database and ASM, talks to cellsrv; this is how the iDB protocol is born. cellsrv runs multiple threads, and those threads perform asynchronous IO to disks and the network.
[Diagram: the database-host IO stack (IO client, IO layer, ASM layer, libcell) talks over the network fabric to cellsrv on the cells; the configuration files live under /etc/oracle/cell/network-config: cellinit.ora holds the local IP of the host, and cellip.ora lists the cells.]
cellip.ora on the database/ASM host maintains the list of cells; new cells can be added to cellip.ora dynamically.
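As an illustration, the two files might look like this on a database host (the IP addresses below are invented for the example; real values are site-specific):

```text
# cellinit.ora -- local InfiniBand IP of this database/ASM host
ipaddress1=192.168.50.23/24

# cellip.ora -- one line per Exadata cell visible to this host
cell="192.168.50.28"
cell="192.168.50.29"
cell="192.168.50.30"
```

A new cell becomes visible by appending another cell="..." line, without restarting the database.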
The master diskmon (diskmon) starts with CSS and communicates with cellsrv. A slave diskmon (dskm) is part of every instance and communicates with the master diskmon. Together they handle cell failures, IO fencing, and propagation of IO resource management plans.
cellcli allows user interaction and configuration. The Management Server (MS) displays and manages creation and deletion of grid disks, changes in hardware, SNMP traps, alerts, email notifications, metrics, etc.
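A few representative CellCLI commands (an illustrative session sketch; exact attributes and output vary by release):

```text
CellCLI> LIST CELL DETAIL
CellCLI> LIST CELLDISK
CellCLI> LIST GRIDDISK
CellCLI> LIST ALERTHISTORY
CellCLI> ALTER CELL smtpToAddr='admin@example.com'
```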
The Restart Server (RS) monitors CELLSRV and MS, and a backup RS monitors the core RS. RS checks for process aliveness, memory usage, etc.
Trace files and alert logs live in the Automatic Diagnostic Repository (ADR) on the cell: alert.log (from RS and CELLSRV), ms-odl.log, ms-odl.trc, rs*.trc, and svtrc*.trc.
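These files can be browsed with the adrci utility on the cell; for example (a sketch, assuming the cell's ADR home is configured):

```text
adrci> show homes
adrci> show alert -tail 50
adrci> show tracefile
```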
The InfiniBand fabric requires OFED RPMs (RedHat 5.1, OEL 5.1). Exadata Storage Server works only with 11.1.0.7 Database/ASM.
The EM plug-in provides a central location for metrics and alerts across cells; no agent runs on the cell. dcli allows the user to run commands across cells.
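For example, to run a CellCLI command on every cell at once (the group file name and user below are illustrative):

```text
$ dcli -g cell_group -l celladmin cellcli -e "list cell attributes name, status"
```

cell_group here is a plain text file listing one cell hostname or IP per line; dcli drives each cell over ssh.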
Physical disks map to Cell Disks. Cell Disks are partitioned into one or more Grid Disks. ASM disk groups are created from Grid Disks. The layout is transparent above the ASM layer.
Cell Disks
A Cell Disk is the entity that represents a physical disk residing within an Exadata Storage Cell.
Grid Disks
A Grid Disk is the entity allocated to ASM as an ASM disk. There is a minimum of one Grid Disk per Cell Disk. Grid Disks can be used to allocate hot, warm, and cold regions of a Cell Disk, or to separate databases sharing Exadata Cells.
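In CellCLI this layering looks roughly as follows (a sketch; the sizes and prefix names are illustrative):

```text
CellCLI> CREATE CELLDISK ALL                        -- one cell disk per physical disk
CellCLI> CREATE GRIDDISK ALL PREFIX=hot, SIZE=100G  -- outer (faster) region
CellCLI> CREATE GRIDDISK ALL PREFIX=cold            -- remaining space
```

Grid disks created first occupy the outer, faster tracks of each disk, which is how hot and cold regions are separated.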
For example, one Grid Disk for the active, or hot, portion of the database and a second for the cold, or inactive, portion.
ASM striping evenly distributes I/O across the disk group. ASM mirroring is used to protect against disk failures. ASM failure groups are used to protect against cell failures.
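On the ASM side, a disk group over Exadata grid disks can be created by pattern (a sketch; the disk group name and grid disk prefix are illustrative):

```text
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
     DISK 'o/*/data*';
```

With the o/ disk string, each Exadata cell is automatically placed in its own failure group, giving the cell-failure protection described above.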
Scale-Out Architecture
The database, ASM, and Exadata Cells each play a role in Oracle's scale-out storage architecture; responsibilities are placed in the optimal location:
- DB: business data protection
- ASM: reliable storage pool
- Exadata Cell: database intelligent storage
Grid Disks
Storage is exposed to ASM and the database as collections of intelligent network disks called Grid Disks. iDB is conceptually similar to iSCSI but carries extensive database intelligence; iDB is layered on top of the ZDP network protocol.
To ensure full scale-out benefits, cells never communicate with each other. Cross-cell operations are implemented in ASM or the database. Cell independence ensures no performance bottlenecks and no cascading failures, and is key to the scalability of the architecture.
The cell appliance design eliminates storage configuration missteps and administrative overhead, giving simple provisioning. Cell grid disks are automatically made visible to ASM; there are no OS-level LUNs or mount points to set up and manage. Cross-cell ASM mirroring (the ASM failure group topology) is automatically configured for grid disks, and multiple grid disks per physical disk allow multiple ASM instances to use the same cells.
[Diagram: ASM acts as a free volume manager over a set of ASM disks, providing flexible data distribution (striping), mirroring, and automatic data re-balancing; two DB files are shown striped across the disks.]
ASM manages storage in megabyte allocation units. Each DB file consists of a set of allocation units, and the location of a file's allocation units is individually tracked by ASM. ASM evenly spreads allocation units across all cells and disks in the grid, so all disks are evenly utilized and performance is optimal.
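The placement policy can be sketched in a few lines of Python (a toy model, not Oracle code; the disk names are invented):

```python
# Toy model of ASM striping: spread a file's allocation units (AUs)
# round-robin across every disk in the grid so all disks are used evenly.
AU_SIZE = 1024 * 1024  # 1 MB allocation unit, as in the slide

def stripe_file(file_size, disks):
    """Return a map: AU index -> disk holding that allocation unit."""
    n_aus = -(-file_size // AU_SIZE)  # ceiling division
    return {au: disks[au % len(disks)] for au in range(n_aus)}

disks = ["cell1_disk1", "cell1_disk2", "cell2_disk1", "cell2_disk2"]
extent_map = stripe_file(10 * AU_SIZE, disks)
# Per-disk AU counts differ by at most one -> even utilization.
counts = {d: sum(1 for v in extent_map.values() if v == d) for d in disks}
```

Real ASM records each AU's location in a per-file extent map, which is what the toy extent_map mimics at miniature scale.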
When a cell is added, ASM migrates a fraction of the allocation units to the new cell. Rebalancing is online and transparent to the application, with minimal data movement for new or removed cells.
ASM implements mirroring at the allocation-unit level. The primary and mirror copies of an allocation unit are placed on separate storage cells. ASM automatically remirrors across all remaining cells when a disk or array fails, so the failure of a disk or array is transparent to the database.
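The cell-separation rule can be sketched similarly (a toy model, not Oracle code; the cell and disk names are invented):

```python
# Toy model of ASM mirroring: the primary and mirror copy of each
# allocation unit must land on disks in *different* cells (failure
# groups), so losing a whole cell never loses both copies.
def place_au(au_index, cells):
    """cells: dict cell name -> list of its disks; returns (primary, mirror)."""
    names = sorted(cells)
    p_cell = names[au_index % len(names)]
    m_cell = names[(au_index + 1) % len(names)]  # always a different cell
    p_disk = cells[p_cell][au_index % len(cells[p_cell])]
    m_disk = cells[m_cell][au_index % len(cells[m_cell])]
    return (p_cell, p_disk), (m_cell, m_disk)

cells = {"cellA": ["disk1", "disk2"],
         "cellB": ["disk1", "disk2"],
         "cellC": ["disk1", "disk2"]}
placements = [place_au(i, cells) for i in range(12)]
# Every AU ends up with its two copies in two distinct cells.
```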
Fast mirror resync
Writes are replayed once the cell begins responding again, so there is no need to remirror all the unchanged data. Benefits: fast recovery from transient failures (e.g. a cell crash or temporary hang), and it can also be used for planned maintenance such as a cell software or component upgrade.
All single points of failure are eliminated by the Exadata Storage architecture, and Hardware Assisted Resilient Data (HARD) is built in to Exadata Storage. For archiving and corruption protection, Exadata can be used with Oracle Secure Backup (OSB) or third-party tape backup software.
Enterprise Manager
An Enterprise Manager Grid Control plug-in monitors and manages Exadata Storage Cells.
Comprehensive CLI
Local Exadata Storage cell management (cellcli), plus a distributed shell utility (dcli) to execute the CLI across multiple cells.