Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage
SPARC SuperCluster
Overview
Best for Oracle, runs all existing workloads
SPARC T4 Compute Pool
10 world records over IBM and HP across every tier
Virtualization
InfiniBand
Enterprise Manager
Up to 80% reduction in downtime thanks to proactive critical-application patching
Solaris 11
Cloud provisioning in seconds
Unmatched scalability
SPARC SuperCluster
Hardware Stack Half Rack
Compute
2 * T4-4 nodes, each with
4 * T4 processors @ 3.0 GHz
1 TB memory
6 * 600GB internal SAS Disks
2 * 300GB SSDs
4 * InfiniBand HBAs + 4 * 10 GbE NICs
Network
3 * Sun DataCenter InfiniBand Switch 36-port Switches
GbE Management Switch
Storage
3 * Exadata Storage Servers
Optional Exadata Storage Server Expansion Rack
Shared Storage
ZFS Storage Appliance 7320 with 40 TB of disk capacity
Data Migration
Optional FCAL HBA to connect to existing SAN storage
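A quick arithmetic check of the totals this half-rack implies (per-chip core and thread counts taken from the T4 spec later in this deck):

```python
# Compute totals for the half rack: 2 nodes, 4 T4 chips per node,
# 8 cores and 64 threads per chip, 1 TB memory per node.
nodes = 2
chips_per_node = 4
cores_per_chip = 8
threads_per_chip = 64
memory_tb_per_node = 1

total_cores = nodes * chips_per_node * cores_per_chip      # 64 cores
total_threads = nodes * chips_per_node * threads_per_chip  # 512 hardware threads
total_memory_tb = nodes * memory_tb_per_node               # 2 TB of memory

print(total_cores, total_threads, total_memory_tb)  # 64 512 2
```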
SPARC SuperCluster
Software Stack
Operating System
Oracle Solaris 11 for Oracle Database and Exalogic software
Oracle Solaris 11 or Oracle Solaris 10 for other applications
Virtualization
Oracle VM Server for SPARC (Logical Domains) and Containers/Zones (including Branded Zones)
Management
Oracle Enterprise Manager 11g Ops Center, and optionally Enterprise Manager Grid Control (for Oracle DB and Applications)
Clustering
Oracle Solaris Cluster (optional)
Oracle Clusterware (for Oracle DB)
Database
Oracle Database 11gR2 to leverage Exadata Storage Servers
Other databases with external storage
Middleware
Oracle WebLogic Server with optional Exalogic Elastic Cloud Software
Applications
Oracle, ISV, and customer applications qualified on Solaris 10 or Solaris 11
SPARC SuperCluster
Fastest application performance
PeopleSoft performance improved up to 8x
JD Edwards performance improved up to 8x
Leading Efficiency
Deploy 10x faster than IBM or HP
SPARC SuperCluster
in Conclusion
Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage
Performance
Compatibility
Binary compatibility with SPARC V9 ISA and Solaris 10/11
Built-in virtualization (OVM for SPARC)
Software in Silicon
Increased Performance
SPARC T4 systems
Fastest Performing SPARC Servers Ever
Single-thread performance increase over SPARC T3 servers
Same throughput as T3 systems
SPARC T4
5x per-thread performance
3.0 GHz
8 cores, 64 threads
Dynamic threading
Out-of-order execution
2 on-chip dual-channel DDR3 memory controllers
2 on-chip 10 GbE network interfaces
2 on-chip x8 PCIe Gen2 I/O interfaces
18 on-chip crypto functions
Balanced high-bandwidth interfaces and internals
Co-engineered with Oracle software
SPARC T4
Dynamic threading
Core resources are shared among the active threads:
Load buffers, store buffers, pick queue, working register file, reorder buffer, etc.
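A toy model of this sharing (illustrative numbers only, not the actual T4 allocation policy): with fewer active threads, each active thread gets a larger slice of the core's shared resources, which is where the single-thread gain comes from.

```python
# Toy model of dynamic threading: shared core resources (e.g. pick-queue
# entries) are divided among only the *active* hardware threads. A single
# active thread on an otherwise idle core gets everything; a fully loaded
# core splits the resource 8 ways. Illustrative sketch only.

PICK_QUEUE_ENTRIES = 40  # hypothetical size of one shared core resource

def entries_per_thread(active_threads):
    if not 1 <= active_threads <= 8:
        raise ValueError("a T4 core runs 1 to 8 hardware threads")
    return PICK_QUEUE_ENTRIES // active_threads

print(entries_per_thread(1))  # 40: a lone thread owns the whole resource
print(entries_per_thread(8))  # 5: each of 8 busy threads gets a small share
```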
Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage
Solaris 11
Your applications just run
Seamless Scaling with Hardware: 9 NEW World Records
Dynamic threading - Next Gen I/O - Terabyte of Memory - 10,000s of threads
Oracle Solaris 11
Designed for next-generation hardware
Oracle Solaris 11
Designed for Secure Cloud Deployment
Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage
Btrfs
T10 data integrity
Xen
OCFS2
FedFS
Transcendent Memory
InfiniBand and RDS
Innovating Linux
Upcoming Features: a Better Linux for All
DTrace
Transcendent memory via cleancache
Modern, advanced file system: Btrfs
Data integrity from applications to disk
Resource isolation: cgroups
OS isolation: Linux Containers
Built-in virtual switch
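As a sketch of how cgroup-based resource isolation looks in practice (the group name and quota are hypothetical; the write requires root and a mounted cgroup v2 filesystem):

```python
# Sketch: capping a process group's CPU time via the cgroup v2 filesystem.
# The group name "webapp" and the 50% quota are assumptions for illustration.
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def cpu_max_setting(quota_pct, period_us=100_000):
    """Build the cgroup v2 'cpu.max' value: "<quota_us> <period_us>".
    quota_pct=50 lets the group use at most half of one CPU."""
    quota_us = period_us * quota_pct // 100
    return f"{quota_us} {period_us}"

def limit_group(name, quota_pct):
    """Create the group and write its CPU cap (needs root on a real system)."""
    group = CGROUP_ROOT / name
    group.mkdir(exist_ok=True)
    (group / "cpu.max").write_text(cpu_max_setting(quota_pct))

print(cpu_max_setting(50))  # 50000 100000
```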
Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage
ZFS Storage Appliance models (diagram residue): S7320 and S7420
Controller tiers: single controller (scales to 100 TB); dual or single controllers (scales to 200 TB); active-active controllers (scales to over 1 PB, ~5 TB flash)
Architecture: a virtual pool (file, volume, and data services) layered over a storage pool (read flash, write flash, and drives)
Protocols: NFS, CIFS, iSCSI, FC, InfiniBand
Data services: Compress, Thin Provision, Dedup, Snap, Mirror, Encrypt, Replication
Flash tiers: read-intensive I/O and write-intensive I/O
Storage
Performance
Automates storage tiering (Hybrid Storage Pools)
Eliminates distinct file and volume management
Concurrent block and file I/O, with shared data services
Data Integrity
Entire I/O path validated before data is stored
Eliminates the potential for bit rot, phantom writes, etc.
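A minimal sketch of the end-to-end checksum idea behind that data-integrity claim (simplified illustration, not the actual ZFS on-disk format):

```python
# Sketch of end-to-end checksumming in the spirit of ZFS: a checksum computed
# at write time is stored apart from the data block and re-verified on every
# read, so silent corruption (bit rot, phantom writes) is detected instead of
# being returned to the application.
import hashlib

store = {}      # block_id -> data
checksums = {}  # block_id -> checksum, kept separate from the data itself

def write_block(block_id, data):
    checksums[block_id] = hashlib.sha256(data).hexdigest()
    store[block_id] = data

def read_block(block_id):
    data = store[block_id]
    if hashlib.sha256(data).hexdigest() != checksums[block_id]:
        raise IOError(f"checksum mismatch on block {block_id}: corruption detected")
    return data

write_block(0, b"payroll records")
store[0] = b"payroll recorts"   # simulate bit rot on the medium
try:
    read_block(0)
except IOError as e:
    print(e)  # the corruption is caught on read, not silently returned
```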
Analytics
Comprehensive and precise file-level view
New with Gen. 3:
SAS-2 architecture
600 GB 15K drives
20 PCIe Gen2 slots
64 cores
1 TB of DRAM
Multi-tiered automation
Cache and storage tiers (diagram residue): DRAM (1 TB), read flash (5 TB), write flash, 600 GB 15K SAS-2 drives, 2 TB 7K SAS-2 drives
New with Gen. 3
Benchmark comparison (chart residue): Oracle 137,000 vs. NetApp 68,035 (performance); Oracle $2.99 vs. NetApp $7.48 (price per unit of performance)
DATA SERVICES
Fibre Channel
iSCSI
InfiniBand over IP/RDMA
iSER
SRP
NFS v3 and v4
CIFS
HTTP
WebDAV
FTP
ZFS NDMP v4
MANAGEMENT
Browser and CLI Interface
Management Dashboard
Hardware/component view
Role-based Access Control
Phone Home
Event- and Threshold-based Alerting
DTrace Analytics
Scripting
Workflow Automation
Advanced Networking
DFS Standalone Namespace
Source-Aware Routing
8 Control Units
Up to 832 drives
Up to 1.6PB Capacity
192GB Cache
128 RAID Controllers
SATA, FC, or SSD Storage Classes
Function: Quality of Service
Benefits: Applications are assigned I/O resources according to their business value and are not relegated to first come, first served; increases overall efficiency and utilization of the storage system.
Function: Modular Architecture
Benefits: Maximum performance and utilization regardless of configuration size; ability to grow and rebalance the storage pool based on changing business environments.
Function: Distributed RAID
Benefits: Ensures predictable performance scaling as capacity is added; higher reliability by localizing the drive-rebuild process to the storage enclosure and reducing the RAID rebuild window.
Coach = Tier 3
QoS is more than where you sit; it's also the priority and class of service you get:
How fast you board and exit
The number of attendants per passenger
The seat size and leg room
The entertainment selections
Align the Business Value of the Application to I/O Performance (diagram residue):
Typical virtual server: virtual machines 1 to 3 (premium, medium, and low value) share a single FIFO queue, so I/O reaches the multi-tier array in arrival order regardless of business value.
With QoS performance levels: each virtual machine's I/O is routed to a premium-, medium-, or low-priority queue, so high-value applications are served by the multi-tier array first.
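The FIFO-versus-QoS contrast in that diagram can be sketched with a toy scheduler (the priority numbers are assumed for illustration, not Pillar's implementation):

```python
# Toy I/O scheduler contrasting a single FIFO queue with class-based
# priority queues. Lower priority number = higher business value.
import heapq

PRIORITY = {"premium": 0, "medium": 1, "low": 2}  # assumed ranking

def fifo_order(requests):
    """Single queue, first come first served: arrival order wins."""
    return [vm for vm, _ in requests]

def qos_order(requests):
    """Class-based queues: premium I/O dispatches before medium, then low.
    Arrival order (seq) breaks ties within a class, keeping it stable."""
    heap = [(PRIORITY[cls], seq, vm) for seq, (vm, cls) in enumerate(requests)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, vm = heapq.heappop(heap)
        order.append(vm)
    return order

arrivals = [("vm3", "low"), ("vm2", "medium"), ("vm1", "premium")]
print(fifo_order(arrivals))  # ['vm3', 'vm2', 'vm1']: low-value I/O goes first
print(qos_order(arrivals))   # ['vm1', 'vm2', 'vm3']: business value wins
```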
Data Type            Storage Class   Priority         Access Bias   I/O Bias
Control files        SSD or FC       High / Premium   Mixed         Mixed
Database index       SSD             High             Mixed         Mixed
Database tables      SATA            Medium           Mixed         Mixed
Temporary files      SATA            Medium           Mixed         Mixed
(not captured)       FC              High             Sequential    Write
(not captured)       SATA            Low              Sequential    Write
Axiom architecture (diagram residue): Pilot (management), two Slammers (storage controllers), and Bricks (drive enclosures)
Create up to 64 Physical
Domains in a Single Axiom
Synchronous Replication
Asynchronous Replication
1-to-Many and Many-to-1 System Replication
Multi-hop Replication