
Hardware and Operating Systems

Peter van Spaandonk

Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage

Transforming the Technology Stack

Engineering Level Optimization Matters

Engineered Systems and Appliances

Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage

SPARC SuperCluster
Overview

General purpose, flexible, integrated, optimized, and scalable server to run database, middleware & applications

Compute, storage, network
Compute: scalable compute nodes, flash
Storage: DB storage and network connectivity to general-purpose storage
Network: IB backplane, 10 GbE for external connectivity

Optimizations and Integration
Exadata Storage Servers & Oracle DB 11gR2
Exalogic Elastic Cloud Software
Ops Center & Enterprise Manager Grid Control

SPARC SuperCluster
Best for Oracle, Runs All Existing Workloads

SPARC T4 Compute Pool
10 world records over IBM and HP across every tier

Exadata Storage Cells
1M IOPS, 32 GB/s query throughput

Exalogic Elastic Cloud
10x Java performance

Integrated ZFS Storage
2x faster and 2x better price/performance than NetApp

Virtualization
Near-zero virtualization overhead

InfiniBand
5-8x the speed of current networks

Enterprise Manager
Up to 80% reduction of downtime due to proactive critical application patching

Solaris 11
Cloud provisioning in seconds
Unmatched scalability

SPARC SuperCluster
Hardware Stack Half Rack

Compute
2 * T4-4 nodes, each with
4 * T4 processors @ 3.0 GHz
1 TB memory
6 * 600 GB internal SAS disks
2 * 300 GB SSDs
4 * InfiniBand HBAs + 4 * 10 GbE NICs

Network
3 * Sun Datacenter InfiniBand Switch 36 (36-port) switches
GbE management switch

Storage
3 * Exadata Storage Servers
Optional Exadata Storage Server Expansion Rack

Shared Storage
ZFS Storage Appliance 7320 with 40 TB of disk capacity

Data Migration
Optional FC-AL HBA to connect to existing SAN storage

SPARC SuperCluster
Software Stack
Operating System
Oracle Solaris 11 for Oracle Database
and Exalogic Software
Oracle Solaris 11 or Oracle Solaris 10
for other applications

Virtualization
Oracle VM Server for SPARC Logical
Domains and Containers/Zones
(including Branded Zones)

Management
Oracle Enterprise Manager 11g Ops
Center, and optionally Enterprise
Manager Grid Control (for Oracle DB
and Applications)

Clustering
Oracle Solaris Cluster (optional)
Oracle Clusterware (for Oracle DB)

Database
Oracle Database 11gR2 to leverage
Exadata Storage Servers
Other databases with external storage

Middleware
Oracle WebLogic Server with optional
Exalogic Elastic Cloud Software

Applications
Oracle, ISV and customer applications
qualified on Solaris 10 or Solaris 11


Standardized and Simple to Deploy
Servers, Storage, Networking, Software: Engineered Together

All engineered systems are the same
Delivered, tested, and ready to run
Highly optimized
Highly supportable
No unique configuration issues
Identical to the configuration used by Oracle Engineering

Runs existing database, middleware, and custom applications
Supports the past 30 years of Oracle DB capabilities

SPARC SuperCluster

Fastest application performance
PeopleSoft performance improved up to 8x
JD Edwards performance improved up to 8x

Superior flexibility and TCO
Runs all your applications
Lower costs with end-to-end integration

Leading efficiency
Deploy 10x faster than IBM or HP

SPARC SuperCluster
In Conclusion

Is an ideal platform to support full-stack applications
Offers all the benefits of an Oracle engineered system
Integrated compute and storage with an InfiniBand backplane
Offers a logical upgrade path for existing SPARC customers
Delivers a compelling & highly competitive alternative to IBM & HP
Superior performance at a significantly lower cost
Sysadmin savings on initial installation and ongoing support
Reduced total cost of ownership

Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage


Transforming the Technology Stack


SPARC/Solaris Server Trajectory
Enable both standalone and engineered systems

Performance
Best-of-breed vs. IBM POWER and Intel x86
High thread count systems: 64 to 10,000+ threads per system
Very large memory footprint: multiple TBs per processor
Scalable system portfolio: 1-64 sockets
Increase application performance 2x every two years

New Application Accelerators
Hardware & software engineered together
E.g. cryptography, compression, copy engines for fast RAC scaling, support for Oracle native data types, memory versioning

Compatibility
Binary compatibility with the SPARC V9 ISA and Solaris 10/11
Built-in virtualization (OVM for SPARC)

SPARC Server Strategy
Foundation for Mission-Critical Computing

2011 Oracle SPARC server portfolio


2010 SPARC Server Roadmap
Maximizing Results

2011 SPARC Server Roadmap
Maximizing Results

SPARC Future Work
2x Performance Improvement Every 2 Years

Software in Silicon
Security: enhanced cryptography
Oracle Numbers arithmetic acceleration
Hardware decompression
In-memory columnar database acceleration
Memory versioning
Low-latency clustering

Increased Performance
Higher core frequency
Multiple pipelines per core
Increased core counts per chip
Larger caches
More memory bandwidth

SPARC T4 Systems
Fastest Performing SPARC Servers Ever

Single-thread performance increase over SPARC T3 servers
Same throughput as T3 systems
Built-in, no-cost virtualization
High-bandwidth and high-capacity I/O
Built-in, on-chip crypto (security) acceleration
Integrated 10 GbE
Flash support: 100 GB & 300 GB SSDs
Full range of blade and rack servers to cover workloads from mission-critical applications to consolidation, database, and back-office applications to web and network infrastructure

SPARC T4
5x Per-Thread Performance

3.0 GHz
8 cores, 64 threads
Dynamic threading
Out-of-order execution
2 on-chip dual-channel DDR3 memory controllers
2 on-chip 10 GbE networking interfaces
2 on-chip x8 PCIe Gen2 I/O interfaces
18 on-chip crypto functions
Balanced high-bandwidth interfaces and internals
Co-engineered with Oracle software

SPARC T4
Dynamic Threading

Core resources are shared between the active threads
Load buffers, store buffers, pick queue, working register file, reorder buffer, etc.

Resources can be statically or dynamically allocated
Dynamic allocation enables higher throughput by seamlessly adjusting resourcing based upon thread behavior
Applications scale better
Increased throughput, especially on heterogeneous workloads

If a thread occupies a resource, it must release the resource in a timely fashion
If not, the thread is considered a hog and the hardware limits the resource's availability to it
High and low watermarks govern allocation/de-allocation of the various core resources
Upon reaching the high watermark, thread resource allocation stalls until the low watermark is reached (see the sketch below)
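The watermark mechanism can be pictured as a simple admission policy. Below is a minimal sketch, assuming illustrative 75%/50% thresholds; the class and numbers are hypothetical, a conceptual model rather than the actual T4 hardware logic.

```python
# Conceptual model of watermark-based resource throttling (hypothetical,
# not the T4 implementation): a thread that fills a shared resource past
# the high watermark is stalled until its usage drains below the low one.

class SharedCoreResource:
    def __init__(self, capacity, high_pct=0.75, low_pct=0.50):
        self.high = int(capacity * high_pct)  # high watermark: stop allocating to a hog
        self.low = int(capacity * low_pct)    # low watermark: resume allocating
        self.held = {}                        # entries currently held, per thread
        self.stalled = set()                  # threads whose allocation is stalled

    def try_allocate(self, thread_id):
        n = self.held.get(thread_id, 0)
        if thread_id in self.stalled:
            if n > self.low:                  # still draining: keep stalling
                return False
            self.stalled.discard(thread_id)   # drained below the low watermark
        elif n >= self.high:                  # hit the high watermark: a hog
            self.stalled.add(thread_id)
            return False
        self.held[thread_id] = n + 1
        return True

    def release(self, thread_id):
        self.held[thread_id] = max(0, self.held.get(thread_id, 0) - 1)

pool = SharedCoreResource(capacity=32)
while pool.try_allocate("thread-0"):
    pass  # thread-0 is cut off once it holds 24 of 32 entries
```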


Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage


Solaris 11
Your Applications Just Run

Seamless scaling with hardware: 9 new world records
Dynamic threading - Next-gen I/O - Terabytes of memory - 10,000s of threads

Simplified administration: 4x faster upgrades, 2.5x faster reboots
Provision in seconds - Fool-proof updates - Horizontal scaling

Virtualization: cloud provisioning in seconds, near-zero overhead
Virtualized network with QoS - Solaris 10 Zones - Built in

Efficient data management: 5x compression (see the dedup sketch below)
Integrated deduplication and compression - Infinite snapshots and clones - No-cost replication

Advanced protection: 3x faster encryption
Hardware-accelerated encryption - Integrated auditing - Advanced user access controls
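Block-level deduplication, as listed above, is essentially content-addressed storage: a block is stored once and shared by every reference whose contents hash to the same digest. A minimal conceptual sketch (the class is hypothetical, not the ZFS implementation):

```python
# Conceptual sketch of block-level deduplication: identical blocks are
# stored once, keyed by a hash of their contents. Not ZFS's on-disk format.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # digest -> block bytes (stored once)
        self.refcount = {}  # digest -> number of references

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:   # only unique blocks consume space
            self.blocks[digest] = data
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                   # the caller keeps this as a block pointer

    def read(self, digest: str) -> bytes:
        return self.blocks[digest]

store = DedupStore()
a = store.write(b"x" * 8192)
b = store.write(b"x" * 8192)            # duplicate write: no extra space used
assert a == b and len(store.blocks) == 1
```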

Oracle Solaris 11
Designed for next-generation hardware


Oracle Solaris 11
Designed for Secure Cloud Deployment


Oracle Solaris 11 Upgrade
[Chart: upgrade time in seconds]

Oracle Solaris 11 Fast Reboot
[Chart: reboot time to login prompt, in seconds]

Introducing: Enterprise Manager 11g Ops Center
Industry's First Converged Hardware Management Solution for Sun

Comprehensive Management Capabilities
Across the lifecycle, for the entire hardware infrastructure

Integrated Management and Telemetry
Up to 90% reduction of downtime due to proactive critical application patching

Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage


Oracle Linux Overview

The Unbreakable Enterprise Kernel is the default kernel with Oracle Linux
Open source and free to download
A Red Hat Compatible Kernel is also available
Latest release is Oracle Linux 6.1
Support pricing is the same regardless of which kernel you use
Oracle recommends using the Unbreakable Enterprise Kernel

The Unbreakable Enterprise Kernel

Fast, modern, reliable
Used by Exadata and Exalogic for extreme performance
Brings the latest Linux innovations to customers
Release 1 is based on kernel 2.6.32
Plus brand-new optimizations from Oracle that are all open source
Available in both Oracle Linux 5 and Oracle Linux 6
Existing applications run unchanged

The Unbreakable Enterprise Kernel: Modern

Bigger servers
Up to 4096 CPUs and 2 TB of memory
Up to 4 PB (petabytes) clustered volumes with OCFS2
Advanced NUMA support
Power management
CPUs stay in a low-power state when the system is idle
Fine-grained CPU and memory resource control

The Unbreakable Enterprise Kernel: Reliable

Data integrity: eliminates silent data corruption using the T10 spec; stops corrupt data from being written
Hardware fault management: reduces system crashes and improves system uptime
Diagnostics tools: improved tooling
The Unbreakable Enterprise Kernel tracks mainline Linux, so users get community and Oracle enhancements faster

Ongoing Contributions to the Linux Kernel

Btrfs
Data integrity (T10)
Xen
OCFS2
FedFS
Transcendent Memory
InfiniBand and RDS

Ksplice: Available for Oracle Premier Support Customers

Ksplice transforms Oracle Linux updates into zero-downtime updates
Linux servers within the customer environment connect to the Unbreakable Linux Network to download and apply updates while the system is running
Customers can track the status of their servers via an intuitive web interface, and can integrate zero-downtime updates into existing management tools via an API (a hypothetical sketch follows below)
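As an illustration of that API integration point, the sketch below polls a management endpoint for pending updates. Everything here is hypothetical: the URL, fields, and token are invented for illustration and are not the actual Ksplice Uptrack API; consult Oracle's API documentation for the real interface.

```python
# Hypothetical sketch only: endpoint, fields, and credential are invented,
# not the real Ksplice Uptrack API.
import json
import urllib.request

API_BASE = "https://uptrack.example.com/api"  # placeholder URL
TOKEN = "example-token"                       # placeholder credential

def fetch_machines():
    req = urllib.request.Request(
        f"{API_BASE}/machines",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

for machine in fetch_machines():
    if machine.get("updates_pending"):  # flag hosts with reboot-free updates waiting
        print(f"{machine['hostname']}: zero-downtime updates available")
```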

Traditional Update Approach
Disruptive: Downtime and Delays

We Solve This Problem in Oracle Linux
Zero-downtime updates with Ksplice

Innovating Linux
Upcoming Features: A Better Linux for All

DTrace
Transcendent Memory via leancache
Modern, advanced file system: Btrfs
Data integrity from applications to disk
Resource isolation: cgroups
OS isolation: Linux Containers
Built-in virtual switch

Agenda
Hardware Strategy
SuperCluster
T4
Solaris
Linux
Storage


Oracle's Complete Storage Portfolio
Investing in a Best-of-Breed Platform

Oracle's Complete Storage Portfolio

Oracle's Hybrid Columnar Compression (HCC)

Oracle's NAS Storage
3rd Generation ZFS Storage Appliances

S7120: single controller; scales to 100 TB and 96 GB of flash
S7320: dual or single controllers; scales to 200 TB and ~5 TB of flash
S7420: active-active controllers; scales to over 1 PB and ~6 TB of flash

Enterprise Storage Software
All protocols: CIFS, NFS, InfiniBand, FC, iSCSI, WebDAV
All data reduction software: thin provisioning, compression, deduplication
All data protection software: replication, snapshots, mirroring
Industry-leading storage analytics
Engineered integration with Oracle software: HCC, OEM, ASM, SAM, Oracle VM

ZFS Storage Appliances
Architectural Overview

[Diagram: application I/O arrives over NFS, IB, CIFS, iSCSI, and FC into a virtual pool providing file, volume, and data services (compression, dedup, thin provisioning, snapshots, mirroring, encryption, replication); beneath it, a storage pool combines read-intensive flash, write-intensive flash, and drives.]

ZFS Storage Appliances

Performance
Automated storage tiering (hybrid storage pools)
Eliminates distinct file and volume management
Concurrent block and file I/O, with shared data services

Data Integrity
Entire I/O path validated before data is stored (see the sketch below)
Eliminates the potential for bit rot, phantom writes, etc.

Analytics
Comprehensive and precise file-level view
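End-to-end integrity of this kind rests on checksumming every block on write and re-verifying on read, with the checksum kept apart from the data it protects. A minimal conceptual sketch (hypothetical class, not ZFS's actual checksum tree):

```python
# Conceptual sketch of end-to-end block validation (not ZFS's design):
# the checksum lives *separately* from the block, so a corrupted block
# cannot silently vouch for itself.
import hashlib

class ChecksummedStore:
    def __init__(self):
        self.data = {}       # block_id -> payload bytes
        self.checksums = {}  # block_id -> expected digest

    def write(self, block_id: int, payload: bytes) -> None:
        self.checksums[block_id] = hashlib.sha256(payload).digest()
        self.data[block_id] = payload

    def read(self, block_id: int) -> bytes:
        payload = self.data[block_id]
        if hashlib.sha256(payload).digest() != self.checksums[block_id]:
            # A real system would attempt self-healing from a redundant copy.
            raise IOError(f"checksum mismatch on block {block_id}: bit rot or phantom write")
        return payload
```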

ZFS Storage Appliances
Built for Speed

SAS-2 architecture
15K drives
20 PCIe Gen2 slots
64 cores
1 TB of DRAM
Multi-tiered automation across DRAM, read flash, write flash, 15K SAS-2, and 7K SAS-2

New with Gen. 3: 600 GB 15K SAS-2 drives, 2 TB 7K SAS-2 drives, 5 TB of read flash, up to 4 write SSDs per tray

ZFS Storage Appliances: 2x Faster at Less Than Half the Price per IOPS of NetApp (New with Gen. 3)

I/Os per second: Oracle 137,000 vs. NetApp 68,035
$ per IOPS: Oracle $2.99 vs. NetApp $7.48

SPC-1 results: a storage benchmark that represents a typical database workload
* Source: Storage Performance Council, www.storageperformance.org (NetApp 3270 vs. Oracle 7420)

Full Complement of Integrated Software

DATA PROTOCOLS
Fibre Channel
iSCSI
InfiniBand over IP/RDMA
iSER
SRP
NFS v3 and v4
CIFS
HTTP
WebDAV
FTP
ZFS NDMP v4

DATA SERVICES
Single, double & triple parity RAID (RAID-Z, Z2, Z3)
Mirroring & triple mirroring
Hybrid storage pool
End-to-end data integrity
Snapshots
Quotas
In-line dedup
Compression
Thin provisioning
Antivirus via ICAP protocol
Online data migration
Clustering
Hybrid Columnar Compression
Remote replication (optional)
Cloning (optional)

MANAGEMENT
Browser and CLI interface
Management dashboard
Hardware/component view
Role-based access control
Phone home
Event- and threshold-based alerting
DTrace Analytics
Scripting
Workflow automation
Advanced networking
DFS standalone namespace
Source-aware routing

ZFS Storage Appliances
Only NAS with Application-Oriented Analytics

Automatic real-time visualization of application and storage workloads

Customer use examples of ZFS analytics:
System utilization: biotech company pinpoints disk bottlenecks (high utilization % or high IOPS) and under-use of disks
System performance: web services company resolves client read performance issues by correlating them with specific storage write operations
Tuning: financial services company pinpoints partial block update issues not seen with NetApp
Load balancing: US bank visualizes and rebalances system resources for critical file systems

Oracle's Pillar Axiom 600
5 Generations of Pillar Axiom Software

Pillar Axiom 600: one model that linearly scales both capacity and performance

Minimum configuration: a single active-active Pillar Axiom Slammer with one Pillar Axiom Brick
2 control units
12 drives
3.6 TB capacity
48 GB cache

Maximum configuration: up to 4 active-active Pillar Axiom Slammers with 64 Pillar Axiom Bricks
8 control units
Up to 832 drives
Up to 1.6 PB capacity
192 GB cache
128 RAID controllers
SATA, FC, or SSD storage classes

Oracle's Pillar Axiom 600
5 Generations of Pillar Axiom Software

Features include:
Patented Quality of Service (QoS) feature
Protocols: FC, iSCSI, CIFS, NFS
Management: multi-Axiom management, application profiles, thin provisioning, storage domains, path management software
Data protection and mobility: replication, volume copy, clones
Engineered integration with Oracle software: HCC, OEM, ASM, SAM, Oracle VM

Introducing the Pillar Axiom 600
The Core Value Proposition

Pillar Axiom 600: Oracle's Axiom lowers IT costs and speeds up ROI with advanced Quality of Service, simple data management, and industry-leading utilization and scalability.
Easy-to-use enterprise storage: data and storage services can be promoted or demoted on the fly to increase or decrease performance and priority as business and application priorities change.
Scalable and elastic: an ideal storage platform for virtual infrastructure projects, IT data center consolidation projects, Oracle deployments, and bringing business-critical applications, such as financial and OLTP, online with the highest levels of performance and without tradeoffs in capacity utilization.
Industry-leading efficiency: utilization, performance, and protection tied to unique service levels. Consolidate your applications on a single storage platform.

Axiom 600 Terminology

Storage Controllers: Pillar Axiom Slammer (1 to 4 per Axiom)
Active-active SAN or NAS control units
48 GB cache
SAN: 4x 8 Gbps FC, 4x 4 Gbps FC, 8x 1 GbE iSCSI
NAS: 4x 10 GbE, 8x 1 GbE
Redundancy for high availability

Storage Enclosure: Pillar Axiom Brick (1 to 64 per Axiom)
Dual RAID controllers
Embedded RAID-5 and RAID-10
SATA, Fibre Channel, and SSD drives
Built-in hot spare
Hot-swappable drives and RAID controllers

Management Controller: Pillar Axiom Pilot (1 per Axiom)
Dual control units
Active/standby high-availability cluster
(2) hard drives for configuration and log cache
(6) 10/100/1000 Base-T Ethernet ports

Top Technology Differentiators

Feature: Quality of Service
Function: application prioritization and contention management that enables multiple applications to efficiently co-exist on the same storage system
Benefits: applications are assigned I/O resources according to their business value and are not relegated to first come, first served; increases overall efficiency and utilization of the storage system

Feature: Modular Architecture
Function: ability to dynamically scale both performance and capacity by independently adding Slammers (up to 4) and Bricks (up to 64)
Benefits: maximum performance/utilization regardless of the size of the configuration; ability to grow and rebalance the storage pool based on changing business environments

Feature: Distributed RAID
Function: achieves superior scalability and performance, even during drive rebuilds, by moving RAID local to the storage enclosures
Benefits: ensures predictable performance scaling as capacity is added; higher reliability by localizing the drive rebuild process to the storage enclosure and reducing the RAID rebuild window

Quality of Service Model

First Class = Tier 1; Business Class = Tier 2; Coach = Tier 3

QoS is more than where you sit; it's also the priority and class of service you get:
How fast you board and exit
The number of attendants per passenger
The seat size and leg room
The entertainment selections

Axiom Architecture: Breaking the FIFO Model

[Diagram: in a typical multi-tier array, I/O from all virtual machines funnels through a single FIFO queue, so arrival order decides performance. The Axiom instead routes each virtual machine's I/O to a Premium, Medium, or Low priority queue, aligning I/O performance with the business value of the application. A conceptual sketch follows.]
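The contrast can be reduced to a few lines: under FIFO, arrival order alone decides dispatch; with per-priority queues, business value decides. A conceptual sketch (illustrative only, not the Axiom's internal scheduler):

```python
# Conceptual contrast with FIFO (illustrative; not the Axiom scheduler):
# requests are dispatched by priority class instead of arrival order.
from collections import deque

PRIORITIES = ("premium", "medium", "low")

class PriorityScheduler:
    def __init__(self):
        self.queues = {p: deque() for p in PRIORITIES}

    def submit(self, request, priority):
        self.queues[priority].append(request)

    def next_request(self):
        for p in PRIORITIES:          # premium drains before medium, medium before low
            if self.queues[p]:
                return self.queues[p].popleft()
        return None

sched = PriorityScheduler()
sched.submit("vm3-read", "low")       # arrives first
sched.submit("vm1-read", "premium")   # arrives second, dispatched first
assert sched.next_request() == "vm1-read"
```

A production scheduler would use weighted shares rather than strict priority so the low queue is never starved; the sketch only shows the break from arrival-order dispatch.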

Application-Aware Profiles
Oracle Database Platform Example: LUN Performance Profile

Data Type             | Storage Class | Priority       | Access Bias | I/O Bias
Control files         | SSD or FC     | High / Premium | Mixed       | Mixed
Database index        | SSD           | High           | Mixed       | Mixed
Database tables       | SATA          | Medium         | Mixed       | Mixed
Temporary files       | SATA          | Medium         | Mixed       | Mixed
Online redo log files | FC            | High           | Sequential  | Write
Archive log files     | SATA          | Low            | Sequential  | Write

Modular Scaling with Modular Components

[Diagram: an Axiom assembled from Bricks (storage enclosures), Slammers (storage controllers), and a Pilot (management controller).]

True Performance Scalability: Proof

Scale-up and Scale-out Design

Linear scaling of bandwidth and performance by adding Slammers
Linear scaling of performance and capacity by adding Bricks

Pillar Axiom 600: Distributed RAID

Extending QoS: Pillar Axiom Storage Domains

No data co-mingling:
Isolate application data or workloads to a physical location
Separate user groups or departments to a physical location
Separate protocols (NAS or SAN) to a physical drive-grouping location

Create up to 64 physical domains in a single Axiom

Pillar Axiom MaxRep Replication for SAN

Offers different topologies with a single, easy-to-use interface:
Synchronous replication
Asynchronous replication
1-to-many and many-to-1 system replication
Multi-hop replication

Application-consistent recovery:
Microsoft applications
Oracle databases
Virtual server environments

Local or remote continuous protection and recovery

WAN optimization features:
Bandwidth throttling
Encryption
Compression

Transforming the Technology Stack


Want to know more?


Contact: peter.vanspaandonk@uptime.be
