SPARC T5: Optimize Systems, Multiply Performance

Scale Efficiently
- Scales to 8 sockets using a directory
- Minimizes latency
- Avoids congestion
- Maximizes bandwidth

Advanced Power Management
- Maximizes peak performance
- Manages thermal and current loads
- Scales elastically

SPARC T5 server family: SPARC T5-1B, SPARC T5-2, SPARC T5-4, SPARC T5-8

Copyright 2012, Oracle and/or its affiliates. All rights reserved.
SPARC T5 Servers
Product Line Overview

                SPARC T5-1B       SPARC T5-2                   SPARC T5-4                    SPARC T5-8
Processor       SPARC T5 3.6GHz   SPARC T5 3.6GHz              SPARC T5 3.6GHz               SPARC T5 3.6GHz
Cores, Threads  16, 128           32, 256                      64, 512                       128, 1024
Max DIMMs       16                32                           64                            128
Max Memory      256GB             512GB                        2TB                           4TB
I/O             -                 8 LP x8 PCIe 3.0,            16 LP x8 PCIe 3.0,            16 LP x8 PCIe 3.0,
                                  4x 10GbE ports               4x 10GbE ports                4x 10GbE ports
Form Factor/RU  Blade             Rack 3RU                     Rack 5RU                      Rack 8RU
T5-1B Board
[Board layout diagram: T5 CPU, 16 DDR3 DIMMs on BoBs, REM, PCIe switch, service processor]
T5-2 Chassis
[Chassis diagram: memory risers, T5 CPUs, service processor]

T5-2 Front
[Front panel diagram: locator LED/button, power button, fault/status/over-temp LEDs, disks 0-5, RFID/serial number, HD-15 VGA port, 2x USB 3.0 ports, DVD]
T5-2 Rear
[Rear panel diagram: SP serial and SP 10/100 network ports, HD-15 VGA port, 4x 10GbE ports, 2x USB 3.0, PCIe slots 1-8, PSU 0/1 with AC0/AC1 inlets, system rear indicators]
T5-4 Front
[Front view: processor modules PM0/PM1, main module (entire board), disks 0-7, PSU 0/1, power button, fault/status/over-temp LEDs, rear fan/EM indicator, RFID/serial number, SP serial port, HD-15 VGA port]

T5-4 Rear
[Rear view: fan modules 0-4, PCIe carrier hot-plug buttons and LEDs, PCIe slots 1-16, rear I/O module (RIO), AC0/AC3 inlets and OK LEDs, PSU status LEDs, SP serial and SP 10/100 network ports, 4x 10GbE ports, 2x USB 3.0, system rear indicators]

Confidential Oracle Internal
DIMMs/BoB and PCIe Gen Support
- SPARC T3: 1.65GHz, 16 cores (S2); 2 memory controllers, 2 BoBs/memory controller, 4 DIMMs/BoB (4 BoBs/CPU socket); PCIe 2.0
- SPARC T4: 2.85GHz, 3.0GHz (S3 cores); 2 memory controllers, 2 BoBs/memory controller, 4 DIMMs/BoB (4 BoBs/CPU socket); PCIe 2.0
- SPARC T5: 3.6GHz, 16 cores (S3); 4 memory controllers, 2 BoBs/memory controller, 2 DIMMs/BoB (8 BoBs/CPU socket); PCIe 3.0
Feature            SPARC T5-4                                SPARC T4-4
Form Factor        5RU, 28" deep                             5RU, 28" deep
CPU                4x SPARC T5 3.6 GHz (512 threads)         4x SPARC T4 3.0 GHz (256 threads)
Memory             DDR3, 2TB max, 64x slots                  DDR3, 2TB max, 64x slots
Network            4x 10GbE (requires 2 separate             4x 1GbE + 8x 10GbE (XAUI)
                   QSFP connectors)
Internal Storage   Up to 8x 2.5" SAS 3.0 or SSD,             Up to 8x 2.5" SAS 2.0, can use up to
                   hot-plug                                  4x SATA SSDs, hot-plug

32x DIMMs, 16x per T5
CL Routing
[Coherence-link (CL) routing diagrams: partition-switch and link assignments among CPUs C0-C7 across processor modules PM0-PM3 and filler modules (PFM) for the MP4 and MP8 midplane configurations]
T5-1B Block Diagram
[T5 CPU0 with 8 BoBs, host & data flash, TPM, FPGA, CPU debug port; two PCIe switches (Gen3/Gen2) feeding an LSI SAS controller with two disks, FEM0/FEM1 and PCI-EM0/1 to NEM0/NEM1 via the midplane; SP module (Emulex Pilot 3) with USB, VGA (HD15) and serial (RJ45), USB hub and USB 3.0 host controller, dual GigE sideband management, Ethernet management to the CMM]
T5-2 Block Diagram
[Two T5 CPUs (T5-0, T5-1), each with DIMMs on 8 BoBs; two PCIe switches; two SAS/SATA IO controllers serving the HDDs; SATA DVD and USB 3.0 hub/host; service processor module with host & data flash, TPM, FPGA, DRAM, SPI/NAND flash, USB 2.0 hub, internal USB and VGA]
T5-4 Block Diagram
[Main module (MM) motherboard with CPUs and PCIe switches SW0-SW6 feeding 16 x8 slots on the PCI Express backplane (EB); two SAS 2308 controllers to two 4-disk backplanes (SBP0: disks 0-3, SBP1: disks 4-7); rear IO board (RIO) with USB, VGA/DB15, serial management and 10/100 Ethernet management; front IO with VGA, serial management and USB; service processor (SP) with video and serial muxes; dual 10Gb NICs (Net0/Net1); fan board, clock synthesizer/buffers, DC-DC converters, debug connectors]
MP4, MP8: T5-4 or T5-8 Midplane
[Midplane routing diagram: each T5 CPU node (CM0, CM1) exposes MCU0-MCU3, each with two links (L0/L1) to FSR0-FSR7 and BoB0-BoB7; IOS 0/1 and coherence-link routers CLR0-CLR6 interconnect the nodes]
T5 PCIe Subsystem
- Dual x8 PCI Express Gen 3 ports provide 32 GB/s peak bandwidth
- Supports atomic Fetch-and-Add, Unconditional-Swap and Compare-and-Swap operations
- Accelerates virtualized I/O with Oracle Solaris VMs
- 128k virtual function address spaces ensure direct SR-IOV access for all logical domains
- 64-bit DVMA space reduces I/O mapping overhead, improving network performance
- Guarantees fault and performance isolation among guest OS instances
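The 32 GB/s peak figure can be reproduced with a little arithmetic. A sketch, assuming the "peak" number counts the raw transfer rate in both directions across both ports, before 128b/130b encoding overhead:

```python
# Raw signaling rate of one PCIe Gen 3 lane, in GT/s.
GT_PER_LANE = 8.0
LANES = 8          # each port is x8
PORTS = 2          # T5 integrates two x8 ports
DIRECTIONS = 2     # PCIe is full duplex; peak counts both directions

# Raw peak: 8 GT/s * 8 lanes = 64 Gb/s = 8 GB/s per port per direction.
raw_per_port_dir = GT_PER_LANE * LANES / 8          # GB/s
raw_peak = raw_per_port_dir * PORTS * DIRECTIONS    # GB/s
print(raw_peak)                                     # 32.0

# Effective data rate is slightly lower: Gen 3 uses 128b/130b encoding.
effective_peak = raw_peak * 128 / 130
print(round(effective_peak, 1))                     # 31.5
```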
T5 PCIe Progression (T4 → T5)
- Peak I/O bandwidth: 16 GB/s → 32 GB/s
- Address width: 44-bit → 48-bit
- Transaction ID identification on MSI and MSI-X: No → Yes
- PCIe retimer: No → Yes
Native Config (T5-4 only), Config ID 7
[PCIe topology diagram: CPU0 and CPU1 on PM1 feed switches 0-6; SAS0/SAS1 x8, NET0/NET1 x8, SP VGA, front/rear USB and debug slots hang off switches 5 and 6; the 16 low-profile hot-plug x8 slots map alternately to c0 and c1 (two slots unconnected); x16 connectors are x8 electrical, with PCIe retimers; dotted-line devices reside on the RIO]
T5-4/8 Native 2-Socket Configuration with One Root Domain. Block fill color identifies Root Domain ownership; block outline color identifies association to PM. Single non-redundant domain. Switch 2 slots are crossed to maintain a consistent slot population order. Second-level Switch 6 is partitioned differently from other configs. Slots are drawn in order from left to right as in the actual chassis.
Native Config
[PCIe topology diagram: four-socket configuration, CPU0-CPU3 feeding switches 0-6; the 16 x8 slots map to c0-c3; SAS, NET, SP VGA, USB and debug devices as in the 2-socket config; dotted-line devices reside on the RIO]
T5-4/8 Native 4-Socket Configuration with One Root Domain. Block fill color identifies Root Domain ownership; block outline color identifies association to PM. Single non-redundant domain. Switch 2 slots are crossed to maintain a consistent slot population order. Second-level Switch 6 is partitioned differently from other configs. Slots are drawn in order from left to right as in the actual chassis.
Native Config
[PCIe topology diagram: CPU0/CPU1 on PM1 and CPU6/CPU7 on PM3 active, with filler modules (PFM) in the unpopulated positions; the 16 x8 slots map to c0, c1, c6 and c7]
Native Config
[PCIe topology diagram: six sockets active (CPU0-CPU3 plus CPU6/CPU7, with a PFM in the remaining position); the 16 x8 slots map to c0-c3, c6 and c7]
Native Config 0
[PCIe topology diagram: all eight sockets (CPU0-CPU7) across PM0-PM3; each of the 16 x8 slots maps to one CPU, two slots per CPU (c0-c7); SAS0/SAS1, NET0/NET1, SP VGA, front/rear USB and debug devices on switches 5 and 6; dotted-line devices reside on the RIO]
DIMM population rules (per Processor Module):
All PMs must be half populated (16 DIMMs) or fully populated (32 DIMMs).
If a PM is half populated, DIMMs must be in Channel 0 only; see the Processor Module block diagram for DIMM channel locations.
Note that these rules imply that both T5 (CPU) nodes on a PM are configured the same.
For T5-8 (8P config): PM0 and PM3 must be configured the same, and PM1 and PM2 must be configured the same.
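The population rules above can be captured in a small validation sketch (a hypothetical helper, not an Oracle tool; it checks only the per-PM counts and the T5-8 pairing rules):

```python
def check_t5_dimm_population(dimms_per_pm):
    """Validate per-PM DIMM counts against the rules above.

    dimms_per_pm: list with one entry per Processor Module.
    Returns "ok" or a string describing the first violation.
    """
    for pm, n in enumerate(dimms_per_pm):
        # Each PM must be half (16) or fully (32) populated.
        if n not in (16, 32):
            return f"PM{pm}: {n} DIMMs (must be 16 or 32)"
    if len(dimms_per_pm) == 4:  # T5-8 (8P config) pairing rules
        if dimms_per_pm[0] != dimms_per_pm[3]:
            return "PM0 and PM3 must be configured the same"
        if dimms_per_pm[1] != dimms_per_pm[2]:
            return "PM1 and PM2 must be configured the same"
    return "ok"

print(check_t5_dimm_population([16, 32, 32, 16]))  # ok
print(check_t5_dimm_population([16, 32, 16, 16]))  # pairing violation
```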
T5-4 (5U) Disk Options
- SAS-2 HDDs: 300GB, 600GB, 900GB @ 10K RPM
- SATA SSDs: 100GB, 300GB
- Disk LEDs: Ready to Remove, Fault, Status
SPARC T5 Processor Features
- 16 S3 cores, 16-128 strands @ 3.6GHz
- Single or multi-threaded operation per core
- 8MB shared L3$
- Integrated I/O: 2 x8 lane PCIe 3.0 @ 8GT/s, double the I/O bandwidth of T4
- System scalability: 7 coherence ports for scalability to 8 sockets (8S)
- SPARC Core S3: 1-8 strand dynamically threaded pipeline, ISA-based crypto-acceleration
- Power Management: Dynamic Voltage Frequency Scaling (DVFS), downclock, overclock
[T5 processor block diagram: 16 SPARC S3 cores (each with an FGU, crypto unit, 16KB L1I$, 16KB L1D$ and 128KB L2$) connect to eight shared 1MB 16-way L3$ banks (B0-B7); four memory control units each drive two BoBs; four coherence units and the I/O subsystem attach to seven coherency links (Link 0-6) at 12.8 Gbps per lane, 12 lanes per link]
T5 Processor Overview
[Die diagram: 16 SPARC cores surrounding L3$ banks and a crossbar, with MCUs, MI/O, power management, PCIe Gen3 and coherence SerDes at the periphery]
- 16 S3 cores @ 3.6GHz
- Crossbar providing 80 GB/s bandwidth
- SerDes coherence links for scalability
- Integrated 2x8 PCIe Gen 3
- Advanced Power Management with DVFS
(Inclusive here refers to the fact that a cached entry is always present in the next higher level of cache.) Each core on SPARC T5 is capable of OoO execution and dual-issue of instructions.
S3 Core Recap
- 28nm port from the 40nm T4
- Out-of-order, dual-issue
- High frequency (3.6GHz) achieved with a 16-stage integer pipeline
- Dynamically threaded, one to eight strands
- Accelerates 16 encryption algorithms and random number generation
[Comparison table, partially recovered: cryptographic capabilities of SPARC T5/M6 servers vs IBM Power7, IBM Power7+ and Intel Westmere/Sandybridge — operational model (T5: userland ISA; Power7: none; Power7+: 3 accelerators shared across 8 cores); RSA/ECC and AES support; message digest / hash functions; API support (PKCS#11); virtualization support (T5: Solaris Zones, Oracle VM for SPARC; Intel: Intel VT)]
S3 Core Overview
- 8-way threaded, dual-issue, OoO execution, in-order commit
- Dynamically threaded with hardware-optimized resource sharing
- Support for Critical Threads
- Deep pipeline for high frequency operation (3 GHz in 40 nm)
- Balanced single-thread and multi-thread performance
  - 5X better single-thread than SPARC T3 with equivalent multi-thread performance
- Enhanced instruction set to accelerate Oracle SW stack
  - PAUSE, fused compare-branch
- Integrated user-level cryptographic acceleration
  - DES/3DES, AES, Kasumi, Camellia, MD5, SHA-1, SHA-224/256/384/512, RSA, DSA, CRC32c
- Foundation core for future technology / product nodes
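Because the acceleration is exposed as unprivileged ISA instructions, application code does not change to use it: on Solaris, crypto-aware libraries route these operations through the hardware transparently. This generic Python sketch computes two of the accelerated digests; the same calls run identically (just slower) on any CPU without the instructions:

```python
import hashlib

msg = b"T5 accelerates these digests in hardware"

# SHA-256 and MD5 both appear in the accelerated-algorithm list above.
# The point is that userland code like this is unchanged either way --
# acceleration is an implementation detail of the library stack.
sha = hashlib.sha256(msg).hexdigest()
md5 = hashlib.md5(msg).hexdigest()
print(sha)
print(md5)
```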
[Core comparison table, partially recovered: L1 cache sizes (64KB, 16KB, 8KB), threads per core, issue width (1, 2 or 4 per cycle), out-of-order execution (S3: yes, 36-instruction window), crypto acceleration (none, SPU, or ISA-based)]
With the Critical Threads optimization, the core devotes all of its resources to the sole running strand, so that strand runs as quickly as possible; i.e., no other threads will be assigned to that core, given that the following conditions are met:
- The system is not over-committed, defined as more runnable threads than available CPUs.
- All threads are equal: no thread has special hardware properties; all threads have identical hardware properties that potentially allow any thread to access any hardware resource it needs.
Critical Threads Applicability
- Logwriter, LMS: up to 30% improvement in efficiency
- Java (JVM) — compiler threads, GC and priority mapping, smooth GC, startup: up to 2x improvement for apps; support for the JVM and for Java apps to be CT-aware is integrated in JDK7u4
- Coherence: up to 20% improvement in throughput; integrated in Coherence version 3.7.1 Patch 1
- Solaris support: S11U1 / S10U11
T5 System Interconnects
[Topology diagrams: glueless single-socket (1-way), dual-socket (2-way), 4-way, 6-way and 8-way T5 system configurations]
- Directory-based coherence scales from smaller configurations up to 8 sockets
- Supports single lane failover
- Cache-to-cache line transfers between nodes
- Dynamic congestion avoidance routes inter-node data around congested links
[Memory diagrams: M5/T5 sockets with DDR3-1066 DIMMs over a point-to-point local interconnect]
Memory Bandwidth: 16 DDR channels x 1066 MT/s x 8 bytes/DDR channel = 128 GB/sec
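Working the bandwidth formula through (note the channel rate is 1066 MT/s, megatransfers per second; the exact product comes out a bit above the 128 GB/s quoted on the slide, which appears to be rounded down):

```python
channels = 16            # DDR channels per system, as drawn above
rate = 1066e6            # DDR3-1066: 1066 million transfers per second
width = 8                # each DDR channel is 64 bits = 8 bytes wide

bandwidth = channels * rate * width / 1e9   # GB/s
print(round(bandwidth, 1))                  # ~136 GB/s raw; slide says 128
```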
Latency for T5
- Local Memory: 136ns
- Remote Memory: 209ns
- Cache to Cache: 127ns (T5-2), 146ns (T5-4), 155ns (T5-8)
Scalability of T4 vs T5, by the numbers

Link Bandwidth
- T4 (snoopy-based coherence protocol): 8-node snoops consume 25% of link bandwidth, increasing linearly with more nodes. Address serialization is done at the home node, which broadcasts the snoop request to all nodes; every node except the requester must participate in the snoop operation and return a snoop response to the requesting node. Message broadcast and responses consume a lot of link bandwidth.
- T5 (directory-based coherence protocol): an 8-node directory consumes 5% of link bandwidth. Address serialization is done at the directory node, which keeps track of which nodes hold each cache line, eliminating the need for broadcasting and relieving the L3$ of unnecessary foreign snoop operations. The directory filters the snoops sent to the sharing nodes, allowing link bandwidth to be used more efficiently.

L3$ Performance
- T4: the L3$ must participate in every snoop request from every other node; its performance can drop under heavy foreign snoop traffic.
- T5: only the L3$ of the selected node participates in a foreign snoop operation, so the L3$ sees far less distraction from foreign snoop requests.

Scalability
- T4: limited to small-scale systems.
- T5: easy to scale to large processor counts.
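The link-bandwidth argument above comes down to message counts per miss. A toy model (assuming one snoop and one response message each of equal size; real protocols differ in detail):

```python
def snoopy_msgs(nodes):
    """Broadcast protocol: the home node snoops every other node and
    each of them sends a response back to the requester."""
    snoops = nodes - 1
    responses = nodes - 1
    return snoops + responses

def directory_msgs(sharers):
    """Directory protocol: the directory forwards the snoop only to
    the nodes it has recorded as sharers."""
    return 2 * sharers

# With 8 nodes and a line typically cached in one place:
print(snoopy_msgs(8))      # 14 messages, growing with node count
print(directory_msgs(1))   # 2 messages, independent of system size
```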
Memory Controller of T4 vs T5

                  T4 MCU               T5 MCU
Link speed        6.4 Gb/s             12.8 Gb/s
Low-power states  Not supported        L0s, L1
Memory Buffer     Intel Milbrook2 MB   Advanced in-house MB
DDR3 Protocol     Burst length of 4    Burst length of 8
DDR3 Speed        800/1066             1066
DDR3 Device       1Gb/2Gb              2Gb/4Gb
RAS: Definition of Terms

Hot-plug: refers to the fact that a component can be plugged and unplugged without powering down the platform. It applies to both hot swap and hot service.

Hot service: refers to the ability to perform hot-plug operations with the additional necessity of some operator action (invocation of a CLI, or actuating a hot-service button on the component to be removed). The system will notify the user when it is safe to remove the component. Typical examples would be PCI Express modules.

Hot swap: refers to an operation where you walk up to the box, yank something out, replace it, and you are done. Typical examples here are a single RAID disk or a power supply.
S11 FMA
Diagnosis engine on SP
Auto reconfigure on failure
Soft Error Rate Discrimination (SERD)
Bad page retirement
OS and SP watchdogs
FMA Component hot-upgradeable
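SERD's premise is that occasional correctable errors are normal; only a burst of them indicates a failing part. A minimal sketch of the idea (threshold and window values here are illustrative, not Oracle's):

```python
from collections import deque

class Serd:
    """Diagnose a fault only when more than n soft errors fall
    inside a sliding window of t seconds."""

    def __init__(self, n, t):
        self.n, self.t = n, t
        self.events = deque()

    def record(self, now):
        """Record one correctable error at time `now`; return True
        when the engine fires (i.e. diagnoses a fault)."""
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events[0] <= now - self.t:
            self.events.popleft()
        return len(self.events) > self.n

serd = Serd(n=3, t=60.0)
# Three errors within a minute are tolerated, the fourth fires,
# and after a quiet period the count effectively resets:
print([serd.record(t) for t in (0, 10, 20, 30, 200)])
```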
Hypervisor
- Enables software partitioning (LDoms): virtualization and failure containment
- Processor support for error clearing, correction and collection
T5/M5 Processor
- L1$ tag, status & data: parity protection, retry on error
- L2$/L3$ data: SEC/DED protection, cache-line sparing
- L2$/L3$ tags: SEC/DED protection, inline correction, cache-line sparing
- L2$/L3$ status & directory: SEC/DED protection, inline correction
- Architectural registers and L2 cache: SEC/DED protection with precise trap and hypervisor correction and retry
- CRC-protected system interconnect with message retry and lane sparing

Central Directory and Switch (unique to M5)
- SEC/DED protection with inline correction
- Physical domain isolation
- Deconfigurable directory chips: no loss of functionality, minimized bandwidth loss
System
- Redundant scalability switch boards
- Redundant SPs with automatic failover
- Redundant clock boards
- Diagnosis to the FRU level on first fault
- Hot-plug processor/memory (post-RR)

Power and Cooling
- Advanced Power Management
- Redundant hot-swap fans
- Redundant hot-swap AC/DC
- Dual grid power

System I/O
- PCI Express end-to-end CRC
- PCI Express link retry
- Hot-plug low-profile PCI Express cards
- Redundant, hot-plug boot disks
- Alternate connections between M5 and IO controllers

Memory
- SDRAM soft errors: ECC protection and correction
- Extended ECC protection: 4-bit correction, pin steering
- Channel interconnect: CRC protection/message retry, lane sparing
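SEC/DED ("single error correct, double error detect") recurs throughout the lists above. A minimal Hamming(8,4) sketch of the concept — the real T5 ECC operates on much wider words with different codes:

```python
def secded_encode(nibble):
    """Encode 4 data bits into an 8-bit SEC/DED codeword:
    Hamming(7,4) plus one overall parity bit."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]          # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]          # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]          # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    overall = 0
    for b in bits:
        overall ^= b
    return bits + [overall]

def secded_decode(bits):
    """Return (data, status); status is 'ok', 'corrected' or 'double'."""
    syndrome = 0
    for pos in range(1, 8):          # XOR of positions of set bits
        if bits[pos - 1]:
            syndrome ^= pos
    parity = 0
    for b in bits:
        parity ^= b
    if syndrome == 0 and parity == 0:
        status = "ok"
    elif parity == 1:                # odd flip count: single-bit error
        bits = bits[:]
        if syndrome:                 # error inside the Hamming part
            bits[syndrome - 1] ^= 1  # flip it back
        status = "corrected"         # syndrome 0: the parity bit itself flipped
    else:                            # even flips but nonzero syndrome
        return None, "double"        # detected, uncorrectable
    data = sum(b << i for i, b in enumerate([bits[2], bits[4], bits[5], bits[6]]))
    return data, status

word = secded_encode(0b1011)
word[5] ^= 1                         # inject a single-bit error
print(secded_decode(word))           # (11, 'corrected')
```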
Fault Management
Knowledge Articles in MOS
ILOM fdd Diagnosis
Faults and Alerts
No ALOM Compatibility
ILOM FMA Captive Shell
Sideband Service Processor Network Connection
New ILOM Fault Notification (SNMP Trap)
ASR Support
FMA on M5 ILOM also applies to T5 ILOM, except for M5 specific features
FMA's Fault Proxy is used to keep ILOM's fault manager in sync with Solaris' fault
manager. Both will display the sum of all faults in the system.
Faults can be repaired from either side.
Fault Proxy communicates via the Ethernet Over USB connection.
IO faults are still diagnosed by Solaris.
For faults which diagnose resources as unusable, ILOM will add those resources to
the DDB. Resources excluded on next host reset.
When faults are repaired, ILOM automatically updates the DDB. Bringing
components back online requires a host reset.
Runs at SP boot. Tests devices on the SP FRU and its Ethernet port.
Status stored and converted to ereports after ILOM boots.
Fault Proxy Transport
[Diagram: ereports flow from the SP (hostd, FETD) and from IO domains to the control domain, and faults flow back, over ETM endpoints connected via LDC and TCP/IP (ip-transport)]
IO ereports are forwarded from the SP to the control domain, and then on to any relevant IO domain.
Faults are proxied between the SP, the control domain and any IO domains to keep them in sync.
The SP and the control domain can view and manage all faults in the system.
An IO domain can only view and manage faults local to the domain.
A T5-8 will not operate with only 5 or 7 CPUs configured; one more CPU must be chosen to be deconfigured.
For an 8-way, if we fault a CPU, we will offline the other CPU on the same PM.
For a 6-way, if we fault a CPU, we will also offline the other CPU on the same PM.
ASR Support
SPARC T5 servers will be supported by ASR (Automatic Service
Request) at release
Continues use of sunHwTrapFaultDiagnosed SNMP notification
Telemetry for ILOM fdd diagnosis
Supports platform and FRU identity
Supports multi-suspect list