Steve Nasypany
nasypany@us.ibm.com
Agenda
What This Presentation Covers
PowerVM Review
Monitoring Questions
Topology/System Aggregation
VIOS Summary
Metrics/Tools
CPU
Memory
IO
Shared Processor Pools are a subset or all of the physical CPUs in a system. The goal is
to virtualize all partitions to maximize exploitation of physical resources.
Shared Pool partitions run on Virtual Processors (VP). Each Virtual Processor maps to
one physical processing unit in capacity (a physical core in AIX).
Capacity is expressed in the form of 10% CPU units, called Entitlement
The Entitlement value for an active partition is a guarantee of processing resources
The sum of partition entitlements must be less than or equal to the physical resources in the shared pool
Desired:
Desired size of partition at boot time
Minimum:
Partition will start with less than Desired, but won't start if Minimum is not available
Maximum:
Dynamic changes to desired cannot exceed this capacity
Capped vs Uncapped
Capped:
Entitlement is a hard limit, similar to a dedicated partition
Uncapped:
Capacity is limited by unused capacity in pool and number of Virtual Processors a
partition has configured
When the pool is constrained, Variable Capacity Weight setting provides automatic load
balancing of cycles over entitlement
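A quick way to confirm these settings from inside a partition is lparstat -i (the field names are standard; the values shown here are only illustrative):

# lparstat -i | grep -E "Mode|Entitled Capacity|Online Virtual CPUs|Variable Capacity Weight"
Mode                     : Uncapped
Entitled Capacity        : 0.50
Online Virtual CPUs      : 4
Variable Capacity Weight : 128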
Questions
How do we monitor for shared CPU pool constraints?
AIX provides metrics to show physical and entitlement utilization on
each partition
AIX optionally provides the amount of the shared pool that is idle; when you are out of shared pool, you are constrained
What about Hypervisor metrics?
There are really no metrics that see into the hypervisor layer
Metrics like %hypv in AIX lparstat are dominated by idle time, as idle
cycles are ceded to the hypervisor layer
This presentation will help you determine which performance metrics
are important at the partition and frame level
Questions
What product is best for interactive and short-term analysis of AIX
resources?
nmon provides access to most of the important metrics required for
benchmarks, proof-of-concepts and regular monitoring
CPU: vmstat, sar, lparstat, mpstat
Memory: vmstat, svmon
Paging: vmstat
Hdisk: iostat, sar, filemon
Adapter: iostat, fcstat
Network: entstat, netstat
Process: ps, svmon, trace tools (tprof, curt)
Threads: ps, trace tools (curt)
nmon Analyser & Consolidator provide free and simple trend reports (a sample recording invocation is shown below)
Not provided in AIX/nmon:
Java/GC: must use Java tools
Transaction times: application specific
Database: database products
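For Analyser/Consolidator trending, nmon must first record to a file in spreadsheet mode; a typical invocation (the interval and count values are examples only):

# nmon -f -t -s 60 -c 1440    (records 24 hours at 60-second intervals, including top processes)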
nmon ODR
Currency/Naming
The majority of commercial products supporting PowerVM/AIX monitoring
provide access to all of the important virtualization metrics
Metric naming is a problem. In many cases, different implementations
display the same data but use different naming conventions. Any
customer wanting to evaluate these products will need an AIX
specialist to study the products' metric definitions to compare apples-to-apples.
Many products pre-package user interface views that aggregate these
metrics
But usage modes for recording, post-processing, capacity planning
can all vary
Many customers are using products in older dedicated modes and
are not aware of differences in monitoring virtualized systems
Customers interested in a particular solution should evaluate each product
for complete support. Most, if not all, of these products are available for trial
evaluation.
VIOS
KPX_memrepage_Info
KPX_vmm_pginwait_Info
KPX_vmm_pgfault_Info
KPX_vmm_pgreclm_Info
KPX_vmm_unpin_low_Warn
KPX_vmm_pgout_pend_Info
KPX_Pkts_Sent_Errors_Info
KPX_Sent_Pkts_Dropped_Info
KPX_Pkts_Recv_Errors_Info
KPX_Bad_Pkts_Recvd_Info
KPX_Recv_pkts_dropped_Info
KPX_Qoverflow_Info
KPX_perip_InputErrs_Info
KPX_perip_InputPkts_Drop_Info
KPX_perip_OutputErrs_Info
KPX_TCP_ConnInit_Info
KPX_TCP_ConnEst_Info
KPX_totproc_cs_Info
KPX_totproc_runq_avg_Info
KPX_totproc_load_avg_Info
KPX_totnum_procs_Info
KPX_perproc_IO_pgf_Info
KPX_perproc_nonIO_pgf_Info
KPX_perproc_memres_datasz_Info
KPX_perproc_memres_textsz_Info
KPX_perproc_mem_textsz_Info
KPX_perproc_vol_cs_Info
KPX_Active_Disk_Pct_Info
KPX_Avg_Read_Transfer_MS_Info
KPX_Read_Timeouts_Per_Sec_Info
KPX_Failed_Read_Per_Sec_Info
KPX_Avg_Write_Transfer_MS_Info
KPX_Write_Timeout_Per_Sec_Info
KPX_Failed_Writes_Per_Sec_Info
KPX_Avg_Req_In_WaitQ_MS_Info
KPX_ServiceQ_Full_Per_Sec_Info
KPX_perCPU_syscalls_Info
KPX_perCPU_forks_Info
KPX_perCPU_execs_Info
KPX_perCPU_cs_Info
KPX_Tot_syscalls_Info
KPX_Tot_forks_Info
KPX_Tot_execs_Info
KPX_LPARBusy_pct_Warn
KPX_LPARPhyBusy_pct_Warn
KPX_LPARvcs_Info
KPX_LPARfreepool_Warn
KPX_LPARPhanIntrs_Info
KPX_LPARentused_Info
KPX_LPARphyp_used_Info
KPX_user_acct_locked_Info
KPX_user_login_retries_Info
KPX_user_idletime_Info
KVA_memrepage_Info
KVA_vmm_pginwait_Info
KVA_vmm_pgfault_Info
KVA_vmm_pgreclm_Info
KVA_vmm_unpin_low_Warn
KVA_vmm_pgout_pend_Info
Networking
KVA_Pkts_Sent_Errors_Info
KVA_Sent_Pkts_Dropped_Info
KVA_Pkts_Recv_Errors_Info
KVA_Bad_Pkts_Recvd_Info
KVA_Recv_pkts_dropped_Info
KVA_Qoverflow_Info
KVA_Real_Pkts_Dropped_Info
KVA_Virtual_Pkts_Dropped_Info
KVA_Output_Pkts_Dropped_Info
KVA_Output_Pkts_Failures_Info
KVA_Mem_Alloc_Failures_Warn
KVA_ThreadQ_Overflow_Pkts_Info
KVA_HA_State_Info
KVA_Times_Primary_Per_Sec_Info
KVA_perip_InputErrs_Info
KVA_perip_InputPkts_Drop_Info
KVA_perip_OutputErrs_Info
KVA_TCP_ConnInit_Info
KVA_TCP_ConnEst_Info
Process
KVA_totproc_cs_Info
KVA_totproc_runq_avg_Info
KVA_totproc_load_avg_Info
KVA_totnum_procs_Info
KVA_perproc_IO_pgf_Info
KVA_perproc_nonIO_pgf_Info
KVA_perproc_memres_datasz_Info
KVA_perproc_memres_textsz_Info
KVA_perproc_mem_textsz_Info
KVA_perproc_vol_cs_Info
KVA_Firewall_Info
VIOS (cont)
KVA_Active_Disk_Pct_Info
KVA_Avg_Read_Transfer_MS_Info
KVA_Read_Timeouts_Per_Sec_Info
KVA_Failed_Read_Per_Sec_Info
KVA_Avg_Write_Transfer_MS_Info
KVA_Write_Timeout_Per_Sec_Info
KVA_Failed_Writes_Per_Sec_Info
KVA_Avg_Req_In_WaitQ_MS_Info
KVA_ServiceQ_Full_Per_Sec_Info
KVA_perCPU_syscalls_Info
KVA_perCPU_forks_Info
KVA_perCPU_execs_Info
KVA_perCPU_cs_Info
KVA_Tot_syscalls_Info
KVA_Tot_forks_Info
KVA_Tot_execs_Info
KVA_LPARBusy_pct_Warn
KVA_LPARPhyBusy_pct_Warn
KVA_LPARvcs_Info
KVA_LPARfreepool_Warn
KVA_LPARPhanIntrs_Info
KVA_LPARentused_Info
KVA_LPARphyp_used_Info
KVA_user_acct_locked_Info
KVA_user_login_retries_Info
KVA_user_idletime_Info
HMC
KPH_Busy_CPU_Info
KPH_Paging_Space_Full_Info
KPH_Disk_Full_Warn
KPH_Runaway_Process_Info
CA http://www.ca.com
HP GlancePlus http://www.hp.com/go/software
Metron Athene http://www.metron-athene.com/index.html
Orsyp Sysload http://www.orsyp.com/products/software/sysload.html
Power Navigator http://www.mpginc.com
SAP AIX CCMS Agents http://www.sap.com
Teamquest http://www.teamquest.com/products-services/full-product-serviceslist/index.htm
System Topology
Processors
Memory
I/O
Partitions
Active partitions: operating systems, names and types
Inactive partitions
System Aggregation
Virtualization allows, and even encourages, a system to contain multiple
independent partitions, all with their own resources
Ideally, monitoring tools will easily determine all of the active partitions on a
system and organize the data for those partitions together
Normally, this requires consulting with an HMC to identify the active
partitions on a system
With mobile partitions, the tools must accommodate the movement of
partitions between systems
If automatic detection of partitions to systems isn't possible, some means
of organizing them manually must be applied
Various products do automatic CEC aggregation
Nmon Analyzer and Consolidator allow manual aggregation
The most important metric for CPU pool monitoring is the Available Pool
Processor (APP) value. It represents the amount of a pool not currently
consumed, and can be retrieved from individual partitions or calculated by
aggregators
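For example, from any shared partition with "Allow performance information collection" enabled on the HMC, APP is visible as the app column of lparstat:

# lparstat 5 3    (app = idle capacity in the shared pool, in physical processor units)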
Metrics: CEC
CPU
Number of CPUs
Dedicated
Shared pool
Unallocated
Entitlement settings
Utilization %
Dedicated consumed
Pool consumed (alternatively, entitlement consumed or free)
Memory
Allocated, in use, computational, non-computational
Unallocated
IO
Aggregated adapter totals: read, write, IOPS, MB/sec
Entitlement Consumed
Capped partitions can only go to 100%
Uncapped partitions can go over 100%
How far depends on the number of Virtual Processors (VP)
Active VP count defines # of physical cores a partition can consume
If a partition has an entitlement of 0.5 and 4 VPs, the maximum entitlement
consumed is 800% (4.0/0.5 = 8), and the maximum physical consumption
is 4.0 cores
[Figure: SMT utilization reporting. With both hardware threads (Htc0, Htc1) busy, a core reports 100% busy. On POWER7 SMT2, a single busy thread reports ~70% busy; on POWER7 SMT4, a single busy thread with Htc1-Htc3 idle reports ~65% busy.]
POWER5 and POWER6 overstate utilization as the CPU utilization algorithm does not
account for how many SMT threads are active
With one or both SMT threads busy, a physical core reports 100% utilization
On POWER7, a single thread cannot exceed ~65% utilization. Values are calibrated
in hardware to provide a linear relationship between utilization and throughput
When core utilization reaches a certain threshold, a Virtual Processor is unfolded and
work begins to be dispatched to another physical core
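VP folding behavior can be inspected through the schedo tunables; vpm_xvcpus adds extra VPs beyond the computed requirement (0 is the default, shown here only as a check, not a tuning recommendation):

# schedo -o vpm_xvcpus
vpm_xvcpus = 0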
[Figure: Virtual Processor unfolding. A core running its SMT threads to ~80% busy activates another Virtual Processor, spreading the load to ~50% busy per core. POWER6 cores (proc0-proc3) dispatch to primary and secondary SMT threads; POWER7 adds tertiary threads.]
This is reasonable presuming the workload benefits from SMT. This will not work
with single-threaded hog processes that want to consume a full core.
AIX 6.1 TL8 & AIX 7.1 TL2 offer an alternative VP activation mechanism known as
Scaled Throughput. This provides an option to make POWER7 behavior more
"POWER6-like", though that is a generalized statement and not a technical one.
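Scaled Throughput is selected with the restricted schedo tunable vpm_throughput_mode; a minimal sketch (the value 2 dispatches work to two SMT threads per core before unfolding more VPs; restricted tunables prompt for confirmation):

# schedo -p -o vpm_throughput_mode=2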
The lparstat configuration banner reports the Logical CPUs (lcpu), Pool size (psize) and Entitlement (ent) settings; sample output:

%user %sys %wait %idle physc %entc lbusy   app  vcsw phint %hypv hcalls
----- ---- ----- ----- ----- ----- ----- ----- ----- ----- ----- ------
 84.9  2.0   0.2  12.9  0.40  99.9  27.5  1.59   521        13.5   2093
 86.5  0.3   0.0  13.1  0.40  99.9  25.0  1.59   518        13.1    490
physc
Shows the number of physical processors consumed. For a capped partition this
number will not exceed the entitled capacity. For an uncapped partition this number
could match the number of processors in the shared pool; however, this may be
limited by the number of online Virtual Processors.
%entc
Shows the percentage of entitled capacity consumed. For a capped partition the
percentage will not exceed 100%; however, for uncapped partitions the percentage
can exceed 100%.
app
Shows the number of available processors in the shared pool. Here the shared pool
size (psize) is 2 processors. "Allow performance information collection" must be
enabled on the HMC: view the properties for the partition, click the Hardware tab,
then Processors and Memory.
The app column (Available Pool) is reported in shared mode only.
topas -C cross-partition view (callouts: Available Pool; Local LPAR Physical Used):

Interval: 10                                  Thu Jul 28 17:04:57 2006
Memory (GB)              Processor
Monitored  : 24.6        Monitored  : 1.2     Shr Physical Busy: 0.30
UnMonitored:             UnMonitored:         Ded Physical Busy: 2.40
Available  : 24.6        Available  :
UnAllocated: 0           UnAllocated:         Hypervisor
Consumed   : 2.7         Shared     : 1.5     Virt. Context Switch: 632
                         Dedicated  : 5       Phantom Interrupts  : 7
                         Pool Size  : 3
                         Avail Pool : 2.7

APP = Pool Size - Shared Physical Busy (3.0 - 0.30 = 2.7)

Host     OS  M Mem InU Lp  Us Sy Wa Id PhysB  Ent %EntC Vcsw PhI
------------------------------shared-------------------------------
ptoolsl3 A53 c 4.1 0.4  2  14  1  0 84  0.08 0.50  15.0  208   0
ptoolsl2 A53 C 4.1 0.4  4  20 13  5 62  0.18 0.50  36.5  219   5
ptoolsl5 A53 U 4.1 0.4  4   5  0  0 95  0.04 0.50   7.6  205   2
-----------------------------dedicated-----------------------------
ptoolsl1 A53 S 4.1 0.5  4  20 10  0 70  0.30
ptoolsl4 A53   4.1 0.5  2 100  0  0  0  2.00
ptoolsl6 A52   4.1 0.5  1   5  5 12 88  0.10
AIX 5.3 TL-08 topas supports
Key Points:
Computational (avm)
Paging Rates
Scanning Rates

# vmstat -I 1
System configuration: lcpu=2 mem=912MB

kthr     memory             page                        faults          cpu
-------- -------------- ----------------------------- --------------- -----------
 r  b  p    avm   fre      fi  fo  pi  po    fr     sr   in    sy  cs us sy id wa
 1  1  0 139893  2340   12288   0   0   0     0      0  200 25283 496 77 16  0  7
 1  1  0 139893  1087    4503   0   8 733  3260 126771  415  9291 440 82 15  0  3
 3  0  0 139893  1088    9472   0   1  95  9344 100081  191 19414 420 77 20  0  3
 1  1  0 139893  1087   12547   0   6   0 12681  13407  207 25762 584 71 21  0  7
 1  2  0 140222  1013    6110   1  39   0  6169   6833  160 15451 471 83 11  0  5
 1  2  0 139923  1087    6976   0  31   2  7062   7599  183 19306 544 79 14  0  7
b
The number of threads blocked waiting for a file system I/O operation to complete.
p
The number of threads blocked waiting for a raw device I/O operation to complete.
avm
The number of active virtual memory pages, which represents computational memory requirements. The maximum avm
number divided by number of real memory frames equals the computational memory requirement.
fre
The number of frames of memory on the free list. Note: A frame refers to physical memory vs. a page which refers to virtual
memory.
fi / fo
File pages In and File pages Out per second, which represents I/O to and from a file system.
pi / po
Page Space Page In and Page Space Page Out per second, which represents paging.
fr / sr
The number of pages scanned (sr) and the number of pages stolen, or freed (fr). The ratio of scanned to freed pages represents
relative memory activity; it starts at 1 and increases as memory contention increases. In other words, examine how many pages
must be scanned (sr) to steal fr pages. Note: Interrupts are disabled at times when lrud is running.
memory pages: Total Real Memory
lruable pages: Memory addressable by lrud
free pages: Free List
memory pools: One lrud per memory pool
pinned pages
maxpin percentage
minperm percentage
maxperm percentage
numperm percentage
file pages: JFS-only, or reports client if no JFS
numclient percentage: % file cache if JFS2 only
maxclient percentage
client pages: JFS2/NFS
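A hedged one-liner to pull just these counters from a partition (field names as printed by vmstat -v):

# vmstat -v | grep -E "memory pools|perm|client"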
# svmon -G
               size      inuse       free        pin    virtual
memory       233472     125663     107809     108785     140123
pg space     262144      54233

               work       pers       clnt      lpage
pin           67825          0          0      40960
in use        79725        536       4442          0

pin = # of pinned frames; virtual = computational memory

Working (or computational) memory = 140123
%Computational = virtual/size = 140123 / 233472 = 60%
Pers (or JFS file cache) memory = 536
Clnt (or JFS2 and NFS file cache) memory = 4442
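The same calculation can be scripted; a minimal sketch assuming the svmon -G column layout shown above (size in column 2, virtual in column 6):

# svmon -G | awk '/^memory/ {printf "computational = %.0f%%\n", 100*$6/$2}'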
[nmon Analyser charts: Computational %, Run Queue + Process Context Switches]
nmon Analyser cannot graph computational rates over 100% (physical paging)
Metrics: Memory (Dedicated and Shared)

RAM:
Total Size (vmstat -v memory, svmon size)
Computational (vmstat avm, vmstat -v, svmon virtual)
Cache (vmstat -v client, svmon pers + clnt)

Paging:
Total Size (lsps -a)
In Use (lsps -a, svmon)
Pages In/Out (vmstat, vmstat -s)

CPU:
Physical consumed / Entitlement consumed (lparstat, vmstat, sar)
Available Pool (lparstat)
User/System/Idle/Wait (vmstat, lparstat, sar)
Run Queue (vmstat, sar -q)
Context Switches (vmstat, mpstat)
Thresholds
The following sample threshold tables are intended to be examples
and not standard recommendations by IBM
We do not maintain or advise one set of thresholds for all environments
Sample Thresholds: CPU

Metric                             Threshold            Explanation
cpu busy% (user+sys)%              > 80% (Dedicated),   relative to physical consumption
                                   n/a (Shared)         in shared partitions
Entitlement %                                           possible entitlement under-sizing
Physical consumed (physc or pc)
Physical busy (cpu busy% x physc)
Available Pool
Run Queue                          workload dependent
Context Switches                   workload dependent
Sample Thresholds: Memory

Metric              Threshold   Explanation
Computational %
Free Memory %       none
File Cache %        none
Scan:Free ratio
Physical Paging %               potential to crash
Queue Depths
If IO service times are reasonably good, but queues are getting filled up, then increase queue depths until:
You aren't filling the queues, or
IO service times start degrading (bottleneck at disk)
For hdisks, queue_depth controls the maximum number of in-flight IOs
For FC adapters, num_cmd_elems controls maximum number of in-flight IOs
Drivers for hdisks and adapters have service and wait queues
When the queue is full and an IO completes, then another is issued
Tools used on partitions to identify queue issues
SDDPCM: # pcmpath query devstats <interval> <count>
        # pcmpath query adaptstats <interval> <count>
SDD:    # datapath query devstats <interval> <count>
        # datapath query adaptstats <interval> <count>
iostat: # iostat -D <interval> <count>
fcstat: # fcstat fcs*
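A sketch of inspecting and raising these attributes (the values are examples only, not sizing guidance; -P defers the change to the next reboot since the devices are usually in use):

# lsattr -El hdisk0 -a queue_depth
# chdev -l hdisk0 -a queue_depth=32 -P
# lsattr -El fcs0 -a num_cmd_elems
# chdev -l fcs0 -a num_cmd_elems=1024 -P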
# iostat -D (add the -l option for wide output)

xfer:   %tm_act      bps      tps     bread     bwrtn
           87.7    62.5M    272.3     62.5M     823.7
read:       rps  avgserv  minserv  maxserv  timeouts     fails
          271.8      9.0      0.2    168.6
write:      wps  avgserv  minserv  maxserv  timeouts     fails
            0.5      1.9      4.0     10.4
queue:  avgtime  mintime  maxtime  avgwqsz   avgsqsz    sqfull
            1.1      0.0     14.1      0.2       1.2        60
FC Adapter
# pcmpath query adaptstats
Adapter #: 0
=============
            Total Read   Total Write   Active Read   Active Write   Maximum
I/O:           1105909            78             3              0       200
SECTOR:        8845752             0            24              0        88

Maximum is the high-water mark of queued I/Os; here it has reached num_cmd_elems (200, shown below), so the adapter queue has filled at some point.
FC Adapter Attributes
Fibre channel adapter attributes:

# lsattr -El fcs0
bus_intr_lvl  8355
bus_io_addr   0xffc00
bus_mem_addr  0xf8040000
init_link     al
intr_priority 3
lg_term_dma   0x1000000
max_xfer_size 0x100000
num_cmd_elems 200
pref_alpa     0x1
sw_fc_class   2
The max_xfer_size attribute also controls a DMA memory area used to hold data for transfer;
at the default it is 16 MB
Changing to other allowable values increases it to 128 MB and increases the adapter's
bandwidth
Usually not required unless the adapter is pushing many tens of MB/sec
Change to 0x200000 based on guidance from Redbooks or tools
This can result in a problem if there isn't enough memory on the PHB chips in the IO drawer,
with too many adapters/devices on the PHB
Make the change and reboot; check for Defined devices or errors in the error log, and
change back if necessary
For NPIV and virtual FC adapters, the DMA memory area is 128 MB at AIX 6.1 TL2 or later
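A sketch of applying the change described above (deferred with -P, then activated by the reboot):

# chdev -l fcs0 -a max_xfer_size=0x200000 -P
# shutdown -Fr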
nmon FC Monitoring
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7hcg/fcstat.htm
Adapter feature code   IOPS (4K)
5716                   38,461
5758                   n/a
5759                   n/a
5773                   n/a
5774                   n/a

FC adapters 5735 and 5708: ~750 MB/s; 142,000 and 150,000 IOPS respectively
Network Capacity
How do I know network capacity?
You should be able to reach 70-80% of line speed with 1 Gb, but 10 Gb
may require special tuning (beyond tcp send/receive space, rfc1323, etc.)
On VIOS, if you are not near limits
Review CPU and Memory, always run uncapped
Review netstat -s/-v and entstat for any errors
Review CPU, Memory and network on problem client
On 10 Gb, if you are driving hundreds of thousands of tiny/small
packets/sec at 1500 MTU, or have very short latency requirements, tuning
will be required
Two POWER7 cores will be required to reach 10Gb
Large Send/Receive, mtu_bypass (assuming AIX clients only)
Virtual buffer tunings (backup slide)
Nodelay Ack, nagle, dog threads
In extreme environments, using dedicated-donating mode on VIOS
If tuning is exhausted and you still have issues, then the likely cause is the
network environment, or APARs are required
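A hedged starting point for the tunables named above (the sizes are examples, not sizing guidance; mtu_bypass enables largesend on interfaces that support it):

# no -p -o rfc1323=1 -o tcp_sendspace=262144 -o tcp_recvspace=262144
# chdev -l en0 -a mtu_bypass=on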
VIOS Network
Networking CPU requirements are the same regardless of whether the partition
operates as a dedicated or shared LPAR
Driving 10 Gb adapters to capacity for workloads can take two physical
cores
Larger packets, larger MTU sizes dramatically decrease CPU utilization
Integrated Virtual Ethernet vs Shared Ethernet
IVE is the performance solution, will take more memory
Shared Ethernet 1 Gb performance is competitive with IVE, but 10 Gb
performance is more limited for small MTU receives (~5-6 Gb/sec) without
tuning
Virtual Ethernet (Switch)
Reliable < 1 ms latency times, but driving 1 Gb at normal packet sizes will
consume up to two physical cores
Virtual switch within hypervisor is not designed to scale to 10 Gb at 1500
MTU for a single session (usually gated by core throughput for single
session)
[Chart: Throughput (KBytes/second, 0 to 140,000) vs. message size (16 to 16384 bytes), for capped CPU entitlements from 0.1 to 1.0]
# seastat -d ent5
================================================================================
Advanced Statistics for SEA
Device Name: ent5
================================================================================
MAC: 32:43:23:7A:A3:02
----------------------
VLAN: None
VLAN Priority: None
Hostname: mob76.dfw.ibm.com
IP: 9.19.51.76

Transmit Statistics:            Receive Statistics:
--------------------            -------------------
Packets: 9253924                Packets: 11275899
Bytes: 10899446310              Bytes: 6451956041
================================================================================
MAC: 32:43:23:7A:A3:02
----------------------
VLAN: None
VLAN Priority: None

Transmit Statistics:            Receive Statistics:
--------------------            -------------------
Packets: 36787                  Packets: 3492188
Bytes: 2175234                  Bytes: 272207726
================================================================================
MAC: 32:43:2B:33:8A:02
----------------------
VLAN: None
VLAN Priority: None
Hostname: sharesvc1.dfw.ibm.com
IP: 9.19.51.239

Transmit Statistics:            Receive Statistics:
--------------------            -------------------
Packets: 10                     Packets: 644762
Bytes: 420                      Bytes: 484764292
Note: For this tool to work on a Shared Ethernet Adapter, the layer-3 device (en) cannot
be in the Defined state. If you are not using the layer-3 device on the SEA, the easiest
way to change the state of the device is to change one of its parameters. The following
command will change the state of a Shared Ethernet Adapter's layer-3 device without
affecting bridging:
chdev -l <sea_en_device> -a state=down
[nmon Analyser chart: SEA Packet Counts]
Unfortunately, Analyser does not provide stacked graphs for SEA aggregation views
Direction               Sessions   1500 MTU                   9000 MTU
                                   Single port   Both ports   Single port   Both ports
TCP_STREAM send                     870 MB/s     1311 MB/s    1076 MB/s     1647 MB/s
                                   1068 MB/s     1402 MB/s    1111 MB/s     1668 MB/s
TCP_STREAM receive                  785 MB/s     1015 MB/s    1173 MB/s     1393 MB/s
                                    925 MB/s      992 MB/s    1179 MB/s     1393 MB/s
TCP_STREAM duplex                  1439 MB/s     1712 MB/s    1733 MB/s     2106 MB/s
                                   1527 MB/s     1914 MB/s    1756 MB/s     2176 MB/s
TCP_Request & Response     1       13324 TPS(1)  26171 TPS
(1 byte message)           150     182062 TPS    237415 TPS

Host: P7 750 4-way, SMT-2, 3.3 GHz, AIX 5.3 TL12, dedicated LPAR, dedicated adapter
Client: P6 570 with two single-port 10 Gb (FC 5769), point-to-point wiring (no ethernet switch)
(1) Single-session 1/TPS round-trip latency is 75 microseconds, default ISNO settings, no interrupt coalescing
AIX 6.1 should do better with SMT4; Disk I/O will do better due to larger blocks/buffers
Metrics: IO
Hdisk / Storage Adapter:
IO/sec (iostat -as)
Read/Write Bytes (iostat -as, fcstat)

Network Adapter:
Send/Receive Packets (entstat, netstat)
Send/Receive MB (entstat, netstat)
Sample Thresholds: IO
Metric             Threshold   Explanation
Hdisk %busy        n/a
Hdisk IOPS
Hdisk KB in/out    n/a
Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not
actively marketed or is not significant within its relevant market.
Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.