
Specify how an uplink is chosen (load balancing policy):

- Route based on originating virtual port: based on the virtual port where the traffic entered the vSphere distributed switch.
- Route based on IP hash: based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash.
- Route based on source MAC hash: based on a hash of the source Ethernet MAC address.
- Route based on physical NIC load: based on the current load of the physical network adapters connected to the port. If an uplink stays busy at 75% or higher for 30 seconds, the host proxy switch moves part of the virtual machine traffic to a physical adapter with free capacity.
- Use explicit failover order: always use the highest-order uplink from the list of Active adapters that passes the failover detection criteria.
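To verify which teaming/failover policy a host is actually using, it can be read from the CLI. A minimal sketch for a standard vSwitch (vSwitch0 is an example name; distributed switch teaming is configured in vCenter):
# esxcli network vswitch standard policy failover get -v vSwitch0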

USEFUL network-related commands in ESXi 5.5

# esxcfg-info | less -I
# esxcfg-scsidevs -l | egrep -i 'display name|vendor'
# lspci -vvv
# esxcfg-nics -l
# esxcli network nic up -n vmnicX
# esxcli network nic down -n vmnicX

# esxcli network ip interface ipv4 get


# esxcli network ip interface list
# esxcfg-vmknic -l
# esxcli network ip connection list
# esxcli network ip neighbor list
# esxcli network ip route ipv4 list
# esxcfg-route
# esxcli network diag ping -I vmk0 -H 10.27.51.132
# esxcli network ip interface ipv4 set -i vmk1 -I 10.27.51.143 -N 255.255.255.0 -t static
# NIC teaming in ESXi and ESX
A NIC team can share the load of traffic between physical and virtual networks among some or all of its members, and can provide passive failover in the event of a hardware failure or network outage.
> If the physical switch is using link aggregation, the "Route based on IP hash" load balancing policy must be used.
# Host requirements for link aggregation for ESXi and ESX (KB 1001938)
Note: Link aggregation is also known as EtherChannel, Ethernet trunk, port channel, and Multi-Link Trunking.

- Route based on originating port (default)
  Forwarding decision is based on the source port ID.
  Restricts a VM's vNIC to a single physical NIC at a time.
- Route based on source MAC hash
  Forwarding decision is based on a hash of the VM's MAC address.
  Restricts a VM's vNIC to a single physical NIC at a time.
- Route based on IP hash (used for LACP port channels)
# Failover
- Link status only
  Relies solely on the link status provided by the network adapter. This detects failures such as cable pulls and physical switch power failures, but it cannot detect configuration errors, such as a physical switch port being blocked by spanning tree or put in the wrong VLAN, or cable pulls on the far side of a physical switch.
- Beacon probing - needs at least 3 NICs
  Sends and receives beacon probes to/from upstream switches.
# Notify switches .. set to Yes so the physical switches update their lookup tables during NIC failover.
# LACP limitations on a vSphere Distributed Switch
- LACP does not support port mirroring.
- LACP settings do not exist in host profiles.
- LACP between two nested ESXi hosts is not possible.
- LACP only works with IP hash load balancing and link status network failover detection.
- Only one LACP group (uplink port group with LACP enabled) is supported per distributed switch, and only one LACP group per host.
- LACP can only be enabled through the vSphere Web Client.
LACP (IEEE 802.3ad) comes in two types: static and dynamic.
Broadly there are two link aggregation approaches: static and dynamic. Static link aggregation is configured individually on hosts or switches, and no automatic negotiation happens between the two endpoints. This approach does not detect cabling or configuration mistakes, or switch port failures that don't result in loss of link status.
Dynamic link aggregation, also called LACP, addresses the shortcomings of static link aggregation and provides a better operational experience by detecting configuration or link errors and automatically reconfiguring the aggregation channel. This is possible because of the heartbeat mechanism Active LACP runs between the two endpoints.
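If LACP is configured on a distributed switch, the host-side negotiation state can be checked with the esxcli lacp namespace. A sketch, assuming the ESXi 5.5 command set (output is empty if no LAG is configured on the host):
# esxcli network vswitch dvs vmware lacp config get
# esxcli network vswitch dvs vmware lacp status get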

# Changing speed and duplex settings

Autonegotiation is preferred. Why? It helps resolve iSCSI, vMotion, network performance, and related network issues.
Flow control: because of the amount of traffic that can be generated by Gigabit Ethernet, there is a PAUSE functionality built into Gigabit Ethernet.


Note: The PAUSE frame is a packet that tells the far-end device to stop transmitting packets until the receiver is able to handle all the traffic and clear its buffers. The PAUSE frame includes a timer, which tells the far-end device when to start sending packets again. If that timer expires without another PAUSE frame arriving, the far-end device can then send packets again. Flow control is an optional item and must be negotiated. Devices can be capable of sending or responding to a PAUSE frame, and it is possible they will not agree to the flow-control request of the far end.
# esxcfg-nics -l
# esxcfg-nics -s 100 -d full vmnic1    (force 100 Mb, full duplex)
# esxcfg-nics -a vmnic1                (set back to autonegotiate)
# esxcli network nic set -n vmnicX -S <speed> -D <duplex>

## Packet capture
The tcpdump-uw tool can only capture packets/frames at the vmkernel interface level; it cannot capture frames at the uplink, vSwitch, or virtual port levels.
The newer pktcap-uw tool allows traffic to be captured at all points within the hypervisor for greater flexibility and improved troubleshooting.
## CHECK
# net-stats -l    (to list switch port IDs)
Run pktcap-uw to capture packets at both points simultaneously:
# pktcap-uw --switchport 50331665 -o /tmp/50331665.pcap & pktcap-uw --uplink vmnic2 -o /tmp/vmnic2.pcap &
# Usage
# pktcap-uw --vmk vmk0
# pktcap-uw --uplink vmnic7
# pktcap-uw --switchport switchportnumber
# pktcap-uw --vmk vmk# -o file.pcap
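pktcap-uw also accepts basic filters. A sketch, assuming the --tcpport filter is available in this build (the port 2049 and output path are just examples, chosen because NFS traffic is discussed later):
# pktcap-uw --uplink vmnic2 --tcpport 2049 -o /tmp/nfs_vmnic2.pcap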

To stop the captures, kill the pktcap-uw processes:
# kill $(lsof | grep pktcap-uw | awk '{print $1}' | sort -u)
## tcpdump-uw
/etc # tcpdump-uw -h
tcpdump-uw version 4.0.0vmw
libpcap version 1.0.0
Usage: tcpdump-uw [-aAdDefIKlLnNOpqRStuUvxX] [ -B size (KB) ] [ -c count ]
[ -C file_size ] [ -E algo:secret ] [ -F file ] [ -G seconds ]
[ -i interface ] [ -M secret ] [ -r file ]
[ -s snaplen ] [ -T type ] [ -w file ] [ -W filecount ]
[ -y datalinktype ] [ -z command ] [ -Z user ]
[ expression ]
-D to show the list of interfaces
-e to show L2 info such as MAC addresses
-s 0 to capture the full packet
-i to specify the interface
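Putting those flags together, a typical capture of a vmkernel interface might look like this (vmk0 and the output path are examples; -w writes a pcap file that can be opened in Wireshark):
# tcpdump-uw -i vmk0 -s 0 -e -w /tmp/vmk0.pcap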
# Troubleshooting NFS
Check for APD in the vmkernel.log, vmkwarning, and vobd (observation) logs:
YYYY-04-01T14:35:08.075Z: [APDCorrelator] 9414268686us: [esx.problem.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
Symptoms:
- The NFS datastores appear to be unavailable (grayed out) in vCenter Server, or when accessed through the vSphere Client.
- The NFS shares reappear after a few minutes.
- Virtual machines located on the NFS datastore are in a hung/paused state while the NFS datastore is unavailable.
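The 140 seconds in the log above corresponds to the host's APD timeout. Assuming the default Misc.APDTimeout advanced option, it can be read the same way as the other advanced settings below:
# esxcfg-advcfg -g /Misc/APDTimeout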
# Grep for datastore name in vobd log
[esx.problem.vmfs.nfs.server.disconnect] 192.168.100.1 /vol/datastore01 bf7ce3db-42c081a2-0000-000000000000 volume-name:datastore01
Solution: change the NFS queue depth to 64.
# esxcfg-advcfg -s 64 /NFS/MaxQueueDepth
# esxcfg-advcfg -g /NFS/MaxQueueDepth
# Poor NFS performance - things to check
- Version of NFS
- Type of filesystem on the array (ext4 performs better)
- Mount options, rsize and wsize
- Check if LACP / NIC teaming is used
- Speed/duplex setting of the NICs -- set to autonegotiate (for the flow control PAUSE feature)
- Use the appropriate load balancing policy (IP hash) in an LACP config
- Understand the difference between load balancing and load sharing
- Dropped packets / packet retransmission rates (see the esxtop check after this list)
- Check the networking gear, update it, and consult with vendors
- Don't route NFS traffic; keep it in the same subnet
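For the dropped-packets item, a quick host-side check is esxtop (this is just the interactive workflow on the ESXi host, not array-side monitoring):
# esxtop    then press 'n' for the network view and watch the %DRPTX and %DRPRX columns per port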

Always use the NetApp Virtual Storage Console or the equivalent plugin from your array vendor.
Check for alignment issues.
Check the advanced parameter defaults, for example the parameters related to NFS mounts:
Net.TcpipHeapSize .. change this to 30 MB if you are using more than 8 NFS mounts.
HA-related NFS issues due to Failback set to Yes: setting Failback to No used to be the best practice, but now change these HA timeout values instead:

NFS.HeartbeatFrequency = 12
NFS.HeartbeatTimeout = 5
NFS.HeartbeatMaxFailures = 10
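These can be changed per host with the same esxcfg-advcfg syntax used for MaxQueueDepth above; a sketch using the values from this note (the /NFS/... and /Net/... paths assume the standard advanced-option names shown above):
# esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
# esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
# esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures
# esxcfg-advcfg -s 30 /Net/TcpipHeapSize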
# checking flowcontrol
esxcli system module parameters list --module e1000 | grep "FlowControl"
# esxcfg-nics -l
# ethtool --show-pause <VMNic Name>
# Networking issues after vMotion
- Check if it is a mixed-version ESXi cluster (e.g. 4.x to 5.x, or 5.x to 5.5).
- Ports in use -- run esxcfg-vswitch -l to list the used ports.
- Increase the number of ports on the virtual switch and reboot the host for the change to take effect.
- If using NIC teaming, ensure that the load balancing policy is the same on all the ESXi/ESX hosts in the cluster (see the check after this list).
- Notify switches set to Yes so the physical switches update their lookup tables (load balancing).
- Check the vmkernel log for errors.
- Ensure VLAN settings and port groups are configured correctly.
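As a quick consistency check across hosts, the teaming policy of the port group the VM uses can be read on each host. A sketch assuming a standard switch port group named "VM Network" (substitute your own port group name):
# esxcli network vswitch standard portgroup policy failover get -p "VM Network"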
## Odd case
The maximum length of a dvUplink name in ESX/ESXi 4.1 and ESXi 5.0 is 32 characters, but in ESXi 5.1/5.5 the maximum dvUplink name length is 81 characters. When performing a vMotion from ESXi 5.1/5.5 to ESX/ESXi 4.1 or ESXi 5.0, the maximum supported dvUplink name length is 32 characters. If the dvUplink name is longer than 32 characters in ESXi 5.1/5.5, the VDS port might lose connection after vMotion to ESX/ESXi 4.1 or ESXi 5.0.
# check /etc/vmware/esx.conf
/var/log # grep -i nfs /etc/vmware/esx.conf
/nas/NFS02/readOnly = "false"
/nas/NFS02/enabled = "true"
/nas/NFS02/share = "/mnt/LABVOL/NFS02"
/nas/NFS02/host = "172.17.199.7"
/nas/NFS01/readOnly = "false"
/nas/NFS01/enabled = "true"
/nas/NFS01/host = "172.17.199.7"
/nas/NFS01/share = "/mnt/LABVOL/NFS01"
/firewall/services/nfsClient/allowedip[0000]/ipstr = "192.168.199.7"
/firewall/services/nfsClient/allowedip[0001]/ipstr = "172.17.199.7"
/firewall/services/nfsClient/allowedall = "false"
/firewall/services/nfsClient/enabled = "true"
# esxcfg-advcfg -l | grep -i nfs    (to list NFS-related timeout and advanced settings)
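To cross-check the NFS mounts referenced in esx.conf against what the host actually has mounted, the standard esxcli listing can be used:
# esxcli storage nfs list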
## Troubleshooting DVS
Static binding: assigns a virtual machine to a port on the DvSwitch that remains reserved even when the machine is powered off.
Dynamic binding: assigns a port to a virtual machine at the time the machine is powered on.
Ephemeral: allocates a port to a virtual machine when it is powered on. If the machine reboots, it loses its old port and is assigned a new one.
- Use net-dvs commands to check properties of DVS portgroups
/var/log # net-dvs -T
Testing DVSData routines...
DVSData serialization succeeded.
DVSData deserialization succeeded.
DVSData serialization to 4x succeeded.
DVSData deserialization 4x succeeded.
DVSData deserialization succeeded.
Check /etc/vmware/dvsdata.db - the host's local DVS database.
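The database file itself can be dumped with net-dvs. A sketch, assuming the default path and that this build of net-dvs supports reading a dvsdata.db file with -f:
# net-dvs -f /etc/vmware/dvsdata.db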
- Every VMFS volume (hosting VMs connected to a DVS) will have a .dvsData directory:
drwxr-xr-x    1 root     root      420 Feb 8 12:33 .dvsData
- Going further into it you will find directories named after the DVS UUID, containing files named after the port IDs of the VMs:
drwxr-xr-x    1 root     root     1.5k Feb 8 12:47 6d 8b 2e 50 3c d3 50 4a-ad dd b5 30 2f b1 0c aa
The .dvsData directory is read when ports are assigned to VMs for networking.
