## Load balancing policies
- Route based on originating virtual port: based on the virtual port where the traffic entered the vSphere Distributed Switch.
- Route based on IP hash: based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash.
- Route based on source MAC hash: based on a hash of the source Ethernet MAC address.
- Route based on physical NIC load: based on the current load of the physical network adapters connected to the port. If an uplink remains busy at 75% or higher for 30 seconds, the host proxy switch moves part of the virtual machine traffic to a physical adapter that has free capacity.
- Use explicit failover order: always use the highest-order uplink from the list of Active adapters that passes the failover detection criteria.
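The IP-hash policy lends itself to a quick sanity check: VMware documents the uplink choice as (source IP XOR destination IP) modulo the number of active uplinks. A minimal sketch; the addresses and uplink count below are made up for illustration:

```shell
# Sketch of IP-hash uplink selection; addresses and uplink count are
# illustrative, not taken from a real host.
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
src=$(ip2int 10.0.0.5)
dst=$(ip2int 172.17.199.7)
uplinks=2
echo "uplink index: $(( (src ^ dst) % uplinks ))"   # which vmnic carries this flow
```

Note that every packet between the same IP pair lands on the same uplink, which is why IP hash only balances well across many distinct flows.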
esxcfg-info | less
esxcfg-scsidevs -l
lspci -vvv
esxcfg-nics -l
esxcli network nic list | egrep -i 'display name|vendor'
esxcli network nic up -n vmnicX
esxcli network nic down -n vmnicX
## Packet capture
The tcpdump-uw tool can only capture packets/frames at the VMkernel interface level; it cannot capture frames at the uplink, vSwitch, or virtual port levels.
The newer pktcap-uw tool allows traffic to be captured at all points within the hypervisor, for greater flexibility and improved troubleshooting.
## CHECK
net-stats -l   # list switch port numbers
Run the pktcap-uw command to capture packets at both points simultaneously:
pktcap-uw --switchport 50331665 -o /tmp/50331665.pcap & pktcap-uw --uplink vmnic2 -o /tmp/vmnic2.pcap &
# Usage
# pktcap-uw --vmk vmk0
# pktcap-uw --uplink vmnic7
# pktcap-uw --switchport switchportnumber
# pktcap-uw --vmk vmk# -o file.pcap
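The dual-capture invocation above backgrounds each pktcap-uw with `&`. The same start/stop job-control pattern can be sketched portably; `sleep` stands in for pktcap-uw here, since pktcap-uw runs only on ESXi:

```shell
# Start two background "captures" ('sleep' stands in for pktcap-uw,
# which exists only on ESXi) and record their PIDs.
sleep 30 & pid_switchport=$!
sleep 30 & pid_uplink=$!
# ...reproduce the traffic of interest, then stop both captures:
kill "$pid_switchport" "$pid_uplink"
wait "$pid_switchport" "$pid_uplink" 2>/dev/null || true
echo "captures stopped"
```

On a real host you would then copy the two .pcap files off and merge or compare them in Wireshark.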
## Poor NFS performance
- NFS version in use
- Type of filesystem on the array side (ext4 is better)
- Mount options, including rsize and wsize
- Check if LACP is used on the NIC team
- Speed/duplex setting of the NIC -- set to autonegotiate (flow control pause feature)
- Use the appropriate load balancing policy (IP hash) in the LACP config
- Know the difference between load balancing and load sharing
- Dropped packets / packet retransmission rates
- Check networking gear, update firmware, consult with vendors
- Don't route NFS traffic; keep it in the same subnet
- Always use the NetApp Virtual Storage Console or the array vendor's plugin
- Check for alignment issues
- Review parameter defaults. For example, parameters related to NFS mounts:
  - Net.TcpipHeapSize -- increase to 30 MB if you are using more than 8 NFS mounts
- HA-related NFS issues due to Failback set to Yes: Failback used to be set to No as a best practice, but now change these values for the HA timeouts:
  - NFS.HeartbeatFrequency = 12
  - NFS.HeartbeatTimeout = 5
  - NFS.HeartbeatMaxFailures = 10
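The heartbeat values above can be applied from the CLI; a sketch using the esxcli advanced-settings syntax (ESXi 5.x and later; verify the option paths against your build before changing them):

```shell
# Apply the NFS heartbeat values from the notes above via esxcli.
# Confirm the option paths first with:
#   esxcli system settings advanced list | grep -i nfs
esxcli system settings advanced set -o /NFS/HeartbeatFrequency -i 12
esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 5
esxcli system settings advanced set -o /NFS/HeartbeatMaxFailures -i 10
```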
# Checking flow control
esxcli system module parameters list --module e1000 | grep "FlowControl"
# esxcfg-nics -l
# ethtool --show-pause <VMNic Name>
# Networking issues after vMotion
- Check if it is a mixed-version ESXi cluster (e.g., 4.x with 5.0, or 5.0 with 5.5)
- Ports in use -- run esxcfg-vswitch -l to list the used ports
- Increase the number of ports for the virtual switch and reboot the host for the changes to take effect
- If using NIC teaming, ensure that the load balancing policy on all the ESXi/ESX hosts in the cluster is the same
- Set Notify Switches to Yes so the physical switch lookup tables are updated (load balancing)
- Check the vmkernel log for errors
- Ensure VLAN settings and port groups are properly configured
## Odd case
The maximum length of a dvUplink name in ESX/ESXi 4.1 and ESXi 5.0 is 32 characters, but in ESXi 5.1/5.5 the maximum dvUplink name length is 81 characters. When performing a vMotion from ESXi 5.1/5.5 to ESX/ESXi 4.1 or ESXi 5.0, the maximum supported dvUplink name length is therefore 32 characters. If the dvUplink name on ESXi 5.1/5.5 is longer than 32 characters, the VDS port might lose connection after vMotion to ESX/ESXi 4.1 or ESXi 5.0.
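Given that limit, a quick pre-vMotion check of the uplink name length is easy to script. A sketch; the dvUplink name below is hypothetical:

```shell
# Hypothetical dvUplink name; ESX/ESXi 4.1 and 5.0 cap names at 32 chars.
name="my-very-long-dvuplink-name-created-on-esxi55"
if [ ${#name} -le 32 ]; then
  echo "safe for 4.1/5.0"
else
  echo "too long: ${#name} chars"
fi
```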
# check /etc/vmware/esx.conf
/var/log # grep -i nfs /etc/vmware/esx.conf
/nas/NFS02/readOnly = "false"
/nas/NFS02/enabled = "true"
/nas/NFS02/share = "/mnt/LABVOL/NFS02"
/nas/NFS02/host = "172.17.199.7"
/nas/NFS01/readOnly = "false"
/nas/NFS01/enabled = "true"
/nas/NFS01/host = "172.17.199.7"
/nas/NFS01/share = "/mnt/LABVOL/NFS01"
/firewall/services/nfsClient/allowedip[0000]/ipstr = "192.168.199.7"
/firewall/services/nfsClient/allowedip[0001]/ipstr = "172.17.199.7"
/firewall/services/nfsClient/allowedall = "false"
/firewall/services/nfsClient/enabled = "true"
# esxcfg-advcfg -g | grep /NFS -- to list NFS-related timeouts and advanced settings
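The esx.conf entries above are flat `key = "value"` pairs, so they are easy to post-process. A portable sketch; the sample lines are a subset copied from the grep output above, so it can be tried off-host:

```shell
# Extract datastore -> NFS server pairs from esx.conf-style lines
# (sample data copied from the grep output in the notes).
sample='/nas/NFS02/host = "172.17.199.7"
/nas/NFS02/share = "/mnt/LABVOL/NFS02"
/nas/NFS01/host = "172.17.199.7"
/nas/NFS01/share = "/mnt/LABVOL/NFS01"'
echo "$sample" | sed -n 's|^/nas/\([^/]*\)/host = "\(.*\)"|\1 -> \2|p'
```

On a live host, replace the sample with `grep -i nfs /etc/vmware/esx.conf`.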
## troubleshooting DVS
- Static binding: binds a virtual machine to a port on the DvSwitch; the assignment remains even when the machine is powered off
- Dynamic binding: assigns a port to a virtual machine at the time the machine is powered on
- Ephemeral: allocates a port to a virtual machine when it is powered on; if the machine reboots, it loses its old port and is assigned a new one
- Use net-dvs commands to check properties of DVS portgroups
/var/log # net-dvs -T
Testing DVSData routines...
DVSData serialization succeeded.
DVSData deserialization succeeded.
DVSData serialization to 4x succeeded.
DVSData deserialization 4x succeeded.
DVSData deserialization succeeded.
- Check /etc/vmware/dvsdata.db -- the DVS database
- Every VMFS volume will have this .dvsdata directory:
drwxr-xr-x 1 root root