VMware is a market leader in the field of network and desktop virtualization, and it has moved the tech world from physical hardware to a software-based virtual world.
Because organizations implement virtual environments to consolidate their available hardware, the market demand for subject matter experts is increasing day by day. Managing and maintaining an organization's business requires an experienced workforce, and to choose the top talent for growing the business, professionals are selected through many technical interviews and HR processes.
Alongside the HR processes, technical interviews are conducted to assess a professional's abilities in the field of virtualization to keep business processes running continuously. Here are 80 interview questions on data center virtualization for freshers and candidates with up to 3 years of hands-on experience, which may be asked to assess the candidate's technical and hands-on expertise.
Hypervisor
Fault Tolerance (FT)
Virtual Networking
vCenter Server
Virtual Storage (Datastore)
What’s New in vSphere 6.0
Content Libraries
vSAN
vApp
Miscellaneous
Hypervisor
1. What is VMkernel and why is it important?
VMkernel is the virtualization interface between a virtual machine and the ESXi host that runs it. It is responsible for allocating the host's available resources, such as memory, CPU, and storage, to the VMs. It also controls special services such as vMotion, Fault Tolerance, NFS, traffic management, and iSCSI. To access these services, a VMkernel port is configured on the ESXi server using a standard or distributed vSwitch. Without VMkernel, hosted VMs cannot communicate with the ESXi server.
2. What is a Hypervisor?
A hypervisor is a virtualization layer that enables multiple operating systems to share a single hardware host. Each operating system or VM is allocated physical resources such as memory, CPU, and storage by the host. There are two types of hypervisors: Type 1 (bare-metal, such as ESXi, which runs directly on the hardware) and Type 2 (hosted, such as VMware Workstation, which runs on top of an operating system).
3. What is Virtualization?
The process of creating virtual versions of physical components, i.e., servers, storage devices, and network devices, on a physical host is called virtualization. Virtualization lets you run multiple virtual machines on a single physical machine, which is called an ESXi host.
Server virtualization: consolidates physical servers so that multiple operating systems can run on a single server.
Network virtualization: provides a complete reproduction of the physical network as a software-defined network.
Storage virtualization: provides an abstraction layer over physical storage resources to manage and optimize them in a virtual deployment.
Application virtualization: increases the mobility of applications and allows migration of VMs from one host to another with minimal downtime.
Desktop virtualization: virtualizes desktops to reduce cost and improve service delivery.
Fault Tolerance (FT)
FT stands for Fault Tolerance, a prominent component of VMware vSphere. It provides continuous availability for VMs when an ESXi host fails. In vSphere 6.0 it supports up to 4 vCPUs and 64 GB of memory per VM. FT is very bandwidth intensive, and a 10 Gb NIC is recommended to configure it. It maintains a complete copy of the entire VM, including storage, compute, and memory.
The communication between the two ESXi hosts when FT is configured between them is called FT logging. A prerequisite for configuring FT is a VMkernel port configured for FT logging.
The main difference between VMware HA and FT is that HA is enabled per cluster while FT is enabled per VM. With HA, VMs are restarted and powered on on another host in case of a host failure; with FT there is no downtime, because the secondary copy is activated when the host fails.
Virtual Networking
11. What is virtual networking?
A network of VMs running on a physical server that are connected logically with each other is called virtual
networking.
vSS stands for vSphere Standard Switch and is responsible for the communication of VMs hosted on a single physical host. It works like a physical switch and automatically detects VMs that want to communicate with other VMs on the same physical server.
vDS stands for vSphere Distributed Switch. It acts as a single switch across the whole virtual environment and provides central provisioning, administration, and monitoring of the virtual network.
4096 ports per host are available with either a standard switch or a distributed switch.
A VMkernel adapter provides network connectivity to the ESXi host to handle traffic for vMotion, IP storage, NAS, Fault Tolerance, and vSAN. A separate VMkernel adapter should be created and configured for each traffic type, such as vMotion or vSAN.
17. What is the main use of port groups in data center virtualization?
Port groups let you segregate network traffic by type, such as vMotion, FT, and management traffic.
18. What are the three port groups configured in ESXi networking?
A VLAN is a logical configuration on a switch port that segments IP traffic; one segment cannot communicate with another without properly defined rules, and each VLAN has a number called the VLAN ID.
Promiscuous mode: the default is Reject. If Accept is selected, the VM receives all traffic passing through the port group's vSwitch.
MAC address changes: the default is Reject. If Accept is selected, the host accepts requests to change the effective MAC address.
Forged transmits: the default is Reject. If Accept is selected, the host does not compare the source and effective MAC addresses of frames transmitted from a VM.
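The two MAC-related policies above boil down to simple accept/reject checks. This hypothetical helper (not VMware source code; function names are my own) mirrors the Reject semantics just described:

```python
# Hypothetical sketch of vSwitch security-policy checks (not VMware code).
# With the default Reject settings, a frame is dropped when its source MAC
# differs from the VM's effective MAC (forged transmits), and a request to
# change the effective MAC away from the initial MAC is refused.

def allow_transmit(source_mac: str, effective_mac: str, forged_transmits: str = "Reject") -> bool:
    """Forged transmits: with Reject, the source MAC must match the effective MAC."""
    if forged_transmits == "Accept":
        return True
    return source_mac.lower() == effective_mac.lower()

def allow_mac_change(initial_mac: str, requested_mac: str, mac_changes: str = "Reject") -> bool:
    """MAC address changes: with Reject, the effective MAC must stay the initial MAC."""
    if mac_changes == "Accept":
        return True
    return requested_mac.lower() == initial_mac.lower()

print(allow_transmit("00:50:56:aa:bb:cc", "00:50:56:aa:bb:cc"))  # True: MACs match
print(allow_transmit("de:ad:be:ef:00:01", "00:50:56:aa:bb:cc"))  # False: forged frame dropped
```

Switching either policy to Accept makes the corresponding check pass unconditionally, which is exactly why Reject is the safer default.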
vCenter Server
25. What are the main components of the vCenter Server architecture?
The vCenter Server architecture has two main components: the Platform Services Controller (PSC) and the vCenter Server itself. The PSC, first introduced in version 6 of VMware vSphere, handles infrastructure security functions such as vCenter Single Sign-On, licensing, and certificate management. It can be deployed in two ways:
Embedded deployment: the PSC and vCenter Server run on the same machine.
External deployment: the PSC runs on a separate machine and can serve multiple vCenter Servers.
vROps (vRealize Operations) provides operations dashboards for performance analytics, capacity optimization, and monitoring of the virtual environment.
30. What is the basic security step to secure vCenter Server and its users?
Authenticate vCenter Server against Active Directory. This lets you assign specific roles to users and manage the virtual environment efficiently.
Virtual Storage (Datastore)
A datastore is a storage location where virtual machine files are stored and accessed. A datastore is based on a file system such as VMFS or NFS.
A VMDK is a virtual machine disk file that stores the data of a VM. It can be up to 62 TB in size in vSphere 6.0.
1. Thick Provision Lazy Zeroed: the default format when a virtual disk is created. All physical space is allocated to the VM at creation time, but blocks are zeroed only on first write.
2. Thick Provision Eager Zeroed: the disk type required by VMware Fault Tolerance. All required disk space is allocated and zeroed at creation time, so it takes longer to create than the other formats.
3. Thin Provision: provides on-demand allocation of disk space to a VM. The disk grows as the data grows, so storage capacity utilization can approach 100% with thin provisioning.
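The practical difference between the three formats is when datastore space is consumed. A hypothetical sketch (an illustrative model with made-up numbers, not a VMware API):

```python
# Illustrative model of datastore space consumed by each disk format
# (hypothetical helper, not a VMware API).

def allocated_gb(disk_format: str, provisioned_gb: int, written_gb: int) -> int:
    """Return datastore space consumed at a point in time."""
    if disk_format in ("thick-lazy", "thick-eager"):
        # Both thick formats reserve the full provisioned size at creation;
        # eager-zeroed additionally zeroes every block up front (slower to create).
        return provisioned_gb
    if disk_format == "thin":
        # A thin disk grows on demand as the guest writes data.
        return written_gb
    raise ValueError(f"unknown format: {disk_format}")

# A 100 GB disk with only 20 GB of guest data written:
for fmt in ("thick-lazy", "thick-eager", "thin"):
    print(fmt, allocated_gb(fmt, provisioned_gb=100, written_gb=20))
```

This is why thin provisioning can push capacity utilization toward 100%: unused provisioned space stays available to other VMs.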
4. What is Storage vMotion?
Storage vMotion is similar to traditional vMotion, except that the virtual disks of a VM are moved from one datastore to another. During a Storage vMotion, a disk's type can also be converted, for example from thick provisioned to thin provisioned.
What’s New in vSphere 6.0
36. What is the VM hardware version for vSphere 6.0?
Version 11.
The Platform Services Controller (PSC) was introduced in vSphere 6.0, which also introduced virtual hardware version 11.
39. How many hosts can a vCenter Server manage in vSphere 6.0?
In vSphere 6.0, a single vCenter Server can manage up to 1,000 hosts, whether installed on Windows or deployed as the vCenter Server Appliance (vCSA).
A Virtual Volume (VVol) is a new VM disk management concept introduced in vSphere 6.0 that enables array-based operations at the virtual disk level. A VVol is created automatically when a virtual disk is created for a VM in the virtual environment.
Standard Edition: includes 1 vCenter Server Standard license, Fault Tolerance for up to 2 vCPUs, vMotion, Storage vMotion, HA, VVols, etc.
Enterprise Edition: same as Standard Edition, with the addition of APIs for Array Integration and Multipathing, DRS, and DPM.
Enterprise Plus: includes all features of the Standard and Enterprise Editions, plus Fault Tolerance for up to 4 vCPUs and 64 GB of RAM and the Distributed vSwitch. It is the most expensive licensing option of vSphere 6.0.
A Content Library is a central repository, shared between vCenter Servers in different geographical locations, where you can store VM templates, ISO images, scripts, etc. and share them across sites.
We can create VM templates once and share them with the company's other geographical locations without recreating them there. Benefits include sharing and consistency, storage efficiency, and secure subscription.
VMFS is the file system used for VMs in VMware vSphere. A VMFS datastore is responsible for storing virtual machine files and can hold large files; a VMFS volume can be up to 64 TB in vSphere 6.0.
VSAN
50. What is vSAN?
Virtual SAN (vSAN) is software-defined storage first introduced in vSphere 5.5 and fully integrated with vSphere. It aggregates the locally attached storage of the ESXi hosts that are part of a cluster and creates a distributed shared storage solution.
Hybrid: uses both flash-based and magnetic disks. Flash devices are used for caching, while magnetic disks provide the storage capacity.
All-Flash: uses flash for both caching and capacity.
54. Are vSAN-ready nodes available in the market?
Yes, vSAN-ready nodes such as VxRail 4.0 and 4.5 are available. VxRail combines a minimum of 3 servers in a cluster and can scale up to 64 servers.
To configure vSAN, you need a minimum of 3 ESXi hosts in a vSAN cluster. With only 3 hosts, the cluster cannot tolerate more than one host failure.
56. What is the maximum number of ESXi hosts allowed in a vSAN cluster?
A vSAN cluster supports up to 64 ESXi hosts.
57. How many disk groups and how many magnetic disks per disk group are allowed?
A maximum of 5 disk groups is allowed on an ESXi host that is part of a vSAN cluster, with a maximum of 7 magnetic disks and 1 SSD per disk group.
58. What types of storage can we use in a virtual environment?
Network File System (NFS) is a file sharing protocol that ESXi hosts use to communicate with a NAS device. NAS is a specialized storage device that connects to a network and provides file access services to ESXi hosts.
Raw Device Mapping (RDM) is a file stored in a VMFS volume that acts as a proxy for a raw physical device.
RDM enables you to store virtual machine data directly on a LUN. RDM is recommended when a VM must
interact with a real disk on the SAN.
An iSCSI SAN consists of an iSCSI storage system containing one or more storage processors. The TCP/IP protocol is used for communication between the host and the storage array. An iSCSI initiator is configured on the ESXi host; it can be hardware based (dependent or independent) or software based, known as the iSCSI software initiator.
62. What is the format of iSCSI addressing?
iSCSI nodes are addressed by IQN (iSCSI Qualified Name), in the form iqn.yyyy-mm.reversed-domain-name:unique-name, for example iqn.1998-01.com.vmware:esxi-host01. (EUI-format names are also supported.)
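iSCSI names normally follow the IQN convention, iqn.yyyy-mm.reversed-domain:unique-name. A quick hypothetical checker (the example names are illustrative, not real targets):

```python
import re

# Hypothetical IQN (iSCSI Qualified Name) format check:
# iqn.<year>-<month>.<reversed domain>[:<unique name>]
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def is_valid_iqn(name: str) -> bool:
    return IQN_PATTERN.match(name) is not None

print(is_valid_iqn("iqn.1998-01.com.vmware:esxi-host01"))  # True
print(is_valid_iqn("eui.0123456789abcdef"))                # False: EUI naming, not IQN
```

The year-month portion records when the naming authority registered the domain, which is why VMware initiators typically start with iqn.1998-01.com.vmware.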
vApp
64. What is vApp?
A vApp is a container in which multiple VMs can be packaged and managed as a multi-tiered application. For example, a web server, database server, and application server can be grouped as a vApp, and their power-on and power-off sequence can be defined.
We can configure several settings for a vApp, such as CPU and memory allocation and the IP allocation policy.
Miscellaneous
66. What is VMware DRS?
DRS stands for Distributed Resource Scheduler. It automatically balances available resources among the hosts in a cluster using resource pools. With the help of vMotion, DRS can move VMs from one host to another to balance the resources available to the VMs.
Share: a value that specifies the relative priority of a VM's access to a given resource.
Limit: a ceiling on CPU cycles or host physical memory that the VM's consumption cannot exceed.
Reservation: a guaranteed amount of CPU or memory that must be available for a VM to power on.
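The three controls above interact in a simple way for a single VM: the reservation is a floor, the limit is a ceiling, and the share-based entitlement sits between them. A simplified, hypothetical model (not the real VMkernel scheduler):

```python
# Simplified, hypothetical model of how reservation and limit clamp a VM's
# share-based CPU entitlement. Not the actual DRS/VMkernel algorithm.

def entitlement_mhz(share_based_mhz: float, reservation_mhz: float, limit_mhz: float) -> float:
    """Reservation is the guaranteed floor, limit is the hard ceiling."""
    return min(limit_mhz, max(reservation_mhz, share_based_mhz))

print(entitlement_mhz(share_based_mhz=500, reservation_mhz=1000, limit_mhz=2000))   # 1000.0: floor applies
print(entitlement_mhz(share_based_mhz=2500, reservation_mhz=1000, limit_mhz=2000))  # 2000.0: ceiling applies
```

Shares only matter under contention; when resources are plentiful, a VM can consume up to its limit regardless of its share value.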
An alarm is a notification that appears when an event occurs. Many default alarms exist for inventory objects, and alarms can be created and modified using the vSphere Web Client.
69. Which hot-pluggable devices can be added while a VM is running?
While a VM is running you can hot add virtual disks, network adapters, and USB devices, and, if hot add is enabled for the VM, CPU and memory.
A snapshot is a copy of a VM at a point in time that can be used as a restore point. Snapshots are typically taken before an upgrade or software installation. For better performance, a snapshot should be deleted after the task is complete.
73. What is vMotion, and what is the main purpose of using it in a virtual environment?
vMotion is a prominent feature of VMware vSphere used to live migrate running VMs from one ESXi host to another without downtime. In vSphere 6.0, a VM can be migrated between hosts and between datastores at the same time.
A clone is a copy of a virtual machine. Cloning saves time when multiple VMs with the same configuration are required. A template is a master copy of an image created from a VM that can later be used to create many clones. After a VM is converted to a template, it cannot be powered on or edited.
Network heartbeat
Datastore heartbeat
When HA is enabled in a cluster, all hosts take part in an election to select a master host. The host with the highest number of mounted datastores is elected master; all other hosts remain slaves.
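The election rule described above (most mounted datastores wins) can be sketched as a one-liner; host names and counts here are made up for illustration, and real HA breaks ties by other criteria:

```python
# Hypothetical sketch of the HA master-election rule described above:
# the host with the greatest number of mounted datastores becomes master.
# (Real vSphere HA also has tie-breaking rules not modelled here.)

def elect_master(hosts: dict) -> str:
    """hosts maps host name -> number of mounted datastores."""
    return max(hosts, key=hosts.get)

cluster = {"esxi01": 4, "esxi02": 7, "esxi03": 5}
print(elect_master(cluster))  # esxi02: it mounts the most datastores
```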
VMware Tools is a suite of utilities that enhances the performance of a VM in areas such as graphics, mouse/keyboard movement, the network card, and other peripheral devices.
DPM stands for Distributed Power Management, a feature of VMware DRS that monitors resource demand in a cluster. When demand decreases due to low usage, DPM consolidates workloads and shuts down hosts that are not needed; when demand increases, it automatically powers the unused hosts back on.
79. What is ESXi Shell?
The ESXi Shell is a command-line interface used for repair and diagnostics of ESXi hosts. It can be enabled or disabled via the DCUI or vCenter Server, and accessed locally or via SSH.
I hope you have enjoyed reading this post. Thanks for reading! Be social and share it to social media if you feel
worth sharing it.
Question: Did the Jungle Book movie help you recollect your Sunday childhood memories? (Indian kids loved it in the year 1993.)
Answer:
Wait, wait... this is supposed to be a VMware interview, but the interviewer asked you a Jungle Book question.
Question: As part of a data center network device upgrade/change, someone changed the vCenter IP address. How do you tackle this scenario as a VMware administrator? What technical plan would you follow for this change record? (ITIL process)
Answer:
Hint: the interviewer is looking at your technical direction and plan along with ITIL change management procedures.
We may think that changing an IP address is an easy job: go to the vCenter VM console (in most cases) or the remote console for physical servers and modify the network adapter settings. But what happens to your ESXi servers, NSX VMs, and Update Manager? Will they communicate with your vCenter Server on the new IP directly, without any modification? Here is a detailed technical plan to answer this question.
Create backups of the vCenter Server and the underlying SQL database as a backout plan.
Set DRS to manual mode to avoid anything moving around.
Identify the ESXi host running the vCenter VM and connect directly to that host with the vSphere Client. Remember that vCenter is going to disconnect and you will no longer be able to manage it via the vSphere Client.
Close any sessions you have open to the vCenter Server (Web Client and vSphere Client sessions).
Open a console window to the vCenter Server by way of the ESXi host.
Change the IPv4 address and IPv4 gateway as per the new networking configuration.
Uninstall the Update Manager software from the VM (it is sometimes installed on a server other than vCenter).
NOTE: there is an easier method to update the vCenter IP address in Update Manager via the command line (we will discuss it in future posts).
NSX requires attention, as vCenter re-registration is a complex procedure; leave this to the network specialists to provide a technical plan.
Reconnect the ESXi hosts so they use the new vCenter IP address for communication and agent installation.
Finally, I have tried to cover most of the items related to a vCenter IP address change from my experience and knowledge, but do not treat this as a final technical plan. You need to review your own infrastructure to plan better change records (CRs) per the ITIL procedure.
VMware Interview Question No.3
by govmlab | Aug 12, 2016 | Virtual_Networking | 0 comments
In case the vPorts mentioned in the diagram are not visible properly, please find the vPort details below:
Answer :
VMs running on Scenario-1 will perform better than VMs running on Scenario-2.
To understand the answer to this question, we first need to understand what the port ID based teaming policy is and how it works.
"Route Based on Originating Virtual Port ID" is one of the VMware NIC teaming mechanisms, used for bandwidth aggregation and network redundancy in case of an uplink failure.
Every VM and VMkernel port on a vSwitch is connected to a virtual port. Whenever the vSwitch receives network traffic from either of these entities, it assigns the virtual port to one of the uplinks in the NIC team and forwards the traffic onto the wire.
In the port ID based algorithm, a vPort is assigned to an uplink based on a hash of the port ID and the number of active adapters in the NIC team. The VMkernel does not consider standby adapters when calculating the hash, because no traffic is sent on a standby adapter while all active adapters in the team are alive.
The value of the modulus (port ID modulo the number of active uplinks) identifies the uplink port in the team to which that specific vPort's traffic is mapped.
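The mapping just described (vPort ID modulo the number of active uplinks) can be sketched as follows. The vPort IDs are illustrative, and the exact VMkernel hash is internal; the modulo is the commonly described behaviour:

```python
# Sketch of "Route Based on Originating Virtual Port ID" uplink selection:
# uplink index = vPort ID % number of ACTIVE uplinks. Standby adapters are
# excluded from the calculation, as explained above.

def select_uplink(vport_id: int, active_uplinks: int) -> str:
    return f"vmnic{vport_id % active_uplinks}"

# Scenario-2 style team: 5 active uplinks, one VM on vPort 18330.
print(select_uplink(18330, 5))  # vmnic0: a single VM stays pinned to one uplink

# Several VMs on consecutive vPorts spread across the team:
for vport in range(18330, 18335):
    print(vport, "->", select_uplink(vport, 5))
```

This is the key to the two scenarios: one VM maps to exactly one uplink regardless of team size, while many VMs with different vPort IDs spread across the active uplinks.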
So let's understand Scenario-1.
As explained above, we have 3 active adapters and 2 standby adapters in the team, so the hash calculation only considers 3 adapters; standby adapters are not included.
CONCLUSION:
VM traffic is distributed across all the active uplinks in the team, so the total bandwidth available for VM traffic is 3 Gbps.
Now Scenario-2: we have 5 active adapters in the team, so all 5 uplinks are considered during the hash calculation.
VM1 -> vPort -> 18330
CONCLUSION:
Even with 5 uplinks in the team, the single VM's traffic is forwarded only to vmnic0. The other 4 uplinks sit idle, and the VM can only consume 1 Gbps of bandwidth even though a total of 5 Gbps is available in the team.
Most of you said Scenario-2 is better, but I think you now have clarity on why Scenario-1 performs better in this case.
There are 3 ESXi hosts with different CPU configurations, as shown in the diagram above.
A CPU-intensive VM runs on each host, executing a CPU workload that triggers 4 independent processes with IPC (inter-process communication) disabled.
Out of these 3 scenarios, in which scenario will the VM perform best, and most importantly, why?
1. The VM running on Host3 will perform better because the VM vCPU topology is identical to the physical CPU topology.
2. The VM running on Host2 will perform better due to more cores assigned per socket at the VM level.
3. The VM running on Host1 will perform better due to more physical cores per socket at the host level.
4. None of the above.
Option 1: The VM running on Host3 will perform better because the VM vCPU topology is identical to the physical CPU topology.
Explanation:
The vCPU allocation at the VM level has nothing to do with the physical CPU topology of the ESXi host. Virtual sockets and virtual cores at the VM level translate into a number of vCPUs, which the VMkernel CPU scheduler schedules onto physical cores. Having an identical topology does not increase VM performance.
Option 2: The VM running on Host2 will perform better due to more cores assigned per socket at the VM level.
Explanation:
Assigning more cores per socket at the VM level makes no difference to VM performance because of the abstraction layer. It only matters when the guest OS has socket limitations.
Option 3: The VM running on Host1 will perform better due to more physical cores per socket at the host level.
Explanation:
Since all the processes are independent of each other, sharing memory between processes is not a factor in this scenario. That is why having more cores per socket at the host level does not improve VM performance in this specific case.
Option 4 (correct): None of the above.
Explanation:
All 4 processes triggered by the CPU-intensive application are independent of each other, which means no memory is shared between them, so NUMA optimization and the concept of memory locality do not apply in this specific scenario. From the VMkernel's perspective, there are four independent processes, each requiring a core for execution. If the VMkernel can find 4 dedicated cores, it will schedule each process on its own core, irrespective of the socket and core combination at the VM or host level.
How will the physical switch process VM packets coming from the vSwitch onto the wire?
1. The physical switch will mask all VM MAC addresses with the uplink MAC address (similar to a NAT implementation).
2. The virtual switch will mask all outgoing VM MAC addresses with the uplink MAC address.
3. The physical switch will learn the VM MAC addresses and update its MAC table with only the VM MAC addresses.
4. The physical switch will have both the VM MAC addresses and the uplink MAC address in its MAC table.
ANSWER:
3) The physical switch will learn the VM MAC addresses and update its MAC table with only the VM MAC addresses.
EXPLANATION:
In physical networking, Ethernet frames leaving a host carry the MAC address of the NIC installed in that host. When the switch receives a frame on the switch port to which the NIC is connected, it updates its MAC table with the source NIC's MAC address and the port on which the frame was received.
In virtual networking, the MAC addresses of the physical NICs (uplinks) installed in the ESXi host have no significance. The reason is that the VMkernel configures every uplink port connected to a vSwitch in promiscuous mode. Once an uplink is in promiscuous mode, it behaves like a pass-through device that forwards all frames coming from the virtual machines to the physical switch port without any modification.
Since virtual machine frames are not masked with the uplink port's MAC address, the physical switch receives each frame as if it were talking directly to the virtual machine. That is why the physical switch learns the VM MAC addresses and maps them to the respective ports in its MAC table.
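The learning behaviour in the explanation can be sketched as a MAC table keyed by source address; the MACs and port numbers below are made up for illustration:

```python
# Sketch of physical-switch MAC learning: the switch maps each frame's
# SOURCE MAC to the port it arrived on. Because the ESXi uplink passes VM
# frames through unmodified, the switch learns the VM MACs, not the uplink's.

def learn(mac_table: dict, frames: list) -> dict:
    """frames: (source_mac, ingress_port) pairs in arrival order."""
    for source_mac, port in frames:
        mac_table[source_mac] = port
    return mac_table

table = learn({}, [
    ("00:50:56:aa:00:01", 3),  # VM1 frame arriving via the uplink on switch port 3
    ("00:50:56:aa:00:02", 3),  # VM2 frame arriving via the same uplink/port
])
print(table)
```

Note that both VM MACs map to the same physical port: many VM entries behind one uplink is exactly what the switch's MAC table looks like in practice.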
Issue
While performing a vMotion, the operation fails at 14% with the error:
Migrate virtual machine: A general system error occurred: Migration to host failed with error Connection closed by remote host, possibly due to timeout (0xbad003f).
Scenarios
Scenario 1: Your management network and vMotion network are in the same subnet using the same physical NIC.
Consider the case where the management network and vMotion network are in the same subnet and you have assigned a VLAN ID to the vMotion network: the operation fails at 14%.
My first point is to avoid using the same IP subnet for both the management and vMotion networks. If you use the same subnet, all vMotion traffic will be forwarded to the physical NIC connected to the management network, because by default all traffic from VMkernel port groups in the same subnet is forwarded to the first NIC configured on the ESXi host for that subnet, which is obviously the management NIC.
If you still stick to the plan of using the same subnet, make sure you have not assigned any VLAN ID to the vMotion port group.
What happens when we assign a VLAN to the vMotion port group? The vMotion vmknic tries to communicate with the default gateway, and since the default gateway is not tagged with the VLAN ID chosen for vMotion, the operation fails.
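One quick way to spot the misconfiguration described above is to check whether the two vmknic addresses fall in the same subnet. The addresses below are hypothetical; substitute your own vmk0/vmk1 configuration:

```python
import ipaddress

# Check whether the management and vMotion vmknics share an IP subnet.
# Hypothetical addresses; replace with your actual vmk0/vmk1 settings.

mgmt = ipaddress.ip_interface("192.168.10.11/24")     # vmk0, management
vmotion = ipaddress.ip_interface("192.168.10.21/24")  # vmk1, vMotion

if mgmt.network == vmotion.network:
    print("Same subnet: do not tag a VLAN on the vMotion port group,")
    print("or better, move vMotion to a dedicated subnet.")
else:
    print("Separate subnets: safe to VLAN-tag the vMotion port group.")
```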
Scenario 2: Your management network and vMotion network are in the same subnet using different physical NICs (possibly on different vSwitches as well).
The comments in the scenario above apply here as well. It doesn't matter whether you have created a new vSwitch, a new port group, or a dedicated physical NIC for the vMotion network: if your management network is in the same subnet, do not assign a VLAN ID to the port group.
Scenario
You have a single VMDK file of 200 GB containing two logical volumes, C and D, of 100 GB each. Suppose you need to add another 100 GB to the D drive, making it 200 GB. What would you do?
Change the VMDK size to 300 GB (the existing 200 GB plus the required space) using the vSphere Client.
Log in to the VM and ensure that the added disk space is visible to the VM as 'Unallocated' space.
Execute the commands below in a command prompt:
o diskpart
o list volume
o select volume
o extend
3. CentOS network interface is not detected after a VMware clone?
Symptom:
The eth0 interface is not present on a CentOS VM after cloning; only the loopback interface is available. If you try to bring up the interface manually (using the command ifup eth0 or the ifup-eth0 script), you receive the error below.
Root Cause:
When you clone a CentOS VM from a template, a new NIC is created for the cloned VM; in other words, a new MAC address is generated for its NIC. This change happens only on the VMware side, and no modification is made inside CentOS. The kernel is therefore still looking for the NIC with the old MAC address, and bringing up the interface fails.
Resolution:
1. Update the existing Ethernet configuration file to reflect the new MAC address. Check the new MAC address using the vSphere Client and modify the ifcfg-eth0 configuration with:
vi /etc/sysconfig/networking/devices/ifcfg-eth0
2. Remove the cached udev rule so it is regenerated on boot:
rm -f /etc/udev/rules.d/70-persistent-net.rules
3. Reboot the VM.
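Step 1 above amounts to rewriting the HWADDR line in the ifcfg file. A hedged sketch of that edit as a text transformation; the file contents and MAC addresses are illustrative:

```python
import re

# Hedged sketch of step 1: replace the HWADDR line in an ifcfg-eth0 file's
# text with the new MAC shown in the vSphere Client. Sample contents and
# MAC addresses below are illustrative, not from a real system.

def update_hwaddr(ifcfg_text: str, new_mac: str) -> str:
    return re.sub(r"(?m)^HWADDR=.*$", f"HWADDR={new_mac}", ifcfg_text)

sample = "DEVICE=eth0\nHWADDR=00:50:56:AA:00:01\nONBOOT=yes\n"
print(update_hwaddr(sample, "00:50:56:BB:00:02"))
```

In practice you would read and write the actual ifcfg file, then still remove the persistent-net udev rule and reboot as in steps 2 and 3.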
4. SQL servers hosted in VMs are facing performance degradation. How do you confirm whether it is a SQL-related issue or a VMware-related issue?
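One common starting point (my suggestion, not from the post): check hypervisor-level contention counters, such as CPU ready time in esxtop or vCenter performance charts, before digging into SQL itself. A rough rule of thumb treats sustained ready time above about 10% per vCPU as host-side contention; a hypothetical triage sketch:

```python
# Hypothetical triage helper: esxtop reports %RDY summed across all vCPUs,
# so normalise per vCPU before applying the common ~10% rule of thumb.
# The threshold is a rough heuristic, not an official VMware limit.

def cpu_contention_suspected(rdy_percent_total: float, vcpus: int, threshold: float = 10.0) -> bool:
    return (rdy_percent_total / vcpus) > threshold

print(cpu_contention_suspected(45.0, 4))  # True: ~11.25% per vCPU, look at the host first
print(cpu_contention_suspected(12.0, 4))  # False: ~3% per vCPU, look inside SQL Server
```

If ready time, co-stop, ballooning, and storage latency all look healthy at the hypervisor level, the degradation is more likely a SQL-side issue (queries, indexes, memory pressure inside the guest).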