
A First Look at Hyper-V's Virtual Fibre Channel Feature

(Part 1)
by Brien M. Posey [Published on 27 June 2013 / Last Updated on 19 July 2013]

This article discusses the benefits and the limitations of Hyper-V’s new virtual Fibre Channel feature.

If you would like to read the next part in this article series please go to A First Look at Hyper-V's Virtual Fibre Channel
Feature (Part 2).

Introduction

Despite the rapid adoption of server virtualization, some types of physical servers have historically proven
difficult or impossible to virtualize. Among these are servers that depend on Fibre
Channel storage. Although Hyper-V has long been able to connect to Fibre Channel storage at the host level, there
was no provision for connecting virtual machines directly to Fibre Channel storage. That changed in
Hyper-V 3.0 thanks to a new feature called Virtual Fibre Channel. This article discusses the benefits and limitations of
virtual Fibre Channel.


Why Use Virtual Fibre Channel?

The greatest benefit to using virtual Fibre Channel is that it makes it possible to virtualize workloads that could not
previously be virtualized due to their dependency upon Fibre Channel storage. Virtual machines are now able to
directly access Fibre Channel storage in the same way that they could if the operating system were running on a
physical server.

Of course, storage accessibility is not the only benefit of using virtual Fibre Channel. The technology also makes it far
more practical to create guest clusters at the virtual machine level.

Some administrators might be understandably reluctant to use the virtual Fibre Channel feature. After all, Hyper-V
has long supported SCSI pass through, which is another mechanism for attaching a virtual machine to physical
storage. Although SCSI pass through works, it complicates things like virtual machine backups and migrations. That
being the case, I wanted to make sure to say up front that when properly implemented virtual Fibre Channel is a first-
class Hyper-V component. Contrary to rumors, you can perform live migrations on virtual machines that use virtual
Fibre Channel.

Virtual Fibre Channel Requirements

There are a number of different requirements that must be met prior to using the virtual Fibre Channel feature. For
starters, your Hyper-V server must be equipped with at least one Fibre Channel host bus adapter (using multiple host
bus adapters is also supported). Furthermore, the host bus adapter must support N_Port ID Virtualization (NPIV).
NPIV is a virtualization standard that allows a host bus adapter's N_Port to accommodate multiple N_Port IDs. Not
only does the host bus adapter have to support NPIV, but NPIV support must be enabled and the host bus adapter
must be attached to an NPIV-enabled SAN.
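The NPIV relationship described above can be sketched as a small model: one physical N_Port registers several virtual N_Port IDs, each with its own World Wide Port Name (WWPN). Everything here, from the class names to the WWPN values, is illustrative only and is not a real HBA API.

```python
class NPort:
    """A conceptual model of a physical Fibre Channel N_Port with NPIV support."""

    def __init__(self, physical_wwpn: str, npiv_enabled: bool = True):
        self.physical_wwpn = physical_wwpn
        self.npiv_enabled = npiv_enabled
        self.virtual_ports: list[str] = []

    def register_virtual_port(self, wwpn: str) -> None:
        # Without NPIV, the port can present only its single physical WWPN.
        if not self.npiv_enabled:
            raise RuntimeError("HBA does not support or has not enabled NPIV")
        self.virtual_ports.append(wwpn)


hba_port = NPort("20:00:00:25:B5:00:00:0F")
hba_port.register_virtual_port("C0:03:FF:00:00:FF:FF:00")  # e.g. virtual machine 1
hba_port.register_virtual_port("C0:03:FF:00:00:FF:FF:01")  # e.g. virtual machine 2
print(len(hba_port.virtual_ports))  # 2 -- two virtual N_Port IDs on one physical port
```

The key point the model captures is that each virtual machine gets its own port identity on the fabric, even though all of them share the one physical port.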

The requirements listed above are sufficient for allowing Hyper-V to access your Fibre Channel network. However,
your virtual machines must also support Fibre Channel connectivity. This means that you will have to run a supported
operating system within your virtual machines. If you want to connect a virtual machine to virtual Fibre Channel then
the virtual machine must run Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012. No other
guest operating systems are currently supported for use with virtual Fibre Channel.

In case you are wondering, Hyper-V 3.0 does not allow virtual Fibre Channel LUNs to be used as boot media.

Live Migration Planning

As previously stated, the Hyper-V live migration feature is compatible with virtual Fibre Channel. However, facilitating
live migrations does require a bit of planning. As you would expect, the main requirement is that each Hyper-V host to
which a virtual machine could potentially be live migrated must have a compatible host bus adapter that is connected
to the SAN.

The live migration process is made possible by the way that Hyper-V uses World Wide Names (WWNs). Hyper-V
assigns two WWN sets to each virtual Fibre Channel adapter. As you are no doubt aware, migrating a virtual
machine to another Hyper-V host requires Hyper-V to eventually hand off storage connectivity to the destination host.
If each adapter were configured with only a single WWN, Fibre Channel connectivity would be broken during the
hand off process. The use of two distinct WWNs on each adapter solves this problem.

When the live migration process is ready to hand over storage connectivity, it releases one WWN, but not the other.
The destination host establishes Fibre Channel connectivity for the VM by using the WWN that was released by the
original host machine. At that point, both the source host and the destination host maintain Fibre Channel
connectivity, but do so through two different WWNs. Once connectivity has been established by both hosts, the
source host can release its other WWN. This approach allows the virtual machines to maintain Fibre Channel
connectivity throughout the live migration process.
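The handoff described above can be sketched as a minimal state model, assuming the two WWN sets simply alternate between migrations. The class and method names are my own invention for illustration, not Hyper-V APIs, and the WWPN values are made up.

```python
class VirtualFcAdapter:
    """Conceptual model of the two-WWN live migration handoff."""

    def __init__(self, wwpn_a: str, wwpn_b: str):
        self.wwpn_sets = {"A": wwpn_a, "B": wwpn_b}
        self.active_set = "A"    # set currently logged in to the fabric
        self.logged_in = {"A"}   # sets that hold an active fabric login

    def live_migrate(self) -> None:
        spare = "B" if self.active_set == "A" else "A"
        # 1. The destination host logs in using the spare WWN set.
        self.logged_in.add(spare)
        # Both sets are briefly logged in at once, so connectivity never drops.
        assert len(self.logged_in) == 2
        # 2. The source host releases the old set once the destination is up.
        self.logged_in.discard(self.active_set)
        self.active_set = spare


vm = VirtualFcAdapter("C0:03:FF:00:00:FF:FF:00", "C0:03:FF:00:00:FF:FF:01")
vm.live_migrate()
print(vm.active_set)  # B -- the next migration would swap back to set A
```

The invariant the assertion checks is the whole point of the design: at every moment during the migration, at least one WWN set is logged in to the fabric.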

Multipath I/O

Another networking technology that is compatible with virtual Fibre Channel is multipath I/O (MPIO). Multipath I/O is a
storage technology that is designed to provide continuous connectivity to a storage resource by routing network traffic
through multiple network adapters.

Multipath I/O is used as a way of providing fault tolerance and performance enhancements in SAN environments. The
basic idea is that multipath I/O prevents any of the SAN components from becoming a single point of failure. For
example, a server that is connected to SAN storage might use two separate host bus adapters. These adapters
would typically be connected to two separate Fibre Channel switches, before ultimately being connected to a storage
array. The array itself might even be equipped with redundant disk controllers.

Multipath I/O improves performance because storage traffic can be distributed across redundant connections. It also
provides protection against component level failures. If a host bus adapter were to fail for example, the server would
be able to maintain storage connectivity through the remaining host bus adapter.

Hyper-V 3.0 is actually very flexible with regard to the way that MPIO can be implemented. The most common
implementation involves using MPIO at the host server level. Doing so provides the Hyper-V host with highly
available storage connectivity. The virtual machines themselves would be oblivious to the fact that MPIO is in use, but
would be shielded against a storage component level failure nonetheless.

Another way in which MPIO can be used is to configure virtual machines to use MPIO directly. Virtual Fibre Channel
is based on the use of virtualized Fibre Channel Adapters within virtual machines (I will talk more about these
adapters in Part 2). That being the case, you can configure virtual machine level MPIO by creating multiple virtual
Fibre Channel adapters within a virtual machine. Keep in mind, however, that the virtual machine's operating system
must also be configured to support MPIO.
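As a rough conceptual sketch (not the actual Windows MPIO framework), the failover behavior described above might look like this: I/O is spread round-robin across several paths, and traffic simply shifts to the surviving paths when a component fails. The path names are illustrative.

```python
import itertools


class MultipathDevice:
    """Toy model of multipath I/O fault tolerance for one storage LUN."""

    def __init__(self, paths: list[str]):
        self.healthy = set(paths)
        self._round_robin = itertools.cycle(paths)

    def fail_path(self, path: str) -> None:
        self.healthy.discard(path)

    def send_io(self) -> str:
        # Distribute I/O round-robin, skipping any path that has failed.
        if not self.healthy:
            raise ConnectionError("all storage paths are down")
        while True:
            path = next(self._round_robin)
            if path in self.healthy:
                return path


lun = MultipathDevice(["hba1->switch1", "hba2->switch2"])
lun.fail_path("hba1->switch1")  # e.g. a host bus adapter dies
print(lun.send_io())            # hba2->switch2 -- I/O survives on the other path
```
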

Virtual Fibre Channel Technology

In the previous section, I briefly mentioned the idea of virtual Fibre Channel adapters within a virtual machine. A
related component worth introducing now, though I will cover it in more detail in the next article in this
series, is the virtual SAN.

Normally a physical Fibre Channel port connects to a SAN. However, a single host server can contain multiple Host
Bus Adapters, and each host bus adapter can have multiple ports. This is where virtual SANs come into play. A
virtual SAN is a named group of ports that all attach to the same physical SAN. You can think of a virtual SAN as a
mechanism for keeping track of port connectivity.
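That description can be expressed as a simple data model: a virtual SAN is just a named group of physical HBA ports that all reach the same physical fabric. The classes below are purely illustrative; Hyper-V exposes this grouping through the Virtual SAN Manager, not through any such API.

```python
from dataclasses import dataclass, field


@dataclass
class HbaPort:
    wwpn: str
    fabric: str  # the physical SAN this port is cabled to


@dataclass
class VirtualSan:
    name: str
    ports: list[HbaPort] = field(default_factory=list)

    def add_port(self, port: HbaPort) -> None:
        # All ports grouped into one virtual SAN must attach to the
        # same physical SAN, which is the rule the grouping exists to track.
        if self.ports and port.fabric != self.ports[0].fabric:
            raise ValueError("port is attached to a different physical SAN")
        self.ports.append(port)


vsan = VirtualSan("Production")
vsan.add_port(HbaPort("20:00:00:25:B5:00:00:0F", fabric="FabricA"))
vsan.add_port(HbaPort("20:00:00:25:B5:00:00:1F", fabric="FabricA"))
```
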

Conclusion

In this article, I have introduced you to the Hyper-V virtual Fibre Channel feature and have talked about its
requirements and limitations. In the next article in this series, I plan to discuss virtual SANs in more detail and show
you how to actually implement virtual Fibre Channel.

A First Look at Hyper-V's Virtual Fibre Channel Feature


(Part 2)
by Brien M. Posey [Published on 17 July 2013 / Last Updated on 17 July 2013]

This article concludes the series on Hyper-V’s virtual Fibre Channel feature by examining the process for providing a
virtual machine with Fibre Channel connectivity.

If you would like to read the first part in this article series please go to A First Look at Hyper-V's Virtual Fibre Channel
Feature (Part 1).

Introduction

In the first part of this article series, I explained the basics of working with the new Hyper-V virtual Fibre Channel
feature in Windows Server 2012. In this article, I want to conclude the discussion by showing you how to actually use
virtual Fibre Channel.


Building a Virtual SAN

The process of setting up virtual Fibre Channel starts with building a virtual SAN. The easiest way to accomplish this
is to open the Hyper-V Manager, right click on the listing for your Hyper-V server in the console tree, and then choose
the Virtual SAN Manager command from the shortcut menu, as shown in Figure A.

Figure A: Right click on your Hyper-V host server and choose the Virtual SAN Manager command from the shortcut menu.

At this point, the Hyper-V Manager will display the Virtual SAN Manager dialog box, shown in Figure B.

Figure B: The Virtual SAN Manager is used to create a virtual SAN.

As you can see in the figure above, the top left portion of the window lists the virtual Fibre Channel SANs. Right now
the dialog box displays a generic virtual SAN called New Fibre Channel SAN. The server that I am using is a lab
machine without a Fibre Channel Host Bus Adapter, but if a Host Bus Adapter were installed in the server, you would
see it listed among the virtual Fibre Channel SANs.

Hyper-V also gives you the ability to create a new virtual Fibre Channel SAN by choosing the Virtual Fibre Channel
SAN option and clicking the Create button.

The other item that appears in the screen capture shown above is the World Wide Names. As you may recall from
the previous article, Hyper-V uses multiple world wide names as a way of facilitating live migrations without losing
virtual Fibre Channel connectivity in the process. That being the case, you must provide Hyper-V with a range of
world wide names that it can use for this purpose.

If you click on the World Wide Names container, Hyper-V will display the interface that is shown in Figure C. As you
can see in the figure, this interface allows you to specify the range of World Wide Port Names that can be
dynamically assigned to virtual Fibre Channel ports.

Figure C: The Virtual SAN Manager allows you to define a range of World Wide Port Names that can be dynamically
assigned to virtual Fibre Channel ports.

The other setting that you can see in the figure above is the World Wide Node Name address. This address is
dynamically assigned to any new Fibre Channel ports that you create. It is worth noting, however, that modifying this
setting does not have any impact on previously existing Fibre Channel ports. If you want to change the World Wide
Node Name for an existing Fibre Channel port, you will have to remove the port, configure the World Wide Node
Name using the interface shown in the figure above, and then recreate the port.
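The idea of dynamically assigning port names from a configured range can be sketched as a simple allocator that hands out the next address in the range. The range values below are made up for illustration; check the Virtual SAN Manager on your own host for the actual defaults.

```python
class WwpnPool:
    """Toy allocator that hands out WWPNs sequentially from a configured range."""

    def __init__(self, minimum: int, maximum: int):
        self.next_wwpn = minimum
        self.maximum = maximum

    def allocate(self) -> str:
        if self.next_wwpn > self.maximum:
            raise RuntimeError("WWPN range exhausted")
        value = self.next_wwpn
        self.next_wwpn += 1
        # Format as the familiar colon-separated 8-byte WWPN notation.
        raw = f"{value:016X}"
        return ":".join(raw[i:i + 2] for i in range(0, 16, 2))


pool = WwpnPool(0xC003FF0000FF0000, 0xC003FF0000FFFFFF)
print(pool.allocate())  # C0:03:FF:00:00:FF:00:00
```
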

Linking a Virtual Machine


Once you have created a virtual SAN, the next step in the process is to link a virtual machine to the virtual SAN. To
do so, right click on the virtual machine for which you want to provide Fibre Channel connectivity and select the
Settings command from the resulting shortcut menu. Upon doing so, Hyper-V will display the Settings dialog box for
the virtual machine. You can see an example of this in Figure D.

Figure D: A virtual machine’s Settings page lets you modify the virtual machine’s configuration.

Next, select the Add Hardware container, as shown in the figure above, and then select the Fibre Channel Adapter
option from the list of available hardware (shown on the right side of the figure). Click the Add button and you will be
taken to the Fibre Channel Adapter page, shown in Figure E.

Figure E: The Fibre Channel Adapter screen is used to configure the virtual Fibre Channel adapter.

As previously mentioned, this Hyper-V server does not have a Fibre Channel host bus adapter installed. That is the
reason why the Virtual SAN drop down list shown in the figure above shows a status of Not Connected. If an adapter
were present in the server, we would be able to select the host bus adapter’s corresponding virtual SAN from the
drop down list, similar to what you can see in Figure F.

Figure F: Select your virtual SAN from the drop down list.

Once the virtual SAN has been selected, you can optionally configure the port addresses. As previously mentioned,
virtual machines maintain two sets of addresses as a way of facilitating live migrations without losing connectivity to
Fibre Channel storage. Figure E shows the World Wide Node Names and the World Wide Port Names that are being
used. Although these addresses are grayed out in the figure, you can modify them by clicking the Edit Address
button. When you have finished defining the addresses, click the Create Addresses button, shown in Figure G.

Figure G: When you finish setting the addresses, click the Create Addresses button.

Conclusion

Now that I have shown you how virtual Fibre Channel works and how to configure it, the next obvious question is
whether you should be using this feature. Personally, I prefer to stick with VHD and VHDX based virtual hard disks
on traditional storage whenever possible, because I find that approach easier to manage.

Of course some might be quick to point out that Hyper-V has supported the use of SCSI pass through disks for years
and that virtual Fibre Channel could be thought of as a new type of pass through disk. While this is certainly true, it is
important to consider why pass through disks were used in the first place.

Pass through disks have historically been used for two reasons. One reason was to isolate a virtual machine’s
contents to a specific physical disk. The second reason was because using a pass through disk allowed a virtual
machine to get around the storage capacity limitations of the VHD hard disk format. In Windows Server 2012 Hyper-V
however, capacity becomes much less of an issue because the VHDX virtual hard disk type offers much greater
capacity than VHD based virtual hard disks.

Even so, this does not mean that you should never use virtual Fibre Channel. Sometimes there is simply no getting
around it. This is particularly true if you want to virtualize physical servers that need to maintain connectivity to SAN
storage.

If you would like to read the first part in this article series please go to A First Look at Hyper-V's Virtual Fibre Channel
Feature (Part 1).
