
Course Transcript

Server 2016 - Install, Store, and Compute:


Implementing Failover Clustering
Implement Failover Clustering
1. Course Introduction

2. Workgroup and Multi-domain Clusters

3. Deploying a Workgroup Cluster

4. Deploying a Single-domain Cluster

5. Quorum Configuration

6. Cluster Networking

7. Restoring a Single Node

8. Restoring a Cluster Configuration

9. Cluster Storage

10. Cluster-Aware Updating

11. Cluster Operating System Rolling Upgrade

12. Clustered Shared Volumes (CSVs)

13. Deploying Failover Cluster without Network Name

14. Scale-Out File Server (SOFS)

15. SOFS Versus Clustered File Server

16. Guest Clustering

17. Clustered Storage Spaces

18. Storage Replica

19. Cloud Witness

20. Configuring Guest Clusters with Shared VHDX

Practice: Failover Clusters


1. Exercise: Determining Cluster Strategies
Course Introduction
Learning Objective
After completing this topic, you should be able to
◾ start the course

1.
[Course title: Server 2016 - Install, Store, and Compute: Implementing Failover Clustering.] Hi. I am Michael Murphy,
[Technical Instructor] and in this course, I will examine the high availability solutions for Windows Server 2016
provided by Microsoft's software implemented Failover Clustering.
Workgroup and Multi-domain Clusters
Learning Objective
After completing this topic, you should be able to
◾ identify implementation requirements for workgroup, single, and multi-domain
clusters

1.
[Topic title: Workgroup and Multi-domain Clusters. The presenter is Michael Murphy.] In this video, we want to talk
about the Failover Cluster Configurations available in Server 2016 because they are radically divergent from the
options that we have had in the past. In the past, all members of a cluster had to be domain joined, right? It was a
requirement that they belonged to the same Active Directory domain. Then in Windows Server 2012 R2, we got Active Directory-detached clusters, domain-joined nodes managed through a cluster access point registered in DNS rather than being represented as computer objects in Active Directory.

Server 2016 expands the options again with workgroup clusters, whose nodes aren't domain joined at all, and multi-domain clusters, so I can actually have cluster nodes that are members of different domains. [Requirements.] Now, why might you want to do such a thing really becomes
the question I think, because it does fly in the face of so much of our traditional management strategies. But I think
about resources that are owned by more than one group or more than one business partner. We have a shared
e-commerce database. We have a shared HR database. We have whatever it happens to be. And so the
management of those resources, traditionally, if I needed to cluster the solution, the hardware resources at least fell to one group and the other group was excluded. And so there's the opportunity today to bring that together. Now, single-domain cluster requirements are what they've always been, right? All the servers have to be running the same version of the operating system, so Windows Server 2016, and the same edition, Standard with Standard, Datacenter with Datacenter. They've got to have the Failover Clustering feature installed, they want to use Microsoft-certified, logo'd hardware, all right? That's what we always want to use with these things. And they've got to pass the cluster validation tests.

Now beyond that, we also want to see two NIC cards in the machine, right, one for the cluster communications
network segment and one for client access. While that is not technically a requirement, you can have cluster nodes
with a single network interface. It is certainly best practice. For workgroup or multi-domain clusters, same
requirements, but I've got to have a local account provisioned on all nodes in the cluster with the same username
and password. It's got to be a member of the local administrators group. Now the reality is that your best bet is to use
the local admin account, the built-in one, on all of them, right? Set the password on the built-in admin account to the
same password and use that one. If you're going to do it with an account that you create and delegate permissions
to, then you have to do some tweaking there. The LocalAccountTokenFilterPolicy value has to be set to 1 in the registry of each of these machines. So I go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System, I add the LocalAccountTokenFilterPolicy DWORD value, and I set it to 1.

Now of course, you could do that with the New-ItemProperty PowerShell cmdlet, right, as in the sketch above. You could do it that way too, but you've got to set that policy if you're going to use an account that you create and define. These Active Directory detached clusters have no associated computer objects, but we do create what's called a cluster access point for the management of the cluster. The primary DNS suffix must be set on each node. And in a multi-domain cluster, the DNS suffixes for all domains in the cluster must be present on all cluster nodes. So I've got to have
additional DNS suffixes assigned to that machine. And that's a look at the potential cluster configurations in Server
2016, a variety that we have never had before.
Deploying a Workgroup Cluster
Learning Objective
After completing this topic, you should be able to
◾ deploy a workgroup cluster

1.
[Topic title: Deploying a Workgroup Cluster. The presenter is Michael Murphy. The Server Manager is open and it's
divided into two sections. The first section is the navigation pane where Local Server is selected. In the second
section, two subsections named PROPERTIES and EVENTS are displayed. The PROPERTIES subsection displays
information such as Computer name, Workgroup, and Remote management.] In this video, we're going to create a
workgroup cluster. So a workgroup cluster exists on machines that are not domain joined, right? And we've always had a requirement in the past to have cluster members joined to the same Active Directory domain. Windows Server 2012 R2 loosened that with Active Directory-detached clusters, and Server 2016 goes further with workgroup clusters and multi-domain clusters. So today, I can actually have servers that aren't domain joined at all, or are joined to different
domains, but in the same cluster. So we can see here, this cluster is going to be created out in the DMZ Workgroup.
And that's not an uncommon thing, right? Out in the DMZ, I don't want domain joined resources. Maybe I'm
concerned about exposing Active Directory out there, because that's the area that's most vulnerable to attack. So
we've disjoined this machine from the domain. It's in the DMZ Workgroup.

There are other prerequisites that must be accomplished before we can do this. So the first thing is, on the local
server, [He switches to the local server where all users are listed. Administrator and MJMurphy are listed as users
and MJMurphy is selected.] every server that's going to be a member of the cluster, I have to create a user account.
And I have to make that user account a member of the Administrators group, [He right-clicks the user MJMurphy and
in the MJMurphy Properties dialog box, selects the Member Of tab.] the local Administrators group on this machine.
Now, legitimately, you can use the built-in Administrator account. And if you use the built-in Administrator account,
you can actually skip a step. But because we used this MJMurphy account that I created for this task and made a
member of Administrators group, we have to do this. [He switches to Windows PowerShell.] And what this is, of
course, is a PowerShell command for creating and placing a parameter value into a registry key. [The command
new-itemproperty -path HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System -Name LocalAccountTokenFilterPolicy -Value 1 is displayed.] And we do that with the new-itemproperty cmdlet. And specify
the path switch and the path to the place in the registry where that should be created. In this example,
Windows\CurrentVersion\Policies\System. The Name is the LocalAccountTokenFilterPolicy, and we set the value to
1.

We enable that policy. And that's going to enable our not built-in account to be able to do the job. [The output
displayed is: LocalAccountTokenFilterPolicy :1, PSPath:
Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System,
PSParentPath:
Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies,
PSChildName: System, PSDrive: HKLM, PSProvider: Microsoft.PowerShell.Core\Registry ] Now, in addition to the
filter policy and the local account, [He switches to the local server.] we have to set the DNS suffix. [In the Domain
Properties dialog box, he selects the option Internet Protocol Version 4 (TCP/IPv4) and the Internet Protocol Version
4 (TCP/IPv4) Properties dialog box opens. In this box, he clicks the Advanced button and the Advanced TCP/IP
Settings dialog box opens.] And here, and so you can see we're in the local TCP/IP properties of the NIC card [In the
DNS tab.] and we've appended the DNS suffix to the local NIC. [EarthFarm.com.] And to the cluster communications
NIC. [He right-clicks the Private icon and clicks Properties in the shortcut menu. He double-clicks the option Internet
Protocol Version 4 (TCP/IPv4).] And we can see that here. [In the Advanced TCP/IP Settings dialog box.] And this, of
course, is the name of the Active Directory domain [EarthFarm.com] that we will not be a part of. Once we've got all
of our prerequisites in place, we can now go ahead and create the workgroup cluster. [He switches to the Hyper-V
Manager window. The window is divided into three sections. In the first section, two servers, HV9424 and HV9423
are displayed. The second section displays three subsections named Virtual Machines, Checkpoints, and
Server9476. In the Virtual Machines subsection, Server9476 is selected.] This is the other node in the cluster. And
you can see on this node, I've already run the validation test, right? The first thing that we do is we run the validation
test. [The Summary tab in the Validate a Configuration Wizard.] And if we look through these, we can see that nothing failed. But I do want to call out a couple of warnings that I got here and tell you why I am dismissing them
and moving ahead.

And so the first thing here is that we're warned that we're not a member of the domain, but we know that, right? Then
we've got some concerns around the DNS Suffix Search List, but I've validated that. That's right on every NIC. So I'm
going to go ahead and create the cluster. The nodes do not have the same software updates. Now, guys, you may
have run into this. There's a problem, it's a known problem. There's a number of discussion threads on
social.technet.microsoft.com. And it's a cumulative update patch from January that's having a problem on a number
of servers. And it's funny because the two VMs are identical, [He selects the Create the cluster now using the
validated nodes checkbox.] basically, completely identical. Rolled out at the same time. I haven't done anything to the one that I haven't done to the other.

And the fix for this is a long and complicated one. So I'm not worried that I am one software update out of step
between the two. I'm going to go ahead and Create the cluster using the nodes that we specified. [In the Create
Cluster Wizard.] So we come in here, give the Cluster Name, and we'll call the cluster DMZ. We're going to use the .77 IP address on each of the subnets. [10.1.81.77 and 10.19.4.77.] And we can see that's finished up. We've
created the cluster DMZ, we'll Finish that up. And the Failover Cluster Manager updates. And I can see my DMZ
cluster is ready for further configuration and management. [The DMZ is displayed in the Failover Cluster Manager
window.] And this is how we deploy workgroup clusters.
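
If you'd rather script this than click through the wizard, a minimal PowerShell sketch of the same deployment might look like the following. NODE1 and NODE2 are placeholder names; the cluster name and the .77 addresses are the ones from this demo, and -AdministrativeAccessPoint Dns is what keeps the cluster out of Active Directory:

# Run the full validation suite against both workgroup nodes first
Test-Cluster -Node NODE1, NODE2

# Create the workgroup cluster with a DNS-only administrative access point
New-Cluster -Name DMZ -Node NODE1, NODE2 -StaticAddress 10.1.81.77, 10.19.4.77 -AdministrativeAccessPoint Dns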
Deploying a Single-domain Cluster
Learning Objective
After completing this topic, you should be able to
◾ deploy a single domain cluster

1.
[Topic title: Deploying a Single-domain Cluster. The presenter is Michael Murphy. The Failover Cluster Manager
window is open.] In this video, we'll validate and create a domain-joined cluster. Before I create the cluster, [He right-
clicks Failover Cluster Manager, clicks Validate Configuration in the shortcut menu and the Validate a Configuration
Wizard opens.] I'll start by validating that we meet all the requirements, right? And so I'm going to go out here. I'm
going to grab the two servers. These two servers are virtual machines. [SRV9470 and SRV9471.] They are
configured with the same hardware profile. They've each got two network adapters. One for the cluster
communications network segment. One for the production network segment. We're going to run all of the tests, right?
We want to run all the validation tests, not a subset of those. We want to know that we've covered everything. One of
the very last things I did before this was make sure that the machines were all patched. That they were all up to date
with their updates.

That they had all the latest fixes from Microsoft. Not only are they identical machines joined to the same domain, but
they are at the same patch level. And they have the same services installed on both of them. And let's see, what we
want to see in here are validated successes or not applicables. [In the Summary page of the Validate a Configuration
Wizard.] We would want to address any warnings, any errors and if we look through the list, everything looks good.
[He scrolls down the list of all the Results displayed.] Now, we could create the cluster using the validated nodes right
from here. This just launches the new Cluster Wizard. Or if we are not ready to do that, we can finish it up.

And we can then later access Create Cluster with the right-click on the Failover Cluster Manager and so we'll launch
the Cluster Wizard. [The Create Cluster Wizard.] We will go out and grab the two servers, the nodes that we had
validated. SRV9470 and SRV9471. We will give this a name of Cluster1. We'll give it an IP address, right? This will
be the Cluster IP address and that name acts as an administrative access point, right? [10.19.4.99.] We can
reference Cluster1 to apply configuration settings to all nodes in the cluster. The Confirmation dialog box displays the settings that we've made. [The Cluster registration is DNS and Active Directory Domain Services and
the IP address is IPv6 address on 2001:db:8:1::/64 and 10.19.4.99.] That looks good to me. We'll go ahead and we'll
create the cluster. Here, if we view the report, [He clicks the View Report button and the report opens in the Internet
Explorer browser.] everything looks good except that we need a witness. And we can configure the witness disk after the
cluster has been created. And here we can see in the report, everything looks good, right? No appropriate disk could
be found for the witness disk. An appropriate disk was not found to configure the witness disk. [The two errors in the
report.] So as a post-configuration step, we'll go back and we'll add that witness disk.

I can right-click the cluster. [Cluster1.Earthfarm.com] Under More Actions, there is a choice to Configure Cluster
Quorum Settings. [The Configure Cluster Quorum Wizard opens.] We need to select quorum witness. We'll configure
a file share witness. [He selects the option Configure a file share witness.] And we'll specify the path to server 69,
9469. [SRV9469.] And we can see there's my Witness share. [In the Shared Folders section.] I created a share over
there called Witness just for this purpose. [The File Share Path is \\SRV9469\Witness.] And in the confirmation dialog
box, we see the server and the path to the witness share. Cluster managed voting is enabled, so this server, the third file server that we're appointing as the file share witness, will cast a tie-breaking vote in the case of failover
scenarios. Right? Should I failover? The passive partner will require a confirmation from this third voting party, our
tie-breaking vote. We'll Finish that up and we have created a domain-joined cluster [Cluster1.] and designated the file share witness.
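
For reference, roughly the same deployment in PowerShell, using the server and share names from this demo; treat it as a sketch rather than a script you'd paste in blindly:

# Validate, then create the domain-joined cluster with its administrative access point
Test-Cluster -Node SRV9470, SRV9471
New-Cluster -Name Cluster1 -Node SRV9470, SRV9471 -StaticAddress 10.19.4.99

# Post-configuration: designate the file share witness on SRV9469
Set-ClusterQuorum -Cluster Cluster1 -FileShareWitness \\SRV9469\Witness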
Quorum Configuration
Learning Objective
After completing this topic, you should be able to
◾ identify cluster quorum configuration options

1.
[Topic title: Quorum Configuration. The presenter is Michael Murphy.] In this video, we want to discuss quorum, and
try to shed some light on quorum. Quorum is a great word, isn't it? Right it comes to us, of course, from the ancient
days of the Roman senate. When senators said, look, guys, if two-thirds of us aren't here, you can't go passing laws, right? And that's precisely what the quorum does here in a cluster. The votes of the cluster nodes determine whether or not the cluster stays up and running, whether it can even run. So every node in the
cluster gets a vote, and those votes determine whether or not we have enough resources available to keep this thing
up and running. Now what happens if you have an even number of nodes? Suppose I have a four-node cluster?
Well, I don't want to be in a situation where I get a tie vote, so I have a quorum witness, an uninterested third party
that can also cast a vote. And that can be either a designated disk resource, a disk witness in the case of locally
attached clusters with shared storage, or a file share witness in the case of stretch clusters, where data is replicated,
where the storage data is replicated between sites. And that uses a different solution.

That uses the Paxos tag rather than maintaining the cluster database itself as the designated disk resource or disk
witness does. The quorum ensures that the cluster can start and continue to run properly, right? So that if
the active cluster membership changes, if I take a server offline for maintenance, the cluster stays up and running. If I
lose an entire site, right, there's a power outage at our secondary site in a stretch cluster scenario. Well, the subset of
nodes, the ones in the site unaffected by the power outage, continue to run. And this helps avoid what we sometimes
call split brain scenario, or here, we mentioned partitioning a cluster. So that when the second set of servers comes
back online, they don't think that they are still in charge, right? We got to make sure that if any failover happened
during that power outage, that when those nodes come back up and running, they don't necessarily take over as the
active nodes, which they may have been when the power went out.

[Factors] Full function of the cluster depends on, you've got to have network connectivity, right? That's why we're
always going to have two NIC cards in there, and we're going to have a dedicated subnet just for cluster
communications. Each node that hosts clustered roles must have enough capacity to do its job. And I can configure
priority so that in failover scenarios, I can point to which nodes should take over in the event that one node in the
cluster fails, or comes offline for maintenance. [Options] We want to use the typical settings on this. I'll tell you, we're
going to look here at the options for quorum configuration but forget it. You do not want to set these yourself. You
want to let the cluster dynamically manage those votes. With dynamic quorum configuration, we have a far more
resilient cluster scenario than we've ever had before. And that's what we would want to do. We'll also want to add a
quorum witness. And I would say this in all scenarios, even if you've got an odd number of nodes in your cluster. So
you're not worried about tie-breaking votes. We're still worried about, well, what happens if one of those nodes fails?
Now if I started with five, now I'm down to four, now I'm in a scenario where I can have that tie vote again and that's
what I don't want. So in those scenarios, we add the file share witness or the disk witness and remember the
distinction between the file share for stretch clusters.

Disk witness when they're all collocated in the same data center with access to the same shared storage. We
nominate a quorum witness, we put two NIC cards in it and we connect it to the cluster communications network segment as well. And then if my five-node cluster suddenly loses one voting member and I've got dynamic quorum
configured, guess what happens? The file share witness now suddenly gets a vote. And now we're back to a world
where we're not going to have a tie and that's the world we want to live in. Advanced quorum configuration and
witness selection, again, you can do this but I would prefer to dynamically manage those node votes. [Modes] Now
what are the options if you're going to do it manually? Well, simple node majority. Only the nodes have votes. No
quorum witness. Recommended for clusters with an odd number of nodes. Node majority with witness (disk or file share). Nodes have votes. The quorum witness has a vote. Recommended for clusters where you've got an even number of nodes in the
cluster. There's your tie-breaker. Now you can give the witness disk a vote, and no nodes have votes at all. And the
witness disk becomes the sole arbitrator. That can give you some problems. There's actually a number of things that
can go wrong with that. Disk witness, dedicated LUN, stores a copy of the cluster configuration database.
This is the one that we use when all the servers are co-located in the same facility with shared storage. If you're
stretching that cluster, you're going to use an SMB file share. It doesn't store a copy of the cluster configuration database. Instead, in the witness.log file, I find the history of the Paxos tags, that is to say, who's up, who's running,
who's giving me problems or who I've had heartbeat communication failures with. And we use these when we're
using replicated storage in a stretch cluster scenario. Quorum configuration can be automated with dynamic quorums
and that's really what we want to do. As best practice, review your quorum configurations before placing the cluster
into production, use the dynamic quorums, validate the quorum configuration, use the Validation Wizard, right, or the
Test-Cluster Windows PowerShell cmdlet. Changing the quorum configuration now, look, if you have got a dynamic
quorum configured, you don't have to do this, because it will happen automatically. But you would have to change the quorum manually in any of the scenarios that I see listed here if I'm not using dynamic quorums. This is a look at
quorum configuration in Windows Server 2016.
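
To review or change the quorum from PowerShell, here's a short sketch; the cluster and witness names are placeholders:

# Show the current quorum witness and configuration
Get-ClusterQuorum -Cluster Cluster1

# Pick the witness that matches your topology
Set-ClusterQuorum -Cluster Cluster1 -DiskWitness "Cluster Disk 1"           # co-located nodes with shared storage
# Set-ClusterQuorum -Cluster Cluster1 -FileShareWitness \\SRV9469\Witness   # stretch cluster with replicated storage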
Cluster Networking
Learning Objective
After completing this topic, you should be able to
◾ configure cluster networking

1.
[Topic title: Cluster Networking. The presenter is Michael Murphy.] In this video, we want to examine the network
requirements for the nodes in my cluster. Starting with the network adapters, they should be identical, right? We
should be using the same hardware on every node in the cluster, or presenting to the virtual nodes in the cluster, the
same virtualized hardware. I want to use the same IP Protocol version, 4 or 6. I want them to function at the same
speed. I want to have duplexing and flow control options available to me. In terms of redundancy, I want to create
redundancy, right? The whole point of the cluster is to eliminate the server as a single point of failure. Well, if I don't
eliminate all other single points of failure, what's the point? So that means redundant network paths, redundant
network connections, redundant routers. So that if I lose a router, especially in stretch cluster scenarios, I've got
another path to get to the other members of the cluster. So I want redundancy, not just in the server and in the NIC
cards, but in the networks and the network equipment. When it comes to the NIC cards on any individual cluster
node, I want to team those NIC cards, right? So that on that single network, I've got redundancy at the NIC card
level. If that's the client-facing network, I've got load balancing and failover right there at the NIC level. Multiple
networks provide multiple paths between the nodes in every case.

This means not just a client-side NIC and a cluster communications NIC, but maybe it means client-side NICs to
multiple subnets. Maybe it means that on the cluster communication side of things, we ensure availability of the paths
for communication. Maybe we even have a failover cluster communication network segment. That can be the case if
you have a third network segment that's used for management or backups or data replication, maybe you have that
as a failover for cluster communications. When it comes to the IP address assignments for the nodes in my cluster, I
want to be consistent. I either want to use static IP addresses or DHCP for all the nodes in the cluster. I want to have
matching network settings. Ideally, I think, I would want everything to be statically assigned. And I'd want the IP
addressing to follow a scheme. So that when anybody in the place sees those IP addresses, they know that's from
one of our cluster nodes. Unique subnets, I have a private network just for cluster communications.

That's what I want, and that's going to avoid some problems for me, right? Remember, this thing is chatty. There's a
lot of communication that goes on in the exchange of heartbeat information between the nodes in the cluster. So I
want to dedicate a channel for that communication. For the network adapters, I want to see RSS implemented in the
network drivers, that is, Receive Side Scaling. That'll help me distribute network receive processing across multiple CPUs. I want to use RDMA to
provide high-throughput communication, while minimizing CPU usage. [Configuring Cluster Networking.] The private
network should carry the internal cluster communication, right? It'll exchange heartbeats, check in with the other
nodes, make sure everybody's up and running. Hey, are you up and running? Yeah, I'm up and running, are you up
and running? Yeah, I'm up and running, how about Harry? Is Harry up and running? Back and forth, right, all the time
constantly checking in.

For authentication, the failover cluster authenticates all those internal communications. You may consider restricting
internal communication to a physically secure network. So I've got a cable that comes out, plugs into a switch. And
only the members of the cluster are plugged into that switch, and that's it. And that switch is in a locked server closet,
right? The public network provides client access to the cluster-application services. And I want to avoid using that for
cluster communications. Now the public network can be a failover network, if those are the only networks I have. And if the NIC card to the cluster communications network segment fails, cluster communication will fail over to the
public network. Now in those eventualities, that can cause network congestion on the client side. And your phone
may start lighting up, telling you that things are moving slow today, what's going on? That's why we avoid this, a
mixed network, a cluster node with only one NIC card, where both the internal cluster communications and the client
communications happen on the same NIC. [Cluster Networking Features] That's what we want to avoid. Today, unicast User Datagram Protocol (UDP) over port 3343 replaces the UDP broadcast messages used in legacy clusters. The Failover Cluster Virtual Adapter is a hidden device that's added at installation. IPv6 is supported here for node-to-node and node-to-client communications. If you're going to do IPv6 addressing, you should be consistent with it. It
should either be provided by DHCP or it should be static. And that's a look at cluster networking in Windows Server
2016.
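
One way to review and set those network roles from PowerShell, as a sketch; the network names will vary in your environment:

# List the cluster networks with their roles and subnets
Get-ClusterNetwork | Format-Table Name, Role, Address

# Role values: 0 = not used by the cluster, 1 = cluster communication only, 3 = cluster and client
(Get-ClusterNetwork -Name "Cluster Network 1").Role = 1   # private heartbeat segment
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 3   # public, client-facing segment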
Restoring a Single Node
Learning Objective
After completing this topic, you should be able to
◾ restore single node configuration

1.
[Topic title: Restoring a Single Node. The presenter is Michael Murphy. The wbadmin window is open.] In this
demonstration, we want to restore a single node. So if I've got a single node in a cluster that fails on me, what do I
do? Well, the node gets disjoined from the cluster, right? If it's not functioning. I may have to rebuild it, I may have to
do a bare metal restore. In the case of virtual machines, it's as simple as spinning up a new VHD. And with the new
VHD, I then restore the full backup copy from the previous backup. So this all depends, of course, on there being
backups, obviously. So we're going to come, in this example, we're using the Windows Backup tool. We'll come into
the Recovery choice that's for this server. [In the Recovery Wizard, he selects SRV9476.] There's only one backup
set to choose from, I did a full backup of this just before, and we're going to restore everything.

The System Reserved. All right, I see that that's all going to be restored. Everything on the local disk, that's all going
to be restored. We're just going to restore everything. We're going to restore it all to the Original location. We will
Overwrite the existing versions with the recovered versions. We'll Restore the access control lists. Confirmation tells
us what we're going to do, we're going to restore all of this. We go ahead and we say Recover. You can see these
are almost all completed now, this is just about to wrap up. [The File recovery progress progress bar.] I want to
highlight though, guys, this is not an authoritative restore of the cluster services database. When I restore this failed
cluster node, which may or may not be a replication partner, the copy of the cluster configuration database that it keeps will be updated with the information from the running nodes.

So that's a separate process. If I need to restore the cluster database authoritatively, that's a separate process. But
this will certainly, as soon as this finishes up, give us a restored node. We may need to do a little more. Depending on what
happened, you might have to re-add the node to the cluster, or you might have to do a little cleanup with cluster
services cmdlets. But for the most part, we're done now, and this will get this node back up and running.
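
If cleanup and a re-join do turn out to be necessary, here's a hedged sketch of those cluster cmdlets; the node and cluster names are placeholders:

# Strip any stale cluster configuration from the recovered server
Clear-ClusterNode -Name SRV9476 -Force

# Join it back into the running cluster and check its state
Add-ClusterNode -Cluster DMZ -Name SRV9476
Get-ClusterNode -Cluster DMZ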
Restoring a Cluster Configuration
Learning Objective
After completing this topic, you should be able to
◾ restore a cluster configuration

1.
[Topic title: Restoring a Cluster Configuration. The presenter is Michael Murphy. The Windows PowerShell is open.]
In this video, we want to demonstrate how to perform an authoritative restore of our Cluster Configuration Database.
Now this depends on you having performed backups in the past, right? You got to have a backup if you're going to do
a restore, so the assumption is that there's a backup. And for our purposes we're using the WBAdmin utility. Now you
may be asking yourself, why are you in PowerShell? Why am I doing this in the command line? Because the fact is,
guys, this is the only way to do it. I mean, I could walk you through the GUI and show you how far you get but believe
me, it doesn't work. This is where you got to be. So the command line tool for Windows Backup utility is wbadmin.
And if I go into PowerShell and I type wbadmin, [The complete output is in the transcript for reference at the heading
Commands Supported.] you can see this tells me what kinds of commands are supported, what kinds of switches are
supported. And what we're interested in here is I need to know what's the latest backup that I can restore. So I'm
going to go ahead and I'm going to call the wbadmin get versions command.

And then this'll give me the backups that are available to restore. And what I'm interested in is this one here, this
most recent one. [Version identifier: 02/12/2017-22:57.] And you'll notice here today, right, I can Copy this here, or I
could just hit enter today, which is nice, right? [He copies the Version identifier by clicking Edit and then Copy from
the drop-down menu.] You notice the PowerShell window opens all the way, it covers the screen, it's nice, we like
that. So I'm going to grab that. Now I want to show you this here, because we want to drill down into this a little bit.
We called wbadmin get versions so that we could see the backup versions that were available to us. Now within this
version, we can drill down and we can see the individual items that are available for recovery. So I'm going to come
down here and add this cmdlet, wbadmin get items. Call the version switch, you need a colon there and then no
space and then the parameter value, which in this case is the version ID. We do that and now look, it tells us, all
right, okay, backed up the Volume here, the System Reserved Volume. It backed up the Volume here, right, this is
the C drive. It backed up the Application Cluster. And there is the Cluster Configuration Database. [Application =
Cluster, Component = Cluster Database (\Cluster Database).]

Now we want to restore that database authoritatively. We want to tell all the other servers in the cluster that this is the
appropriate copy of the database to have, this is the one. And so I'm going to come down here. Now that we know
what the item is, we can specify a wbadmin start recovery command. Take a look at the syntax, wbadmin start
recovery. Specify the version from which you'd like to recover. The itemtype is App, matching the Application entry in the get items output, and the item specifically is Cluster. Now if I wanted to restore the registry, too, right, I
could comma delimit these and say cluster, registry, etc, etc. DHCP, the DHCP database, for example. But it's the
Cluster Database that I'm interested in, in this example.

So we're going to go ahead and start that recovery. Now we are warned that, this operation will perform an
authoritative restore of your cluster. After recovering the cluster database, the Cluster service will be stopped and
then started. This may take a few minutes, are you sure you want to continue with this authoritative recovery? We
are. And if we just zoom ahead there to the end, Summary of the recovery operation: The component Cluster
Database was successfully recovered. And then there's some additional steps here to complete the restoration of the
cluster associated with this node. Start the Cluster services on the nodes identified in the restored cluster database.
And we should be good to go. And that's the process for restoring the configuration database for the cluster.
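
Pulling the whole sequence together, it looks roughly like this; the version identifier is whatever wbadmin get versions reports on your node:

# List the backups available on this node
wbadmin get versions

# Drill into one backup to see its recoverable items, including the Cluster Database
wbadmin get items -version:02/12/2017-22:57

# Authoritatively restore the cluster configuration database from that backup
wbadmin start recovery -version:02/12/2017-22:57 -itemType:App -items:Cluster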

Commands Supported

ENABLE BACKUP -- Creates or modifies a daily backup schedule.


DISABLE BACKUP -- Disables the scheduled backups.
START BACKUP -- Runs a one-time backup.
STOP JOB -- Stops the currently running backup or recovery operation.
GET VERSIONS -- Lists details of backups that can be recovered from a specified location.
GET ITEMS -- Lists items contained in a backup.
START RECOVERY -- Runs a recovery.
GET STATUS -- Reports the status of the currently running operation.
GET DISKS -- Lists the disks that are currently online.
GET VIRTUALMACHINES -- Lists current Hyper-V virtual machines.
START SYSTEMSTATERECOVERY -- Runs a system state recovery.
START SYSTEMSTATEBACKUP -- Runs a system state backup.
DELETE SYSTEMSTATEBACKUP -- Deletes one or more system state backups.
DELETE BACKUP -- Deletes one or more backups.
PS C:\Users\administrator.EARTHFARM>
Cluster Storage
Learning Objective
After completing this topic, you should be able to
◾ configure cluster storage

1.
[Topic title: Cluster Storage. The presenter is Michael Murphy.] In this video, we want to discuss the options for
cluster storage. Now guys, the first thing that we've got to get clear is that this is a complex topic. Because we're
talking not just about the backend storage, right, that the clustered service might need to access. But we're also
talking today very much about how do we store the virtual machines that are actually nodes in our cluster. Because
commonly today, right, we're not talking about clustering physical machines, we're talking about clustering virtual
machines. So before we even begin the discussion, we want to sketch in broad strokes those two main topic areas
for this conversation. The backend storage and then the storage of the virtual machines themselves. And there's a
couple of options here. So you've got shared serial attached SCSI. And SAS devices, right, are low cost options.
At heart, they tend to be vendor specific, right, so we go to a vendor and we buy a solution. And it's great when
the cluster nodes are physical and are in the same rack or they're virtual machines on the same hypervisor hosts and
they provide that shared storage that the traditional cluster requires. Now of course, I can implement that with iSCSI
so that I get SCSI commands over IP networks.

To simulate a fibre channel SAN, at less than a third the cost, way less sometimes, and no specialized networking
hardware. Now again, the storage solution itself is proprietary. You're going to get that from some hardware vendor.
But the network doesn't require anything special. Unlike a fibre channel SAN, where I'm going to have to have fibre
devices between the host and the backend storage array. So it's more expensive, both in terms of implementation
costs and in having a team with specialized knowledge to manage both the hardware and the network that it sits on
that fibre network. We can do shared virtual hard disks. Right I can create a VHD, I can share it between virtual
machines. I can use it as the guest clustering solution. So when I think about the virtual machine files that might live
inside of a shared virtual hard disk on top of two hypervisor hosts that are clustered themselves. So that if one
hypervisor host fails, everything fails over to the other. These are placed in what we call clustered shared volumes.

Clustered shared volumes are a solution for defining shared storage among hypervisor guest machines. And they are
a foundation of the Scale-Out File Server role of Windows Server 2016. That Scale-Out File Server role was first
introduced in Windows Server 2012, but it's certainly supported in 2012 R2 and later. Shared Server Message Block
storage, used as a shared location for failover cluster nodes. So for example, I have a SQL Server that's clustered,
the shared solution can be that Cluster Shared Volume on the Scale-Out File Server. Nodes do not require local
storage. And so the storage can be connected over SMB 3.0. So it can be housed, say, in a backend JBOD array.
[Storage Requirements.] Native disk support. Now when we think about the file systems that we use today, what we
really think about is NTFS and ReFS, the Resilient File System from Microsoft, which lets me scale to much larger
drives, multi-terabyte drives.

Partition styles, we think about GUID partition tables today. And we think about these because of the limitations of
master boot record. So today and moving forward, we're going to be thinking about NTFS ReFS and GPT disks,
wherever we can support them. SCSI Primary Commands-3 standard, these are specific SCSI commands that all
work here and are all supported here. Because we've got compatibility with the Microsoft Storport storage driver, the miniport driver model. And that's going to give me a better performance architecture, better Fibre-Channel compatibility, and
is supported on Server 2016. And when we think about providing backend storage devices, we think about mapping
them to a particular cluster. So we don't have the same storage device trying to host, trying to serve the needs of
multiple clusters. Instead, we have clusters with multiple storage devices on the backend. Multipath I/O, every iSCSI
solution that we get comes with a device specific module from the vendor to support redundancy in the network,
right? That's what Multipath I/O gives me. And shared virtual hard disks, we mentioned this before, are stored in
CSVs, Cluster Shared Volumes. [Configuring Storage.] When we think about lower end solutions like a JBOD array,
to maximize those, we think about the Microsoft storage spaces solution and we can use storage spaces to provide
backend storage configuration to a failover cluster. We do that by creating the storage space first and then create the
clustered storage spaces in the Failover Cluster Manager. We can create a new storage pool. We need at least three
physical disks in there. And then we choose, is it a stripe, pardon me, is it a simple, is it a mirror, or is it a parity? And then we specify the size of that disk. We can add disks in there from the Failover Cluster Manager console. We
can select to manage the cluster and then add disks from the storage system. And we can take those disks online or
offline. And this is a look at configuring storage to support our Windows Server 2016 clusters.
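
As a rough PowerShell sketch of that flow; the pool, disk, and size values here are placeholders:

# Pool at least three physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName ClusterPool -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName ClusterPool -FriendlyName Data1 -ResiliencySettingName Mirror -Size 100GB

# Hand any eligible disks to the failover cluster
Get-ClusterAvailableDisk | Add-ClusterDisk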
Cluster-Aware Updating
Learning Objective
After completing this topic, you should be able to
◾ implement Cluster-Aware Updating (CAU)

1.
[Topic title: Cluster-Aware Updating. The presenter is Michael Murphy.] In this section, we want to talk about Cluster-
Aware Updating, which lets me automatically update my cluster nodes. And this has been a big problem for a long
time, right? You have a software implemented Microsoft cluster, you've got to take the nodes offline in order to
update them and reboot them. And that has been a manual process unless you turn to a third party solution. Well,
now we have Cluster-Aware Updating. And this has actually been around since 2012, so if you were running clusters
in 2012, you're probably aware of this. The process automatically takes those cluster nodes offline, updates are
installed, a restart happens if it's required, of course. And then the nodes are brought back online, and then the
process moves on to the next node in the cluster. [Functionality] All automatic, but of course, you get to approve what
updates get installed, right? Because you're using WSUS or using another update mechanism that this interacts with.
And there's a user interface, and there's PowerShell commands of course.

And on that subject, we're going to talk in here about remote administration of cluster updating, which you can do.
But you've got to have PowerShell installed on all the machines to do it, right, the ones that you're going to remote
into, clearly. End-to-end automation, cluster-updating happens by the update coordinator. And the update coordinator
moves us through one node at a time of this process without your intervention. There are plugins that integrate the
Windows Update Agent with WSUS, and that will let you apply your updates, hotfixes, service packs, etc. Now the
question of non-Microsoft updates is more complex than it might appear from the bullet point. Out of the box, this
thing does not do non-Microsoft updates, guys, that's the reality. You got to get an integrated plugin from the vendor,
or you got to build your own, and you can do that. There's also this thing. See now, this looks like it's telling you to
update the profiles, it's not. We're talking about update profiles. And update profiles are a set of the update settings
that you want to apply. And so I create an update profile so that I am assured that the same update processes and
settings apply to all of the nodes in the cluster. Dig it? Now here's where the non-Microsoft updates come in. There is
an extensible architecture.

You can build your own custom plugins for updating your proprietary applications. So if you've got proprietary
applications and that are sitting on a cluster, your developers, the same folks that built the application, can use the
tool set provided by Microsoft to evolve your own plugins. So while it's true Cluster-Aware Updating supports non-
Microsoft updates, it's true inasmuch as there is a plugin to support that non-Microsoft solution. In self-updating
mode, Cluster-Aware Updating itself can be a clustered role, right? So it sits on the failover cluster that it updates. And the cluster updates itself. And it does this at the times that you specify, using the update run profiles that
you've created. Remote-updating mode, where you can remotely update from a Windows 8 or Windows Server 2012 machine or later, will trigger an on-demand updating run using your update run profile, and lets you monitor the progress of that updating in real time. Now, [Administering Cluster-Aware Updating (CAU)] you can also do this from tools on
the Tools menu in Server Manager, under Cluster-Aware Updating. There's the ClusterUpdateUI tool in the
system32 folder.

The failover cluster manager also provides me with some management support. What do I need to do this, Server
2012 and later? Also supported on Server Core. I need to install the clustering feature, right, the machine's got to be
part of the cluster. And I need the Cluster-Aware Updating tools which are part of the failover clustering tools. And
that's a look at Cluster-Aware Updating in Server 2016.
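
For reference, a hedged sketch of the CAU cmdlets that ship with the failover clustering tools; the cluster name and schedule are placeholders:

# Remote-updating mode: scan and update one node at a time, on demand
Invoke-CauRun -ClusterName Cluster1 -MaxRetriesPerNode 3 -RequireAllNodesOnline

# Self-updating mode: add CAU as a clustered role with a weekly run schedule
Add-CauClusterRole -ClusterName Cluster1 -DaysOfWeek Sunday -StartDate "3/5/2017 3:00 AM" -Force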
Cluster Operating System Rolling Upgrade
Learning Objective
After completing this topic, you should be able to
◾ identify the steps to perform a cluster operating system rolling upgrade

1.
[Topic title: Cluster Operating System Rolling Upgrade. The presenter is Michael Murphy.] In this video, we want to
introduce you to a brand new feature of Windows Server 2016, Cluster OS Rolling Upgrades. Which is a problem
that we've had, right, for a long time. If you're managing a cluster that's 2012 and now 2016 is out, well, it used to be that you'd build a whole new cluster of 2016 machines, and then you would migrate the roles and features
over to the new cluster. Now there's other ways of doing it. You could take each node out of the cluster, do the
upgrade, and build a new cluster from the upgraded machines. Right, but we didn't have a way of just doing it in
place. Now we do and that's a great thing. If you've got, if we're talking about clustered Hyper-V machines or we're
using Scale-Out File Servers, we can do this with no downtime. That's a thing of beauty. Again, [Benefits of a Cluster
OS Rolling Upgrade.] what's the requirement to do it with zero downtime? The failover clusters are running Hyper-V
virtual machines and Scale-Out File Server workloads.

I can upgrade from Windows 2012 R2 right to Windows server 2016 with no additional hardware requirements.
Additional cluster nodes can be added temporarily to improve availability during the upgrade process, right? So if you
have some spare, if you require some additional capacity, you can add new nodes to the cluster during this process
and then remove them after. The cluster is not required to be stopped or restarted. Existing clusters can be upgraded,
right? We're not creating a new cluster. Existing cluster objects are reused, the ones that are stored in Active Directory, and so Active Directory sees this as an upgraded cluster rather than as a new cluster. The upgrade process is
reversible. You can roll this back right up until the point where you have run the Update-ClusterFunctionalLevel PowerShell cmdlet. Now that's one that you may not have seen before. Because it didn't exist
before we had rolling upgrades, right? The cluster existed.

All operating systems were the same. Today during the transition, you can have a mixed mode cluster. And so, once
we're transitioned all to 2016, we want to let the system know we're done, and we're going to raise the cluster
functional level and take advantage of all of the 2016 cluster functional options that are available. Because when
we're in mixed mode, we're restricted to the options that were available in 2012 R2. You got Mixed mode
support for patching and for maintenance. So the expectation is not that you're going to get this all done in one day,
but that the cluster may have to function for a few days or a week in Mixed mode. And that may require patching and
maintenance to happen. Maybe I have to stop doing everything because Superpatch Tuesday is this week and we've
got to get those patches installed. You got automation support with PowerShell and WMI calls.

You can determine the state of a cluster on Windows Server 2016 cluster nodes by checking the ClusterFunctionalLevel property returned by the Get-Cluster cmdlet. What do I need to do this? I need a Windows Server 2012 R2
Failover Cluster if I want zero downtime. And it's got to be running Hyper-V VMs or Scale-Out File Server workloads, and verify the Hyper-V nodes have CPUs that support SLAT. What does the process look like? [Cluster Transition States] Well these are
the transition states. We're all 2012 R2. We start the upgrades. The first server becomes 2016, the second server
becomes 2016, we're in Mixed mode during that time. When they're all 2016 and we've updated the cluster functional
level, we're done. Until we do that, we can roll this thing back, right? That's what that back arrow is there from the
middle.

We can represent this now, the cluster functional level has a numeric integer value as its parameter value. An 8
indicates that we're running in Windows Server 2012 R2. A 9 indicates that we're running in Server 2016 mode. So
here, right, we've converted some of the machines' operating systems to 2016. We're still at Windows Server 2012 R2 Functional
Level = 8. When they've all been converted over, we're still running in Windows Server 2012 R2 Functional Level = 8.
Then, we run the cmdlet and we raise the functional level to Windows Server 2016. Now when we get that functional
level, we see that it's a value of 9. That tells us that we are now at 2016 and the rolling upgrade process not only is
complete but has been successful. And this is the rolling update process in Server 2016.
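
A quick sketch of checking and then committing the functional level once every node is on 2016:

# 8 = Windows Server 2012 R2 mixed mode, 9 = Windows Server 2016
Get-Cluster | Select-Object Name, ClusterFunctionalLevel

# The point of no return: commit the cluster to 2016 functionality
Update-ClusterFunctionalLevel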
Clustered Shared Volumes (CSVs)
Learning Objective
After completing this topic, you should be able to
◾ configure and optimize Clustered Shared Volumes (CSVs)

1.
[Topic title: Clustered Shared Volumes (CSVs). The presenter is Michael Murphy.] In this video we want to talk about
Cluster Shared Volumes, which should not be new to anybody. CSVs have been around since Server 2008 R2.
And the idea was to allow multiple nodes in a cluster to simultaneously have read-write access to the same LUN,
that's provisioned as an NTFS volume. Now, we're using the term LUN here really to mean the disk in the backend
array. It doesn't have to be a Fibre Channel SAN, right? I could be using the iSCSI targets, but we're talking about
the disk in the array on the backend. Multiple machines, simultaneous read-write access. Now that's a different thing
guys, right? That allows for active-active software-implemented clusters in Server 2016, and in previous versions of
Server too, right? But until we got this we didn't have that ability for our software implemented clusters.

Today, we do. Clustered roles can fail over from one node to another without changing drive ownership, dismounting
and remounting the volumes. So, the Cluster Shared Volume appears to the virtual machine or the node in the
cluster as a locally attached drive, right? And when I change virtual machines, the ownership changes, no problem,
just no remounting, dismounting. And it lets me simplify the management of a large number of LUNs in a failover
cluster. I can cluster my Hyper-V virtual machines this way. This is a great place to store your VMs. So the VHDs
actually get stored on Cluster Shared Volumes, and the same goes for scale-out file shares. So for application data, Hyper-V virtual
machine files, SQL Server databases, Exchange databases, etc. Multiple networks and multiple network adapters. I
need fault tolerance, not just in the machines which I'm clustering to eliminate the server as a single point of failure,
but in the network path. And I want to team those network adapters, so that the network adapter doesn't become a
single point of failure. The network adapter needs to have Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks enabled, along with the Microsoft Failover Cluster Virtual Adapter Performance Filter. In terms
of prioritization, refrain from changing cluster configured preferences. Leave that alone.

In terms of the IP subnet configuration, there's no configuration required for nodes in a network using a CSV right?
The CSV presents as a locally attached drive. Configure quality of service for each node when you're using a CSV.
And then in terms of the storage network, that's going to be dependent upon who your vendor is. So I consult the
vendor documentation. In terms of CSV communication, there's an I/O synchronization, right? Because remember,
we're talking about a world in which two servers have access, two or more, I should say. Two or more servers have
access to the same disk, the same data, and can change that data simultaneously. Well, that means that write
commits have to be synchronized between the hosts. And those changes are synchronized with the physical nodes
that access the LUN using SMB 3.0. The redirection, because you've got two machines that need access to the
same data at the same time, can be either file system or block level redirection. The file system format should be
NTFS or ReFS today. And the resource type, the CSV itself, the Cluster Shared Volume should be its own spindle or
multiple spindles, right? Maybe I want it to span multiple spindles. But what I don't want to do is I don't want to try to
split a disk or make the CSV a logical partition on a disk on the backend. Because what that's going to do is give me
disk contention, right? That's going to be a problem. And so when I think about the disk itself, some of the things that
we think about are the Hyper-V files that I want to keep on the CSV. When it comes to physical disks that are
actually installed in the machine, I'm not going to make those a CSV, right? That's not going to be the CSV.

The CSV is going to be on the backend, in the storage array. And most commonly for my Hyper-V machines, not
for physical machines, there's a path name that's used to identify the disks in the CSV. Remember, I want multiple
spindles. I want parity arrangements. I want mirroring. I want something on the backend that's giving me redundancy
in the data, just like we got redundancy in the server up front. With physical servers, system files are on one physical
disk including the page files, data files are on separate physical disk. That's going to be the CSV. For clustered virtual
machines, system files in a VHD on one CSV including the page file, and then data files in another CSV. So we're
going to split the storage of the VHDs and the operating systems from the data drives. For CSVs and failover
clusters, some of the recommendations include the following. Consult your storage vendor to determine the number of LUNs that
you can configure, the supported number of LUNs. Consider the number of VMs and the workload of each, right?
Don't put too much on there, avoid performance issues. Know where the spindles are, make them separate spindles.
The virtual machines themselves can access each LUN for dedicated compute operations. We can add disks to
CSVs using the Failover Cluster Manager or by piping Get-ClusterAvailableDisk to the Add-ClusterDisk PowerShell cmdlet. And this is a look at CSVs in Server 2016.
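
For reference, a short PowerShell sketch of those last two steps; the disk name is a placeholder:

# Make any eligible shared disks available to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert a clustered disk into a Cluster Shared Volume; it then surfaces under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 1"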
Deploying Failover Cluster without Network Name
Learning Objective
After completing this topic, you should be able to
◾ configure a failover cluster without a network name

1.
[Topic title: Deploying Failover Cluster without Network Name. The presenter is Michael Murphy. The Failover
Cluster Manager window is open.] In this video, we'll show you how to create an Active Directory detached cluster.
Now, of course, anytime we're going to create a cluster, we go into the Failover Cluster Manager and we validate the
configuration first, right? [The Validate a Configuration Wizard opens.] And so we'll go ahead and we'll say Next here.
We'll grab the servers that we need. And that's 76 and 75. [SRV9475 and SRV9476.] And we'll add them in there.
We see them validated and entered into the console. Here, we want to Run all tests. [In the Testing Options page, he
selects the Run all tests (recommended) radio button.] Now, of course, if you've run this before, and you've passed
certain tests and then you got a warning, you know, that said you have to do this or you have to fix that and then you
went did it, well, then you only have to rerun that test. But when the first time, you run all the tests, and you want to
correct any errors that are found.

Now, while this is running, [In the Validating page of the wizard, all tests are running.] let's take a minute and talk
about Active Directory detached clusters and why I'm interested in them. In the example here, and in the case of all Active Directory detached clusters, or clusters without a network name, what we mean when we say without a network name is that we're not creating a cluster name object in Active Directory that represents the cluster, which is sometimes referred to, mistakenly, as an administrative access point. And the reason that I say that is because when we create this Active Directory detached cluster, it will still be created with an administrative access point, it's just in DNS rather than Active Directory.

When we create these Active Directory detached clusters, it's important to recognize that the servers themselves are
domain joined. There is a separate solution for work group clusters, where the servers are not members of the
domain. And good examples of that would be servers that lived out in your DMZ that you didn't want to be domain
joined, right? [The tests are finished in the wizard.] Now, why would you create an Active Directory detached cluster
like this? Well, one of the reasons, one of the principal reasons, is because when we look at folks like our SQL
Server administrators, those folks don't have rights in Active Directory in an Active Directory split role permissions
model. And so when we're managing by least privilege, we may well have administrators that need to create
managed clusters but don't have the rights in AD to do it. And so we have this solution, Active Directory detached
clusters.

Now, normally, when you create a cluster, you do that here in the Failover Cluster Manager tool, right? But there's no way to create an Active Directory detached cluster in the Failover Cluster Manager tool; you must do it in PowerShell. [He switches to Windows PowerShell.] And so we come in here to PowerShell, and you can see I
prepopulated the cmdlet here. [New-Cluster ADDETACHED -Node SRV9475, SRV9476 -StaticAddress 10.19.4.77 -
NoStorage -AdministrativeAccessPoint Dns.] It's the New-Cluster cmdlet. And then you specify a name for that
cluster, in this case, ADDETACHED. Specify the nodes that will be members of that cluster, or nodes in that cluster, server 75 and 76. Our cluster is going to need an IP address, so we give it this IP address here. For our purposes here we'll specify NoStorage, and then there's the AdministrativeAccessPoint, Dns. How do you do the lookups? How do you find the cluster? DNS, rather than Active Directory and a reference to a cluster name object. And when that finishes up, we can come on back over to the Cluster Manager, and there it is. [He switches to the Failover Cluster Manager window.] We see the ADDETACHED cluster has been created. [As a subnode under the Failover Cluster Manager node.] And this is the way that we create Active Directory detached clusters, or clusters without a network name.
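For reference, the cmdlet shown in the demo, with the demo's node names and IP address, is essentially this:

# Create an AD-detached cluster; name resolution goes through DNS only
New-Cluster -Name ADDETACHED -Node SRV9475, SRV9476 `
    -StaticAddress 10.19.4.77 -NoStorage `
    -AdministrativeAccessPoint Dns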
Scale-Out File Server (SOFS)
Learning Objective
After completing this topic, you should be able to
◾ implement a Scale-Out File Server (SOFS) on Windows Server 2016

1.
[Topic title: Scale-Out File Server (SOFS). The presenter is Michael Murphy.] In this video, we want to talk about the
Scale-Out File Server which is not a new thing, right? We've had Scale-Out File Servers, and we should generally be
familiar with these. The whole idea is to provide a continuously available share, right, regardless of other failures.
And this is a role, right. The Scale-Out File Server is a file server role in Windows Server 2016 and it was in 2012,
etc. When we think about this, one of the scenarios we think about is hosting application data. In a clustered file server, that means that on any failover, the data's all available. We can store Hyper-V virtual machines here so that if one hypervisor host fails, we fail over to the
other. Everything keeps running. This creates a highly available and reliable infrastructure that gives me good
performance and I get manageability, both through the user interfaces and PowerShell. Now you can configure this
role for general use, right? Well, it says here, what does it say here? Information worker scenarios. Well you know
what that is, that's a file server, right? That's where all your files are stuck up there. Guys need access to the data,
they're there.

Now the beauty of this is that when I cluster it, I can have an active-passive or active-active cluster, so that we get
better throughput, right, better availability in an active-active solution. Active file shares, the cluster nodes accept and
serve SMB client requests. We can increase the total bandwidth, right? All the nodes in the cluster can have active
access to these shares. I can do CHKDSK with zero downtime, no impact on most applications. The Clustered
Shared Volume cache gives me support for Read cache. So I call a file from the file server, it gets stuck in cache.
You call it, it gets delivered from cache without having to go back to the disk. It's a nice thing. The opposite of this would be write caching, right? So on the disk controllers I get write caching. Commonly, we want to disable that write caching, because it can acknowledge write commits when in fact the data hasn't been written to the disk yet. And that will interfere with multiple servers simultaneously accessing the files.
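As a hedged aside, the CSV read cache mentioned above is controlled by a cluster common property. A minimal sketch, assuming the BlockCacheSize property used in Windows Server 2012 R2 and later and a 512 MB cache, would be:

# Set the CSV in-memory read cache to 512 MB (value is in megabytes)
(Get-Cluster).BlockCacheSize = 512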

Simplified management here in 2016. No longer required to create multiple clustered file servers with separate
clustered disks or develop placement policies. Automatic rebalancing of Scale-Out File Server clients to improve
scalability and manageability, right, like load balancing natively inside of the Scale-Out File Server application. What
do I need for this? Well I got to install the file and storage services server role. The Scale-Out File Server role is a
role feature, or a role service of the file and storage services server role. And I need to have the failover clustering
components installed. Validate the hardware and create the cluster. And notice guys, that it's not create and validate.
It's validate and create.

So first, I want to make sure that I'm using Microsoft-certified, logoed hardware, certified for use
with Microsoft clustering solutions. And then I build the solution and I validate it with the Microsoft Cluster Validation
Wizard before I create the cluster. Then you create the cluster. Add the storage to the cluster shared volume that'll be
used by the failover cluster. I can sign in to the server as a member of the local Administrators group and open Cluster Manager. Configure the clustered role and select the File Server type, a Scale-Out File Server for application data. Specify the NetBIOS name of the Scale-Out File Server. In PowerShell, you can create a clustered file server for scale-out application data with Add-ClusterScaleOutFileServerRole. Specify the name, in this example the distributed network name, and then the cluster, ClusterName. These are some of the parameters that are available with this cmdlet. There are in fact additional parameters that are available, but these are the big ones. This
is a look at the Scale-Out File Server role in Server 2016.
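A minimal, hedged sketch of that cmdlet, where the role name and cluster name are placeholders rather than values from the demo, would be:

# Create the Scale-Out File Server clustered role; -Name becomes the
# distributed network name that SMB clients connect to
Add-ClusterScaleOutFileServerRole -Name SOFS01 -Cluster CLUSTER01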
SOFS Versus Clustered File Server
Learning Objective
After completing this topic, you should be able to
◾ determine when to use a SOFS versus a clustered File Server

1.
[Topic title: SOFS Versus Clustered File Server. The presenter is Michael Murphy.] In this section we want to talk
about the Scale-Out File Server versus the file server for general use in a clustered array. So with the Scale-Out File
Server, the active file shares accept and serve SMB client requests. SMB is Microsoft's Server Message Blocks.
That's the way Microsoft servers traditionally package up client file data and transfer it. Increased bandwidth, all
nodes aggregate network throughput. So I can add additional nodes, I can add additional NIC cards, and I can
increase the total bandwidth that's available for throughput. Ease of administration. I'm no longer required to balance
shares on each node. Instead, here's an example, right? [A diagram displaying a Hyper-V connected to a File Server
Cluster is displayed.] I have a file server cluster, there's File Server Node A, File Server Node B. These are
virtualized instances. There's a share there, [\\fs\share] and I can see multiple simultaneous client access to the
same files. That's the great benefit of this solution. So that I can support active-active or dual-active clusters.

The Hyper-V cluster itself can contain up to 64 nodes, the file server cluster up to 8 nodes, and those are the virtualized file server virtual machines. Single logical file server, single file system namespace, support for cluster shared volumes. I
shouldn't say support, I should really say dependency, right? There's a dependency, I don't get the dual active
without cluster shared volumes. The file server for general use provides a central location on a network for users to
share files and for server applications. Protocol support, either Server Message Block, or NFS protocols if you're
supporting Unix clients and servers. The role service supports data deduplication, at least for specific workloads,
right? For virtual machines, for VDI workloads, for archival data. Data that's not volatile, I would use data
deduplication there. Support for File Server Resource Manager, for DFS replication, and for the file services provided
by the file server role in Server 2016. And then, finally, the Scale-Out File Server provides storage for server
applications, virtual machines. Client connections are distributed across the nodes in the cluster. So I get a native
load balancing feature in the dual-active cluster. And the only protocol supported there is SMB. And this is a
comparison of a classic clustered file server versus the Scale-Out File Server.
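To see the contrast in PowerShell terms, here is a hedged sketch of the other side of this comparison, creating a clustered file server for general use; the role name and cluster disk name are placeholders:

# General-use clustered file server: owns a dedicated cluster disk and
# serves information-worker shares from one node at a time
Add-ClusterFileServerRole -Name FS-GENERAL -Storage "Cluster Disk 1"

Compare that with Add-ClusterScaleOutFileServerRole from the previous topic, which puts the shares on Cluster Shared Volumes and serves them actively from every node at once.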
Guest Clustering
Learning Objective
After completing this topic, you should be able to
◾ determine implementation scenarios for guest clustering

1.
[Topic title: Guest Clustering. The presenter is Michael Murphy.] In this video, we want to discuss Guest Clustering.
Now, ladies and gentlemen, with all due respect, this can get confusing because of what we're talking about. Now
remember, we're living in an age of abstraction. We're living in an age, right. When I was growing up in this business,
the business of technology, there was a one-to-one relationship between the operating system and the hardware,
right. You had a piece of hardware, it was a server, or it was a client, and that was it. And all of its storage, all its
disks were locally attached. Now guys, that's not the case anymore at all, is it? Today, so often the machine is a
virtual machine sitting on top of a hypervisor box, which itself may be clustered, right. Physical clustering, hardware
level clustering or software implemented clustering. And then the storage itself, the disks that present to the virtual
machines as locally attached storage are, in fact, commonly not. They are a LUN on the SAN, or they're an iSCSI
target or an iSCSI array, or they're a storage space on a JBOD. So it's a different world to live in. And we want to be
clear that here we're talking about virtual machines that may well live on a cluster themselves. And we can cluster the
virtual machines inside of the Hyper-V cluster.

Dig it, that's what we're talking about. So it's a failover cluster made up of two or more virtual machines or a guest
cluster. I can run a guest cluster on top of a Hyper-V failover cluster. Run the guest clusters with nodes that are
on the same Hyper-V failover cluster. Or better yet, separate the nodes that you cluster onto different hypervisors so
that if in the event that the Hyper-V host fails, the cluster stays up and running on the other Hyper-V host, right. I get
proactive health monitoring, look, the monitoring today built in, without having to buy a third-party monitoring solution
is about as good as it gets. And so take advantage of the native resource. Now look, if you've got a third-party app
you spent money on, of course, you're going to use that one. But if you don't, it's all in there, It's like Prego, right. We
can verify the functionality. There's PowerShell cmdlets to do that as well as the Cluster Validation Wizard. And then
in the terms of failures, I have automated failover and failback, right, or I can manage the failback scenarios based on
what the failure was. This is a thing of beauty, application mobility. I can move cluster roles to other nodes, right
within the guest cluster. Remember we're talking about the virtual clustering of virtual machines or guest clustering.

We get protection from the Hyper-V host failure by placing the guest cluster nodes on different physical hosts,
different Hyper-V hosts. So that if one of the Hyper-V host fails, we just failover to the other one, nobody even
notices. That's what you want to live in, guys, a world where when servers go down, nobody notices. I get virtual
machine mobility, right. The clustered applications can be deployed to any host that supports those virtual machines.
We get support for live migration, which is really a function of Hyper-V. If you're using VMware, for example, they have a comparable technology to Microsoft's Live Migration that lets you move the VMs from one host to another without any downtime, right. That's a thing of beauty. When I think about clustering these guest machines, what are
some of the requirements? Well, they've got the same requirements as clusters running directly on physical hardware. So I want the machines to be configured with identical VHDs. I want them to be configured preferably with static IPs, but they can be DHCP hosts. Regardless, all the nodes in your guest cluster have to be either static or DHCP.

You don't want to mix the two. And so I got to be able to run the Cluster Validation Wizard and have it pass the test
before I create my cluster. It's got to be on a supported hypervisor, right, so Hyper-V or VMware, it's supported in
both those places. There's a 64-node limit, both for physical host clusters and for guest clusters. The failover cluster nodes must be members of the same Active Directory domain. That's always been the way it was, guys, right. Now in Server 2016,
we have some new options for this. We can, in fact, create detached clusters. That is to say clusters that are not
joined to an Active Directory domain. Or even mixed clusters where some nodes are members of one domain and
others are members of another. But the traditional requirement was that they be members of the same domain. In
terms of storage options, look, it doesn't matter what. I don't care if you've got a fiber channel SAN. I don't care if
you've got iSCSI targets. I don't care if you've got virtual hard disks created on a JBOD array and storage spaces.
Anything will work, right, even physically attached storage. Wow, what an idea, physically attached storage. Some of
the network considerations include. I want to have redundancy in the network. I want to have more than one network
path, so I can use MPIO. I want to have multiple NIC cards to improve throughput. And I want to team those NIC
cards so that I get native load balancing and fault tolerance at the NIC level. These are some of the things that I think
about when I'm thinking about clustering guest machines in a hypervisor solution.
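As a hedged sketch of pulling that together, where the VM node names, the IP address, and the anti-affinity class name are all placeholders: you validate and create the guest cluster from inside the VMs, and on the host cluster you can add an anti-affinity hint so the guest cluster nodes are kept on different Hyper-V hosts where possible.

# Inside the guest VMs: validate and then create the guest cluster
Test-Cluster -Node GUEST-NODE1, GUEST-NODE2
New-Cluster -Name GUESTCLU01 -Node GUEST-NODE1, GUEST-NODE2 -StaticAddress 10.19.4.80

# On the Hyper-V host cluster: tag both VM roles with the same
# anti-affinity class so placement tries to keep them on different hosts
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GuestClusterNodes") | Out-Null
(Get-ClusterGroup -Name GUEST-NODE1).AntiAffinityClassNames = $class
(Get-ClusterGroup -Name GUEST-NODE2).AntiAffinityClassNames = $class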
Clustered Storage Spaces
Learning Objective
After completing this topic, you should be able to
◾ implement a clustered storage spaces solution

1.
[Topic title: Clustered Storage Spaces. The presenter is Michael Murphy. The Server Manager window is open. DNS
is selected in the navigation pane.] In this demonstration, we want to configure storage spaces to make that available
to a Hyper-V cluster that will use this storage as the storage location for the VMs. Now you think about the
implications of that, right? The VMs are running on the back end storage. We can see this is a file and storage
services server. [He clicks File and Storage Service in the navigation pane.] We can take a look at the disks that are
in here. And so we're going to make these volumes, these disks available on the backend storage array. The
Hyper-V host will connect to this as their storage and that's where we'll store the VMs. Now guys, think about the
implications of that. If the one Hyper-V host fails and we fail over. Well, the other one's already connected to the
cluster shared storage and can take over the management and running of these VMs, that's the idea. So if we look in
here, we can see that we've got three disks, three 60 gig disks and my goal is to combine them into a parity array.

So we're going to get redundancy at the data level for whatever we build on top of this. Does that make sense? So
I'm going to come down here to Storage Pools, and I've already configured them as a Primordial pool, you can see
that primordial pool here. [In the STORAGE POOLS section.] And we're going to just go ahead and create the
storage spaces. Or the new storage pool. [He clicks New Storage Pool from the TASKS drop-down list box and the
New Storage Pool Wizard opens.] And we'll give the storage pool a name, we'll call that the Clustered storage pool
and we're going to incorporate all three of those disks, [In the Physical Disks page, he selects Msft Virtual Disk and
the two Virtual HD servers.] so it'll be 180 gig volume, but with the parity bits, right, we lose about a third of that
space. There's the storage pool. Now, we'll go ahead and we'll create a new virtual disk there, [He clicks New Virtual
Disk from the TASKS drop-down list box.] and we'll use that pool. [The Select the storage pool dialog box opens.]
And, you'll notice, folks, that what I'm doing here is there's a one-to-one correlation between the primordial pool, the
storage pool, and the virtual hard disk that I'm creating. Now why do I do that, because I don't have to do that, right?
The concern that I have especially in large capacity JBOD arrays where you have lots of spindles. Guys if you create
the primordial pool out of 28 spindles and then you start creating storage space on top of that, you may find that the
storage spaces are spread across all 28 spindles, and that may not be what you want.

So there's an abstraction layer here, right? There's multiple layers of abstraction here. Because these disks that
we're configuring on the backend storage are going to present to the Hyper-V host as locally attached storage. So
there's multiple layers of obfuscation. And if we design this with one-to-one-to-one correlations, it's much clearer
what's going on. [He clicks the OK button in the Select the storage pool dialog box and the New Virtual Disk Wizard
opens.] And now it's true, you don't get quite the efficiency, you don't take advantage of thin provisioning and TRIM the way that you could in a scaled solution. But for the server manager, especially the
server manager who's been working in a physical world for most of their career. If you map the one-to-one-to-one
correlations, you'll find your management is more consistent and somewhat easier. Now we're going to call this the
GUEST_Cluster, because of course this is where all the guest VMs will live. [In the Virtual Disk Name page.] And we
can create storage tiers on this virtual disk. Well, actually we can't, not in this configuration. But you can do that in a
more scaled configuration where you're not doing a one-to-one-to-one. And that'll enable automatic movement of the
most frequently accessed files to faster storage. Now here's where we're going to define this as a parity array. [In the
Storage Layout page, the Layout section displays three options: Simple, Mirror, and Parity. He selects Parity.]

So we'll get the parity information. We're going to make it fixed, [In the Provisioning page, the radio button Fixed is
selected for Provisioning type.] and we're going to use the whole thing. We lost some of it. And so here for the size,
[He enters 116 in the Specify size text box.] we're just going to enter the maximum available space, that's 116 gigs in
this example. Remember, we give up a lot of the space for the parity bits, that's what you're seeing there. Now all we have to do is make this clustered storage space available to the clustered Hyper-V hosts. [The capacity is 178 GB and the Free Space is 2.50 GB.] They'll then be able to use it as a shared storage location.
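If you prefer to script what the wizard just did, a hedged sketch using the Storage module cmdlets might look like this; the pool and disk names come from this demo, while the subsystem name pattern is an assumption about a standalone Windows Storage subsystem:

# Gather the poolable (primordial) disks and build a storage pool from them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Clustered storage pool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a fixed, parity virtual disk out of the whole pool
New-VirtualDisk -StoragePoolFriendlyName "Clustered storage pool" `
    -FriendlyName "GUEST_Cluster" `
    -ResiliencySettingName Parity `
    -ProvisioningType Fixed `
    -UseMaximumSize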
Storage Replica
Learning Objective
After completing this topic, you should be able to
◾ implement storage replica

1.
[Topic title: Storage Replica. The presenter is Michael Murphy.] In this section, we want to talk about a new feature in
Windows Server 2016 called Storage Replica. And what this does is it lets me control data replication between
servers in a cluster. And that replication can be synchronous or asynchronous, and what it really provides for is stretch clusters. That's what I need here today, a cluster that spans more than one physical location. And I've got to replicate the data, right. There's no shared storage in that solution, by definition, if they're in two separate physical sites. So what I need is a way to replicate that data between the servers in the cluster. Now, there's third-party solutions that you can use for this. Or you can use the native one here. It's most often asynchronous replication, used to create failover clusters that span two sites. The replication functions in two modes, synchronous or asynchronous. And synchronous data is mirrored data. Essentially what we get here is simultaneous write commits.

Now, you can do that with this if you're in the same site really, or any time you've got a low-latency, high bandwidth connection. Maybe, say, on the same corporate campus. And in one building, some of the nodes exist, and in another building, some of the nodes exist. And I have that nearby high bandwidth, low latency. I can see that. But most commonly where we're going to do this is off-site to the DR site, which is located in another city, maybe on another separate part of the Internet backbone. But it's not local, and we've got some latency. So we don't have the ability to do the simultaneous write commits; instead we're shipping the data to the other site. Now, the difficulty there is that in
the event of a power outage, there may be lost transactions. There may be data that is lost between the sites. But the
good news is that in the DR site, we've got a near real-time version of the data. We minimize data loss. [Features]
Zero data loss and block-level replication, no possibility of data loss with synchronous replication by definition.

We can deploy this to virtualized guests or to physical machines. There's support for thin provisioning, to provide
near-instantaneous initial replication times. This is SMB3.0 based, which means that we've got multichannel support
and SMB direct support. Simple deployment and management, right, it's easy to use, we like that. Industry leading
securities for stretch cluster replication, we have encryption methodologies. So that as that data flows across the
WAN, we know that that data is secure. High performance for the initial sync, with support for seeding. Now how do we get that, right? How do you get that initial synchronization to go so fast? Well, there's only one way to do it. You seed that initial synchronization. So I take the data that's on the servers already, I make a copy of it, I put it on USB, I send it to the other site, and I copy it there. I can delegate permissions to manage replication, so we can follow the best
practice of administration by least privilege. And we can limit the storage replica to individual networks. Now, what we
mean by that is that it's a good idea if you're going to do this, to create a replication network segment. So I have at
least a NIC team, and that NIC team is connected to a subnet. And the only other machines connected to that subnet
are part of this replication network to get a dedicated replication subnet. This gives me support for stretch clusters
where some of the nodes are in one data center, others are in another data center. They can't share storage
because they're in two separate cities so they've got their own storage in both places. And we replicate
asynchronously between them. Now remember, we have support here for both synchronous and asynchronous. In
any of the stretch clusters, we're almost always going to use asynchronous because we've got to traverse the WAN.

However, it is worth mentioning, that if my stretch cluster were on a corporate campus between two separate
buildings, I could potentially, if the latency were low enough, perform synchronous replication so that I had those simultaneous write commits. So that I would always know that the data was up to date, and in the event of a failure of one of the clusters, or of one half of the cluster, I would have zero data loss. With Storage Replica in 2016, we enable
replication between two separate clusters. So one cluster replicates with the other cluster. Again, if they're local, synchronously; if they're geographically disparate, asynchronously. And we get support for PowerShell, of course,
and Azure Site Recovery tools. So if you're an Azure subscriber, you have some integration there. Server to Server
replicates between two standalone servers, again, synchronously or asynchronously. And in terms of the back-end
storage, we don't care, we just don't care. It can be anything that you got. This is a look at Storage Replica in Server
2016.
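As a hedged, minimal sketch of the PowerShell side, where server names, replication group names, drive letters, and the result path are all placeholders, server-to-server replication is set up with the Storage Replica cmdlets roughly like this:

# Check whether the proposed topology can keep up with the workload
Test-SRTopology -SourceComputerName SRV-A -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV-B -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 30 -ResultPath C:\Temp

# Create the partnership; synchronous here, or -ReplicationMode Asynchronous
# for a stretch or DR site across the WAN
New-SRPartnership -SourceComputerName SRV-A -SourceRGName RG01 `
    -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SRV-B -DestinationRGName RG02 `
    -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -ReplicationMode Synchronous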
Cloud Witness
Learning Objective
After completing this topic, you should be able to
◾ implement cloud witness feature for a failover cluster

1.
[Topic title: Cloud Witness. The presenter is Michael Murphy.] In this video, we want to talk about the cloud witness
for my guest and hypervisor and physical clusters. Because the beauty of this thing is that it works for all of our
cluster arrangements. And it answers a problem that many of us have had when it comes to our stretch clusters. So
brand new for Server 2016, does not require a file share witness, right? So this is the problem that we've had. Not
long ago, couple iterations of the server operating system, Microsoft introduced something called a stretch cluster.
So some of the nodes could live in my primary site and some of the nodes could live in my disaster recovery site.
The problem is, where does my file share witness live? If I put it in the disaster recovery site, and the main site goes
down, my cluster stays up and running. But if I put it in the main site and the main site goes down, then the half of the cluster that's in the DR site, when I need it to come up, won't come up, because there's no file share witness.

They don't have quorum. So the best practice solution has been to locate the file share witness in a third site. So that
regardless of which site went down, the cluster stayed up and running at all times. And this would mean that I'd have
to have a VM up in the cloud or I'd have to have another third location. Either a third data center of my own or one
that I hosted or leased rack space from a hoster. This eliminates that need. Anybody with an Azure subscription can put the witness, or rather the cloud witness, because there's no file share witness and no VM, up into the cloud. It won't store a copy of the cluster database; the database stays with the nodes in the cluster. Instead, it uses Microsoft Azure as an arbitration point, and it creates a blob file using basic Azure Blob Storage. I've got to have a valid Microsoft Azure storage account. As long as I do, that's what'll be used to create and store the blob file.

The cloud witness creates an msft-cloud-witness container under the Azure storage account. And the same Azure storage account can be used to configure a cloud witness for multiple different clusters. So if I have more than one cluster, that single storage account, that single cloud witness, can support multiple clusters. The failover cluster does not store the access key, right? So when I think about the security of my cluster, this is important to me. Instead, there's a shared access signature token that's generated. The SAS token, not to be confused with SAS storage, right? This shared access signature token remains valid as long as the access key remains valid. If the access key were to be revoked or become invalid, the token associated with the cluster, or the cloud witness rather, is also invalidated. And the cloud witness uses the HTTPS REST interface. And there you go. Right there, a picture's worth 1,000 words, my
friend. That's the beauty of this thing. Look it, I got two nodes in Site 1. I got two nodes in Site 2. Up there in the cloud
is my witness. If I lose either site, I got three servers running. Quorum is maintained. That's what I want to know.
Where do we use this, disaster recovery for stretched multi-site clusters. This is the biggest one, I would argue.

Failover clusters without shared storage, which commonly, of course, are multi-site clusters. But can also include
DAGs, database availability groups in Exchange. And SQL Server Always On, right, which are not true clusters but they leverage the clustering components of Windows Server. It's also for failover clusters running inside a guest OS hosted in a Microsoft Azure VM or any other public cloud, or running inside guest OSs of VMs hosted in your own private clouds. For storage clusters, with shared storage or without, and for Scale-Out File Server clusters. Small branch-
office clusters, simple 2-node clusters. I can throw the cloud witness up there. When I run the Configure Cluster Quorum wizard, I want to select the quorum witness and configure the cloud witness so that it gets a quorum vote, right? I want that cloud witness to be a participant in my quorum. I've got to have the Microsoft storage account name to do this. I've got to have the access key corresponding to the storage account. Use the primary
access key when creating for the first time. Use a secondary access key when rotating the primary access key.
Update the endpoint server name if you're using a different Azure service endpoint. So this is the configuration, I
used the Cluster Quorum wizard to specify, designate the cloud witness. And this is the information that I need to
have when I do that. We can also do this in PowerShell, right? With the Set-ClusterQuorum cmdlet, I specify the -
CloudWitness, the -AccountName, and the endpoint. The cloud witness in Server 2016 is just about the most useful thing that Microsoft has done to support the deployment of stretched and multi-site clusters since they were first introduced. This is a great technology that's easy to take advantage of and answers a problem that we've had. It's a
thing of beauty.
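A hedged example of that cmdlet, where the storage account name and key are obviously placeholders:

# Point the cluster quorum at a cloud witness in the named Azure storage account
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" `
    -AccessKey "<primary storage access key>"

# If you use a non-default Azure service endpoint, add -Endpoint as well,
# for example -Endpoint "core.windows.net"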
Configuring Guest Clusters with Shared VHDX
Learning Objective
After completing this topic, you should be able to
◾ use shared VHDX as a storage solution for guest clusters

1.
[Topic title: Configuring Guest Clusters with Shared VHDX. The presenter is Michael Murphy. In the Server Manager
window, the New Volume Wizard is open.] In this video, we want to take a look at configuring the storage array to
make it available to Hyper-V hosts, as a location to store the VMs. And those Hyper-V hosts are clustered, right? So
you can see what I did here. I just finished formatting a new volume on this machine. This volume is called Guest
cluster. [GUEST_Cluster] I can see it's 116 gigs. Now what can be confusing here, guys, is it looks like this is a fourth
disk on this machine but in point of fact, it's not. What it is, if you notice, it goes 0, 1, 2, 4. Well, because what I did,
was I configured a Storage Pool, out of the three physical disks that are actually there and you can see that that
layout is Parity. So I've created redundancy in the data, right? I've taken three disks. I'm striping the data evenly
across those disks, and calculating a parity bit. If I lose one disk, I can replace the disk, and rebuild the data. Agreed,
right? So, that's the underlying configuration here. What we used to call a software implemented RAID 5 array. But,
today we'd call it a parity storage space.

Now, to make this available to the Hyper-V host, I'm going to define it as a VHDX, as an iSCSI target. So we're going
to create a VHDX on this machine. [The New iSCSI Virtual Disk Wizard opens.] It's going to encompass the whole
disk array. And then we're going to make it available to the iSCSI initiators on the Hyper-V host. Dig it? Does that
make sense? Where is the virtual disk to be located? It's going to be located here. We're going to call it Shared_Guest_Storage. You may notice I'm using these underscores. I still would prefer not to use spaces in these names and I would prefer them to be 15 characters or less. But for our purposes, I'm going to use the whole thing.
And you notice here, you have a couple of choices, right? I can make this dynamically expanding, so it wouldn't
actually consume all 116 gigs initially. There are, in fact, a number of good reasons to make it a fixed disk for this
business, for this particular business. [He selects the Fixed size radio button in the iSCSI Virtual Disk Size page.]
One of those is that a lot of services like Exchange and SQL can't use dynamically expanding storage. So if the nature of the VMs to be hosted here is such that that's a problem, I can configure fixed storage in the VM, or I can configure it out here as well as in the VM.

Next, we assign the iSCSI virtual disk to an existing iSCSI target, or we create a new one. We're going to create a new one, and the target name is Shared. We can make the target available to the Hyper-V hosts by specifying the iSCSI initiators in here. [In the Add initiator ID window.] So we can query the initiator for the computer ID and bring that in here. [In the Specify access servers page.] The iSCSI initiators, the Hyper-V hosts that will access the shared storage, are now known to the storage array. We can
enable additional authentication methods. Now you notice this is reverse CHAP and CHAP. [In the Enable
Authentication page, two checkboxes, Enable CHAP and Enable reverse CHAP are displayed.] Not MS CHAP, not
MSCHAPv2. This is for backward compatibility, should you need it. And not necessarily an older Windows operating
system but a third-party operating system that was attaching to your iSCSI target. We can see all the processes have
completed successfully. [The View results page opens.] The virtual hard disk is now available. And from the cluster
manager, on the cluster, I would make it available to the cluster as a cluster shared volume.
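For reference, here is a hedged PowerShell sketch of what the wizard did in this demo. The path, size, and initiator IQNs are placeholders, and the cmdlets assume the iSCSI Target Server role service is installed on the storage server:

# Create the iSCSI virtual disk (VHDX) that backs the shared storage
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\Shared_Guest_Storage.vhdx" -SizeBytes 116GB

# Create the target named Shared and allow the two Hyper-V host initiators
New-IscsiServerTarget -TargetName "Shared" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv1.contoso.com", `
                  "IQN:iqn.1991-05.com.microsoft:hyperv2.contoso.com"

# Map the virtual disk to the target so those initiators can see it
Add-IscsiVirtualDiskTargetMapping -TargetName "Shared" `
    -Path "E:\iSCSIVirtualDisks\Shared_Guest_Storage.vhdx"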
Exercise: Determining Cluster Strategies
Learning Objective
After completing this topic, you should be able to
◾ determine cluster strategies for a given scenario

1.
[Topic title: Exercise: Determining Cluster Strategies.] In this exercise, you will determine the best solution for the
following scenario. You have been directed to implement a High Availability virtualization solution but spend as little
money as possible. Any storage solution will have to be purchased. You cluster the Hyper-V hosts, and for the VM guest storage your options are to purchase a fiber channel SAN, purchase a proprietary iSCSI solution, or purchase a JBOD and implement storage spaces.

Take a moment to consider your answer and pause the video as you do so. When you're ready, go ahead and start
the video. The commodity solution, in this example, is a JBOD array. You would purchase a JBOD array and
implement storage spaces as your storage solution.

© 2018 Skillsoft Ireland Limited
