Hyper-V: Clustering & Resiliency. Step 6: Cluster Shared Volume & Optimization.

Hello Folks,

Finally… Step 6 of our “From The Ground Up” series. Today we’ll finish the storage equation by assigning shared storage using iSCSI for our Cluster Shared Volume, and we’ll optimize our cluster.

If you feel like getting your hands on Hyper-V and SCVMM, you can download Windows Server 2012 R2 and Hyper-V Server 2012 R2, set up your own lab, and try it for yourself. I think I may have made you wait long enough. Right?

Cluster Shared Volumes (CSVs) in a Windows Server 2012 R2 failover cluster allow multiple nodes in the cluster to have simultaneous read-write access to the same LUN (disk) that is provisioned as an NTFS volume. With CSVs, clustered roles can fail over quickly from one node to another without requiring a change in drive ownership, or dismounting and remounting a volume. CSVs also help simplify managing a potentially large number of LUNs in a failover cluster. To each node in the cluster, the CSV appears as a consistent file namespace, e.g. C:\ClusterStorage\Volume1.
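
If you want to see that namespace from PowerShell, the failover clustering module will show you each CSV, its current coordinator, and its mount point under C:\ClusterStorage. A quick sketch (run on any cluster node; CSV01 is the volume we’ll create later in this post):

```powershell
# List each CSV, the node currently coordinating it, and the
# C:\ClusterStorage mount point that every node sees.
Get-ClusterSharedVolume |
    Select-Object Name, OwnerNode,
        @{Name = 'MountPoint'; Expression = { $_.SharedVolumeInfo.FriendlyVolumeName }}
```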

CSVs provide a general-purpose clustered file system, which is layered above NTFS. They are not restricted to specific clustered workloads. (In Windows Server 2008 R2, CSVs supported only the Hyper-V workload.) CSV applications include:

  • Clustered virtual hard disk (VHD) files for clustered Hyper-V virtual machines
  • Scale-out file shares to store application data for the Scale-Out File Server role. Examples of the application data for this role include Hyper-V virtual machine files and Microsoft SQL Server data

With the release of Windows Server 2012 R2, there have been a number of improvements in CSV.

Optimized CSV Placement Policies

CSV ownership is now automatically distributed and rebalanced across the failover cluster nodes.

In a failover cluster, one node is considered the owner or "coordinator node" for a CSV. The coordinator node owns the physical disk resource that is associated with a logical unit (LUN). All I/O operations that are specific to the file system are routed through the coordinator node. Distributed CSV ownership increases disk performance because it helps to load balance the disk I/O.

Because CSV ownership is now balanced across the cluster nodes, one node will not own a disproportionate number of CSVs. Therefore, if a node fails, the transition of CSV ownership to another node is potentially more efficient.

In Windows Server 2012, there is no automatic rebalancing of coordinator node assignment. For example, all LUNs could be owned by the same node. In Windows Server 2012 R2, CSV ownership is evenly distributed across the failover cluster nodes based on the number of CSVs that each node owns.

Additionally, in Windows Server 2012 R2, ownership is automatically rebalanced under conditions such as a CSV failover, a node rejoining the cluster, a new node being added to the cluster, a cluster node restarting, or the failover cluster starting after it has been shut down.
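
You normally don’t need to touch any of this, but it’s easy to inspect (and, for testing, override) coordinator placement from PowerShell. A small sketch, assuming a CSV resource named "Cluster Disk 1" and a node named HYPER-V02:

```powershell
# Show which node currently coordinates each CSV.
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Manually move coordination of one CSV to another node.
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "HYPER-V02"

# The automatic rebalancing is controlled by the CsvBalancer cluster
# common property (1 = enabled, the Windows Server 2012 R2 default).
(Get-Cluster).CsvBalancer
```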

Increased CSV Resiliency

Windows Server 2012 R2 includes the following improvements to increase CSV resiliency:

  • Multiple Server service instances per failover cluster node. There is the default instance that handles incoming traffic from Server Message Block (SMB) clients that access regular file shares, and a second CSV instance that handles only inter-node CSV traffic. This inter-node traffic consists of metadata access and redirected I/O traffic.
  • CSV health monitoring of the Server service

A CSV uses SMB as a transport for I/O forwarding between the nodes in the cluster, and for the orchestration of metadata updates. If the Server service becomes unhealthy, this can impact I/O performance and the ability to access storage. Because a cluster node now has multiple Server service instances, this provides greater resiliency for a CSV if there is an issue with the default instance. Additionally, this change improves the scalability of inter-node SMB traffic between CSV nodes.

If the Server service becomes unhealthy, it can impact the ability of the CSV coordinator node to accept I/O requests from other nodes and to perform the orchestration of metadata updates. In Windows Server 2012 R2, if the Server service becomes unhealthy on a node, CSV ownership automatically transitions to another node to ensure greater resiliency.

In Windows Server 2012, there was only one instance of the Server service per node. Also, there was no monitoring of the Server service.
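
From the cluster side, you can check whether I/O to a CSV is flowing direct or redirected from each node, which is a quick way to spot the kind of trouble described above. A sketch using the built-in failover clustering cmdlets:

```powershell
# For each CSV and node, StateInfo shows Direct, FileSystemRedirected,
# or BlockRedirected I/O, and the *Reason properties explain why.
Get-ClusterSharedVolumeState |
    Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason
```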

CSV Cache Allocation

Windows Server 2012 introduced a new feature known as CSV Cache. The CSV cache provides block-level caching of read-only unbuffered I/O operations by allocating system memory (RAM) as a write-through cache. (Unbuffered I/O operations are not cached by the cache manager in Windows Server 2012.) This can improve performance for applications such as Hyper-V, which conducts unbuffered I/O operations when accessing a VHD. The CSV cache can boost the performance of read requests without caching write requests. In Windows Server 2012, the CSV cache was disabled by default.

In Windows Server 2012 R2, you can allocate a higher percentage of the total physical memory to the CSV cache. In Windows Server 2012, you could allocate only 20% of the total physical RAM to the CSV cache. You can now allocate up to 80%.

Increasing the CSV cache limit is especially useful for Scale-Out File Server scenarios. Because Scale-Out File Servers are not typically memory constrained, you can accomplish large performance gains by using the extra memory for the CSV cache. Also, in Windows Server 2012 R2, CSV Cache is enabled by default.
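
The cache itself is a single cluster-wide property; what you set is the size in MB that each node allocates, not the percentage. A minimal sketch, assuming 512 MB is an appropriate cache for your nodes:

```powershell
# Check the current CSV cache size in MB (0 = no memory allocated).
(Get-Cluster).BlockCacheSize

# Allocate 512 MB of RAM per node as a CSV read cache.
(Get-Cluster).BlockCacheSize = 512
```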

Let’s get it configured. VMM assigns an iSCSI LUN that was created earlier to the cluster nodes to provide shared storage. VMM manages the iSCSI connectivity between the hosts and the iSCSI SAN through multiple sessions, as well as configuring the LUN correctly. The files of a clustered VM will be hosted on this SAN and accessible by both nodes.
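
We’ll let VMM do all of that through the console below, but if you’re curious what it amounts to under the hood, here is roughly the native equivalent on each host, sketched with the in-box iSCSI cmdlets (the portal address 10.0.0.1 is a placeholder for DC01’s iSCSI IP):

```powershell
# Make sure the iSCSI initiator service is running and starts automatically.
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi

# Point the host at the iSCSI target portal (DC01 in this lab).
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.1"

# Connect to the discovered target, persisting the session across reboots.
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsPersistent $true
```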

Assign Shared Storage using iSCSI

1.  In the Virtual Machine Manager console, select the Fabric workspace, and under the Storage node, select Classification and Pools.

2.  From the central pane, expand Bronze Tier, then expand iSCSITarget: DC01: C:, then select CSV01. On the upper ribbon, select Allocate Capacity.

3.  In the Allocate Storage Capacity window, click Allocate Storage Pools. In the Allocate Storage Pools window, under Available storage pools, select iSCSITarget: DC01: C:, click Add, and then click OK.

4.  In the Allocate Storage Capacity window, click Allocate Logical Units. In the Allocate Logical Units window, under Available logical units, select CSV01, click Add, and then click OK.

5.  In the Allocate Storage Capacity window, click Close.

6.  In the Fabric workspace, expand Servers, and then click All Hosts. Right-click HYPER-V01, and click Properties.  Click the Storage tab, click Add, and then select Add iSCSI Array.

7.  In the Create New iSCSI Session window, select DC01 from the Array dropdown, and then click Create.

8.  In the HYPER-V01 Properties window, click OK, and then repeat steps 6 and 7 for HYPER-V02.

9.  Under All Hosts, right-click CLUSTER1, and select Properties. Click the Shared Volumes tab, and then click Add.

10.  In the Add Cluster Shared Volume window, select CSV01. In the Volume Label field, type CSV01, select the Quick Format and Force Format check boxes, and then click OK.

11.  In the CLUSTER1.contoso.com Properties window, click OK. VMM will then assign the LUN to both nodes in the cluster, format the volume, convert it to a Cluster Shared Volume, and make it available for placement of virtual machines.
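
For reference, what VMM just automated in steps 9 through 11 maps roughly onto the native storage and clustering cmdlets below. This is a sketch, not the exact calls VMM makes; the disk number 2 is a placeholder you’d confirm with Get-Disk, and the format must run on the node that currently owns the disk:

```powershell
# On the owning node: bring the new LUN online as an NTFS volume.
Get-Disk -Number 2 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV01"

# Add the disk to the cluster, then promote it to a Cluster Shared Volume.
$disk = Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name $disk.Name
```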

Optimize a Cluster

1.  Open the Virtual Machine Manager console, select the Fabric workspace, expand Servers, right-click All Hosts, and select Properties.

2.  In the All Hosts Properties window, click the Host Reserves tab. Ensure that the Use the host reserve settings from the parent host group box is unchecked, and change the Memory settings: set Unit to % and Amount to 5.

Since the Hyper-V hosts in this lab environment only have 2 GB of RAM, the Memory setting needs to be set lower to ensure that a VM can be deployed.

3.  Click the Dynamic Optimization tab, and slide the Aggressiveness slider to High. Aggressiveness affects how evenly the VMs are distributed across the different hosts. An aggressively balanced cluster will live migrate VMs more frequently to ensure that a similar amount of host resources is used on each cluster node.

4.  Select the box for Automatically migrate virtual machines to balance load at this frequency (minutes) and type 5 in the box.

5.  Select the Enable power optimization box, and then click Settings. Review the default threshold values. Under Schedule, click one of the blue squares between 8 AM and 6 PM, Monday to Friday. The square turns white when clicked, indicating that no power optimization will operate during those times. The squares can also be changed with the keyboard arrow keys and the space bar. Change a few of these squares, and then click OK.


6.  Back on the All Hosts Properties window, click OK to close the window.
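
Dynamic Optimization itself is driven entirely by VMM, but you can watch its effect from the cluster side: as VMs are live migrated to balance load, their owner nodes change. A quick sketch to keep an eye on the distribution:

```powershell
# List the clustered VM roles and the node each one currently runs on.
Get-ClusterGroup |
    Where-Object { $_.GroupType -eq 'VirtualMachine' } |
    Select-Object Name, OwnerNode, State
```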

Our cluster is UP! Optimized! And ready to accept virtual machines….

You can use the recipe in my “From The Ground Up” series to set up your own cluster in a lab, or in production if you want. You won’t regret it.

Next week, we’ll build on top of this cluster and start looking at cluster patching. (I know, I said in the last post that we were going to do that today, but this post is long enough. Some of you are probably already sleeping…) We’ll look at creating a Generation 1 VM and a Generation 2 VM, and at creating a VM with PowerShell, so you can automate your environment.

But for now…  I need some sleep.

Cheers!

Pierre Roman | Technology Evangelist
Twitter | Facebook | LinkedIn