For lab testing, we would often set up Windows Server 2003 2-node clusters under Virtual Server 2005, in order to test OpsMgr configurations such as a clustered RMS, clustered databases, or just testing management packs against clustered applications like SQL and Exchange. Virtual Server 2005 handled this quite easily, using the built-in SCSI adapter, which had cluster support. Here is a good write-up on setting this whole thing up:
Enter Server 2008 and HyperV. HyperV does not include SCSI adapter support for clustering. That is antiquated technology anyway, since most companies utilize SAN or iSCSI for shared-storage clustering - not direct-attached SCSI anymore. This means you won't be able to migrate your virtual clusters built under Virtual Server 2005 to a HyperV host, nor will you be able to use the same process to build a new cluster under HyperV.
I am going to use this post to document building a very simple 2-node Server 2008 (virtualized guest) cluster, using a Server 2008 HyperV host server.
There is a really good blog discussing all the options out there for combining fail-over clustering with HyperV - located at:
And a great PFE blog essentially doing the same things I am documenting in this post:
And of course the Microsoft step by step for building the cluster on Server 2008:
We will be using something similar to Options 4 and 5 in the above blog post... Only one physical HyperV host, and that same host will serve as the iSCSI target. My post assumes there is already a domain present.
The BIG challenge out there - is picking an iSCSI target to use, for labs, testing, and demos. You have a lot of choices out there... Microsoft Windows Storage Server, and then other third party products.... but in the "free" space... your choices are very limited. There are some free ones out there, but few are supported by Server 2008 fail-over clustering. Best thing is to use an existing in-house iSCSI target - if you have one. If not - then people have reported success using the latest "trial" version of iSCSI target from Starwind http://www.rocketdivision.com/wind.html. I will be using an internal-only version of the Microsoft iSCSI target... cause it's free to me, and doesn't expire. 🙂
Here is a high level overview of the process:
1. Setup your Server 2008 HyperV host.
2. Setup two Server 2008 guest OS's.
3. Setup a Microsoft iSCSI target on the HyperV host.
4. Connect the guest OS's to the iSCSI target
5. Setup Microsoft Failover clustering
6. Test the cluster
7. Install the cluster aware application (SCOM, Exchange, SQL, etc...)
Ok - lets get started!
I am going to make some assumptions here - to save time and not detail every part of setting up HyperV on the host, or the guests:
I have a single physical HyperV host, with a name of VS3 (Server 2008 x64). I have a single physical Domain Controller, DC1 (Server 2003 x86). My domain is OPSMGR.NET in this case. I create two guest machines - both Server 2008 x64 Enterprise. Their names will be OMCL1 and OMCL2. These will be my cluster nodes, and they are joined to OPSMGR.NET.
We need to do some quick planning for the cluster... we will need a few things:
OMCL1 - node 1 name
OMCL2 - node 2 name
OMCLV1 - virtual cluster quorum network name
OMCLSQLV1 and V2 - virtual cluster server network name(s) for the clustered application instance(s)
VS3 - HyperV host and iSCSI target

OMCL1:
primary NIC - 10.10.10.50/16
cluster private NIC - 192.168.1.1/24
iSCSI NIC - 192.168.2.10/24

OMCL2:
primary NIC - 10.10.10.51/16
cluster private NIC - 192.168.1.2/24
iSCSI NIC - 192.168.2.11/24

Virtual names:
OMCLV1 - 10.10.10.52/16
OMCLSQLV1 - 10.10.10.53/16
OMCLSQLV2 - 10.10.10.54/16

VS3:
primary NIC - 10.10.10.12/16
iSCSI NIC - 192.168.2.1/24
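With the addressing plan above, the guest NICs can be set statically from an elevated command prompt instead of through the GUI. This is just a sketch using my lab addresses for OMCL1 - the adapter names ("Primary", "Private", "iSCSI") are assumed renames, not defaults, so adjust them to match yours:

```shell
:: Static addressing for OMCL1 per the plan above - adapter names are example renames
netsh interface ipv4 set address name="Primary" source=static address=10.10.10.50 mask=255.255.0.0
netsh interface ipv4 set address name="Private" source=static address=192.168.1.1 mask=255.255.255.0
netsh interface ipv4 set address name="iSCSI" source=static address=192.168.2.10 mask=255.255.255.0
```

Repeat on OMCL2 with the .51 / 192.168.1.2 / 192.168.2.11 addresses. The private and iSCSI networks are internal-only, so they don't need a gateway or DNS.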
I created three virtual networks in HyperV:
External - external network connected to the physical NIC of the HyperV host - I use this for all VM and host communications.
Private - private VM network for cluster heartbeat communication only.
iSCSI - internal-only network that allows communication between the HyperV guest OS's and the host.
At this point, I have completed steps 1 and 2.... I installed HyperV, set up the required networks, added virtual network adapters for each network, and made sure I can ping all devices (I had to adjust some Windows Firewall settings on the host to get this working). Essentially, each guest should be able to ping all the other guest and host network interfaces - external, private, and iSCSI.
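The firewall adjustment I mentioned can also be made from the command line. A minimal example - this just opens inbound ICMPv4 echo so the ping tests work; it is not the only way to do it:

```shell
:: Allow inbound ping (ICMPv4 echo request) through the Windows Firewall on Server 2008
netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow
```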
Then - I installed the iSCSI target - and created a 2GB shared virtual disk file on the host. I set this to allow the two Cluster node initiators by IP address.
Next, on the cluster nodes, I used the MS iSCSI initiator - connected to the host portal, and then connected to the iSCSI disks. Each node connected to the same disk, and I set the drive letter to "Q" on each. One node initialized and formatted the disk.
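If you prefer the command line over the Initiator control panel applet, the same connection can be sketched with iscsicli - the target IQN below is a placeholder, so use whatever ListTargets actually returns for your target:

```shell
:: Point the initiator at the iSCSI portal on the host (VS3's iSCSI NIC)
iscsicli QAddTargetPortal 192.168.2.1
:: Discover the target names the portal exposes
iscsicli ListTargets
:: Log in to the discovered target - replace this example IQN with your own
iscsicli QLoginTarget iqn.1991-05.com.microsoft:vs3-quorum-target
```

I used the GUI myself; if you script the login, remember to make it persistent (or check "Automatically restore this connection" in the GUI) so the disk comes back after a reboot.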
So I find myself at step 5. Following http://technet.microsoft.com/en-us/library/cc731844.aspx I installed the failover clustering feature on each node, then ran the cluster validation test. Everything passed except warnings from Active Directory. This is because I am running my installs using a domain user account with local admin privileges on the cluster nodes, but not as a domain admin. I do this on purpose... too many times Microsoft people test everything using a domain admin account, and don't experience the same pain that our customers do in the field. This warning was simply stating that the user running setup for the cluster does not have permission to create the computer account in the domain for the virtual cluster name. Therefore, I will create this account manually using my domain admin account, and assign full control on the computer account object to the user account doing the cluster installs. Lastly - I need to set the computer account to "disabled" in ADUC.
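Pre-staging and disabling the cluster name account can be done in ADUC as described, or sketched from a command prompt on the DC - the DN and the install account name here are just examples from my lab layout, not values you should copy blindly:

```shell
:: Run as a domain admin. Create the computer account for the cluster name (example DN)
dsadd computer "CN=OMCLV1,CN=Computers,DC=opsmgr,DC=net"
:: Grant the non-domain-admin install account full control of the object - account name is an example
dsacls "CN=OMCLV1,CN=Computers,DC=opsmgr,DC=net" /G "OPSMGR\clusterinstall:GA"
:: Disable the account so cluster setup can claim it during create
dsmod computer "CN=OMCLV1,CN=Computers,DC=opsmgr,DC=net" -disabled yes
```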
Ok - now that is done - we will "Create a Cluster". Give the wizard the cluster virtual name, then the IP address we assigned above, and click Next.
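For reference, the feature install and the create can be scripted as well - the cluster.exe flags here are from memory, so check ServerManagerCmd -? and cluster /create /? before trusting them:

```shell
:: On each node first - install the failover clustering feature
ServerManagerCmd -install Failover-Clustering
:: Then from one node - create the cluster; /ipaddr syntax from memory, verify before use
cluster /cluster:OMCLV1 /create /nodes:"OMCL1 OMCL2" /ipaddr:10.10.10.52/255.255.0.0
```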
HOLY CRAP. Creating clusters in Server 2008 just got a LOT easier than doing this in Windows Server 2003!!!!
The next step is to manage the cluster, go to Networks, and set the iSCSI network properties to "Do not allow the cluster to use this network". You can also rename the 3 networks to more friendly names at this point.
At this point - you have a functioning cluster.
You should test this by making sure you can ping the virtual cluster resource name, and fail the cluster disk over to each node (stop cluster service or reboot each node)
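A quick scripted version of that test, assuming the default group names on a fresh 2008 cluster (verify what cluster group actually reports on your build):

```shell
:: Confirm the cluster name answers on the network
ping OMCLV1
:: List the groups and which node owns them
cluster /cluster:OMCLV1 group
:: Push the core group (and its disk) to the other node and back
cluster /cluster:OMCLV1 group "Cluster Group" /moveto:OMCL2
cluster /cluster:OMCLV1 group "Cluster Group" /moveto:OMCL1
```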
Now - I am going to add my disks and install SQL 2005. For my SQL cluster - I have created 4 virtual disks on my iSCSI target host: 2 disks of 20GB and 2 disks of 10GB. Each 20GB disk will be for databases, and each 10GB disk will be for transaction logs. Then I will install two instances of SQL 2005, making this a multi-instance (Active/Active) cluster. First - I will connect to the disks with the iSCSI initiator, bring them online in Disk Management, initialize them, and format them. Then - using the Failover Cluster Management tool - I will add them to the cluster and assign them the appropriate drive letters.
From my reading - this is a lot tougher to get going - we will see.
Here is a list of known issues to be familiar with:
To start, I want to list some good resources on understanding clustering, and then adding SQL 2005 to a cluster.
First - a guide on installing a cluster to prepare for SQL clustering (this is Server 2003 based, however, but good tips and make sure you understand this)
Next, a step-by-step on installing SQL 2005 to a Server 2003 Cluster:
Background data on SQL clustering:
Here is a GREAT step by step video, of installing SQL 2005 on a 2008 failover cluster, done by a fellow PFE:
OK - because I don't want to deal with the issues defined in http://support.microsoft.com/kb/932897/ I need to create my cluster application groups manually, and add my disks there. I will create a SQL group for each instance... naming them the same as my network names to keep it simple. So I create two "Empty Service or Application" groups, naming them OMCLSQLV1 and OMCLSQLV2.
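The same groups can be created and the disks moved with cluster.exe if you prefer - resource names like "Cluster Disk 2" are whatever your storage actually shows in Failover Cluster Management, so treat these as placeholders:

```shell
:: Create the two empty application groups
cluster /cluster:OMCLV1 group "OMCLSQLV1" /create
cluster /cluster:OMCLV1 group "OMCLSQLV2" /create
:: Move a disk resource out of Available Storage into a SQL group - disk name is a placeholder
cluster /cluster:OMCLV1 res "Cluster Disk 2" /moveto:"OMCLSQLV1"
```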
Now my cluster config looks like this:
And now my storage looks like this, with a DB and LOG volume for each :
So I make sure all resources are running on OMCL1, and then pop the SQL 2005 Enterprise x64 CD into my VM. When setup runs the pre-req check, it runs against both OMCL1 and OMCL2 - so that is a good start! On the components screen, I am able to check the box "Create a SQL Server failover cluster", which means SQL setup detected my cluster. Great news!
I choose a named instance of SQL (inst1) and then give it my virtual server name I want to use (OMCLSQLV1). Basically just fill in each page of the setup wizard from there.... entering the IP address, choosing the correct cluster disk for your data files, picking your domain groups for cluster services, etc....
Oddly enough - it all installed really well! I did run into the issue noted in Problem 5, at http://support.microsoft.com/kb/936302/en-us which basically tells me I need to get SQL 2005 SP2 installed ASAP. I now have a fully functioning SQL 2005 failover cluster on Server 2008 x64. I then will install another SQL instance for the A/A configuration I want, and will apply SQL SP2 to both instances. Then, take a look at the OpsMgr console... and see what it discovers!
Update - Interestingly enough.... before I could even complete the installs... I got an alert about service pack compliance from OMCLSQLV1\INST1, which means I have already discovered SQL and started monitoring it! Even though we don't have a Server 2008 failover clustering MP yet... it does appear the current SQL MP detects and discovers SQL 2005 on a Server 2008 failover cluster.
Update - I could not connect to my SQL servers from the network. The SQL 2005 install is not Server 2008 firewall aware - therefore it does not configure the firewall to allow for SQL Server access. You can read up on this subject at the following links:
Essentially - I had to create and enable three rules on each node for SQL Server: a program-based rule for sqlservr.exe for each instance of SQL, and then a rule opening UDP 1434 for the SQL Browser service. Then, I could connect to the SQL server using simply OMCLSQLV1\INST1. If I did not open the port for the Browser service, then my application would need to know the port that the named instance of SQL was using, and connect directly with that port in the connection string. Some choose to lock down a named instance to a static port, then set up connection strings to include the fixed port... but I did not choose that route.
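Here is roughly what those rules look like via netsh - the sqlservr.exe paths assume the default SQL 2005 instance folders (MSSQL.1, MSSQL.2) and may differ on your install:

```shell
:: Program rule for each SQL instance - paths assume default install folders, adjust as needed
netsh advfirewall firewall add rule name="SQL Server INST1" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe" enable=yes
netsh advfirewall firewall add rule name="SQL Server INST2" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Binn\sqlservr.exe" enable=yes
:: UDP 1434 inbound for the SQL Browser service, so clients can resolve named instances
netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434
```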
Update - one last note.... one of the things I typically do in SQL security, to mirror my customer environments, is to add a global group containing my admin account with SA access to SQL, and then remove BUILTIN\Administrators and NT AUTHORITY\SYSTEM (the local system account). When you did this on a SQL 2005 clustered instance under Windows 2003, you needed to grant the cluster domain service account access to SQL, or the resource would fail to come online. Well, not in Server 2008 failover clustering - we don't use a domain service account for the cluster; it apparently uses local system, as the cluster service runs under Local System. What I found was - if I follow my standard and remove local system, the cluster resource for SQL fails. I had to start SQL from an elevated command line on the node ("sqlservr.exe -c -s INST1"), connect in SQL Management Studio, and add NT AUTHORITY\SYSTEM back. It did not need SA, but did need public. Once this was done, the SQL resource would come online and all was well again.
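For reference, once SQL is running in that console window, the repair itself is a single login add - here is a sketch using sqlcmd from a second elevated prompt on the same node (instance and node names from my lab):

```shell
:: With the instance started outside the cluster via "sqlservr.exe -c -s INST1",
:: re-create the local system login from another prompt on the node:
sqlcmd -S OMCL1\INST1 -E -Q "CREATE LOGIN [NT AUTHORITY\SYSTEM] FROM WINDOWS;"
:: public is granted by default, which is all the cluster resource needed in my case
```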