A great feature of many of the newer PowerShell cmdlets is that they use CIM sessions, which makes working with multiple remote machines a lot easier.
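As a quick illustration (host names are placeholders), a single CIM session can be created once and reused across several cmdlets, rather than each cmdlet authenticating to the remote machine separately:

```powershell
# Create one CIM session covering two hosts and reuse it across cmdlets.
# 'HV01' and 'HV02' are placeholder server names.
$cs = New-CimSession -ComputerName 'HV01','HV02'
Get-NetAdapter -CimSession $cs
Get-Disk -CimSession $cs
Remove-CimSession $cs
```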
Rather than doing lots of manual clicking to connect every possible MPIO iSCSI session per SAN LUN, Nimble provides a PowerShell script that discovers the available target IPs, the storage NICs, and the available volumes, and then initiates one connection per storage NIC to each volume. This saves an awful lot of repetitive mouse work, and removes the scope for errors – e.g. forgetting to make all the connections to a particular LUN.
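I won't reproduce the Nimble script here, but the general shape is something like the following sketch. The portal IP and the `Storage*` interface naming are assumptions for the example, not values from the original script:

```powershell
# Hedged sketch (not the original script): connect each discovered target
# once per storage NIC, with multipathing and persistence enabled.
# $PortalIP and the 'Storage*' NIC alias are placeholder assumptions.
$PortalIP = '10.0.10.1'
New-IscsiTargetPortal -TargetPortalAddress $PortalIP
$nicIPs = (Get-NetIPAddress -InterfaceAlias 'Storage*' -AddressFamily IPv4).IPAddress
foreach ($target in Get-IscsiTarget | Where-Object { -not $_.IsConnected }) {
    foreach ($ip in $nicIPs) {
        Connect-IscsiTarget -NodeAddress $target.NodeAddress `
            -TargetPortalAddress $PortalIP -InitiatorPortalAddress $ip `
            -IsMultipathEnabled $true -IsPersistent $true
    }
}
```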
I’ve adapted this script to work for LeftHand SAN volumes – but make no mistake, the wizardry is all Nimble’s – not mine.
Obviously test this before running it in your production environment. I’ve used it on Server 2012 R2 and it’s really made adding cluster volumes a lot easier.
Please don’t ask me how to get the PowerShell script to run if you come across execution policy issues. If you don’t know how to resolve that yourself, you shouldn’t be running scripts like this.
This applies to Server 2012 and newer, but these directions are from when I moved VMs from a 2012 cluster to a 2012 R2 cluster.
- On the new cluster, right-click on the cluster and choose More Actions -> Copy Cluster Roles
- Choose the roles to copy, and then complete the wizard. This will set up the roles and their dependencies on the destination cluster.
- Shut down the VMs on the source cluster
- Take the relevant storage offline in Failover Cluster Manager (FCM)
- Disconnect the iSCSI volumes on all source cluster nodes
- Re-assign the LUNs to the new cluster nodes in your SAN's management console
- Connect the LUNs on the destination cluster nodes, but leave the disks offline in disk management
- In FCM, go to Storage and bring the copied disks online. If everything has worked, you should see the volume details reflected in the bottom details pane.
- Start the VMs
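The cluster-side storage steps above can also be driven from PowerShell. A hedged sketch, with placeholder role and disk resource names:

```powershell
# On the source cluster: stop the VM role and take its storage offline.
# 'VM-Role-01' and 'Cluster Disk 1' are placeholder names.
Stop-ClusterGroup -Name 'VM-Role-01'
Stop-ClusterResource -Name 'Cluster Disk 1'

# On the destination cluster, after the LUNs are connected (disks left
# offline in Disk Management), bring the clustered disk online:
Start-ClusterResource -Name 'Cluster Disk 1'
```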
Today I had to troubleshoot an issue on our 2012 (non-R2) cluster that was causing live migration to fail. Quick migration wasn’t affected.
The error I was receiving was:
Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason: DNS bad key.
I tried all of the suggestions in the various KB articles and blog posts, but none of them helped. A lot of them related to externally hosted DNS, but our DNS is a normal AD-integrated zone hosted internally. I tried removing the cluster name object's (CNO's) DNS A record and forcing a repair.
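The DNS re-registration can be forced from the FailoverClusters module; a hedged example, assuming the default network name resource is called 'Cluster Name':

```powershell
# Force the cluster network name resource to re-register its DNS records.
Get-ClusterResource -Name 'Cluster Name' | Update-ClusterNetworkNameResource
```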
(As an aside, I can confirm that taking the Cluster Name resource offline doesn't affect the Hyper-V workloads running on the cluster.)
The repair recreated the CNO A-record with the correct permissions assigned to the cluster’s AD computer account. This still didn’t help.
I then edited the permissions on the CNO’s DNS A-record to allow the individual cluster nodes’ computer accounts write access, and the problem went away.
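That permission change can also be made from the command line with dsacls. Everything in this example is a placeholder (cluster name, domain, node account), and the record's distinguished name depends on where your zone is stored – here I've assumed the DomainDnsZones partition:

```powershell
# Hedged example: grant a cluster node's computer account generic write
# (GW) on the CNO's A record in an AD-integrated zone. All names are
# placeholders; adjust the DN to match where your zone actually lives.
dsacls "DC=MYCLUSTER,DC=contoso.com,CN=MicrosoftDNS,DC=DomainDnsZones,DC=contoso,DC=com" /G "CONTOSO\HVNODE1$:GW"
```

You'd repeat this for each node's computer account, which is exactly the repetitive part I complain about below.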
I’ll be the first to admit that this is an annoying solution as I’m going to have to add the permissions for new cluster nodes as they’re added to the cluster in the future. That said, I think I’m going to build a new 2012 R2 cluster on the other two blades, move the workloads across, and then rebuild these nodes as well.
Had an issue on some new Server 2012 Hyper-V clustered hosts. Started seeing the following error in the logs:
Cluster Shared Volume 'SERVERNAME' ('') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared Volumes. Active filter drivers found: ???????????????????????
Yes, those are random characters in the error message, so it’s difficult to track down the filter driver in question.
This seems to match a known Server 2008 R2 issue: “Redirected mode is enabled unexpectedly in a Cluster Shared Volume when you are running a third-party application in a Windows Server 2008 R2-based cluster”.
In the Microsoft KB article, the conditions are stated as:
- The third-party application has a mini-filter driver that uses an altitude value to determine the load order of the mini-filter driver.
- The altitude value contains a decimal point.
Running fltmc.exe on these hosts lists the loaded filter drivers and their altitudes.
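If you want to check your own hosts, an altitude containing a decimal point is easy to spot with a quick filter over the fltmc output:

```powershell
# List loaded minifilter drivers with their altitudes, then flag any
# altitude containing a decimal point (the condition from the KB article).
fltmc filters
fltmc filters | Select-String '\d+\.\d+'
```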
The filter driver with the decimal point in its altitude is stcvsm, the “StorageCraft Volume Snapshot Driver”. The problem has only been occurring recently, after I installed ShadowProtect.
I’ve had to uninstall ShadowProtect from these servers until it’s officially supported on Server 2012. According to StorageCraft, that should be 90 days after the official launch date of Server 2012. That means there should be a new version of ShadowProtect around the 3rd of December 2012.
A point to be aware of when setting up a new Hyper-V host on an HP server that's going to have the Network Configuration Utility (NCU) installed: you must install the Hyper-V role before installing the NCU.
This is as per HP’s recommendation here:
Using HP ProLiant Network Teaming Software with Microsoft® Windows® Server 2008 Hyper-V or with Microsoft® Windows® Server 2008 R2 Hyper-V
The reason for this, according to HP:
If you install the teaming software before installing Hyper-V, the network adapters may stop passing traffic.
If you’ve already installed both, but in the wrong order, you’ll need to uninstall both the Hyper-V role and the NCU.
To uninstall the NCU (I always forget how to do this, and have to look it up), do the following:
Just a quick gotcha I came up against when I was trying to set up a CAS array on my Exchange 2010 boxes. I kept getting errors when trying to set up a unicast NLB cluster, and the nodes wouldn’t converge.
I ended up figuring out that I had to enable MAC address spoofing on the virtual NICs. The cluster needs to assign the same MAC address to both cluster NICs, and that isn't possible without the checkbox ticked. MAC address spoofing has only been available in Hyper-V since Server 2008 R2.
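Besides the checkbox in the VM's NIC settings, the same option can be set from PowerShell on Server 2012 or later (the VM name is a placeholder):

```powershell
# Enable MAC address spoofing on a guest's virtual NIC.
# 'EXCH-CAS1' is a placeholder VM name; requires the Hyper-V module.
Set-VMNetworkAdapter -VMName 'EXCH-CAS1' -MacAddressSpoofing On
```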
I’ve been browsing the Hyper-V related articles on TechNet while planning a move over to a failover cluster, and have found some useful tips. The TechNet articles are a goldmine of information if you can be bothered to read through them all.
They answer some of the initial questions I had when first implementing Hyper-V:
If you change the configuration of a virtual machine, we recommend that you use the Failover Cluster Manager snap-in to access the virtual machine settings. When you do this, the cluster is updated automatically with the configuration changes. However, if you make changes to the virtual machine settings from the Hyper-V Manager snap-in, you must update the cluster manually after you make the changes. If the configuration is not refreshed after networking or storage changes are made, a subsequent failover may not succeed or may succeed but result in the virtual machine being configured incorrectly. original here…
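If you have made changes through the Hyper-V Manager snap-in, the manual cluster refresh mentioned above can be done from PowerShell; a hedged example with a placeholder VM name:

```powershell
# Refresh the cluster's copy of a VM's configuration after out-of-band
# changes in Hyper-V Manager. 'MyVM' is a placeholder clustered VM name.
Update-ClusterVirtualMachineConfiguration -Name 'MyVM'
```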
For iSCSI: If you are using iSCSI, each clustered server must have one or more network adapters or host bus adapters that are dedicated to the cluster storage. The network that you use for iSCSI cannot be used for network communication. In all clustered servers, the network adapters that you use to connect to the iSCSI storage target should be identical, and we recommend that you use Gigabit Ethernet or a faster network adapter. original here…
You cannot use teamed network adapters, because they are not supported with iSCSI. original here…
On a failover cluster that uses Cluster Shared Volumes, multiple clustered virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. original here…
Software requirements for using Hyper-V and Failover Clustering: All the servers should have the same software updates (patches) and service packs. original here…