Auto-map all available HP StoreVirtual/LeftHand iSCSI volumes in Windows via PowerShell

Now that we’re running a Nimble SAN alongside our HP LeftHand/StoreVirtual system, I’ve been spoiled by the little PowerShell script that Nimble have drummed up.

Rather than having to do lots of clicking to manually connect all the possible MPIO iSCSI sessions per SAN LUN, they use a PowerShell script that discovers the available target IPs, the storage NICs and the available volumes, and then initiates one connection per storage NIC to each volume. This saves an awful lot of repetitive mouse clicking, and removes the scope for errors – e.g. forgetting to make all the connections to a particular LUN.

LeftHand MPIO PowerShell script in action

I’ve adapted this script to work for LeftHand SAN volumes – but make no mistake, the wizardry is all Nimble’s – not mine.
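For a feel of what the adapted script does, the core of the approach is sketched below. This is a simplified illustration rather than the script itself – it assumes the built-in iSCSI cmdlets on Server 2012 or later, and the VIP/discovery address and storage NIC addresses are placeholders for your own environment.

# Simplified sketch only – the real script does more discovery and error handling.
# $vipAddress and $storageNicIPs are placeholders; substitute your own addresses.
$vipAddress    = "10.0.10.1"                      # LeftHand cluster VIP / discovery address
$storageNicIPs = @("10.0.10.21", "10.0.10.22")    # this host's iSCSI NIC addresses

# Register the discovery portal so the targets (volumes) can be enumerated.
New-IscsiTargetPortal -TargetPortalAddress $vipAddress -ErrorAction SilentlyContinue | Out-Null

# For every discovered target, open one persistent MPIO session per storage NIC.
foreach ($target in Get-IscsiTarget) {
    foreach ($nicIP in $storageNicIPs) {
        Connect-IscsiTarget -NodeAddress $target.NodeAddress `
                            -TargetPortalAddress $vipAddress `
                            -InitiatorPortalAddress $nicIP `
                            -IsMultipathEnabled $true `
                            -IsPersistent $true `
                            -ErrorAction SilentlyContinue | Out-Null
    }
}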

Obviously test this before running it in your production environment. I’ve used it on Server 2012 R2 and it’s really made adding cluster volumes a lot easier.

Please don’t ask me how to get the PowerShell script to run if you come across execution policy issues. If you don’t know how to resolve that yourself, you shouldn’t be running scripts like this.

Download the script here.

Braindump: Moving clustered VMs from one Hyper-V cluster to another

This applies to Server 2012 and newer, but these directions are from when I moved VMs from a 2012 cluster to a 2012 R2 cluster.

  1. On the new cluster, right-click on the cluster and choose More Actions -> Copy Cluster Roles
  2. Choose the roles to copy, and then complete the wizard. This will set up the roles and their dependencies on the destination cluster.
  3. Shut down the VMs on the source cluster
  4. Take the relevant storage offline in Failover Cluster Manager (FCM)
  5. Disconnect the iSCSI volumes on all source cluster nodes
  6. Re-assign the LUNs to the new cluster nodes in your SAN of choice’s management console
  7. Connect the LUNs on the destination cluster nodes, but leave the disks offline in disk management
  8. In FCM, go to storage and bring the copied disks online. You should see the volume details reflected in the bottom details pane if all is successful.
  9. Start the VMs
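For reference, a rough PowerShell equivalent of steps 3, 4 and 8 is below. The FailoverClusters and Hyper-V modules are assumed on the relevant nodes, and "Cluster Disk 1" is a hypothetical resource name – substitute your own.

# Step 3 – shut down the running VMs on the source cluster node.
Get-VM | Where-Object { $_.State -eq 'Running' } | Stop-VM

# Step 4 – take the relevant cluster disk offline on the source cluster.
Stop-ClusterResource -Name "Cluster Disk 1"

# Step 8 – on the destination cluster, bring the copied disk resource online.
Start-ClusterResource -Name "Cluster Disk 1"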

Windows Server 2012 – Failover Clustering error “Cluster network name resource ‘Cluster Name’ failed registration of one or more associated DNS name(s) for the following reason: DNS bad key.”

Today I had to troubleshoot an issue on our 2012 (non-R2) cluster that was causing live migration to fail. Quick migration wasn’t affected.

The error I was receiving was:

Cluster network name resource 'Cluster Name' failed registration of one or more associated DNS name(s) for the following reason: DNS bad key.

I tried all of the suggestions on the various KB articles and blog posts, but none of them helped. A lot of them were related to externally-hosted DNS, but our DNS is a normal AD-integrated zone hosted internally. I tried removing the cluster name object's (CNO) DNS A record, and forcing a repair:

(By the way, just confirming that taking the Cluster Name resource offline doesn’t affect the Hyper-V workloads running on the cluster)
[Screenshots: repairing the Cluster Name resource in Failover Cluster Manager]

The repair recreated the CNO A-record with the correct permissions assigned to the cluster’s AD computer account. This still didn’t help.

I then edited the permissions on the CNO’s DNS A-record to allow the individual cluster nodes’ computer accounts write access, and the problem went away.

[Screenshot: security permissions on the CNO's DNS A-record]
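If you'd rather script that permission change than click through the DNS console, something along these lines should do it. Treat it as an untested sketch: the record's distinguished name, zone and node computer account names below are all hypothetical placeholders – check the DN of your CNO's record under the MicrosoftDNS container first.

# Sketch only – the record DN, zone and node names are placeholders for your environment.
Import-Module ActiveDirectory

# DN of the CNO's A-record in the AD-integrated zone (verify with ADSI Edit or Get-ChildItem on the AD: drive).
$recordDn = "DC=MYCLUSTER,DC=example.local,CN=MicrosoftDNS,DC=DomainDnsZones,DC=example,DC=local"

$acl = Get-Acl -Path "AD:\$recordDn"
foreach ($node in "HVNODE1", "HVNODE2") {
    $sid = (Get-ADComputer $node).SID
    # Grant the node's computer account write access on the record.
    $ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sid, "GenericWrite", "Allow")
    $acl.AddAccessRule($ace)
}
Set-Acl -Path "AD:\$recordDn" -AclObject $acl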

I’ll be the first to admit that this is an annoying solution as I’m going to have to add the permissions for new cluster nodes as they’re added to the cluster in the future. That said, I think I’m going to build a new 2012 R2 cluster on the other two blades, move the workloads across, and then rebuild these nodes as well.

ShadowProtect filter driver causing CSV to run in redirected mode on Server 2012

Had an issue on some new Server 2012 Hyper-V clustered hosts. Started seeing the following error in the logs:

Cluster Shared Volume 'SERVERNAME' ('') has identified one or more active filter drivers on this device stack that could interfere with CSV operations. I/O access will be redirected to the storage device over the network through another Cluster node. This may result in degraded performance. Please contact the filter driver vendor to verify interoperability with Cluster Shared Volumes. 

Active filter drivers found:
???????????????????????

Yes, those are random characters in the error message, so it’s difficult to track down the filter driver in question.

This seems to match the same issue in Server 2008 R2, covered in the Microsoft KB article “Redirected mode is enabled unexpectedly in a Cluster Shared Volume when you are running a third-party application in a Windows Server 2008 R2-based cluster”.

In the Microsoft KB notes, they state the conditions as being:

  • The third-party application has a mini-filter driver that uses an altitude value to determine the load order of the mini-filter driver.
  • The altitude value contains a decimal point.

Running fltmc.exe on these hosts shows the following filter drivers loaded:

[Screenshot: fltmc.exe output showing the loaded filter drivers]

The filter driver with the decimal point is stcvsm, the “StorageCraft Volume Snapshot Driver”. The problem has only been occurring recently, after I installed ShadowProtect.
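If you'd rather not eyeball the output, a rough way to pick out any filter with a decimal point in its altitude is below – it just parses the default fltmc.exe column layout, so treat it as a sketch.

# Rough sketch – assumes the default fltmc.exe column layout (Name, Instances, Altitude, Frame).
fltmc.exe filters | ForEach-Object {
    $cols = -split $_
    if ($cols.Count -ge 4 -and $cols[2] -match '^\d+\.\d+$') {
        "$($cols[0]) has a decimal point in its altitude ($($cols[2]))"
    }
}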

I’ve had to uninstall ShadowProtect from these servers until it’s officially supported on Server 2012. According to StorageCraft, that should be 90 days after the official launch date of Server 2012. That means there should be a new version of ShadowProtect around the 3rd of December 2012.

Gotcha: HP Teaming and Hyper-V

A point to be aware of when setting up a new Hyper-V host on an HP server that's going to have the Network Configuration Utility (NCU) installed: you must install the Hyper-V role before installing the NCU.

This is as per HP’s recommendation here:
Using HP ProLiant Network Teaming Software with Microsoft® Windows® Server 2008 Hyper-V or with Microsoft® Windows® Server 2008 R2 Hyper-V

The reason for this, according to HP:

If you install the teaming software before installing Hyper-V, the network adapters may stop passing traffic.

If you’ve already installed both, but in the wrong order, you’ll need to uninstall both the Hyper-V role as well as the NCU.
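For a fresh build, you can make sure the role goes on first from PowerShell before the NCU is ever installed – a minimal sketch, assuming the ServerManager module on Server 2008 R2 / 2012:

# Enable the Hyper-V role first; install the NCU only after the host comes back up.
Import-Module ServerManager
if (-not (Get-WindowsFeature -Name Hyper-V).Installed) {
    Add-WindowsFeature -Name Hyper-V -Restart
}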

To uninstall the NCU (I always forget how to do this, and have to look it up), do the following:

  1. Start, Run, ncpa.cpl
  2. Go into the properties of one of the NICs
  3. Select “HP Network Configuration Utility”, and click Uninstall

NLB on a Hyper-V host

Just a quick gotcha I came up against when I was trying to set up a CAS array on my Exchange 2010 boxes. I kept getting errors when trying to set up a unicast NLB cluster, and the nodes wouldn’t converge.

I ended up figuring out that I had to enable MAC address spoofing on the virtual NICs. The cluster needs to assign the same MAC address to the NLB NIC on each node, and this isn't possible without having that checkbox ticked. The setting has only been available in Hyper-V since Server 2008 R2.
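If your hosts are on Server 2012 or later, the same checkbox can be flipped from PowerShell with the Hyper-V module (on 2008 R2 it's the GUI or WMI) – the VM names here are placeholders:

# Enable MAC address spoofing on each NLB cluster VM (applies to all of the VM's vNICs
# unless you target a specific adapter; VM names are placeholders).
foreach ($vm in "EXCAS01", "EXCAS02") {
    Set-VMNetworkAdapter -VMName $vm -MacAddressSpoofing On
}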

I was going to write a more detailed article about it, but as usual, Paul Cunningham’s great blog has it covered already in detail with screenshots.