This second post in my series about monitoring HP ProLiant system health via PowerShell and WMI covers retrieving temperature sensor values and status.
As per the last post, this script is designed to work with PRTG, but it will work just as well in a standalone manner if you un-comment a line around line 133:
# If you don't use PRTG, you could just un-comment the below line
#$customSensors | ft -AutoSize; exit;
Running in standalone mode will display the sensor values in a formatted table, while the out-of-the-box output for PRTG will return XML.
The sensor’s warning threshold is determined by this line at the top of the script:
# I'm setting an arbitrary warning threshold at 75%
$warningThreshold = 0.75
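As a sketch of how a percentage-based threshold like that gets applied (the variable names here are illustrative, not necessarily the ones used in the script), a reading is compared against a fraction of the sensor's own critical limit:

```powershell
# Illustrative only: warn when a reading reaches 75% of the sensor's critical limit
$warningThreshold = 0.75
$reading  = 62    # current temperature reported by the sensor
$critical = 90    # the sensor's critical threshold

if ($reading -ge ($critical * $warningThreshold)) {
    Write-Output "WARNING: reading $reading is at or above $($critical * $warningThreshold)"
}
```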
I had to use a bit of Select-String trickery to get a useful sensor name. This method won’t work with other localizations of the WBEM install (if HP even provide them), and will need to be tweaked:
(Select-String -InputObject $sensor.description -Pattern '(?<=detects for )(.*)(?=. Temp)' -AllMatches).Matches.Value
# Turns this:
"Temperature Sensor 7 detects for CPU board. Temperature reported by the sensor is within normal operating range."
# Into this:
"Temperature Sensor 7: CPU board"
See this section of the previous post about how to set up the sensor in PRTG.
This is the first in a series of posts about monitoring HP ProLiant system health via PowerShell and WMI.
I chose to implement this as an alternative to SNMP since SNMP support in Windows Server is being EOL’ed.
These scripts were written to be used in conjunction with PRTG Network Monitor, but they can just as easily be adapted to display the information in a table by un-commenting a single line.
This functionality depends on the HP Insight Management WBEM providers being installed on the monitored server. The account you’re running the script as will also need permission to do remote WMI queries against the server in question.
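Before wiring anything into PRTG, you can check both prerequisites with a quick manual query. The HP Insight WBEM providers register their classes under the root\hpq namespace; the class name below is what I'd expect for numeric sensors, but it may vary by provider version, so list the available classes if it errors:

```powershell
# Sanity check: query the HP WBEM providers on a remote server
# root\hpq is the HP Insight namespace; HP_WinNumericSensor may vary by provider version
$server = "myserver01"   # hypothetical name -- replace with your server

Get-WmiObject -Namespace "root\hpq" -Class "HP_WinNumericSensor" -ComputerName $server |
    Select-Object Name, Description, CurrentReading

# If the class name doesn't exist in your provider version, list what's available:
# Get-WmiObject -Namespace "root\hpq" -ComputerName $server -List
```

If this fails with an access-denied error, it's the remote WMI permissions that need fixing; if the namespace is missing, the WBEM providers aren't installed.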
Running the script in standalone mode will display the results in a table.
Out of the box, the script will output the data in XML as needed by PRTG:
<channel>Port:1I Box:1 Bay:1 - SAS Disk P4W89D4A</channel>
<channel>Port:1I Box:1 Bay:2 - SAS Disk P4WA9UUA</channel>
<channel>Port:1I Box:1 Bay:3 - SAS Disk P4XWYJSA</channel>
<channel>Port:1I Box:1 Bay:4 - SAS Disk P4XX08ZA</channel>
<channel>Port:2I Box:1 Bay:5 - SAS Disk P4VX5AVA</channel>
<channel>Port:2I Box:1 Bay:6 - SAS Disk P4W9Y78A</channel>
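For context, each of those channel lines sits inside PRTG's standard XML envelope for EXE/Script Advanced sensors, roughly like this (the value shown is made up):

```xml
<prtg>
  <result>
    <channel>Port:1I Box:1 Bay:1 - SAS Disk P4W89D4A</channel>
    <value>1</value>
  </result>
  <!-- one <result> element per disk -->
</prtg>
```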
Setting up the script for PRTG
- Copy the script to Custom Sensors\EXEXML in your PRTG installation folder. Remember to do this on all probes if you have multiple.
- Add a new sensor of type EXE/Script Advanced Sensor to the device that represents the ProLiant server you wish to monitor.
- Make sure the following sensor settings are configured as per below:
- Parameters: %host
- Security Context: ‘Use Windows credentials of parent device’
The script is configured to put the entire sensor into an error state if there’s a problem with one of the channels. It will also set the sensor message to indicate which disk is experiencing the problem.
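In the PRTG EXE/Script Advanced XML format, a sensor-wide error state and message are set with top-level elements, along these lines (the disk and message here are made up):

```xml
<prtg>
  <error>1</error>
  <text>Port:1I Box:1 Bay:3 - SAS Disk P4XWYJSA: Predictive failure</text>
</prtg>
```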
Here’s the script:
I’ve had an issue several times with a particular server (Server 2008 R2) where the Disk Management console and HP’s Array Config utility stop responding when a volume is in the process of being brought online. The volume also doesn’t come online completely during this time.
It usually resolves itself after a while, but today I had a scenario where the volume had magically knocked itself offline, and coming back online couldn’t wait. I ended up restarting the Virtual Disk service in Windows. That resolved the problem. The volume was instantly online, and the tools began responding after closing/reopening them.
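If you hit the same situation, restarting the service from an elevated PowerShell prompt looks like this (vds is the Virtual Disk service's short name):

```powershell
# Restart the Virtual Disk service; requires an elevated prompt
Restart-Service -Name vds
```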
Hope this helps someone eventually, as I couldn’t find anything on Google that sounded similar.
A point to be aware of when setting up a new Hyper-V host on an HP server that’s going to have the Network Configuration Utility (NCU) installed: you must install the Hyper-V role before installing the NCU.
This is as per HP’s recommendation here:
Using HP ProLiant Network Teaming Software with Microsoft® Windows® Server 2008 Hyper-V or with Microsoft® Windows® Server 2008 R2 Hyper-V
The reason for this, according to HP:
If you install the teaming software before installing Hyper-V, the network adapters may stop passing traffic.
If you’ve already installed both in the wrong order, you’ll need to uninstall both the Hyper-V role and the NCU, then reinstall them in the correct order.
To uninstall the NCU (I always forget how to do this, and have to look it up), do the following:
- Start, Run, ncpa.cpl
- Go into the properties of one of the NICs
- Select “HP Network Configuration Utility”, and click Uninstall