Per the topic at
http://www.groundworkopensource.com/community/forums/viewtopic.php?p=5220 :
The v1.1 WMI scripts appear to mishandle warning and critical states when querying multiple values: a non-OK state is reported only when the last value checked is in one of those error states, as you've seen with your disk check and as I've seen with memory checks. In other words, the result code from the last check performed by the command is the one that wins: if the last check was OK, as in your dcas0001 check, the script returns OK; if the last check was not OK, as in your dcfs0001 check, the script returns a non-zero errorlevel.
I sent a PM to dblunt before I discovered the bug tracker. Below is what I sent. It includes potential resolutions:
Basically, when using scripts like check_memory and check_cpu and checking "*" instances, the errorlevel returned by the scripts is incorrect when the last checked instance is OK but prior instances are not.
The issue in all the scripts is in the function f_GetInstance. If you look at the latest copy of check_memory_percentage_space_used.vbs from
http://archive.groundworkopensource.com/groundwork-exchange/trunk/plugins/nagios/wmi/ , line 230 in the function f_GetInstance reads intReturnTemp = intReturnTemp1. The problem is that intReturnTemp1 is reinitialized to 0 every time f_Display runs. As a result, when the command-line argument -inst "*" is passed to the script, the errorlevel it returns reflects only the status of the last instance checked, not the worst state across all the instances.
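The general pattern the fix needs is worst-state aggregation: keep the highest Nagios return code (0=OK, 1=WARNING, 2=CRITICAL) seen across all instances. Here is a minimal standalone illustration of the difference between the buggy "last check wins" behavior and the correct pattern; the names here are hypothetical and not taken from the plugin:

```vbscript
' Minimal illustration (hypothetical names, not the plugin's code).
' Nagios return codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
Dim intWorst
intWorst = 0

Sub f_CheckInstance(intStatus)
    ' Buggy pattern:   intWorst = intStatus   (last check wins)
    ' Correct pattern: keep the worst (highest) code seen so far.
    If intStatus > intWorst Then
        intWorst = intStatus
    End If
End Sub

f_CheckInstance 2   ' first instance is CRITICAL
f_CheckInstance 0   ' last instance is OK
WScript.Echo intWorst   ' prints 2; the buggy pattern would leave 0
WScript.Quit intWorst
```

With the buggy overwrite, the exit code would be 0 here even though one instance was critical, which is exactly the behavior seen with "*" instance checks.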
To resolve this, I changed the if (instance = "*") block in f_GetInstance of check_memory_percentage_space_used.vbs as follows:
    if (instance = "*") then
        strName = "RAM"
        intValue = Int(percentPhysical)
        f_Display()
        strResultTemp1 = strResultTemp1 & strResultTemp3
        strResultTemp2 = strResultTemp2 & strResultTemp4
        intReturnTemp = intReturnTemp1

        strName = "PAGING"
        intValue = Int(percentVirtual)
        f_Display()
        strResultTemp1 = strResultTemp1 & strResultTemp3
        strResultTemp2 = strResultTemp2 & strResultTemp4
        ' Keep the worst (highest) return code seen so far
        if intReturnTemp < intReturnTemp1 then
            intReturnTemp = intReturnTemp1
        end if

        strName = "_Total"
        intValue = Int(percentTotal)
        f_Display()
        strResultTemp1 = strResultTemp1 & strResultTemp3
        strResultTemp2 = strResultTemp2 & strResultTemp4
        if intReturnTemp < intReturnTemp1 then
            intReturnTemp = intReturnTemp1
        end if

        Exit Function
Note that each check's return value is compared against the worst value from the prior checks. Something similar should work for all the scripts, though in some cases it may not be the cleanest approach.
Since you know the scripts better than anyone, what do you think of the following remedies?
1. Move the initialization of intReturnTemp1 out of f_Display and up to the beginning of the script, where the other global variables are initialized. This seems to work for the memory script, and it only requires moving one line, versus adding logic to f_GetInstance in each script.
2. Initialize intReturnTemp to 0 at the top of f_GetInstance, and then add the following after each f_Display call for "*" instances:
        if intReturnTemp < intReturnTemp1 then
            intReturnTemp = intReturnTemp1
        end if
I'm thinking option 1 would be the best (and simplest) fix, but I wanted to get your input.
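For reference, option 1 would look roughly like the sketch below. This is hypothetical: the real f_Display takes no arguments, reads globals such as strName and intValue, and also builds the output strings; only the threshold logic relevant to the errorlevel is shown here.

```vbscript
' Sketch of remedy 1 under an assumed f_Display structure.
Dim intReturnTemp1
intReturnTemp1 = 0              ' moved here from the top of f_Display

Sub f_Display(intValue, intWarning, intCritical)
    ' Because intReturnTemp1 is no longer reset to 0 on entry, an OK
    ' instance leaves the worst state from earlier instances intact.
    If intValue >= intCritical Then
        intReturnTemp1 = 2
    ElseIf intValue >= intWarning Then
        ' Guarded so a WARNING cannot downgrade an earlier CRITICAL;
        ' an unconditional assignment here would reintroduce a variant
        ' of the bug, which is the case where option 2 is more robust.
        If intReturnTemp1 < 1 Then intReturnTemp1 = 1
    End If
End Sub

f_Display 95, 80, 90   ' CRITICAL instance
f_Display 10, 80, 90   ' OK instance no longer resets the state
WScript.Echo intReturnTemp1   ' prints 2
WScript.Quit intReturnTemp1
```

The caveat in the comment is worth checking against each script: moving the initialization only works if f_Display never assigns a lower code over a higher one.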