Overview
The 7.2.1 Rollup Patch installer version 4125 includes fixes for issues in downtime management, SNMP trap handling, and the AWS Cloud Hub connector. It also addresses some rare conditions that can inhibit application of the patches.
The following describes the main issues addressed by the Rollup Patch installer version 7.2.1-gw4125. If you have an unpatched 7.2.1 system, this rollup patch includes all changes and updates made through patch 4122, plus the additional changes listed here; there is no need to install other patches first.
Each patch installer creates a backup directory that contains all the changed files for that patch. You may view the changed files on your system at:
/usr/local/groundwork/backup-gwNNNN/files
where NNNN is the number of the patch level prior to the one being installed. For example, when you install patch 4125 on a patch 4122 system, the files it changes are backed up (in their patch-4122 state) in
/usr/local/groundwork/backup-gw4122/files
and can be restored (rolled back) from there; see the separate roll-back instructions for details.
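In essence, rolling back means copying the saved files back into place, preserving their relative paths. The following is a minimal sketch of that idea, assuming the backup layout described above; the paths and function are illustrative, and on a production system you should follow the official roll-back instructions:

```python
import shutil
from pathlib import Path

def rollback(backup_root: str, install_root: str) -> list:
    """Copy every file saved under backup_root back into install_root,
    preserving the relative directory layout. Returns the list of
    restored relative paths (sorted)."""
    backup = Path(backup_root)
    target = Path(install_root)
    restored = []
    for saved in backup.rglob("*"):
        if saved.is_file():
            rel = saved.relative_to(backup)
            dest = target / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(saved, dest)  # preserves timestamps and mode
            restored.append(str(rel))
    return sorted(restored)
```

For a real roll-back, `backup_root` would be a directory such as `/usr/local/groundwork/backup-gw4122/files` and `install_root` would be `/usr/local/groundwork`.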
Changes made in this patch installer
(beyond those included from patch 4122)
Issues addressed with RStools components
- Some orphaned downtimes were retained in the slareport database.
This patch fixes an issue with orphaned downtime deletion. An orphaned downtime is a future downtime scheduled for a given host or service that is "orphaned" when that host or service is deleted from the system. Under some circumstances these orphaned downtimes were still being retained; the deletion algorithm has been changed to address this. All orphaned downtimes are now removed daily, starting at 23:00 server time.
- Downtime calculations have been sped up.
While addressing the previous item, we noticed that calculating downtimes initially scheduled long ago was inefficient. The algorithm has been changed to speed it up, which improves the performance of the downtime user interface when many downtimes (more than 100 or so) are scheduled.
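Conceptually, orphan detection works as in the sketch below. The data model shown is hypothetical, purely for illustration; the actual slareport schema and cleanup code differ:

```python
def find_orphaned_downtimes(downtimes, existing_hosts, existing_services):
    """Return the IDs of downtimes whose target no longer exists.

    Hypothetical shapes, for illustration only:
      downtimes:         list of {"id", "host", "service"} dicts,
                         where service is None for a host downtime
      existing_hosts:    set of host names still in the system
      existing_services: set of (host, service) pairs still in the system
    """
    orphans = []
    for dt in downtimes:
        if dt["service"] is None:
            # Host downtime: orphaned when the host was deleted.
            if dt["host"] not in existing_hosts:
                orphans.append(dt["id"])
        else:
            # Service downtime: orphaned when the host/service pair is gone.
            if (dt["host"], dt["service"]) not in existing_services:
                orphans.append(dt["id"])
    return orphans
```

A daily job at 23:00 server time would then delete every downtime whose ID this check returns.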
SNMP trap handling fixes
- The gwprocesstrap.pl script was not working with PostgreSQL.
A simple string-literal quoting bug was preventing the gwprocesstrap.pl script from operating correctly. This has now been addressed.
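To illustrate the general class of bug (this is an illustration, not the actual gwprocesstrap.pl code): MySQL historically accepts a backslash-escaped quote (\') inside a string literal, while PostgreSQL with standard_conforming_strings enabled follows the SQL standard, where an embedded single quote is escaped by doubling it. A small sketch of the portable quoting rule:

```python
def quote_sql_literal(value: str) -> str:
    """Quote a string as a SQL literal using the SQL-standard rule:
    double every embedded single quote. In real code, prefer the
    database driver's parameterized queries over manual quoting."""
    return "'" + value.replace("'", "''") + "'"
```

A trap message such as `it's down` would thus be emitted as `'it''s down'`, which both PostgreSQL and MySQL accept.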
AWS Cloud Hub connector fixes and enhancements
- The ELB.RequestCount metric was not reporting correctly.
We fixed an issue with this Elastic Load Balancer metric that made it report the value as "1" instead of the actual request count.
- Data gathering enhancement
We changed the way data is gathered from the CloudWatch API to be more complete. The graphs in GroundWork now more closely reflect those in CloudWatch, giving a more accurate picture of the actual data over time rather than a "snapshot". Note that you may want to adjust the format of these metrics to show them in higher resolution: the default is integers, but floating-point values are supported.
- Added a UI for threshold overrides based on tags
We added a UI component to Cloud Hub that makes it possible to specify the tag, value, and threshold overrides based on the tag keys and values you set in EC2. You will find this in the Metrics section when you edit or define a new normal metric. This functionality was added in patch 4122, but it required you to specify the tags and thresholds in the profile XML file. There is no support for multiple tag matches in this release.
- Added support for Network and Application Elastic Load Balancer metrics
While we previously supported the standard ELB (now called "classic" by Amazon), we did not have specific support for Network or Application ELB metrics. These have now been added to the "Network" category. You will see them if you select Network as a monitoring category on the AWS Cloud Hub connector Home screen. If you have these load balancers configured in your environment, you can add the metrics to the Network section on the Metrics screen after clicking the "Check for Updates" button.
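One way to picture the data-gathering change above is the difference between reporting only the latest datapoint and summarizing all datapoints in the polling interval. The sketch below is illustrative only (a hypothetical datapoint shape, not the connector's actual CloudWatch API code):

```python
def snapshot_value(datapoints):
    """Old-style "snapshot": report only the most recent datapoint."""
    return datapoints[-1]["value"] if datapoints else None

def period_average(datapoints):
    """Average all datapoints from the polling interval, tracking the
    CloudWatch graph more closely. Returns a float, which is why the
    metric format may need to be switched from integer to floating point."""
    if not datapoints:
        return None
    return sum(p["value"] for p in datapoints) / len(datapoints)
```

With datapoints of 10, 30, and 20 requests, the snapshot reports 20 while the average reports 20.0; with 10, 30, and 2 it would report 2 versus 14.0, a much truer picture of the interval.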
Installer enhancements
- Cron service is now restarted if the installer or restoration script fails.
In the unlikely event that the patch installer fails, the crond daemon is restarted. Note that this does not cover a system crash or an ungraceful exit (such as a kill signal or an out-of-memory kill).
- Shells with user nagios and ssh sessions are now ignored
There was an issue where the necessary shutdown process would hang when users were logged in with shells open as user nagios. Such sessions are now ignored, as they are harmless to the patch process.
As always, should you have any questions about this rollup patch or any of the new features, please contact support for assistance.