This page covers the GroundWork Cloud Hub connector for the NetApp virtualization environment.
1.0 Managing a NetApp Connection
This section reviews how to add and configure the NetApp Cloud Hub connector to pull internal metrics into the GroundWork unified monitoring environment. NetApp is a storage management option for network-attached storage. Each Cloud Hub connector requires a unique set of parameters (e.g., endpoint, credentials), so have your GroundWork server and virtual environment parameters handy.
1.1 Adding a new connection
- Log in to GroundWork Monitor as an Administrator.
- Select GroundWork Administration > GroundWork Cloud Hub. The Cloud Hub Configuration Wizard screen is displayed, where you can add and configure Cloud Hub connections for various virtual environments. For each established configuration you can start or stop the connection, modify its parameters, or remove it.
- To start a new connection, click the +Add icon next to the environment you want to add. Create a new connector in this way for each NetApp region that is to be monitored.
Figure: Cloud Hub Configuration Wizard
1.2 Configuring GroundWork server values
- Next, enter the GroundWork server values to access the region. You will need to point the Cloud Hub NetApp connector to a GroundWork server, indicate whether SSL is enabled, and provide the credentials (or token) it uses to transmit data.
Figure: GroundWork server values for NetApp (Example)
- Display Name: The display name for this connection configuration.
- GroundWork Server Name: Enter the name of the GroundWork server that will integrate the Cloud Hub messages. If Cloud Hub is running on the same server as the portal, this can be localhost or, if preferred, the server name.
- Is SSL enabled on GroundWork Server?: Check this box if the GroundWork server is configured for secure HTTPS.
- GroundWork Web Services Username and Password: The user name and password configured to access the Web Services API. These can be obtained by opening a tab to the GroundWork Administration > GroundWork License page; they are the same credentials set in /usr/local/groundwork/config/ws_client.properties (an illustrative fragment of that file is shown after this list).
- Important for LDAP-enabled systems: Make sure the user name matches the entry in the ws_client.properties file and that the user is a member of the Authenticated group and the WSUser (or GWUser) group in LDAP.
- Without 7.0.2 SP3: The Web Services user name may be different if you are using LDAP and GroundWork Monitor 7.0.2 without the SP3 patch. In this case, adjust the user name shown in the image below to match your own, and fill in the correct password.
- With 7.0.2 SP3: If you have applied the SP3 patch, the Web Services user will not have a password; instead, fill in the token from the GroundWork Administration > GroundWork License page. Under the heading Web Services API Account Info, the default encrypted token can be copied into the Cloud Hub page.
- Merge hosts on GroundWork Server?: If checked, this option combines all metrics of same-named hosts under one host. For example, if there is a Nagios-configured host named demo1 and a Cloud Hub discovered host named demo1, the services for both the configured and discovered hosts will be combined under the hostname demo1 (case-sensitive).
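For reference, a minimal sketch of the relevant ws_client.properties entries is shown below. This is an assumption for illustration only: the exact property names and values vary by GroundWork version and installation, so verify against the file on your own server rather than copying these lines.

# /usr/local/groundwork/config/ws_client.properties (illustrative fragment; verify on your own server)
webservices_user = RESTAPIACCESS
webservices_password = <password or encrypted token from the GroundWork License page>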
1.3 Configuring virtualization server values
- We continue with the second half of the configuration wizard by entering the values for the virtualization server. The data the GroundWork server receives comes from the NetApp server; the information is pulled from the API periodically, based on the check interval that is set. You can also select which views to include.
Figure: Values for a NetApp connection (Example)
- Is SSL enabled on NetApp Server?: Check this box if the NetApp server is configured for secure HTTPS.
- NetApp Server Name: The name (or address) of the NetApp server to connect to.
- NetApp Server Username and Password: This is the NetApp server user name and password.
- Check Interval (in mins): This is the polling interval for collecting monitoring data from the virtual instance and sending it to the GroundWork server. The value is in minutes.
- Connection Retries (-1 infinite): The number of connection retries, which limits how many attempts are made after a failure. A value of -1 means retries continue indefinitely; any other value is the number of connection attempts made before the connection is left inactive (until you restart it).
- Views: The two radio buttons specify the views you would like to report: the Volume View and/or the Aggregate View.
- Select SAVE to save the current connection values; the entries are written to an XML file in the /usr/local/groundwork/config/cloudhub directory on the GroundWork server. When you save, the Cloud Hub connector is assigned an agent ID, which in turn becomes a record locator in Foundation when you begin monitoring.
- To validate the configuration, select TEST CONNECTION, which checks whether the virtual instance is accessible with the given credentials. If successful, you should see Connection successful! at the top of the screen.
- After the credentials have been validated, select NEXT to display the associated connection metrics screen, where you can determine the metrics to be monitored for NetApp (the HOME option takes you back to the first page of the configuration wizard).
1.4 Determining metrics to be monitored
Each management system provides metrics for specific checks that can be defined for the instance or the container. The property names and thresholds are defined in a monitoring profile in XML format (see section 3.2 below). In the UI, the available metrics are separated based on the controller, the aggregate disks, physical objects, and volumes. Monitoring of each threshold can be turned off and on. By default, Warning and Critical thresholds are set to -1, which turns them off; this lets you watch the incoming data to get an idea of a threshold value that is appropriate for the environment.
- The metrics screen allows you to define whether a metric should be monitored and graphed, and lets you set the Warning and Critical threshold values at which to trigger alerts; these profile metric options are described below. The selections you make are applied to every instance discovered in the region. The set of selections is saved on the GroundWork server in the /usr/local/groundwork/config/cloudhub/profiles directory as a profile in an XML file. Upon saving, changes are written to the XML profile file and become effective both for newly discovered instances and for instances already being monitored. In the example below we show a metric, cpu-busytime, that has been edited to include a graph, with a Warning Threshold of 1 and a Critical Threshold of 99 (a profile fragment illustrating these settings is shown after the field descriptions below). Further down, in the Status view, you can see that this threshold applies to all controllers on that NetApp, so you don't have to configure each controller individually.
Figure: Cloud Hub Configuration wizard for NetApp - Controller thresholds and Aggregate and Volume thresholds
- Attribute: The name of the service attribute (the metric name reported by the virtualization server).
- Monitored: When on (checked) the service will be monitored.
- Graphed: When on (checked) the service will be graphed.
- Warning and Critical Thresholds: These values control the triggering of alerts. A Warning number larger than the Critical value will cause Cloud Hub to detect the metric as a trigger. Choosing a -1 in a threshold box will disable triggering on that alert.
- Service Name: CloudHub automatically creates service names based on the metric name gathered from a virtualization server. The Service Names option adds the ability to report the polled metrics under a unique name that is set by the administrator. Leaving the Service Name field blank defaults to the metric name reported by the virtualization server. All CloudHub connectors now support the editable Service Name feature.
If a Service Name is added for an attribute and the Graphed option is on (checked), a performance graph will need to be configured for the new service name. You can easily do this by copying and editing the original performance graph entry: go to Configuration > Performance, from the Select Service-Host entry drop-down list select the original service name, select Copy, in the Service field replace the entry with the new service name you entered in Cloud Hub, and select Create Copy. After a couple of minutes the graph should display in Status.
- Description: A description of the service attribute.
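For reference, the cpu-busytime edit described above corresponds roughly to the following entry in the saved XML profile. This is a sketch based on the profile format shown in section 3.2; the attribute order and any additional attributes in your saved file may differ.

<metric name="cpu-busytime"
        description="Total time in seconds that CPU is busy on this controller node"
        monitored="true" graphed="true"
        warningThreshold="1" criticalThreshold="99" />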
- When you are satisfied with the profile selections choose SAVE to write out the profile. Select HOME to return to the main Cloud Hub panel.
- Select START for the specific connector to begin the discovery and data collection process.
Figure: Cloud Hub Configuration
2.0 Unified Monitoring
So how does all this get represented in the unified monitoring context? The data for the selected monitored services is passed to the GroundWork REST API and inserted directly into the Status and Event Console tables in the GroundWork Foundation database, which makes it show up in the UI almost immediately.
2.1 Status view
After starting the connection, within a couple of minutes the Status viewer application will display the automatically created host groups corresponding to the views chosen in setup. The monitoring can be adjusted by returning to the Cloud Hub configuration screen and modifying the metrics collected (check/un-check) or the threshold values. You may assign the discovered host groups to Custom Groups (e.g., Virtual, NetApp) to organize the Status display. You will see the Controllers represented as Host Groups, the elements as Hosts, and the metrics as services on the hosts, creating a hierarchy that fits into the GroundWork Monitor UI tree view. Similarly for volumes, you can define on the NetApp the volume metrics that will be displayed in Status as STOR:volumes, the same as you would see in the NetApp management console.
In our example, we show the cpu-busytime service Status Information as CRITICAL, which reflects the current threshold set in the profile. In this view you can also see the graphs coming in under Service Availability and Performance Measurement, and the events being logged at the bottom of the screen.
Figure: Status view
2.2 Event Console
Here in Event Console, we have selected the system applications filter NETAPP, which lists events for the NETAPP application type. From here you can select specific events and apply various actions.
Figure: Event Console, by Application Type (NETAPP)
2.3 Dashboards
This view displays the Enterprise View dashboard and indicates the host aggr0_gwos_netappp_colo_02_0 status as Host Recently Recovered.
Figure: NetApp Connection - Dashboards, Enterprise View
2.4 NoMa
Below we show the NoMa log for notifications in which you can see alerts for the service cpu-busytime.
Figure: NoMa notification log
3.0 Monitoring Profile for the NetApp Virtual Environment
The master monitoring profiles for virtual environments are stored on the GroundWork server. Each time the user goes into the Cloud Hub configuration screens, the monitoring profile is loaded from the GroundWork server into Cloud Hub. This allows you to manage and maintain the monitoring profiles for Cloud Hub in a central location.
3.1 Location of profiles
The location for Cloud Hub monitoring profiles is:
/usr/local/groundwork/core/vema/profiles/
Viewing the profiles directory:
[root@gwdemo ~]# cd /usr/local/groundwork/core/vema/profiles
[root@gwdemo profiles]# ls
amazon_monitoring_profile.xml        openstack_monitoring_profile.xml
docker_monitoring_profile.xml        rhev_monitoring_profile.xml
netapp_monitoring_profile.xml        vmware_monitoring_profile.xml
opendaylight_monitoring_profile.xml
[root@gwdemo profiles]#
The name of the NetApp monitoring profile is:
netapp_monitoring_profile.xml
If you wish, you may carefully edit netapp_monitoring_profile.xml to include additional numeric metrics.
If you edit the file, PLEASE test immediately. Any metric name that is slightly misspelled or otherwise rejected silently prevents ALL of the metrics from reporting, without raising any flags. In general, we do not recommend adding additional numeric metrics; at the time of this writing, all useful metrics have been included in the released XML file contents.
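If you do decide to add a metric, it must follow the same <metric> element format used throughout the profile (see section 3.2 below). The sketch below shows the general shape of an added entry in the <hypervisor> section; the name example-counter is purely hypothetical and must be replaced with a numeric metric name that the NetApp API actually reports.

<!-- hypothetical example only: "example-counter" is not a real NetApp metric name -->
<metric name="example-counter"
        description="Description of the added numeric metric"
        monitored="true" graphed="false"
        warningThreshold="-1" criticalThreshold="-1" />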
3.2 NetApp monitoring profile: netapp_monitoring_profile.xml
<?xml version="1.0" encoding="UTF-8"?> <vema-monitoring> <profileType>netapp</profileType> <hypervisor> <metric name="cpu-busytime" description="Total time in seconds that CPU is busy on this controller node" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="env-failed-fan-count" description="The number of fans that are in failed status (zero if none)" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" computeType="info" /> <metric name="env-failed-power-supply-count" description="The number of power supplies that are in failed status (zero if none)" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" computeType="info" /> <metric name="env-over-temperature" description="Boolean indicating if the NetApp controller node has surpassed temperature limit" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" computeType="info" /> <metric name="node-uptime" description="Total time in seconds that this controller node has been running" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="nvram-battery-status" description="Displays the NVRAM Battery status" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" computeType="info" /> <metric name="product-version" description="The product version number for this Netapp controller" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" computeType="info" /> <metric name="syn.cpu-controller-usage" description="Percentage of CPU Usage for this controller node" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="computed-failed-disks" description="Number of failed disks for this controller node" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="1" /> </hypervisor> <vm> <metric name="volume-inode-attributes.files-total" description="Total Files on Volume (INode)" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="volume-inode-attributes.files-used" description="Total Files Used on Volume (INode)" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="syn.volume.percent.files.used" description="Percentage of Volume Files Used" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="volume-space-attributes.size-total" description="Bytes Total on Volume" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="volume-space-attributes.size-used" description="Bytes Used on Volume" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="volume-space-attributes.size-available" description="Bytes Available on Volume" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="volume-space-attributes.percentage-size-used" description="Percentage of capacity used on Volume" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="syn.volume.percent.bytes.used" description="Percentage of Volume Used" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="syn.volume.gb.used" description="GB Used on Volume" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="syn.volume.gb.available" description="GB Available on Volume" monitored="true" 
graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="aggr-raid-attributes.disk-count" description="Number of Disks on RAID" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="aggr-volume-count-attributes.flexvol-count" description="Number of Volumes on RAID" monitored="true" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="aggr-space-attributes.size-total" description="Total bytes on RAID" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="aggr-space-attributes.size-used" description="Total bytes USED on RAID" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="aggr-space-attributes.size-available" description="Total bytes AVAILABLE on RAID" monitored="false" graphed="false" warningThreshold="-1" criticalThreshold="-1" /> <metric name="aggr-space-attributes.percent-used-capacity" description="Percentage of capacity USED on RAID" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="syn.aggregate.gb.used" description="GB Used on RAID aggregate" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> <metric name="syn.aggregate.gb.available" description="GB Available on RAID aggregate" monitored="true" graphed="true" warningThreshold="-1" criticalThreshold="-1" /> </vm> </vema-monitoring>
4.0 Removing Connectors from Monitoring
If you decide you do not want to monitor a particular region, simply navigate to GroundWork Administration > GroundWork Cloud Hub, select STOP for the connector, then DELETE. All of the created host groups and the discovered and monitored instances for that region will be deleted from the Foundation database within a few minutes, and monitoring access to the region endpoint will cease.
Additionally, see How to remove Cloud Hub hosts in the document How to delete or remove hosts.