LogBridge Feeder Overview

Overview

This page reviews the GroundWork LogBridge Feeder add-on.

1.0 About LogBridge Feeder

GroundWork's IT OPS Analytics is based on GroundWork LogBridge and the ELK log analytics stack, providing correlations and full compliance reporting for HIPAA, PCI, and Sarbanes-Oxley through GroundWork Monitor custom BIRT Reports. GroundWork LogBridge connects the monitoring system with ELK. The ELK stack consists of Elasticsearch, Logstash, and Kibana. Although all three are built to work well together, each is a separate project driven by the open-source vendor Elastic (http://elastic.co), which began as an enterprise search platform vendor.

The LogBridge feeder does two things:

  • Integrates Elasticsearch searches defined and stored in Kibana into GroundWork
  • Feeds GroundWork events into Elasticsearch

2.0 LogBridge Feeder Architecture

The LogBridge feeder is an add-on to the standard GroundWork system and consists of two RAPID-based applications: the LogBridge feeder application and the GroundWork Events feeder application.

Figure: LogBridge feeder

Both feeder applications work together to integrate Elasticsearch searches into GroundWork and GroundWork events into Elasticsearch. Similar to other RAPID-based feeders, the LogBridge feeder applications can feed to and from any number of GroundWork systems.

3.0 Operation of the LogBridge Feeder

3.1 Normal Operation Mode

LogBridge Feeder Application

The LogBridge feeder application finds and executes Elasticsearch searches created in Kibana, and integrates the count of matches from these searches into the GroundWork data model as services. A configurable set of rules is used to define which searches are of interest, and over what time ranges they should be executed. These searches are automatically executed by this feeder application on a recurring configurable schedule.

GroundWork alerts are generated if the count of results for a search exceeds a defined limit for a given query time range. Such alerts are sent into the GroundWork Event Console, and passed into the notification and escalation subsystem that applies its rules to notify contacts.

Under normal operation, the LogBridge feeder application performs these steps during each operating cycle (a simplified sketch follows the list):

  • Parses a configuration file that defines rules about which Elasticsearch searches saved in Kibana to look for, the time ranges over which to execute them, and the result count thresholds to use
  • Builds GroundWork objects (host groups, hosts, services, etc.) based on those configured rules
  • Finds and executes the Elasticsearch searches that match the configured rules
  • Updates the GroundWork objects with the results, including creating GroundWork events and triggering notifications if necessary
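
The following Python sketch illustrates the general shape of one such cycle. It is illustrative only: the function names, client objects, and helpers (parse_groups_xml, find_saved_searches, ensure_host, and so on) are hypothetical stand-ins, not the feeder's actual implementation.

# Illustrative sketch of one LogBridge feeder operating cycle.
# All names below are hypothetical stand-ins; this is not the feeder's actual code.

def run_cycle(groups_xml_path, elasticsearch, groundwork):
    rules = parse_groups_xml(groups_xml_path)             # logbridge-groups.xml rules

    for hostgroup in rules.hostgroups:                    # one per <root-hg> element
        groundwork.ensure_hostgroup(hostgroup.name)

        for host in hostgroup.hosts:                      # one per <host> element
            groundwork.ensure_host(host.name, hostgroup.name)

            # Saved Kibana searches whose names begin with the configured prefix
            searches = elasticsearch.find_saved_searches(prefix=host.prefix)

            for search in searches:
                # One service per (search, thold_ time range) pair; if no thold_
                # attributes exist, a single unfiltered search/service is used instead.
                for time_range, threshold in host.thresholds.items():
                    count = elasticsearch.count_matches(search, newer_than=time_range)

                    service = search.name[len(host.prefix):] + "_" + time_range
                    state = "CRITICAL" if count > threshold else "OK"

                    groundwork.update_service(host.name, service, state, count)
                    if state == "CRITICAL":
                        groundwork.create_event(host.name, service, count, threshold)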

The LogBridge feeder application configuration associates Elasticsearch searches with GroundWork hosts and services. These in turn are configured to be collected under a GroundWork host group. If the Elasticsearch search rules change, the associated hosts and services under the host group are kept in synchronization, i.e., hosts and services may be added and/or removed in this process.
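
That synchronization can be pictured as a simple set reconciliation, sketched below with hypothetical helper names (the real feeder performs these updates through the GroundWork REST API described by the ws_client properties):

# Illustrative reconciliation of a host's services against the configured rules
# (hypothetical helper names; shown only to clarify the add/remove behaviour).

def sync_services(groundwork, host_name, desired_services):
    existing = set(groundwork.list_services(host_name))   # services currently on the host
    desired = set(desired_services)                        # services implied by logbridge-groups.xml

    for service in desired - existing:
        groundwork.add_service(host_name, service)         # new rule or newly matching search

    for service in existing - desired:
        groundwork.delete_service(host_name, service)      # rule removed or search renamed/deleted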

The Configuring the LogBridge Feeder Application section describes configuration in more detail, and includes a complete example.

GroundWork Events Feeder Application

The GroundWork events feeder application finds new GroundWork events from any number of GroundWork servers and injects them into Elasticsearch. These GroundWork events are then available for presentation and querying through Kibana, which is integrated into GroundWork Monitor Log Analytics.
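
A minimal sketch of that flow is shown below, using plain HTTP calls against the Elasticsearch document API. The index name, the get_events_since() helper, and the event field names are assumptions for illustration, and the endpoint path varies between Elasticsearch versions; the real feeder's implementation differs.

# Illustrative sketch of the GroundWork events feeder flow (hypothetical names;
# not the actual feeder implementation).
import requests

ES_NODE = "http://localhost:9200"   # one of the configured Elasticsearch nodes (assumption)
INDEX = "groundwork_events"         # illustrative index name (assumption)

def feed_events(groundwork, last_id):
    # Fetch GroundWork events (logmessage rows) newer than the last processed id;
    # get_events_since() is a hypothetical stand-in for the GroundWork REST API call.
    events = groundwork.get_events_since(last_id)

    for event in events:
        # Index each event as a document (path shown is the modern /_doc endpoint).
        requests.post(f"{ES_NODE}/{INDEX}/_doc", json=event, timeout=10)
        last_id = max(last_id, event["logmessageid"])

    # The id returned here is what the gwevents_to_es.last.event.id.processed
    # service keeps track of between cycles.
    return last_id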

3.2 Common Operational Modes

There are other operational modes and elements common to all GroundWork RAPID-based feeders, such as feeder health services, failure mode with retry caching, and cleanup mode. Details of these common modes can be found in the Common Operations section.

4.0 LogBridge Feeder Metrics Services

The Common Health Services section describes feeder metrics and health services, including those common to all RAPID-based feeders. Metrics services specific to the LogBridge feeder applications are described below. All metrics are updated once per cycle under normal conditions.

4.1 LogBridge Feeder Application

Feeder Metric Service Name / Description

logbridge_feeder.cycle.elapsed.time
Reports the total time spent updating the GroundWork endpoint with all datasets (i.e., any that were in the retry cache plus the current one), excluding the time taken to get data from Elasticsearch and the time taken to execute Elasticsearch searches.

logbridge_feeder.esearches.durations
Reports two times (in milliseconds), summed across all processed datasets:
  1. Total 'took' time: the sum of the time taken to execute the Elasticsearch queries, as reported by Elasticsearch
  2. Total elapsed time: the total wall-clock time taken to execute the Elasticsearch queries, i.e., the 'took' time plus overheads such as network latency, API latency, etc.

logbridge_feeder.esearches.run
Reports the total number of successfully and unsuccessfully run Elasticsearch searches, summed across all processed datasets.

4.2 GroundWork Events Feeder Application

Feeder Metric Service Name / Description

gwevents_to_es.cycle.elapsed.time
Reports the total time spent getting events from the GroundWork endpoint and sending them into Elasticsearch.

gwevents_to_es.events.retrieved.on.last.cycle
Reports how many events were retrieved from the GroundWork endpoint.

gwevents_to_es.events.retrieved.per.minute
Reports how many GroundWork events were retrieved per minute, and how long it took in seconds to retrieve them all.

gwevents_to_es.events.sent.on.last.cycle
Reports how many GroundWork events were sent into Elasticsearch.

gwevents_to_es.events.sent.per.minute
Reports how many GroundWork events were sent into Elasticsearch per minute, and how long in total it took in seconds to send them all.

gwevents_to_es.last.event.id.processed
A special service used to keep track of the id of the last GroundWork event processed. The value is the logmessageid of the last logmessage table row processed from the gwcollagedb database, i.e., an 'event' here refers to a GroundWork log message.

5.0 Configuring the LogBridge Feeder

5.1 Introduction

This section describes how to configure the LogBridge feeder applications. Configuration of the feeder is divided into two sections:

  • Configuration common to all GroundWork RAPID-based feeders. Details can be found in the Common Configuration section.
  • Configuration specific to the LogBridge feeder applications, which is covered in this section.

Much of the general configuration for the LogBridge feeder applications is described in the Common Configuration section, including how to enable the feeder applications, how to configure multiple GroundWork server endpoints, and more. The Common Configuration section should be read before proceeding to the feeder-specific configuration below.

5.2 Configuring the LogBridge Feeder Application

Configuration Files

The LogBridge feeder application is configured through the following configuration files:

Configuration Files / Use

/usr/local/groundwork/config/logbridge_feeder.conf
The feeder application's master configuration file. The name of this file cannot be changed.

/usr/local/groundwork/config/logbridge_feeder_<endpoint>.conf
(Default is logbridge_feeder_localhost.conf)
The feeder application's GroundWork endpoint configuration file(s), as defined in the master configuration file.

/usr/local/groundwork/config/ws_client<endpoint>.properties
(Default is ws_client.properties)
GroundWork web services properties file(s), as defined in each endpoint configuration file; each identifies a GroundWork server REST API URL and credentials.

/usr/local/groundwork/config/logbridge-groups.xml
(Default name)
This configuration file provides central control for the Elasticsearch integration. It contains definitions for the Elasticsearch search criteria: which Kibana searches to look for and execute, the time ranges over which to execute those searches, and the search result count thresholds for each search. This file is referenced from the master configuration file through the groups_configuration setting; the default name can be changed.

Configuring Elasticsearch Nodes

The Elasticsearch cluster nodes that the feeder application works with are defined in the master configuration file, logbridge_feeder.conf, using the Elasticsearch_nodes setting. Any number of Elasticsearch_nodes entries can be supplied, each defining an Elasticsearch cluster node.

Configuring the Elasticsearch Integration (logbridge-groups.xml)

Introduction

This part of the configuration provides central control for the Elasticsearch integration. It contains definitions for the Elasticsearch search criteria: which Kibana searches to look for and execute, the time ranges over which to execute those searches, and the search result count thresholds for each search. This configuration is currently done through an XML file, which by default is /usr/local/groundwork/config/logbridge-groups.xml. It follows this general structure:

<?xml version="1.0" encoding="UTF-8"?>
<log-bridge>
    <root-hg name='Hostgroup Name'>
        <hosts>
            <host name='Host Name'
                  prefix='Elasticsearch Kibana search match prefix'
                  desc='Description'
                  thold_TimeRangeSpecifier='value'
                  thold_...
                  thold_...
            />
            ...
        </hosts>
    </root-hg>
    <root-hg name='Group 2'>
        ...
    </root-hg>
    ...
</log-bridge>

XML Elements Syntax Description

Element: log-bridge
Description: XML root tag
Required: yes
How many: 1
Attributes: none

Element: root-hg
Description: Creates a host group in GroundWork, to which the hosts defined within it are attached.
Required: yes
How many: 1 or more
Attributes:
name
Required: yes
Description: The name of the GroundWork host group

Element: hosts
Description: Container for the host elements in this host group
Required: yes
How many: 1
Attributes: none

Element: host
Description: Creates a host in GroundWork, to which services are attached.
Required: yes
How many: 1 or more
Attributes:
name
Required: yes
Description: The name of the GroundWork host

prefix
Required: yes
Description: The feeder uses saved Kibana searches whose names begin with this prefix value. The prefix is automatically stripped from the calculated GroundWork service name.

desc
Required: yes
Description: A meaningful description of this collection of searches, which is used in the GroundWork service description.

thold_XXXXX

Required: no
Description: Kibana does not save Elasticsearch time filter ranges with its saved searches. Each thold_ attribute defines a time range filter for the search, and results in a GroundWork service being created. Here, 'XXXXX' is a valid Elasticsearch time filter, such as 'now-5m' or 'now-1h'. Any number of thold_ attributes may be specified, each resulting in a corresponding GroundWork service. The value of the attribute, for example the '10' in thold_now-1h='10', defines a threshold: if the count of documents returned by the search exceeds this threshold, the associated GroundWork service is put into a critical state. There are no warning states. If no thold_ attributes are defined, the search is executed without time filtering.
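
To make the time-range behaviour concrete, the sketch below shows roughly what a thold_now-1h='10' attribute amounts to: the saved search's query is restricted to the last hour and the resulting document count is compared against the threshold. The @timestamp field name, the use of the _count API, and the exact query construction are assumptions for illustration, not the feeder's actual code.

# Roughly what a thold_now-1h='10' attribute amounts to (illustrative only).
import requests

ES_NODE = "http://localhost:9200"   # one of the configured Elasticsearch nodes (assumption)

def evaluate_threshold(index, saved_search_query, time_range="now-1h", threshold=10):
    # Wrap the saved search's query in a time-range filter covering now-1h .. now.
    # "@timestamp" is the usual Logstash field name and is an assumption here.
    body = {
        "query": {
            "bool": {
                "must": saved_search_query,
                "filter": {"range": {"@timestamp": {"gte": time_range, "lte": "now"}}},
            }
        }
    }
    count = requests.post(f"{ES_NODE}/{index}/_count", json=body, timeout=10).json()["count"]

    # The service goes critical only when the count exceeds the threshold; there is no warning state.
    return ("CRITICAL" if count > threshold else "OK"), count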

Configuration Example

This example demonstrates how the feeder can be configured to integrate Elasticsearch searches that are saved through Kibana with the GroundWork data model. It requires configuring the LogBridge feeder and having appropriate saved search objects in Kibana.

Configuration of logbridge_feeder.conf

The LogBridge feeder application’s master configuration includes a pointer to one Elasticsearch integration configuration file:

...
groups_configuration = /usr/local/groundwork/config/logbridge-groups.xml
...

Configuration of logbridge-groups.xml

The Elasticsearch integration configuration file, logbridge-groups.xml, defines which Elasticsearch Kibana searches to look for and execute, time ranges over which to execute those searches, and what the search result count thresholds are for each search:

<?xml version="1.0" encoding="UTF-8"?>
<log-bridge>
    <root-hg name='Compliance'>
       <hosts>
        <host name='HIPAA'
                prefix='hipaa_'
                desc='Searches related to HIPAA compliance searches'
                thold_now-1h='10'
                thold_now-1d='100' />
        <host name='PCI'
                prefix='pci_'
                desc='Searches related to PCI compliance searches'
                thold_now-1d='200' />
        <host name='Forensic'
                prefix='forensic_'
                desc='Searches related to forensic searches'
                thold_now-1h='30'
                thold_now-1d='300' />
        <host name='INFOSEC'
                prefix='infosec_'
                desc='Searches related to SECURITY information'
                thold_now-1d='100' />
        <host name='Correlation'
                prefix='correlation_'
                desc='Searches related to correlation searches'
                thold_now-1h='50'
                thold_now-1d='500' />
        <host name='Others'
                prefix='custom_'
                desc='Searches not matching any pre-defined rule sets' />
       </hosts>
    </root-hg>
</log-bridge>

Elasticsearch searches saved in Kibana

These Elasticsearch searches are defined and saved in Kibana:

  • hipaa_SecureRecordChanged
  • pci_s1
  • pci_s2
  • pci_s3
  • forensic_f1
  • infosec_i1
  • custom_search1
  • custom_search2
  • (no correlation searches defined)

Note: for demonstration, there are no correlation searches defined in this example.

GroundWork objects rendered in Status

[host group] Compliance
    [Host] HIPAA
    State: Up
    Status: 1 Kibana search matched prefix hipaa_
        [Service] secureRecordChanged_now-1h
        State: Ok
        Status: 5 (critical threshold is 10)
        [Service] secureRecordChanged_now-1d
        State: Ok
        Status: 24 (critical threshold is 100)
    [Host] PCI
    State: Up
    Status: 3 Kibana searches matched prefix pci_
        [Service] s1_now-1d
        State: Ok
        Status: 10 (critical threshold is 200)
        [Service] s2_now-1d
        State: Ok
        Status: 10 (critical threshold is 200)
        [Service] s3_now-1d
        State: Ok
        Status: 10 (critical threshold is 200)
    [Host] Forensic
    State: Up
    Status: 1 Kibana search matched prefix forensic_
        [Service] f1_now-1h
        State: Ok
        Status: 0 (critical threshold is 30)
        [Service] f1_now-1d
        State: Ok
        Status: 2 (critical threshold is 300)
    [Host] INFOSEC
    State: Up
    Status: 1 Kibana search matched prefix infosec_
        [Service] i1_now-1d
        State: Critical
        Status: 245 (critical threshold is 100)
    [Host] Correlation
    State: Unreachable
    Status: No Kibana searches matched prefix correlation_
    [Host] Others
    State: Up
    Status: 2 Kibana searches matched prefix custom_
        [Service] search1
        State: Ok
        Status: 2443
        [Service] search2
        State: Ok
        Status: 23

5.3 Configuring the GroundWork Events Feeder Application

Configuration Files

The GroundWork Events feeder application is configured with the following configuration files:

Configuration Files / Use

/usr/local/groundwork/config/gwevents_to_es.conf
The feeder application's master configuration file. The name of this file cannot be changed.

/usr/local/groundwork/config/gwevents_to_es_<endpoint>.conf
(Default is gwevents_to_es_localhost.conf)
The feeder application's GroundWork endpoint configuration file(s), as defined in the master configuration file.

/usr/local/groundwork/config/ws_client<endpoint>.properties
(Default is ws_client.properties)
GroundWork web services properties file(s), as defined in each endpoint configuration file; each identifies a GroundWork server REST API URL and credentials.

Configuring Elasticsearch Nodes

The Elasticsearch cluster nodes that the feeder application works with are defined in each endpoint configuration file using the elasticsearch_nodes setting. Any number of elasticsearch_nodes entries can be supplied, each defining an Elasticsearch cluster node. This allows the GroundWork Events feeder to feed from any set of GroundWork servers into any set of Elasticsearch nodes, on a per-endpoint basis. This differs from the LogBridge feeder application, where the nodes are defined in the master configuration file and apply to all GroundWork endpoints.
