Introduction to SL1 Publisher

This section describes SL1 Publisher.

This section covers the following topics:

  • What is Publisher?
  • How Does Publisher Work?
  • Prerequisites for Using Publisher
  • Workflow for Using Publisher
  • Enabling the Collector Pipeline
  • Configuring Proxy Support for Collector Pipeline

What is Publisher?

Publisher is a service that retrieves near real-time availability and interface performance data from SL1 Data Collectors or the SL1 Agent and delivers the data to a third-party destination for long-term data storage, analysis, or reporting. For example, Publisher can send data to Kafka topics.

Publisher supports two concepts: data models, which specify where in the ingestion pipeline to retrieve data from, and subscriptions, which specify what data to push and where to send it.

A library of Python functions (sl-schema-registry) is available on the ScienceLogic Support Site for deserializing the messages that Publisher sends to Kafka topics.

Publisher is installed and enabled by default in the SL1 Extended Architecture; however, you must configure Publisher before it will send your data to a destination. Client software, such as Kafka, must be configured on your own hardware.
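
For example, if you plan to publish to your own Apache Kafka cluster, you would create the destination topic yourself. The following is a minimal sketch, assuming a recent Apache Kafka distribution and a broker reachable at <BROKER_HOST>:9092; the topic name and sizing are illustrative:

kafka-topics.sh --create --bootstrap-server <BROKER_HOST>:9092 --topic sl1-publisher-data --partitions 3 --replication-factor 2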

How Does Publisher Work?

Publisher ingests data specified by the data models and publishes this data to your Kafka topics as specified in your subscriptions. The Publisher service listens for Custom Resource Definitions (CRDs) that you establish in YAML files, discussed below.

Publisher requires the following two types of YAML files to collect and push data:

  • DataModel. Specifies where in the ingestion pipeline to extract data from.
  • Subscription. Specifies which data models to publish and the endpoint to which the data is published.

You must create and apply these files using the examples shown in the next section.
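
To make the shape of these files concrete, the following is an illustrative sketch only. The apiVersion value and every spec field name below are assumptions, not the actual schema; use the examples in the next section as the authoritative format.

# Sketch of a DataModel (all field names are illustrative assumptions)
apiVersion: <publisher-crd-group>/v1
kind: DataModel
metadata:
  name: availability-datamodel
spec:
  dataType: availability    # where in the ingestion pipeline to extract data from

# Sketch of a Subscription (all field names are illustrative assumptions)
apiVersion: <publisher-crd-group>/v1
kind: Subscription
metadata:
  name: availability-to-kafka
spec:
  dataModel: availability-datamodel    # which data model to publish
  endpoint: <BROKER_HOST>:9092         # where to send the data
  topic: <TOPIC_NAME>

You would then apply each file from the Management Node with kubectl apply -f <filename>.yaml.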

When new data is available from a data model, Publisher creates a binary bundle and sends the bundle to the destination specified in the subscription.

When your data is published to your Kafka topics, you can use the sl-schema-registry library to unpack the binary data bundles on your third-party system.
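
For example, a consumer on your third-party system might look like the following sketch. It uses the open-source kafka-python package; the topic name and broker address are illustrative, and the actual unpacking call comes from the sl-schema-registry library (see the documentation in its .zip file), so it is only indicated by a comment here:

from kafka import KafkaConsumer  # pip install kafka-python

# Topic name and broker address are illustrative assumptions
consumer = KafkaConsumer(
    "sl1-publisher-data",
    bootstrap_servers=["<BROKER_HOST>:9092"],
    auto_offset_reset="earliest",
)

for message in consumer:
    raw_bundle = message.value  # the binary bundle produced by Publisher
    # Unpack raw_bundle with the sl-schema-registry library here;
    # see the documentation shipped in the library's .zip file.
    print(f"Received {len(raw_bundle)} bytes from partition {message.partition}")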

Prerequisites for Using Publisher

Before you can use Publisher, you must do the following:

  • Deploy SL1 version 10.2.0 and the SL1 Extended Architecture.
  • Enable the Collector Pipeline. For more information, see Enabling the Collector Pipeline.
  • Ensure you have SSH or console access to the Management Node so you can access Docker and Kubernetes.
  • Install sl-schema-registry on the system where you will unpack and consume the messages sent to your Kafka topic by Publisher. For more information, see Installing the sl-schema-registry Library.
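
If the library is distributed as a standard Python package, installation on the consuming system might look like the following sketch (the filename is hypothetical; follow the documentation included in the library's .zip file for the actual procedure):

pip install sl_schema_registry-<version>.tar.gz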

Workflow for Using Publisher

The following steps represent the general workflow for implementing Publisher.

  1. Enable the Collector Pipeline. Collector Pipeline is required for Publisher. For more information, see Enabling the Collector Pipeline.
  2. Define data models. Define data models in YAML files. For more information, see Adding Supported Data Models.
  3. Define subscriptions. Define subscriptions in YAML files. For more information, see Adding a Subscription.
  4. Install the sl-schema-registry library. Download the sl-schema-registry library from the ScienceLogic Support Site and install it on the system where you will consume the messages sent from Publisher. The documentation for the library is contained in the .zip file with the library. For more information, see Installing the sl-schema-registry Library.
  5. Unpack the binary data bundles. Use the sl-schema-registry library to unpack the binary data bundles sent from SL1. For more information, see the sl-schema-registry library documentation, included in the .zip file.

Enabling the Collector Pipeline

Collector Pipeline is a platform feature that allows horizontal scaling (adding more Data Collectors and Agent installations) without data loss or performance loss.

Collector Pipeline also supports Publisher and Anomaly Detection.

Currently, Collector Pipeline supports availability data, network interface data, and data from Performance Dynamic Applications. SL1 will add more data types in future releases.

If you want to use Anomaly Detection, enable Collector Pipeline for data from Performance Dynamic Applications (the da_perf data type).

Collector Pipeline requires the use of port 443 from the Collector to the Streamer service.
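
Before enabling Collector Pipeline, you can verify that the port is reachable. A quick check from the Data Collector, assuming the nc utility is installed and <COMPUTE_NODE_IP> is the address of the Compute Node cluster:

nc -zv <COMPUTE_NODE_IP> 443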

To enable Collector Pipeline for availability data, network interface data, and anomaly detection:

  1. Either go to the console of the Database Server or use SSH to access the Database Server. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
  2. To view information about the command, enter the following at the shell prompt:

/opt/em7/backend/set_cpl.py -help

  3. To enable Collector Pipeline for availability data, network interface data, and anomaly detection, enter the following at the shell prompt:

/opt/em7/backend/set_cpl.py -d availability ENABLE

/opt/em7/backend/set_cpl.py -d interface ENABLE

/opt/em7/backend/set_cpl.py -d da_perf ENABLE


  4. To disable Collector Pipeline for availability data, network interface data, and anomaly detection, enter the following at the shell prompt:

/opt/em7/backend/set_cpl.py -d availability DISABLE

/opt/em7/backend/set_cpl.py -d interface DISABLE

/opt/em7/backend/set_cpl.py -d da_perf DISABLE

Configuring Proxy Support for Collector Pipeline

Collector Pipeline uses two underlying services:

  • Streamer Push is a Docker container that runs on the Data Collector. This service "pushes" collected data to the Compute Node cluster.
  • Streamer is a Docker container that runs on the Compute Node cluster. This service processes incoming data from the Data Collector.

If there is no direct line-of-sight between a Data Collector and the Compute Node cluster, you can configure a proxy that allows Streamer Push and Streamer to communicate.

To enable this proxy configuration, SL1 includes three new endpoints associated with the Web Configuration Tool (sladmin).

You can send requests to these API endpoints from any server that has line-of-sight to the Data Collector for which you are configuring a proxy; the requests themselves are addressed to that Data Collector.

If you want to configure a proxy for all Data Collectors in a Collector Group, you must send these requests to each Data Collector in the group. The Data Collectors in a Collector Group can all use the same proxy, or each can use a different one.

Collector Pipeline requires the use of port 443 from the Collector to the Streamer service.

The following sections describe how to use each API endpoint.

GET /sladmin/v1.0/streamerpush/proxy

Returns the current proxy configuration as a JSON object.

Example Request

curl

Make this request from a server that has line-of-sight to the SL1 Data Collector.


curl --user em7admin:em7admin -k "http://<IP_ADDRESS>:7700/sladmin/v1.0/streamerpush/proxy" -H "accept: application/json"


where:

  • IP_ADDRESS is the IP address of the Data Collector for which you want to create a proxy.

HTTPie

Make this request from a server that has line-of-sight to the SL1 Data Collector.


http --verify=no -a em7admin GET http://<IP_ADDRESS>:7700/sladmin/v1.0/streamerpush/proxy


where:

  • IP_ADDRESS is the IP address of the Data Collector for which you want to create a proxy.

Example Response

{
  "proxy_port": 3128,
  "proxy_url": "http://10.2.16.242",
  "last_updated": "2021-09-09T12:50:28",
  "use_proxy": true,
  "proxy_username": "test_user"
}

POST /sladmin/v1.0/streamerpush/proxy

Allows users to define the proxy parameters.

After you send this POST request, the use_proxy parameter is set to true automatically.

Required Parameters:

  • proxy_url. The URL of your proxy server.
  • proxy_port. The port on your proxy server.

Optional Parameters:

  • proxy_username. If the proxy server requires authentication, enter your username.
  • proxy_password. If the proxy server requires authentication, enter the password.

Example Request

curl

Make this request from a server that has line-of-sight to the SL1 Data Collector.

curl --user em7admin:em7admin -k -X POST -d "proxy_url=http://google.com&proxy_port=3128" "http://<IP_ADDRESS>:7700/sladmin/v1.0/streamerpush/proxy" -H "accept: application/json"

where:

  • proxy_url is http://google.com
  • proxy_port is 3128
  • IP_ADDRESS is the IP address of the Data Collector for which you want to create a proxy.

Example Response

{
  "success": "Configured Streamer Push Proxy: http://10.2.16.242 Port 3128 User test_user"
}
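
If your proxy server requires authentication, you can presumably include the optional parameters in the same form body. An illustrative variant of the request above, with placeholder credentials:

curl --user em7admin:em7admin -k -X POST -d "proxy_url=<PROXY_URL>&proxy_port=3128&proxy_username=<USERNAME>&proxy_password=<PASSWORD>" "http://<IP_ADDRESS>:7700/sladmin/v1.0/streamerpush/proxy" -H "accept: application/json"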

POST /sladmin/v1.0/streamerpush/proxy/toggle

Allows users to toggle proxy on/off without changing the current configuration.

The toggle endpoint accepts values for true (1, True, true) or false (0, False, false).

This endpoint is useful for testing proxy configuration to ensure it is working correctly.

Example Request to Turn Off Proxy

curl

Make this request from a server that has line-of-sight to the SL1 Data Collector.

curl --user em7admin:em7admin -k -X POST -d "use_proxy=false" "http://<IP_ADDRESS>:7700/sladmin/v1.0/streamerpush/proxy/toggle" -H "accept: application/json"


where:

  • IP_ADDRESS is the IP address of the Data Collector for which you want to create a proxy.

Example Response

{
  "success": "Updated Streamer Push to use_proxy: False"
}


Example Request to Turn On Proxy

curl

Make this request from a server that has line-of-sight to the SL1 Data Collector.

curl --user em7admin:em7admin -k -X POST -d "use_proxy=true" "http://<IP_ADDRESS>:7700/sladmin/v1.0/streamerpush/proxy/toggle" -H "accept: application/json"


where:

  • IP_ADDRESS is the IP address of the Data Collector for which you want to create a proxy.

Example Response

{
  "success": "Updated Streamer Push to use_proxy: True"
}