Configuration and Discovery

Use the following menu options to navigate the SL1 user interface:

  • To view a pop-out list of menu options, click the menu icon.
  • To view a page containing all of the menu options, click the Advanced menu icon.

The following sections describe how to configure and discover Kubernetes clusters for monitoring by SL1 using the Kubernetes PowerPack:

Prerequisites for Monitoring Kubernetes Clusters

Before you can monitor Kubernetes clusters using the Kubernetes PowerPack, you must first meet the following requirements: 

Required:

  • Import and install the Kubernetes PowerPack.

  • Create a service account token with the required permissions. See Required Permissions for the Service Account Token for more information.

  • Create a Kubernetes credential in SL1. See Creating a Kubernetes Credential for more information.

Optional:

These requirements must be met if you want to monitor the underlying Linux OS of the nodes using the Linux Base Pack:

  • Import and install the Linux Base Pack PowerPack version 111 or later.

  • Configure SSH credentials (username/password or username/private-key) on the Kubernetes cluster nodes. These credentials must be the same on all nodes.

  • Create an SSH credential in SL1 (username/password or username/private-key). Once the credential is created, copy the Credential ID, which will be used during Guided Discovery. See Creating an SSH Credential (Optional) for more information.

Required Permissions for the Service Account Token

To create a token with the minimum permissions, copy the code below into a file named slmonitor-config.yaml and apply it. The file creates a ClusterRole, a ServiceAccount, a ClusterRoleBinding that binds the two, and a Secret containing an authentication token.

To apply the file using kubectl, run:

kubectl apply -f slmonitor-config.yaml

To retrieve the authentication token, run:

kubectl describe secret slmonitor-readonly-sa
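
The describe output mixes the token with other fields. As a convenience (a sketch, not part of the PowerPack; it assumes the secret was created in the default namespace, as in the config file below), you can print only the token string with jsonpath and base64:

```shell
# Print just the token (run against your cluster):
#   kubectl get secret slmonitor-readonly-sa -n default \
#     -o jsonpath='{.data.token}' | base64 --decode
#
# Secret data is stored base64-encoded, which is why the decode step is
# needed; the decoding itself works like this ('c2FtcGxlLXRva2Vu' is a
# sample value, not a real token):
echo 'c2FtcGxlLXRva2Vu' | base64 --decode   # prints: sample-token
```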

Config File:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: slmonitor-readonly-clusterrole
rules:
- apiGroups:
  - ""
  - apps
  - batch
  - metrics.k8s.io
  - networking.k8s.io
  - autoscaling
  resources:
  - nodes
  - pods
  - componentstatuses
  - namespaces
  - persistentvolumes
  - events
  - replicationcontrollers
  - services
  - deployments
  - statefulsets
  - replicasets
  - daemonsets
  - cronjobs
  - jobs
  - ingresses
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: slmonitor-readonly-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: slmonitor-readonly-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: slmonitor-readonly-clusterrole
subjects:
- kind: ServiceAccount
  name: slmonitor-readonly-sa
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: slmonitor-readonly-sa
  namespace: default
  annotations:
    kubernetes.io/service-account.name: slmonitor-readonly-sa
type: kubernetes.io/service-account-token
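
After applying the file, you can spot-check that the ClusterRole grants only read access by impersonating the service account with kubectl auth can-i. This is an optional verification sketch; run the commented commands against your own cluster:

```shell
# The impersonation user string for a service account has the form
# system:serviceaccount:<namespace>:<name>:
SA="system:serviceaccount:default:slmonitor-readonly-sa"
echo "$SA"
# Against a live cluster, read verbs should answer "yes" and writes "no":
#   kubectl auth can-i list pods --as="$SA"     # yes (list is granted)
#   kubectl auth can-i watch nodes --as="$SA"   # yes (watch is granted)
#   kubectl auth can-i delete pods --as="$SA"   # no (only get/list/watch)
```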

Creating Credentials for Kubernetes Clusters

Creating a Kubernetes Credential

To define a Kubernetes credential:

  1. Go to the Credentials page (Manage > Credentials).

  2. Click on the Create New button and select Create Kubernetes Credential.

  3. Supply values in the following fields:

    • Name. Name of the credential. Can be any combination of alphanumeric characters, up to 64 characters.

    • All Organizations. Toggle on (blue) to align the credential to all organizations, or toggle off (gray) and then select one or more specific organizations from the What organization manages this service? drop-down field to align the credential with those specific organizations.

    • Timeout (ms). Time, in milliseconds, after which SL1 will stop trying to communicate with the device from which you want to retrieve data.

    • Kubernetes IP/Hostname. Enter the Kubernetes API IP, hostname, or endpoint.

    • Port. Enter the Kubernetes API port.

    • Kubernetes Service Account Token. Enter the Kubernetes Service Account Token created previously, which is used to authenticate in the Kubernetes API.

    • Enable Linux Monitoring. (Optional) Toggle on to monitor and collect data about the underlying Linux operating systems of the cluster nodes. If enabled, enter the ID of an SSH credential in the field that appears below.

  4. Click Save & Test.
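
If Save & Test fails, you can confirm the same endpoint, port, and token outside SL1. This is an illustrative sketch; the hostname, port, and token below are placeholders for your own values:

```shell
# Paste the real service account token here (placeholder shown):
TOKEN='PLACEHOLDER_TOKEN'
AUTH_HEADER="Authorization: Bearer $TOKEN"
echo "$AUTH_HEADER"
# Then query the API server you entered in the credential, for example:
#   curl -sk -H "$AUTH_HEADER" https://kubernetes.example.com:6443/api/v1/namespaces
# A JSON namespace listing confirms the IP/hostname, port, and token are valid.
```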

Creating an SSH Credential (Optional)

You only need to create an SSH credential if you plan to enable Linux Monitoring using the Linux Base Pack PowerPack. To define an SSH Credential:

  1. Go to the Credentials page (Manage > Credentials).

  2. Click on the Create New button and select Create SSH/Key Credential.

  3. Supply values in the following fields:

    • Name. Name of the credential. Can be any combination of alphanumeric characters, up to 64 characters.

    • All Organizations. Toggle on (blue) to align the credential to all organizations, or toggle off (gray) and then select one or more specific organizations from the What organization manages this service? drop-down field to align the credential with those specific organizations.

    • Timeout (ms). Time, in milliseconds, after which SL1 will stop trying to communicate with the device from which you want to retrieve data.

    • Hostname/IP. Type %D.

    • Username. Type the SSH account username. This will be used to connect to the nodes.

    • Password. Type the password for the SSH account.

    • Private Key (PEM Format). Type the SSH private key.

The available authentication combinations are Username/Password, Username/Private Key, or Username/Private Key/Password.

  4. Click Save & Test.

After creating the SSH credential, enter its Credential ID in the field that appears when the Enable Linux Monitoring option is toggled on in the Kubernetes credential.

Discovering a Kubernetes Cluster

To create and run a discovery session that will discover a Kubernetes Cluster, perform the following steps:

  1. On the Devices page or the Discovery Sessions page (Devices > Discovery Sessions), click the Add Devices button. The Select page appears.

  2. Click on the Kubernetes button.

  3. Select the Kubernetes credential created previously or click the Create New button to define a new Kubernetes credential.

  4. Click on the Next button.

  5. Complete the required fields:

    • Root Device Name. The name of the virtual device that will be created by this Guided Discovery.

    • Organization. Select the organization where you want to discover the virtual device.

    • Collector Group Name. Select the collector group for the virtual device.

  6. Click on the Next button.

  7. Once the discovery process finishes, click on the Close button. Guided Discovery will create a virtual device and align the "Kubernetes: Cluster Discovery" Dynamic Application with the credential to start monitoring the Kubernetes Cluster.

Customizing Event Policies

The "Kubernetes: Event Configuration" Dynamic Application is a Journal Dynamic Application that collects the events reported by Kubernetes. The "Kubernetes: Normal event" and "Kubernetes: Warning event" are general event policies that are enabled by default.

Users can enable more specific event policies in the PowerPack after disabling the "Kubernetes: Normal event" and "Kubernetes: Warning event" policies. To do so, perform the following steps:

  1. Go to the Event Policy Manager page (Registry > Events > Event Manager).
  2. Search for the "Kubernetes: Normal event" in the Event Policy Name field.
  3. Select the wrench icon () for the event policy to open the Event Policy Editor page.
  4. In the Operational State drop-down, select Disabled and then click the Save button. Repeat these steps for the "Kubernetes: Warning event" event policy.
  5. Once these event policies are disabled, find the event policies you want to use and enable them. You can enable more than one event policy at a time by selecting their checkboxes in the Event Policy Manager page, selecting ENABLE these event policies in the Select Action menu, and then clicking the Go button.

The following event policies are available:

Event Policy | Device | Severity | State
Kubernetes: Normal event | Any | Notice | Disabled
Kubernetes: Warning event | Any | Major | Enabled
Kubernetes: Network Failure | Any | Critical | Enabled
Kubernetes: Error Image Never Pull | Pod | Major | Enabled
Kubernetes: Failed to Create Pod Container | Pod | Major | Enabled
Kubernetes: Failed to Kill Pod | Pod | Major | Enabled
Kubernetes: Failed Start Hook | Pod | Major | Enabled
Kubernetes: Failed Sync | Pod | Major | Enabled
Kubernetes: Failed Validation | Pod | Major | Enabled
Kubernetes: Free Disk Space Failed | Pod | Major | Enabled
Kubernetes: Image Pull Backoff | Pod | Major | Enabled
Kubernetes: Pod Container Created | Pod | Notice | Disabled
Kubernetes: Pod Exceeded Grace Period | Pod | Major | Enabled
Kubernetes: Pod Failed | Pod | Notice | Enabled
Kubernetes: Pod Image Inspect Failed | Pod | Notice | Enabled
Kubernetes: Pod Image Pulled | Pod | Notice | Disabled
Kubernetes: Pod Image Pulling | Pod | Major | Disabled
Kubernetes: Pod Killing | Pod | Major | Enabled
Kubernetes: Pod Network Not Ready | Pod | Major | Enabled
Kubernetes: Pod Preempting | Pod | Major | Enabled
Kubernetes: Pod Started | Pod | Notice | Disabled
Kubernetes: Pod Unhealthy | Pod | Major | Enabled
Kubernetes: Prestop Hook | Pod | Major | Enabled
Kubernetes: Probe Warning | Pod | Major | Enabled
Kubernetes: Already Mounted Volume | Node | Notice | Enabled
Kubernetes: Container GC Failed | Node | Major | Disabled
Kubernetes: Failed Attach Volume | Node | Critical | Enabled
Kubernetes: Failed Create Pod Sandbox | Node | Notice | Enabled
Kubernetes: Failed Map Volume | Node | Major | Enabled
Kubernetes: Failed Mount | Node | Major | Enabled
Kubernetes: Failed Node Allocatable Enforcement | Node | Major | Enabled
Kubernetes: Failed Pod Sandbox Status | Node | Notice | Enabled
Kubernetes: File System Resize Failed | Node | Major | Enabled
Kubernetes: File System Resize Successful | Node | Notice | Enabled
Kubernetes: Image GC Failed | Node | Major | Enabled
Kubernetes: Invalid Disk Capacity | Node | Major | Enabled
Kubernetes: Kubelet Setup Failed | Node | Critical | Enabled
Kubernetes: Node Allocatable Enforced | Node | Notice | Enabled
Kubernetes: Node Not Ready | Node | Major | Enabled
Kubernetes: Node Not Schedulable | Node | Major | Enabled
Kubernetes: Node Ready | Node | Notice | Enabled
Kubernetes: Node Schedulable | Node | Notice | Disabled
Kubernetes: Rebooted | Node | Critical | Enabled
Kubernetes: Sandbox Changed | Node | Notice | Enabled
Kubernetes: Starting | Node | Notice | Enabled
Kubernetes: Successful Attach Volume | Node | Notice | Disabled
Kubernetes: Successful Mount Volume | Node | Notice | Disabled
Kubernetes: Volume Resize Failed | Node | Major | Enabled
Kubernetes: Volume Resize Successful | Node | Notice | Enabled
Kubernetes: Node Condition Healthy | Node | Healthy | Enabled
Kubernetes: Node Condition Unhealthy | Node | Major | Enabled
Kubernetes: Node Condition Unknown | Node | Major | Enabled
Kubernetes: Cluster Creation | Pingable | Notice | Enabled
Kubernetes: Component Healthy State | Cluster | Healthy | Enabled
Kubernetes: Component No Healthy State | Cluster | Major | Enabled
Kubernetes: Node Status Changed | Cluster | Notice | Enabled
Kubernetes: Persistent Volume Status Healthy | Cluster | Healthy | Enabled
Kubernetes: Persistent Volume Status Unhealthy | Cluster | Major | Enabled
Kubernetes: Restart Count Exceeded Threshold | Deployment/Daemon Set | Minor | Enabled
Kubernetes: Restart Count Returned to Normal | Deployment/Daemon Set | Healthy | Enabled

Viewing Component Devices

When SL1 performs collection for the Kubernetes cluster, SL1 will create component devices that represent each device and align other Dynamic Applications to those component devices. Some of the Dynamic Applications aligned to the component devices will also be used to create additional component devices. All component devices appear in the Devices page just like devices discovered using the ScienceLogic discovery process.

In addition to the Devices page, you can view the Kubernetes cluster and all associated component devices in the following places in the user interface:

  • The Device Investigator Map page (click Map in the Device Investigator page) displays a map of a particular device and all of the devices with which it has parent-child relationships. Double-clicking any of the listed devices reloads the page to make the selected device the primary device.

  • The Device Components page (Devices > Device Components) displays a list of all root devices and component devices discovered by SL1 in an indented view, so you can easily view the hierarchy and relationships between child devices, parent devices, and root devices. To view the component devices associated with a Kubernetes cluster, find the cluster device and click its plus icon (+).

  • The Component Map page (Classic Maps > Device Maps > Components) allows you to view devices by root node and view the relationships between root nodes, parent components, and child components in a map. This makes it easy to visualize and manage root nodes and their components. SL1 automatically updates the Component Map as new component devices are discovered. SL1 also updates each map with the latest status and event information. To view the map for a Kubernetes cluster, go to Classic Maps > Device Maps > Components, and select the map from the list in the left NavBar. To learn more about the Component Map page, see the section on Maps.