Viewing Logs in SL1 PowerFlow


This section describes the different types of logging available in PowerFlow.

Logging Data in PowerFlow

PowerFlow allows you to view log data locally, remotely, or through Docker.

Local Logging

PowerFlow writes logs to files in a directory on the host system. Each of the main components, such as the process manager or Celery workers, and each application that runs on the platform, generates a log file. Application log files are named after their application so they are easy to locate and read.

In a clustered environment, the logs must be written to the same volume or disk being used for persistent storage. This ensures that all logs are gathered from all hosts in the cluster onto a single disk, and that each application log can include information from workers running on different hosts.

You can also control log behavior, such as log rolling, writing to standard output, log level, and log location, by setting the corresponding environment variable or configuration-file setting for each feature.

Although it is possible to write logs to a file on the host for persistent debug logging, as a best practice ScienceLogic recommends that you use a logging driver and route container logs to an external location.
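For example, a minimal sketch of an /etc/docker/daemon.json that sends container logs to an external syslog endpoint is shown below; the address is a placeholder, and the change applies only to containers created after the Docker daemon is restarted:

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.0.2.10:514"
  }
}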

You can use the "Timed Removal" application in the PowerFlow user interface to remove logs from Couchbase on a regular schedule. On the Configuration pane for that application, specify the number of days before the next log removal. For more information, see Removing Logs on a Regular Schedule.

Remote Logging

If you use your own existing logging server, such as Syslog, Splunk, or Logstash, PowerFlow can route its logs to a customer-specified location. To do so, attach your service, such as logspout, to the microservice stack and configure your service to route all logs to the server of your choice.
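For example, a sketch of attaching logspout to every node in a Docker Swarm cluster and routing logs to a syslog server might look like the following; the image, service name, and endpoint are placeholders rather than a supported ScienceLogic configuration:

docker service create --name logspout --mode global \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  gliderlabs/logspout syslog+tcp://logs.example.com:514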

Although PowerFlow supports logging to these remote systems, ScienceLogic does not officially own or support the configuration of the remote logging locations. Use remote logging at your own discretion.

Viewing Logs in Docker

You can use the Docker command line to view the logs of any currently running service in the PowerFlow cluster. To view the logs of a service, run the following command:

docker service logs -f iservices_<service_name>

Some common examples include the following:

docker service logs -f iservices_couchbase

docker service logs -f iservices_steprunner

docker service logs -f iservices_contentapi

Application logs are stored in the central database as well as on all of the Docker hosts in a clustered environment. These logs are stored at /var/log/iservices in both single-node and clustered environments. However, the logs on each Docker host relate only to the services running on that host. For this reason, using the docker service logs command is the best way to get logs from all hosts at once.

The application logs stored locally on individual node file systems can be collected using the collect_pf_logs action of the powerflowcontrol (pfctl) command-line utility. For more information, see collect_pf_logs.

Logging Configuration

The following table describes the variables and configuration settings related to logging in PowerFlow:

Environment Variable/Config Setting | Description | Default Setting
logdir | The directory to which logs will be written. | /var/log/iservices
stdoutlog | Whether logs should be written to standard output (stdout). | True
loglevel | The log level for PowerFlow application modules. You can use info (20) or debug (10) to see more details about the operations executed by the API and steprunner services; enable these levels only while troubleshooting issues. | warning (30)
celery_log_level | The log level for Celery-related components and modules. As with loglevel, use info (20) or debug (10) only while troubleshooting issues. | warning (30)
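For example, a sketch of temporarily raising verbosity while troubleshooting might add the following to a service definition in the docker-compose file; the steprunner service name is an example, and you should revert to warning (30) when you are done:

services:
  steprunner:
    environment:
      loglevel: 10         # debug-level PowerFlow application logging (troubleshooting only)
      celery_log_level: 10 # debug-level Celery logging (troubleshooting only)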

PowerFlow Log Files

The logs from the gui, api, and rabbitmq services are available in the /var/log/iservices directory of the node on which that particular service is running.

To aggregate logs for the entire cluster, ScienceLogic recommends that you use a tool like Docker Syslog: https://docs.docker.com/config/containers/logging/syslog/.

Logs for the gui Service

By default, all nginx logs are written to stdout. You can also choose to write access and error logs to a file in addition to stdout, although this is not enabled by default.

To log to a persistent file, mount a host directory to /var/log/nginx (for example, /host/directory:/var/log/nginx), which by default captures both access and error logs.

pypiserver logs are not persisted to disk by default. To persist them, set the log_to_file environment variable to true.

If you choose to persist logs, you should mount a host volume to /var/log/devpi to access the logs from the host. For example: /var/log/iservices/devpi:/var/log/devpi.
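A combined sketch of these gui service options in a docker-compose override might look like the following; the host paths and the placement of log_to_file are examples, not required values:

  gui:
    environment:
      log_to_file: "true"
    volumes:
      - /var/log/iservices/nginx:/var/log/nginx
      - /var/log/iservices/devpi:/var/log/devpi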

Logs for the api Service

By default, all log messages related to PowerFlow are written to /var/log/iservices/contentapi.

PowerFlow-specific logging is controlled by existing settings listed in Logging Configuration. Although not enabled by default, you can also choose to write nginx access and error logs to file in addition to stdout.

To log to a persistent file, mount a host file to /var/log/nginx/nginx_error.log (for example, /file/location/host:/var/log/nginx/nginx_error.log) or a host directory to /var/log/nginx (for example, /host/directory:/var/log/nginx), depending on whether you want error logs only or both access and error logs.

Logs for the rabbitmq Service

All rabbitmq service logs are written to stdout by default. You can choose to write to a log file instead (stdout or a log file, not both).

To write to the logfile:

  1. Add the following environment variable for the rabbitmq service:

    RABBITMQ_LOGS: "/var/log/rabbitmq/rabbit.log"

  2. Mount a volume: /var/log/iservices/rabbit:/var/log/rabbitmq
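Taken together, these two changes might look like the following sketch in the rabbitmq service definition; the host path follows the example in step 2, and the service name may differ in your compose file:

  rabbitmq:
    environment:
      RABBITMQ_LOGS: "/var/log/rabbitmq/rabbit.log"
    volumes:
      - /var/log/iservices/rabbit:/var/log/rabbitmq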

The retention policy for these logs is 10 MB, with a maximum of five log files written to disk.

Working with Log Files

Use the following procedures to help you locate and understand the contents of the various log files related to PowerFlow.

Accessing Docker Log Files

The Docker log files contain information logged by all containers participating in PowerFlow.

To access Docker log files:

  1. Use SSH to connect to the PowerFlow instance.

  2. Run the following Docker command:

    docker service ls

  3. Note the Docker service name, which you will use for the <service_name> value in the next step.

  4. Run the following Docker command:

    docker service logs -f <service_name>

Accessing Local File System Logs

The local file system logs display the same information as the Docker log files. These log files include debug information for all of the PowerFlow applications and all of the Celery worker nodes.

To access local file system logs:

  1. Use SSH to connect to the PowerFlow instance.
  2. Navigate to the /var/log/iservices directory to view the log files.
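For example, you can list the available log files and then follow one of them, such as the contentapi log mentioned earlier in this section:

ls /var/log/iservices
tail -f /var/log/iservices/contentapi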

Understanding the Contents of Log Files

The pattern of deciphering log messages applies to both Docker logs and local log files, as these logs display the same information.

The following is an example of a message in a Docker log or a local file system log:

"2018-11-05 19:02:28,312","FLOW","12","device_sync_sciencelogic_to_servicenow","ipaas_logger","142","stop Query and Cache ServiceNow CIs|41.4114570618"

You can parse this data in the following manner:

'YYYY-MM-DD' 'HH-MM-SS,ss' 'log-level' 'process_id' 'is_app_name' 'file' 'lineOfCode' 'message'

To extract the runtime for each individual task, use regex to match on a log line. For instance, in the above example, there is the following sub-string:

"stop Query and Cache ServiceNow CIs|41.4114570618"

Use regex to parse the string above:

"stop …… | …"

where:

  • Everything after the | is the time taken for execution.
  • The string between "stop" and | represents the step that was executed.

In the example message, the "Query and Cache ServiceNow CIs" step took around 41 seconds to run.
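As a rough sketch, the following shell pipeline applies that pattern to a local application log and prints each step name with its runtime; the contentapi log path is just the example used earlier, so point it at whichever log you are inspecting:

grep -o 'stop [^|]*|[0-9.]*' /var/log/iservices/contentapi | awk -F'|' '{print $1, $2, "seconds"}'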

Managing journald Settings

PowerFlow uses journald volatile storage, which means that all logs are kept only in memory. The amount of memory journald can consume scales with the total memory of the environment, which might cause undesired behavior in environments where memory is heavily used by PowerFlow services.

Total Memory | Maximum memory used by journald
16 GB | About 800 MB
24 GB | About 1.2 GB
32 GB | About 1.6 GB
64 GB | About 3.2 GB


To check the size of the journal logs on any single node running PowerFlow version 2.2.x or later, run the following command:

du -sh /run/log/journal

For PowerFlow version 2.2.x, you can control those settings by updating the /etc/docker/daemon.json file and setting the max-size option under log-opts for the json-file logging driver. For more information, see https://docs.docker.com/config/containers/logging/json-file/.

For PowerFlow version 2.3 or later nodes, you can clear logs with the following command (this is automatically done when you run the healthcheck action):

journalctl --vacuum-time=7d

You can also configure journald logs settings by using the following command to enforce small size and time limits:

sudo sed -i -e '/RuntimeMaxUse=/s/.*/RuntimeMaxUse=800M/' -e '/MaxRetentionSec/s/.*/MaxRetentionSec=2week/' /etc/systemd/journald.conf && sudo systemctl restart systemd-journald

PowerFlow updates journald volatile limits to the following values, which you can change if you want to retain fewer or more logs:

RuntimeMaxUse=800M

MaxRetentionSec=2week
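To confirm the limits currently configured on a node, you can check the journald configuration file directly, for example:

grep -E 'RuntimeMaxUse|MaxRetentionSec' /etc/systemd/journald.conf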

Viewing the Step Logs and Step Data for a PowerFlow Application

The Step Log tab on an Application page displays the time, the type of log, and the log messages from a step that you selected in the main pane. All times that are displayed in the Message column of this pane are in seconds, such as "stop Query and Cache ServiceNow CI List|5.644190788269043".

You can view log data for a step on the Step Log tab while the Configuration pane for that step is open.

To view logs for the steps of an application:

  1. From the Applications tab, select an application. The Application page appears.

  2. Select a step in the application.

  3. Click the Step Log tab in the bottom left-hand corner of the screen. The Step Log tab appears at the bottom of the page, and it displays log messages for the selected step:

  • For longer log messages, click the down arrow icon to open the message.
  • To copy a single log message, click the Copy Message icon next to that message.
  • To copy all messages in the log, click the Copy button on the gray bar.
  • To download all messages in the log, click the Download button on the gray bar. The file is saved in JSON format.
  • To view the logs in full-screen mode, click the Full screen button on the gray bar. Click the Full screen button again to turn off full-screen mode.
  • Click the Step Data tab to display the JSON data (where relevant) that was generated by the selected step.
  • Click the Step Log tab to close the tab.
  • To generate more detailed logs when you run this application, hover over the Run button and select Debug Run.

Log information for a step is saved for the duration of the result_expires setting in the PowerFlow system. The result_expires setting is defined in the /opt/iservices/scripts/docker-compose.yml file. This environment variable is set in seconds, and the default value for log expiration is 7 days (604,800 seconds).
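For example, a minimal sketch of lowering the retention to 3 days (259,200 seconds) might look like the following in that docker-compose file; the steprunner service name is an assumption here, so confirm where result_expires is defined in your own file:

  steprunner:
    environment:
      result_expires: 259200  # 3 days, in seconds (assumed placement; verify against your compose file)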

Removing Logs on a Regular Schedule

The "Timed Removal" application in the PowerFlow user interface lets you remove logs from Couchbase on a regular schedule.

To schedule the removal of logs:

  1. In the PowerFlow user interface, go to the Applications tab and select the "Timed Removal" application.
  2. Click the Configure button. The Configuration pane appears.
  3. Complete the following fields:
  • Configuration. Select the relevant configuration object to align with this application. Required.
  • time_in_days. Specify how often you want to remove the logs from Couchbase. The default is 7 days. Required.
  • remove_dex_sessions. Select this option if you want to remove Dex Server sessions from the logs.
  4. Click the Save button and close the Configuration pane.
  5. Click the Run button to run the "Timed Removal" application.