Troubleshooting the Service Graph ConnectorSyncPack


This section includes troubleshooting resources and procedures to use with the ServiceNow Service Graph Connector SyncPack.

Initial Troubleshooting Steps

PowerFlow acts as an intermediary between data platforms. For this reason, the first troubleshooting steps should always be to ensure that there are no issues with the data platforms with which PowerFlow is communicating. There might be additional configurations or actions enabled on ServiceNow or SL1 that result in unexpected behavior.

For detailed information about how to perform the steps below, see Resources for Troubleshooting.

SL1 PowerFlow

  1. Run docker service ls on the PowerFlow server.
  2. Note the Docker container version, and verify that the Docker services are running.
  3. If a certain service is failing, make a note of the service name and version.
  4. If a certain service is failing, run docker service ps <service_name> to see the historical state of the service and make a note of this information. For example: docker service ps iservices_contentapi.
  5. Make a note of any logs impacting the service by running docker service logs <service_name>. For example: docker service logs iservices_couchbase.
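The checks above can be scripted. The following is a minimal sketch, assuming the `docker` CLI is available on the PowerFlow server; the function name and column handling are illustrative, not part of PowerFlow:

```python
import subprocess

def failing_services(service_ls_output: str) -> list[str]:
    """Parse `docker service ls` output and return the names of services
    whose running replica count is below the target (for example, 0/1)."""
    failing = []
    for line in service_ls_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        name, replicas = fields[1], fields[3]  # NAME and REPLICAS columns
        running, target = replicas.split("/")
        if int(running) < int(target):
            failing.append(name)
    return failing

if __name__ == "__main__":
    # Run this on the PowerFlow server itself:
    try:
        out = subprocess.run(["docker", "service", "ls"],
                             capture_output=True, text=True, check=True).stdout
        for name in failing_services(out):
            print(f"{name} is below its target replica count; inspect it with "
                  f"`docker service ps {name}` and `docker service logs {name}`")
    except (OSError, subprocess.CalledProcessError):
        print("docker CLI not available; run this on the PowerFlow server")
```

Services flagged by this sketch are the ones worth investigating with `docker service ps` and `docker service logs` as described in steps 4 and 5.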

ServiceNow

  1. Make a note of the ServiceNow version and SyncPack version, if applicable.
  2. Make a note of whether the user is running a version of the Certified/Scoped application for a SyncPack, if relevant.
  3. Make a note of the SyncPack application that is failing on PowerFlow.
  4. Make a note of what step is failing in the application, try running the application in debug mode, and capture any traceback or error messages that occur in the step log.

For more information about specific issues with relationships and mappings, see Known Issues.

Troubleshooting Specific to this SyncPack

This section contains troubleshooting information specific to the ServiceNow Service Graph Connector SyncPack.

Enabling Verbose Logging

To enable verbose logging in ServiceNow:

  1. Navigate to System Import Sets > Administration > Robust Import Set Transformers (sys_robust_import_set_transformer).

  2. Note that the endpoints used by ScienceLogic for Service Graph are all prefixed with SG-ScienceLogic:

    • SG-ScienceLogic Physical Device Source

    • SG-ScienceLogic VMware Device Source

  3. Select each of the endpoints and check the Verbose box on the Robust Import Set Transformer page to turn on logging within ServiceNow.

  4. When you are done troubleshooting, de-select the Verbose box for each endpoint.
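If you prefer to check the Verbose flags from a script, the sketch below builds a query against the standard ServiceNow Table API. The instance name and credentials are placeholders, and the `verbose` field name is an assumption based on the checkbox label on the Robust Import Set Transformer form:

```python
import base64
import urllib.request

def build_transformer_query(instance: str, user: str,
                            password: str) -> urllib.request.Request:
    """Build a ServiceNow Table API request that lists the SG-ScienceLogic
    Robust Import Set Transformers and their Verbose flags."""
    # "verbose" as a field name is an assumption based on the checkbox label.
    url = (f"https://{instance}/api/now/table/sys_robust_import_set_transformer"
           "?sysparm_query=nameSTARTSWITHSG-ScienceLogic"
           "&sysparm_fields=name,verbose")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Accept": "application/json",
        "Authorization": f"Basic {token}",
    })

# Sending the request with urllib.request.urlopen(req) returns JSON listing
# each SG-ScienceLogic transformer and whether verbose logging is enabled.
```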

Using the Results of Verbose Logging

The import table contains a Reference link to the import set associated with the record being imported:

Robust Import Set Transformer            Import Table
SG-ScienceLogic Physical Device Source   x_sclo_devicesync_device_import
SG-ScienceLogic VMware Device Source     x_sclo_devicesync_device_import_vmware

When you select the import set associated with the record in question, you can view a related list of Import Set Runs. Each run of a specific import set will have its own set of Import Set Row Errors and Import Logs that you can review.

Narrowing the Import Set

When you run the "Sync Devices from SL1 to ServiceNow via SGC" PowerFlow application, the default payload size sent to ServiceNow is 5000 objects.

If you are having issues, ScienceLogic recommends that you run this application in Debug Mode and copy the JSON of what is being sent to ServiceNow.

Also, you can run the following query to find the device object and any associated virtual type “dcmr” objects, which are parent relationships that should be listed directly after the device in question:

{
  "devices": [
    {
      "virtual_type": "PowerFlow"
    },
    {
      "virtual_type": "dcmr"
    }
  ]
}

Sending this limited version of the payload makes reviewing the verbose logging much easier. The endpoints are listed below.
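If the copied JSON contains many devices, a small helper can reduce it to a single device plus the "dcmr" relationship objects that follow it. This is an illustrative sketch; the "id" key used to identify a device is an assumption, so match it to whatever identifier appears in your Debug Mode output:

```python
def narrow_payload(devices: list[dict], device_id: str) -> dict:
    """Return a reduced payload containing one device and the virtual
    "dcmr" relationship objects listed directly after it."""
    narrowed = []
    for i, obj in enumerate(devices):
        if obj.get("id") == device_id:  # "id" key is an assumption
            narrowed.append(obj)
            # dcmr parent relationships follow the device they belong to
            for nxt in devices[i + 1:]:
                if nxt.get("virtual_type") == "dcmr":
                    narrowed.append(nxt)
                else:
                    break
            break
    return {"devices": narrowed}
```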

Import Table Endpoints

/api/now/import/x_sclo_devicesync_device_import/insertMultiple

/api/now/import/x_sclo_devicesync_device_import_vmware/insertMultiple
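As a sketch of how a reduced payload could be posted to one of these endpoints from Python: the instance name, credentials, and payload shape are placeholders, while the endpoint paths are the ones listed above.

```python
import base64
import json
import urllib.request

def build_insert_multiple(instance: str, table: str, payload: dict,
                          user: str, password: str) -> urllib.request.Request:
    """Build a POST request to a ServiceNow import-table insertMultiple
    endpoint. The payload is the JSON captured from a Debug Mode run."""
    url = f"https://{instance}/api/now/import/{table}/insertMultiple"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

# req = build_insert_multiple("dev12345.service-now.com",
#                             "x_sclo_devicesync_device_import",
#                             {"devices": []},  # reduced Debug Mode JSON
#                             "admin", "<password>")
# urllib.request.urlopen(req) sends it; then review the verbose logs.
```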

Resources for Troubleshooting

This section contains port information for PowerFlow and troubleshooting commands for Docker, Couchbase, and the PowerFlow API.

Useful PowerFlow Ports

  • https://<IP of PowerFlow>:8091. Provides access to Couchbase, a NoSQL database for storage and data retrieval.
  • https://<IP of PowerFlow>:15672. Provides access to the RabbitMQ Dashboard, which you can use to monitor the service that distributes tasks to be executed by PowerFlow workers.
  • https://<IP of PowerFlow>/flower/dashboard. Provides access to Flower, a tool for monitoring and administering Celery clusters.
  • https://<IP of PowerFlow>:3141. Provides access to the pypiserver service, which you can use to verify that SyncPacks have been correctly uploaded to the Devpi container.

Port 5556 must be open for both PowerFlow and the client.
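A quick way to confirm these ports are reachable from a client machine is a simple TCP connect test. This sketch uses only the Python standard library; the host address is a placeholder:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

# pf_host = "10.0.0.10"  # placeholder PowerFlow IP
# for port, service in [(8091, "Couchbase"), (15672, "RabbitMQ"),
#                       (3141, "pypiserver"), (5556, "PowerFlow/client port")]:
#     state = "open" if port_open(pf_host, port) else "closed/filtered"
#     print(f"{port} ({service}): {state}")
```

A "closed/filtered" result points at a firewall rule or a stopped service rather than a SyncPack problem.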

Helpful Docker Commands

PowerFlow is a set of services that are containerized using Docker. For more information about Docker, see the Docker tutorial.

Use the following Docker commands for troubleshooting and diagnosing issues with PowerFlow:

Viewing Container Versions and Status

To view the PowerFlow version, SSH to your instance and run the following command:

rpm -qa | grep powerflow

To view the individual services with their respective image versions, SSH to your PowerFlow instance and run the following command:

docker service ls

In the results, you can see the container ID, name, mode, status (see the replicas column), and version (see the image column) for all the services that make up PowerFlow.

Restarting a Service

Run the following command to restart a single service:

docker service update --force <service_name>

Stopping all PowerFlow Services

Run the following command to stop all PowerFlow services:

docker stack rm iservices

Restarting Docker

Run the following command to restart Docker:

systemctl restart docker

Restarting Docker does not clear the queue.

Diagnosis Tools

Multiple diagnosis tools exist to assist in troubleshooting issues with the PowerFlow platform:

  • Docker PowerPack. This PowerPack monitors your Linux-based PowerFlow server using SSH (the PowerFlow ISO is built on top of an Oracle Linux operating system). This PowerPack provides key performance indicators about how your PowerFlow server is performing. For more information on the Docker PowerPack and other PowerPacks that you can use to monitor PowerFlow, see Using SL1 to Monitor SL1 PowerFlow.
  • Flower. This web interface tool can be found at the /flower endpoint. It provides a dashboard displaying the number of tasks in various states as well as an overview of the state of each worker. This tool shows the current number of active, processed, failed, succeeded, and retried tasks on the PowerFlow platform. This tool also shows detailed information about each of the tasks that have been executed on the platform. This data includes the UUID, the state, the arguments that were passed to it, as well as the worker and the time of execution. Flower also provides a performance chart that shows the number of tasks running on each individual worker.

  • Debug Mode. All applications can be run in "debug" mode via the PowerFlow API. Running applications in Debug Mode may slow down the platform, but doing so produces much more detailed logging information that is helpful for troubleshooting issues. For more information on running applications in Debug Mode, see Retrieving Additional Debug Information.

  • Application Logs. All applications generate a log file specific to that application. These log files can be found at /var/log/iservices, and each log file matches the ID of the application. A log file combines the log messages from all previous runs of an application; the files roll over and are automatically cleared after a certain point.

  • Step Logs. Step logs display the log output for a specific step in the application. These step logs can be accessed via the PowerFlow user interface by clicking on a step in an application and bringing up the Step Log tab. These step logs display just the log output for the latest run of that step.

  • Service Logs. Each Docker service has its own log. These can be accessed via SSH by running the following command:

    docker service logs -f <service_name>

Retrieving Additional Debug Information (Debug Mode)

The logs in PowerFlow use the following loglevel settings, from most verbose to least verbose:

  • 10. Debug Mode.
  • 20. Informational.
  • 30. Warning. This is the default setting if you do not specify a loglevel.
  • 40. Error.

If you run applications in Debug Mode ("loglevel": 10), those applications will take longer to run because of increased I/O requirements. Enabling debug logging through the following process is the only recommended method; ScienceLogic does not recommend setting "loglevel": 10 for the whole stack in the docker-compose file.
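These numeric values are the standard Python logging levels, which you can confirm with the `logging` module (the mapping shown below is standard Python behavior; treating PowerFlow's loglevels as identical to it is an inference from the values listed above):

```python
import logging

# The loglevel numbers map onto Python's standard logging constants:
assert logging.DEBUG == 10    # Debug Mode: most verbose
assert logging.INFO == 20     # Informational
assert logging.WARNING == 30  # Default when no loglevel is specified
assert logging.ERROR == 40    # Least verbose of the four

# A level includes everything at or above it, so "loglevel": 10
# captures debug, info, warning, and error messages.
print(logging.getLevelName(10))  # prints "DEBUG"
```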

To run an application in Debug Mode using the PowerFlow user interface:

  1. Select the PowerFlow application from the Applications page.
  2. Hover over the Run button and select Custom Run from the pop-up menu. The Custom Run window appears.
  3. Select the Logging Level. Debug is the most verbose and will take longer to run.
  4. Specify the configuration object for the custom run in the Configuration field, and add any JSON parameters in the Custom Parameters field, if needed.
  5. Click Run.

To run an application in Debug Mode using the API:

  1. POST the following to the API endpoint:

    https://<PowerFlow_IP>/api/v1/applications/run

  2. Include the following in the request body:

    {
      "name": "<application_name>",
      "params": {
        "loglevel": 10
      }
    }

After running the application in Debug Mode, review the step logs in the PowerFlow user interface to see detailed debug output for each step in the application. This information is especially helpful when trying to understand why an application or step failed.

You can also run an application in Debug Mode using curl via SSH:

  1. SSH to the PowerFlow instance.

  2. Run the following command:

    curl -v -k -u isadmin:<password> -X POST "https://<your_hostname>/api/v1/applications/run" \
      -H 'Content-Type: application/json' -H 'cache-control: no-cache' \
      -d '{"name": "interface_sync_sciencelogic_to_servicenow", "params": {"loglevel": 10}}'