Troubleshooting the CMDB SyncPack

This section includes troubleshooting resources and procedures to use with the "ServiceNow CMDB" SyncPack.

Initial Troubleshooting Steps

PowerFlow acts as an intermediary between data platforms. For this reason, your first troubleshooting steps should always be to confirm that there are no issues with the data platforms with which PowerFlow is communicating. There might be additional configurations or actions enabled on ServiceNow or SL1 that result in unexpected behavior. For detailed information about how to perform the steps below, see Resources for Troubleshooting.

SL1 PowerFlow

  1. Run docker service ls on the PowerFlow server:
  • Note the Docker container version.
  • Verify that the Docker services are running.
  2. If a certain service is failing, make a note of the service name and version.
  3. If a certain service is failing, run docker service ps <service_name> to see the historical state of the service and make a note of this information. For example: docker service ps iservices_contentapi.
  4. Make a note of any logs impacting the service by running docker service logs <service_name>. For example: docker service logs iservices_couchbase. (A consolidated sketch of these checks appears after this list.)
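
The following is a consolidated sketch of these checks. The service name iservices_contentapi is only an example; substitute the name of the service that is failing on your system:

    # List all PowerFlow services with their image versions and replica counts
    docker service ls

    # Review the historical state of the failing service (example name shown)
    docker service ps iservices_contentapi

    # Capture recent logs for the failing service
    docker service logs iservices_contentapi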

ServiceNow

  1. Make a note of the ServiceNow version and SyncPack version, if applicable.
  2. Make a note if you are running a ServiceNow certified application or a Service Graph SyncPack.
  3. Make a note of the SyncPack application that is failing in PowerFlow.
  4. Make a note of what step is failing in the application, try running the application in debug mode, and capture any traceback or error messages that occur in the step log.

Resources for Troubleshooting

This section contains port information for PowerFlow and troubleshooting commands for Docker, Couchbase, and the PowerFlow API.

Useful PowerFlow Ports

  • https://<IP of PowerFlow>:8091. Provides access to Couchbase, a NoSQL database for storage and data retrieval.
  • https://<IP of PowerFlow>:15672. Provides access to the RabbitMQ Dashboard, which you can use to monitor the service that distributes tasks to be executed by PowerFlow workers.
  • https://<IP of PowerFlow>/flower/workers. Provides access to Flower, a tool for monitoring and administering Celery clusters.
  • https://<IP of PowerFlow>:3141. Provides access to the pypiserver service, which you can use to verify that SyncPacks have been correctly uploaded to the Devpi container.

Port 5556 must be open for both PowerFlow and the client.
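
To confirm that these ports are reachable from your workstation or another server, you can use curl. This is a minimal sketch; the IP address is a placeholder for your PowerFlow instance, and each command should print an HTTP status code rather than a connection error:

    curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP of PowerFlow>:8091
    curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP of PowerFlow>:15672
    curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP of PowerFlow>/flower/workers
    curl -k -s -o /dev/null -w "%{http_code}\n" https://<IP of PowerFlow>:3141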

Helpful Docker Commands

PowerFlow is a set of services that are containerized using Docker. For more information about Docker, see the Docker tutorial.

Use the following Docker commands for troubleshooting and diagnosing issues with PowerFlow:

Viewing Container Versions and Status

To view the PowerFlow version, SSH to your instance and run the following command:

rpm -qa | grep powerflow

To view the individual services with their respective image versions, SSH to your PowerFlow instance and run the following command:

docker service ls

In the results, you can see the container ID, name, mode, status (see the replicas column), and version (see the image column) for all the services that make up PowerFlow.

Restarting a Service

Run the following command to restart a single service:

docker service update --force <service_name>
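
For example, to force a restart of the contentapi service referenced earlier in this section (the service name is only an example; substitute your own):

    docker service update --force iservices_contentapi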

Stopping all PowerFlow Services

Run the following command to stop all PowerFlow services:

docker stack rm iservices
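
To bring the services back up afterward, you can redeploy the stack with docker stack deploy. This is a sketch only; the path to the docker-compose file depends on your installation:

    docker stack deploy -c <path_to_docker-compose.yml> iservices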

Restarting Docker

Run the following command to restart Docker:

systemctl restart docker

Restarting Docker does not clear the queue.

Diagnosis Tools

Multiple diagnosis tools exist to assist in troubleshooting issues with the PowerFlow platform:

  • Docker PowerPack. This PowerPack monitors your Linux-based PowerFlow server with SSH (the PowerFlow ISO is built on top of an Oracle Linux Operating System). This PowerPack provides key performance indicators about how your PowerFlow server is performing. For more information on the Docker PowerPack and other PowerPacks that you can use to monitor PowerFlow, see Using SL1 to Monitor SL1 PowerFlow.
  • Flower. This web interface tool can be found at the /flower endpoint. It provides a dashboard displaying the number of tasks in various states as well as an overview of the state of each worker. This tool shows the current number of active, processed, failed, succeeded, and retried tasks on the PowerFlow platform. This tool also shows detailed information about each of the tasks that have been executed on the platform. This data includes the UUID, the state, the arguments that were passed to it, as well as the worker and the time of execution. Flower also provides a performance chart that shows the number of tasks running on each individual worker.

  • Debug Mode. All applications can be run in "debug" mode via the PowerFlow API. Running applications in Debug Mode may slow down the platform, but doing so produces much more detailed logging information that is helpful for troubleshooting issues. For more information on running applications in Debug Mode, see Retrieving Additional Debug Information.

  • Application Logs. All applications generate a log file specific to that application. These log files are located in /var/log/iservices, and each log file name matches the ID of its application. A log file accumulates the log messages from all previous runs of the application; the files roll over and are cleared automatically after a certain point. (See the sketch after this list.)

  • Step Logs. Step logs display the log output for a specific step in the application. These step logs can be accessed via the PowerFlow user interface by clicking on a step in an application and bringing up the Step Log tab. These step logs display just the log output for the latest run of that step.

  • Service Logs. Each Docker service has its own log. These can be accessed via SSH by running the following command:

    docker service logs -f <service_name>
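
The following is a minimal sketch of reviewing the Application Logs and Service Logs described in this list. The application ID is a placeholder (the exact log file name may include an extension), and the Couchbase service is only an example:

    # Follow the log file for a specific application (file names match application IDs)
    tail -f /var/log/iservices/<application_id>

    # Follow the logs for a specific Docker service, for example the Couchbase service
    docker service logs -f iservices_couchbase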

Retrieving Additional Debug Information (Debug Mode)

The logs in PowerFlow use the following loglevel settings, from most verbose to least verbose:

  • 10. Debug Mode.
  • 20. Informational.
  • 30. Warning. This is the default setting if you do not specify a loglevel.
  • 40. Error.

If you run applications in Debug Mode ("loglevel": 10), those applications will take longer to run because of increased I/O requirements. The process described below is the only recommended method for enabling debug logging; ScienceLogic does not recommend setting "loglevel": 10 for the whole stack in the docker-compose file.

To run an application in Debug Mode using the PowerFlow user interface:

  1. Select the PowerFlow application from the Applications page.
  2. Hover over the Run button and select Custom Run from the pop-up menu. The Custom Run window appears.
  3. Select the Logging Level. Debug is the most verbose and will take longer to run.
  4. Specify the configuration object for the custom run in the Configuration field, and add any JSON parameters in the Custom Parameters field, if needed.
  5. Click Run.

To run an application in Debug Mode using the API:

  1. POST the following to the API endpoint:

    https://<PowerFlow_IP>/api/v1/applications/run

  2. Include the following in the request body:

    {
      "name": "<application_name>",
      "params": {
        "loglevel": 10
      }
    }

After running the application in Debug Mode, review the step logs in the PowerFlow user interface to see detailed debug output for each step in the application. This information is especially helpful when trying to understand why an application or step failed.

You can also run an application in Debug Mode using curl via SSH:

  1. SSH to the PowerFlow instance.

  2. Run the following command:

    curl -v -k -u isadmin:<password> -X POST "https://<your_hostname>/api/v1/applications/run" \
      -H 'Content-Type: application/json' -H 'cache-control: no-cache' \
      -d '{"name": "interface_sync_sciencelogic_to_servicenow", "params": {"loglevel": 10}}'

Troubleshooting CMDB Sync

This section contains specific troubleshooting steps for the CMDB SyncPack.

Issues Creating CIs in ServiceNow

If you can successfully send data to your ServiceNow system, but you encounter issues with creating CIs in the ServiceNow CMDB, this section provides troubleshooting steps to help you test the payload and identify possible issues. These steps might be helpful if you have set up datasource precedence rules.

  1. In ServiceNow, search for "import" in the filter navigator.
  2. Select ScienceLogic > Device > Imports. The Device Import window appears.
  3. From the list, select the Device Import log entry you want to view.
  4. Copy the data from the Payload field in the log entry and decode it from its Base64 encoding (see the decoding sketch after this list).
  5. In the decoded string of data, remove the square brackets ("[" and "]") from the first and last lines.
  6. Copy this modified JSON payload, and then use the filter navigator to search for "Identification Simulation" or select Configuration > Identification Simulation.
  7. On the Identification Simulation page, click the Start button in the Start with Existing Payload section. The Insert JSON Payload page appears.
  8. In the Source field, select ScienceLogic as the data source.
  9. In the Please insert payload below field, paste the JSON payload you edited in step 5.
  10. Click the Execute button and review the payload to identify any potential issues.
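
The following is a minimal shell sketch of the decoding in steps 4 and 5, assuming you have saved the Base64 data to a file named payload.b64 and that the decoded payload is a single line (the file name and layout are assumptions for illustration):

    # Decode the Base64 data copied from the Payload field
    base64 -d payload.b64 > payload_decoded.json

    # Strip the leading "[" and trailing "]" so a single JSON object remains
    sed 's/^\[//; s/\]$//' payload_decoded.json > payload.json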

Enabling Debugging of the CI Payload

You must have administrator-level permissions in ServiceNow to access the system properties and enable debugging of the Configuration Item (CI) payload in the ServiceNow Identification and Reconciliation module.

To enable debugging of the CI payload in ServiceNow:

  1. On the ServiceNow system, check to see if the glide.cmdb.logger.source.identification_engine record exists in sys_properties.list.
  • If the record exists, set its value to * or debugVerbose.
  • If the record does not exist, you will need to create the record.
  2. To create the record, complete the following fields:
  • Name. glide.cmdb.logger.source.identification_engine
  • Description. Enable and configure the type of details the system logs when using the Identification and Reconciliation module outside the scope of identification simulation, such as when using an API, an ECC queue, or scheduled jobs (info, warn, error, debug, or debugVerbose).
  • Type. String.
  • Value. * or debugVerbose

    Set the Value of this system property back to error when troubleshooting is complete.

  3. Run the "Sync Devices from SL1 to ServiceNow" application. The system logs will have "identification_engine" as the source, and the log messages will contain identification_engine : Input.
  4. Copy the payload beginning from {"items" to the end of the message. For example:

    Message: {"items":[{"className":"","values":{"discovery_source":"ScienceLogic","mac_address":"9E:0F:04:0A:12:C7",
    "name":"Postman Test Server 1","x_sclo_scilogic_id":"1","serial_number":"gJ3Bwkzc8r","model_id":"",
    "ip_address":"10.10.10.102","manufacturer":"ScienceLogic, Inc.","ram":"16000",
    "x_sclo_scilogic_region":"Postman"},"lookup":[],"related":[]}],"relations":[]}

  5. You can run this message through the ScienceLogic endpoint by wrapping the {"items": ...} object in square brackets ([ ]). For example, send the following message to the endpoint
    /api/x_sclo_scilogic/v1/sciencelogic/IdentificationEngine (see the example request at the end of this section):

    Message: [{"items":[{"className":"","values":{"discovery_source":"ScienceLogic","mac_address":"9E:0F:04:0A:12:C7",
    "name":"Postman Test Server 1","x_sclo_scilogic_id":"1","serial_number":"gJ3Bwkzc8r",
    "model_id":"","ip_address":"10.10.10.102","manufacturer":"ScienceLogic, Inc.","ram":"16000",
    "x_sclo_scilogic_region":"Postman"},"lookup":[],"related":[]}],"relations":[]}]

    The endpoint is different in a domain-separated environment.

    After the identification run is complete, the ServiceNow logs contain additional data about the run.
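
The following is a minimal curl sketch of sending the bracketed payload to that endpoint, assuming the endpoint accepts a POST and that the payload has been saved to a file named payload.json. The instance name, credentials, and file name are placeholders, and your authentication method may differ:

    curl -k -u <servicenow_user>:<password> -X POST \
      "https://<your_instance>.service-now.com/api/x_sclo_scilogic/v1/sciencelogic/IdentificationEngine" \
      -H 'Content-Type: application/json' \
      -d @payload.json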