Installing and Configuring SL1 PowerFlow

This section describes how to install, upgrade, and configure PowerFlow, and also how to set up security for PowerFlow.

PowerFlow Architecture

This topic describes the different aspects of PowerFlow architecture.

PowerFlow Container Architecture

PowerFlow is a collection of purpose-built containers that are tasked with passing information to and from SL1. Building the PowerFlow architecture on containers allows you to add more processes to handle the workload as needed.

The following diagram describes the container architecture for PowerFlow:

Image of the PowerFlow container architecture

PowerFlow includes the following containers:

  • GUI. The GUI container provides the user interface for PowerFlow.
  • REST API. The REST API container provides access to the Content Store on the PowerFlow instance.
  • Content Store. The Content Store container is a database service that contains all the reusable steps, applications, and containers in the PowerFlow instance.
  • Step Runners. Step Runner containers execute steps independently of other Step Runners. All Step Runners belong to a Worker Pool and can run steps in order, based on the instructions in the applications. By default, five Step Runners (worker nodes) are included in the PowerFlow platform. PowerFlow users can scale the number of worker nodes up or down, based on workload requirements.

Integration Workflow

The following high-level diagram for a ServiceNow Integration provides an example of how PowerFlow communicates with both the SL1 Central Database and the third-party (ServiceNow) APIs:

Diagram of the ServiceNow integration workflow

The workflow includes the following components and their communication methods:

  • SL1 Central Database. PowerFlow communicates with the SL1 database over port 7706.
  • SL1 REST API. PowerFlow communicates with the SL1 REST API over port 443.
  • GraphQL. PowerFlow communicates with GraphQL over port 443.
  • ServiceNow Base PowerPack. In this example, the Run Book Automations from the ServiceNow Base PowerPack (and other SL1 PowerPacks) communicate with PowerFlow over port 443.
  • PowerFlow. PowerFlow communicates with both the SL1 Central Database and an external endpoint.
  • ServiceNow API. In this example, the ServiceNow applications in PowerFlow communicate with the ServiceNow API over port 443.

PowerFlow both pulls data from SL1 and receives data that SL1 pushes to it. PowerFlow also sends information to and retrieves information from ServiceNow, but in both directions PowerFlow originates the requests.

High-Availability, Off-site Backup, and Proxy Architecture

You can deploy PowerFlow as a High Availability cluster, which requires at least three nodes to achieve automatic failover. While PowerFlow can be deployed as a single node, the single-node option does not provide redundancy through High Availability. PowerFlow also supports off-site backup and connection through a proxy server.

The following diagram describes these different configurations:

Diagram of the PowerFlow Architecture

  • High Availability for PowerFlow is a cluster of PowerFlow nodes with a Load Balancer managing the workload. In the above scenario, if one PowerFlow node fails, the workload will be redistributed to the remaining PowerFlow nodes. High Availability provides local redundancy. For more information, see Appendix A: Configuring PowerFlow for High Availability.
  • Off-site Backup can be configured by using PowerFlow to back up and recover data in the Couchbase database. The backup process creates a backup file and sends that file using Secure Copy Protocol (SCP) to a user-defined, off-site destination system. You can then retrieve the backup file from the remote system and restore its contents. For more information, see Backing up Data.
  • A Proxy Server is a dedicated computer or software system running as an intermediary. The proxy server in the above scenario handles the requests between PowerFlow and the third-party application. For more information, see Configuring a Proxy Server.

In addition, you can deploy PowerFlow in a multi-tenant environment that supports multiple customers in a highly available fashion. After the initial High Availability (HA) core services are deployed, the multi-tenant environment differs in the deployment and placement of workers and use of custom queues. For more information, see Appendix B: Configuring PowerFlow for Multi-tenant Environments.

There is no support for active or passive Disaster Recovery. ScienceLogic recommends that your PowerFlow Disaster Recovery plans include regular backups and restoring from backup. For more information, see Backing up Data.

Reviewing Your Deployment Architecture

Review the following aspects of your architecture before deploying PowerFlow:

  1. How many SL1 stacks will you use to integrate with the third-party platform (such as ServiceNow, Cherwell, or Restorepoint)?
  2. What is a good estimate of the number of devices across all of your SL1 stacks?
  3. How many data centers will you use?
  4. Where is each data center located?
  5. What is the latency between each data center? (Latency must be less than 80 ms.)
  6. How many SL1 stacks are in each data center?
  7. Are there any restrictions on data replication across regions?
  8. What is the location of the third-party platform (if applicable)?
  9. What is the VIP for Cluster Node Management?

Based on the above list, ScienceLogic recommends the following deployment paths:

  • Deploy separate PowerFlow clusters per region. This deployment requires more management of PowerFlow clusters, but it ensures that the data is completely separated between regions. This deployment also ensures that if a single region goes down, you only lose operations for that region.
  • Deploy a single PowerFlow cluster in the restrictive region. This deployment is easier to manage, as you are only dealing with a single PowerFlow cluster. As an example, if Europe has a law that requires that data in Europe cannot be replicated to the United States, but that law does not prevent data from the United States from coming into Europe, you can deploy a single PowerFlow cluster in Europe to satisfy the law requirements.
  • If you are deploying a multi-tenant configuration, check to see if your environment meets one of the following:
  • If you have three or more data centers and the latency between each data center is less than 80 ms (question 5, above), consider deploying a multi-tenant PowerFlow where each node is in a separate data center to ensure data center resiliency. This deployment ensures that if a single data center goes down, PowerFlow will remain operational.
  • If you have only two data centers and the latency between data centers is less than 80 ms, consider deploying a multi-tenant PowerFlow where two nodes are in one data center and the other node is in the other data center. This deployment does not ensure data center resiliency, but it does provide standard High Availability if a single node goes down. If the data center with one node goes down, PowerFlow will remain operational. However, if the data center with two nodes goes down, PowerFlow will no longer remain operational.
  • If you have only two data centers but the latency between data centers is more than 80 ms, you can still deploy a multi-tenant PowerFlow, but all nodes must be located in a single data center. This deployment still provides standard High Availability so that, if a single node goes down, the other two nodes ensure PowerFlow operations. If you require more resiliency than a single-node failure, you can deploy five nodes, which ensures resiliency with two down nodes. However, if the data center goes down, PowerFlow will not be operational.
  • If you have only one data center, you can still deploy a multi-tenant PowerFlow, but all nodes are located in a single data center. This deployment still provides standard High Availability so that, if a single node goes down, the other two nodes ensure PowerFlow operations. If you require more resiliency than a single-node failure, you can deploy five nodes, which ensures resiliency with two down nodes. However, if the data center goes down, PowerFlow will not be operational.

System Requirements

PowerFlow itself does not have specific minimum required versions for SL1 or AP2. However, certain Synchronization PowerPacks for PowerFlow have minimum version dependencies. Please see the documentation for those Synchronization PowerPacks for more information on those dependencies.

The following table lists the port access required by PowerFlow:

Source | Destination | Source Port | Destination Port | Requirement
PowerFlow | SL1 API | Any | TCP 443 | SL1 API Access
SL1 Run Book Action | PowerFlow | Any | TCP 443 | Send SL1 data to PowerFlow
Devpi | PowerFlow | Any | TCP 3141 | Internal Python package repository for Synchronization PowerPacks; check for self-certification for PowerFlow
Dex Server | PowerFlow | Any | TCP 5556 | Enable authentication for PowerFlow
PowerFlow | SL1 Database | Any | TCP 7706 | SL1 Database Access
powerflowcontrol (pfctl, formerly called iservicecontrol) command-line utility | PowerFlow | Any | 22 (on all host nodes) | Log in and perform admin tasks on nodes
Encapsulated Security Protocol (ESP) | PowerFlow | IP Protocol 50 | n/a | Security; ESP should be open and available between cluster nodes
Couchbase Dashboard | PowerFlow | 8091 | n/a | Couchbase Dashboard (use your PowerFlow credentials)
RabbitMQ Dashboard | PowerFlow | 15672 | n/a | RabbitMQ Dashboard (use guest/guest for credentials)

ScienceLogic highly recommends that you disable all firewall session-limiting policies. Firewalls with session-limiting enabled will drop HTTPS requests, which results in data loss.

PowerFlow clusters do not support vMotion or snapshots while the cluster is running. Performing a vMotion or snapshot on a running PowerFlow cluster will cause network interrupts between nodes, and will render clusters inoperable.

The site administrator is responsible for configuring the host, hardware, and virtualization configuration for the PowerFlow server or cluster. If you are running a cluster in a VMware environment, be sure to install open-vm-tools and disable vMotion.

You can configure one or more SL1 systems to use PowerFlow to sync with a single instance of a third-party application like ServiceNow or Cherwell. You cannot configure one SL1 system to use PowerFlow to sync with multiple instances of a third-party application like ServiceNow or Cherwell. The relationship between SL1 and the third-party application can be either one-to-one or many-to-one, but not one-to-many.

The following table illustrates the different configurations available with PowerFlow:

ServiceNow, Restorepoint, Cherwell, or Other Third-party Applications | PowerFlow | SL1
Single Third-party System: Allowed | Single PowerFlow System | Single SL1 System: Allowed
Multiple Third-party Systems: Not allowed |  | Multiple SL1 Systems: Allowed

You can use a single PowerFlow system to manage multiple pairings between one or more SL1 systems and third-party applications like ServiceNow and Cherwell. The pairings must always be one-to-one or many-to-one: one or more SL1 systems connected to only one third-party application.

The default internal network used by PowerFlow services is 172.21.0.1/16. Please ensure that this range does not conflict with any other IP addresses on your network. If needed, you can change this subnet in the docker-compose.yml file.
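
For example, a minimal sketch of overriding the default subnet in the docker-compose.yml file, using standard Docker Compose network syntax (the network name and replacement subnet here are illustrative; match them to the network definition in your own file):

networks:
  default:
    driver: overlay
    ipam:
      config:
        - subnet: 172.28.0.0/16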

For more information about system requirements for your PowerFlow environment, see the System Requirements page at the ScienceLogic Support site.

Hardened Operating System

The operating system for PowerFlow is pre-hardened by default, with firewalls configured only for essential port access and all services and processes running inside Docker containers, communicating on a secure, encrypted overlay network between nodes. Please refer to the table, above, for more information on essential ports.

You can apply additional Linux hardening policies or package updates as long as Docker and its network communications are operational.

The PowerFlow operating system is an Oracle Linux distribution, and all patches are provided within the standard Oracle Linux repositories. The patches are not provided by ScienceLogic.

Additional Prerequisites for PowerFlow

To work with PowerFlow, ScienceLogic recommends that you have knowledge of the following:

  • Linux and vi (or another text editor).

  • Python.

  • Postman or another API tool for interacting with the PowerFlow API.

  • Couchbase. For more information, see Helpful Couchbase Commands.

  • Docker. For more information, see Helpful Docker Commands and https://docs.docker.com/engine/reference/commandline/cli/.

    The most direct way of accessing the most recent containers of PowerFlow is by downloading the latest RPM file from the ScienceLogic Support Portal. As a separate option, you can also access the PowerFlow containers directly through Docker Hub. To access the containers through Docker Hub, you must have a Docker Hub ID and enable permissions to pull the containers from Docker Hub. To get permissions, contact your ScienceLogic Customer Success Manager.
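
After your Docker Hub ID has been granted permissions, a pull might look like the following; the repository and tag are placeholders, not actual ScienceLogic image names:

docker login

docker pull <dockerhub-org>/<powerflow-image>:<tag>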

Installing PowerFlow

You can install PowerFlow for the first time in the following ways:

  • Via ISO. For details, see Installing PowerFlow via ISO.
  • Via RPM in a cloud-based environment. For details, see Installing PowerFlow via RPM to a Cloud-based Environment.

If you are upgrading an existing version of PowerFlow, see Upgrading PowerFlow.

If you are installing PowerFlow in a clustered environment, see Configuring the PowerFlow System for High Availability.

The site administrator is responsible for configuring the host, hardware, and virtualization configuration for the PowerFlow server or cluster. If you are running a cluster in a VMware environment, be sure to install open-vm-tools and disable vMotion.

Installing PowerFlow via ISO

Locating the ISO Image

To locate the PowerFlow ISO image:

  1. Go to the ScienceLogic Support site at https://support.sciencelogic.com/s/.
  2. Click the Product Downloads tab and select PowerFlow. The PowerFlow page appears.
  3. Click the link to the current release. The Release Version page appears.
  4. In the Release Files section, click the ISO link for the PowerFlow image. A Release File page appears.
  5. Click Download File at the bottom of the Release File page.

Installing from the ISO Image

When installing PowerFlow from an ISO, you can install open-vm-tools by selecting Yes for the "Installing Into a VMware Environment" option in the installation wizard.

To install PowerFlow via ISO image:

  1. Download the latest PowerFlow ISO file to your computer or a virtual machine center.
  2. Using your hypervisor or bare-metal (single-tenant) server of choice, mount and boot from the PowerFlow ISO. The PowerFlow Installation window appears:

    Image of the first Installation page of PowerFlow

  3. Select Install PowerFlow. After the installer loads, the Network Configuration window appears.

  4. Complete the following fields:
  • IP Address. Type the primary IP address of the PowerFlow server.
  • Netmask. Type the netmask for the primary IP address of the PowerFlow server.
  • Gateway. Type the IP address for the network gateway.
  • DNS Server. Type the IP address for the primary nameserver.
  • Hostname. Type the hostname for PowerFlow.
  5. Press Continue. The Root Password window appears.

  6. Type the password you want to set for the root user on the PowerFlow host (this is also the service account password) and press Enter. The password must be at least six characters and no more than 24 characters, and all special characters are supported.

    You use this password to log into the PowerFlow user interface, to SSH to the PowerFlow server, and to verify API requests and database actions. This password is set as both the "Linux host isadmin" user and in the /etc/iservices/is_pass file that is mounted into the PowerFlow stack as a "Docker secret". Because it is mounted as a secret, all necessary containers are aware of this password in a secure manner. For more information, see Changing the PowerFlow Password.
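
    For reference, a file-backed Docker secret of this kind is declared in a compose file using the standard Docker Compose syntax below; this is an illustrative sketch, and the secret name is assumed to mirror the file name:

    secrets:
      is_pass:
        file: /etc/iservices/is_pass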

  7. Type the password for the root user again and press Enter. The PowerFlow installer runs, and the system reboots automatically. This process will take a few minutes.

  8. After the installation scripts run and the system reboots, SSH into your system using PuTTY or a similar application. The default username for the system is isadmin.

  9. To start the Docker services, change to the scripts directory and run the following commands:

    cd /opt/iservices/scripts

    ./pull_start_iservices.sh

    This process will take a few minutes to complete.

  10. To validate that iservices is running, run the following command to view each service and the service versions for services throughout the whole stack:

    docker service ls
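
    The output resembles the following; the service names are taken from examples elsewhere in this manual, and the IDs, replica counts, and images are illustrative:

    ID             NAME                   MODE         REPLICAS   IMAGE
    x1y2z3a4b5c6   iservices_couchbase    replicated   1/1        <registry>/couchbase:<tag>
    d7e8f9g0h1i2   iservices_steprunner   replicated   5/5        <registry>/steprunner:<tag>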

  11. Navigate to the PowerFlow user interface using your browser. The address of the PowerFlow user interface is:

    https://<IP address entered during installation>

  12. Log in with the default username of isadmin and the password you specified in step 6.

  13. After installation, you must license your PowerFlow system if you want to enable all of the features. For more information, see Licensing PowerFlow.

  14. If you are setting up High Availability for PowerFlow on a multiple-node cluster, see Preparing the PowerFlow System for High Availability.

    The HOST_ADDRESS value in the /etc/iservices/isconfig.yml file should be the fully qualified domain name (FQDN) of either the host if there is no load balancer, or the FQDN of the load balancer if one exists. If you change the HOST_ADDRESS value, you will need to restart the PowerFlow stack.
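
    For example, with a load balancer at powerflow.example.com (a placeholder FQDN), the relevant line in /etc/iservices/isconfig.yml would look like this:

    HOST_ADDRESS: powerflow.example.com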

Troubleshooting the ISO Installation

To verify that your stack is deployed, view your Couchbase logs by executing the following command:

docker service logs --follow iservices_couchbase

If no services are found to be running, run the following command to start them:

docker stack deploy -c docker-compose.yml iservices

To add or remove workers, scale the steprunner service by running the following command (this example sets the number of workers to 10):

docker service scale iservices_steprunner=10

Installing PowerFlow via RPM to a Cloud-based Environment

Locating the RPM file

To locate the PowerFlow RPM file:

  1. Go to the ScienceLogic Support site at https://support.sciencelogic.com/s/.
  2. Click the Product Downloads tab and select PowerFlow. The PowerFlow page appears.
  3. Click the link to the current release. The Release Version page appears.
  4. In the Release Files section, click the RPM link for the PowerFlow image. A Release File page appears.
  5. Click Download File at the bottom of the Release File page.

Installing from the RPM File

The following procedure describes how to install PowerFlow via RPM to Amazon Web Service (AWS) EC2. You can also install PowerFlow on other cloud-based environments, such as Microsoft Azure. For other cloud-based deployments, the process is essentially the same as the following steps: PowerFlow provides the containers, and the cloud-based environment provides the operating system and server.

You can install PowerFlow on any Oracle Linux 7 or later operating system, even in the cloud, as long as you meet all of the operating system requirements. These requirements include sufficient CPU and memory, Docker and docker-compose installed, and open firewall settings. When these requirements are met, you can install the RPM and deploy the stack as usual.

If you install the PowerFlow system on any operating system other than Oracle Linux 7, ScienceLogic will only support the running application and associated containers. ScienceLogic will not assist with issues related to host configuration for operating systems other than Oracle Linux 7.

If you are deploying PowerFlow without a load balancer, you can only use the deployed IP address as the management user interface. If you use another node to log in to the PowerFlow system, you will get an internal server error. Also, if the deployed node is down, you must redeploy the system using the IP address for another active node to access the management user interface.

The HOST_ADDRESS value in the /etc/iservices/isconfig.yml file should be the fully qualified domain name (FQDN) of either the host if there is no load balancer, or the FQDN of the load balancer if one exists. If you change the HOST_ADDRESS value, you will need to restart the PowerFlow stack.

If you are installing the RPM in a cluster configuration, and you want to distribute traffic between the nodes, a load balancer is required.

If you install the PowerFlow system in a cloud-based environment using a method other than an ISO install, you are responsible for setting up and configuring the requirements of the cloud-based environment.

To install a single-node PowerFlow via RPM to a cloud-based environment (using AWS as an example):

  1. In Amazon Web Service (AWS) EC2, click Launch instance. The Choose an Amazon Machine Image (AMI) page appears.

    If you are installing PowerFlow to another cloud-based environment, such as Microsoft Azure, set up the operating system and server, and then go to step 7.

  2. Deploy a new Oracle Linux 7.6 virtual machine by searching for OL7.6-x86_64-HVM in the Search for an AMI field.

  3. Click the results link for Community AMIs.

  4. Click Select for a virtual machine running Oracle Linux 7.6 or greater, such as an OL7.6-x86_64-HVM-* AMI.

  5. From the Choose an Instance Type page, select at least a t2.xlarge AMI instance, depending on your configuration:
  • Single-node deployments. The minimum is t2.xlarge (4 CPUs with 16 GB memory), and ScienceLogic recommends t2.2xlarge (8 CPUs with 32 GB memory).
  • Cluster deployments. Cluster deployments depend on the type of node you are deploying. Refer to the separate multi-tenant environment guide for more sizing information. ScienceLogic recommends that you allocate at least 50 GB for storage.

  6. Go to the Step 6: Configure Security Group page and define the security group:
  • Only inbound port 443 needs to be exposed to any of the systems that you intend to integrate.
  • For PowerFlow version 1.8.2 and later, port 8091 is exposed through HTTPS. ScienceLogic recommends that you make port 8091 available externally to help with troubleshooting.

  7. Upload the sl1-powerflow-2.x.x-1.x86_64.rpm file to the PowerFlow server using SFTP or SCP.

  8. Enable the necessary repositories by running the following commands on the PowerFlow system:

    sudo yum install yum-utils

    sudo yum-config-manager --enable ol7_latest

    sudo yum-config-manager --enable ol7_optional_latest

  9. Run the following commands to update and install the host-level packages, and to upgrade to Python 3.6:

    sudo yum remove python34-pip python34-setuptools python3

    sudo yum --setopt=obsoletes=0 install python36-pip python36 python36-setuptools python36-devel openssl-devel gcc make kernel

    sudo yum update

    sudo yum install python36-pip

  10. Ensure that the latest required packages are installed by running the following commands on the server instance:

    sudo yum install -y wget

    sudo pip install --upgrade pip==20.2.4

    sudo pip install docker-compose

    wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.5-3.el7.x86_64.rpm && sudo yum install docker-ce-19.03.5-3.el7.x86_64.rpm

    You will need to update both instances of the Docker version in this command if there is a more recent version of Docker CE on the Docker Download page: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/.

    You might need to remove spaces from the code that you copy and paste from this manual. For example, in instances such as the wget command, above, line breaks were added to long lines of code to ensure proper pagination in the document.
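
    After the installation completes, you can confirm the installed Docker version:

    docker --version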

  11. Create the Docker group:

    sudo groupadd docker

  12. Add your user to the Docker group:

    sudo usermod -aG docker $USER

  13. Log out and log back in to ensure that your group membership is re-evaluated.

  14. Run the following commands for the configuration updates:

    sudo setenforce 0

    sudo vim /etc/sysconfig/selinux

    In the file, change the SELINUX setting to:

    SELINUX=permissive

    sudo systemctl enable docker

    sudo systemctl start docker

    sudo yum install yum-utils

    sudo yum-config-manager --enable ol7_addons ol7_optional_latest ol7_latest

    sudo yum-config-manager --disable ol7_ociyum_config

    wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

    sudo rpm -Uvh epel-release-latest-7*.rpm

    sudo yum update

    sudo pip install docker-compose

    sudo yum install firewalld

    systemctl enable firewalld

    systemctl start firewalld

    systemctl disable iptables

  15. Run the following firewall commands as "sudo":

    sudo firewall-cmd --add-port=2376/tcp --permanent

    sudo firewall-cmd --add-port=2377/tcp --permanent

    sudo firewall-cmd --add-port=7946/tcp --permanent

    sudo firewall-cmd --add-port=7946/udp --permanent

    sudo firewall-cmd --add-port=4789/udp --permanent

    sudo firewall-cmd --add-protocol=esp --permanent

    sudo firewall-cmd --reload

    To view a list of all ports, run the following command: firewall-cmd --list-all

  16. Copy the PowerFlow RPM to the installation instance and install the RPM:

    sudo yum install sl1-powerflow

    systemctl restart docker

  17. Create a password for PowerFlow:

    printf '%s' '<password>' | sudo tee /etc/iservices/is_pass

    where <password> is a new, secure password.

  18. Pull and start iservices to start PowerFlow:

    /opt/iservices/scripts/pull_start_iservices.sh

For an AWS deployment, ScienceLogic recommends that you switch to an Amazon EC2 user as soon as possible, instead of running all the commands as root.

For a clustered PowerFlow environment, you must install the PowerFlow RPM on every server that you plan to cluster into PowerFlow. You can load the Docker images for the services onto each server locally by running /opt/iservices/scripts/pull_start_iservices.sh. Installing the RPM onto each server ensures that the PowerFlow containers and necessary data are available on all servers in the cluster.

After installation, you must license your PowerFlow system to enable all of the features. Licensing is required for production systems only, not for test systems. For more information, see Licensing PowerFlow.

Troubleshooting a Cloud Deployment of PowerFlow

After completing the AWS setup instructions, if none of the services start and you see the following error during troubleshooting, you will need to restart Docker after installing the RPM.

sudo docker service ps iservices_couchbase --no-trunc

"error creating external connectivity network: Failed to Setup IP tables: Unable to enable SKIP DNAT rule: (iptables failed: iptables --wait -t nat -I DOCKER -i docker_gwbridge -j RETURN: iptables: No chain/target/match by that name."

Upgrading Oracle Linux Operating System Packages from ISO

ScienceLogic releases a major update to PowerFlow every six months via ISO and RPM. ScienceLogic also releases a monthly maintenance release (MMR) as needed to address major customer-facing bugs via ISO and RPM. If there are no major bugs to be addressed via MMR, the MMR will not be produced for the month. Security updates are included in an MMR only if an MMR is planned to be released.

All ISO builds of PowerFlow (major updates and MMRs) include the most recent, stable version of the Oracle Linux 7 operating system (OS). If there are OS vulnerabilities discovered in PowerFlow, you will need to either patch the vulnerability yourself using yum or wait for the next PowerFlow ISO.

When a yum update is performed, there is no risk of PowerFlow operations being affected as long as Docker or networking services are not included in the updates.

Upgrading OS packages for an offline deployment requires the following manual steps to mount the ISO and update the packages included on the ISO.

  1. Mount the PowerFlow ISO onto the system:

    mount -o loop /dev/cdrom /mnt/tmpISMount

  2. After you mount the ISO, add a new repository file to access the ISO as if it were a yum repository. Create a /etc/yum.repos.d/localiso.repo file with the following contents:

    [localISISOMount]

    name=Locally mounted IS ISO for packages

    enabled=1

    baseurl=file:///mnt/tmpISMount

    gpgcheck=0

    After you create and save this file, the Linux system can install packages from the PowerFlow ISO.

  3. Optionally, you can import the latest GNU Privacy Guard (GPG) key to verify the packages by running the following commands:

    rpm --import /mnt/tmpISMount/repo_keys/RPM-GPG-KEY-Oracle

    rpm --import /mnt/tmpISMount/repo_keys/RPM-GPG-KEY-Docker-ce

    rpm --import /mnt/tmpISMount/repo_keys/RPM-GPG-KEY-EPEL-7

  4. Run the following command to update and install the host-level packages:

    yum update

Upgrading PowerFlow

Upgrading to the latest version of the PowerFlow platform will involve some downtime of PowerFlow. Before upgrading your version of PowerFlow, ScienceLogic recommends that you make a backup of your PowerFlow system. For more information, see Backing up Data.

As a best practice, you should always upgrade to the most recent version of PowerFlow that is currently available at the PowerFlow Support page.

Another best practice is to ensure that the PowerFlow system is in a good state by running the powerflowcontrol (pfctl) healthcheck and autoheal actions. For more information, see healthcheck and autoheal.

If you are deploying PowerFlow without a load balancer, you can only use the deployed IP address as the management user interface. If you use another node to log in to the PowerFlow system, you will get an internal server error. Also, if the deployed node is down, you must redeploy the system using the IP address for another active node to access the management user interface.

If you made any customizations to default applications or steps that shipped with previous versions of PowerFlow, you will need to make those customizations compatible with Python 3.6 or later before upgrading to version 2.0.0 or later of PowerFlow.

If you made any modifications to the nginx configuration or to other service configuration files outside of the docker-compose.yml file, you will need to modify or back up those custom configurations before upgrading, or contact ScienceLogic Support to prevent the loss of those modifications.

During the upgrade, ensure that the /opt/iservices/scripts/docker-compose-override.yml file contains your custom configurations and the correct Couchbase and RabbitMQ versions.

If you are installing the RPM in a cluster configuration, and you want to distribute traffic between the nodes, a load balancer is required.

Running a sudo yum update on your PowerFlow system on a regular basis will ensure that any OS-level security updates are applied.

To help address any issues with the upgrade, see Troubleshooting Upgrade Issues.

Upgrading from Version 2.0.0 or Later

This release includes updates that address the common vulnerabilities and exposures (CVEs) identified since the last release of PowerFlow. If you are using PowerFlow version 2.3.0 or older, you can run a sudo yum update to address these CVEs.

If you do not have Internet access to the Oracle Linux 7 repos used by the yum update, you can manually download the packages directly from the repository links below and copy them to your PowerFlow system:

  • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/glibc-2.17-325.0.3.el7_9.x86_64.rpm

  • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/glibc-common-2.17-325.0.3.el7_9.x86_64.rpm

  • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/glibc-devel-2.17-325.0.3.el7_9.x86_64.rpm

  • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/glibc-headers-2.17-325.0.3.el7_9.x86_64.rpm

To upgrade to the latest version of PowerFlow from version 2.0.0 or later, you will need to install the audit package used for audit logging in the latest release.

  1. If your PowerFlow system is connected to the Internet, run the following command to download and install the audit package:

    sudo yum install audit audit-libs audit-libs-python

  2. If your PowerFlow system is not connected to the Internet, you can attach the 2.x.x ISO file and upgrade using the packages on the ISO file:

    1. Mount the 2.x.x ISO file onto your PowerFlow system and run the following commands:

      sudo yum install <2.x.x-ISO-mount-point>/audit-2.8.5-4.el7.x86_64.rpm

      sudo yum install <2.x.x-ISO-mount-point>/audit-libs-2.8.5-4.el7.x86_64.rpm

      sudo yum install <2.x.x-ISO-mount-point>/audit-libs-python-2.8.5-4.el7.x86_64.rpm

    2. Create the following file: /etc/yum.repos.d/local-yum-repo.repo.

    3. Add the following lines to the new local-yum-repo.repo file:

      [local-yum-repo]

      name=yum repo from mounted ISO

      baseurl=file:///mnt/tmp_install_mount/

      enabled=1

      gpgcheck=0

    4. Run the following command:

      sudo yum update --disablerepo=* --enablerepo=local-yum-repo

  3. You can also manually download and install the audit package from the following links:

    • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/audit-2.8.4-4.el7.x86_64.rpm

    • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/audit-libs-2.8.4-4.el7.x86_64.rpm

    • https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/audit-libs-python-2.8.4-4.el7.x86_64.rpm

  4. Continue to the following procedure, which is the standard upgrade process used in previous releases.

To complete the upgrade from version 2.0.0 or later:

When you are upgrading to version 2.3.0 or later, you must perform the steps in the following procedure, in order, to ensure a proper upgrade.

  1. Download the PowerFlow RPM and copy the RPM file to the PowerFlow system.

  2. Either go to the console of the PowerFlow system or use SSH to access the server.

  3. Log in as isadmin with the appropriate (root) password. You must be root to upgrade using the RPM file.

  4. Type the following at the command line:

    sudo rpm -Uvh full_path_of_rpm

    where full_path_of_rpm is the name and path of the RPM file, such as /home/isadmin/sl1-powerflow-2.x.x-1.x86_64.rpm.

    If you are running PowerFlow in a clustered environment, install the RPM on all nodes in the cluster before continuing with the remaining steps.

  5. After the RPM is installed, run the following Docker command:

    docker stack rm iservices

    If you want to upgrade your services in place, without bringing them down, you can skip this step. Note that skipping this step might cause the services to take longer to update.

    After you run this command, the stack is no longer running.

  6. If the upgrade process recommends restarting Docker, run the following command:

    systemctl restart docker

    If you restart Docker for this step, you should skip step 9, below.

  7. Re-deploy the Docker stack to update the containers:

    docker stack deploy -c /opt/iservices/scripts/docker-compose.yml iservices

  8. After you re-deploy the Docker stack, the services automatically update themselves. Wait a few minutes to ensure that all services are updated and running before using the system.

  9. If the upgrade process recommends restarting Docker, run the following command:

    systemctl restart docker

    If you restarted Docker after the RPM was installed and before the stack was redeployed with the new docker-compose file, you can run the following powerflowcontrol (pfctl) action to correct potential upgrade issues:

    pfctl --host pf-node-ip '<username>:<password>' node-action --action modify_iservices_volumes_owner

    After you run the above pfctl action, you will need to restart the syncpacks_steprunner service.

  10. To view updates to each service and the service versions for services throughout the whole stack, type the following at the command line:

    docker service ls

    Each service now uses the new version of PowerFlow.

Upgrading from Version 1.8.x with the Upgrade Script

As a best practice, you should always upgrade to the most recent version of PowerFlow that is currently available at the PowerFlow Support page.

To upgrade to the latest version of PowerFlow from version 1.8.x:

  1. Upgrade the host packages and Python 3.6 (previous versions of PowerFlow used Python 2.6).

  2. Upgrade to Oracle 7.3 or later.

  3. Upgrade to Docker version 18.09.2 or later.

    PowerFlow version 2.0.0 or later requires the docker-ce 18.09.2 or later version of Docker. The PowerFlow ISO installs the docker-ce 19.03.5-3 version of Docker by default, but if you are upgrading to this version from the RPM, you must upgrade Docker before you upgrade PowerFlow with the RPM.

  4. Install the PowerFlow upgrade RPM.
  5. Update the PowerFlow system from Basic Authentication to OAuth 2.0. For more information, see Configuring Authentication with PowerFlow.
  6. Set up licensing for PowerFlow. You must license your PowerFlow system to enable all of the features. If you are not deploying PowerFlow on a production or pre-production environment, you can skip licensing. For more information, see Licensing PowerFlow.

If you are upgrading from a version before 1.8.3, be sure to review the release notes for the older version for any relevant update considerations before upgrading. For example, there is a small port change that you might need to apply if you are upgrading a customized cluster from a version of PowerFlow before version 1.8.3.

You will need to run the is_upgrade_to_v2.sh script to perform the upgrade steps automatically. The script upgrades the PowerFlow system from 1.8.x to 2.0.0 or later.

To locate the upgrade script:

  1. At the ScienceLogic Support site, click the Product Downloads tab and select PowerFlow. The PowerFlow Release page appears.
  2. Click the relevant "Integration Service 2.0" link. The Release Version page appears.
  3. In the Release Files section, click the "1.8.X to 2.X.X Upgrade Script" link. A Release File Details page appears.
  4. Click Download File on the Release File Details page. The is_upgrade_to_v2.sh script is in the is_upgrade_tools.zip file.

The upgrade script runs the following steps:

  1. Checks to see if the Oracle Linux version is greater than or equal to 7.3. If not, the script stops and displays a message that you need to update to Oracle 7.3 or later.
  2. Sets the requirements location and either mounts the ISO or verifies that the RPM exists. For this step, the script asks if the installation will be offline or online, and it also asks you for the location of the RPM.
  3. Installs Python 3.6.
  4. Installs Docker 19.03.5.
  5. Installs the PowerFlow RPM.
  6. Runs the pull_start_iservices.sh script to deploy and initialize PowerFlow.
  7. If the upgrade was run offline, cleans up the changes made for the upgrade process, unmounts the ISO, and removes the localiso.repo file.

To run the upgrade script:

  1. Download the is_upgrade_to_v2.sh script and add it to a directory on the PowerFlow system.

  2. Download the sl1-powerflow-2.x.x-1.x86_64.rpm file or the ISO file and add it to a directory on the PowerFlow system. Make a note of this directory, because you will need it for Step 2 in the script.

    Alternately, instead of downloading the RPM file, you can specify an online location for the RPM file.

  3. If needed, run the following command on the PowerFlow system to give the script execution permissions:

    sudo chmod +x is_upgrade_to_v2.sh

    If you are installing a version of PowerFlow later than 2.0.0, you will need to update the version number in the command, above.

  4. Change directory to the directory containing the is_upgrade_to_v2.sh script, such as /home/isadmin/, and then run the following command to execute the upgrade script:

    sudo ./is_upgrade_to_v2.sh

    If you are installing a version of PowerFlow later than 2.0.0, you will need to update the version number in the command, above.

  5. For Step 2 of the script, specify whether you want to run the upgrade online or offline. Type "1" if you have Internet access, or "2" if you want to run the update offline.

  6. For Step 2 of the script, specify the location of the RPM or ISO file for 2.0.x. You can use a location on the PowerFlow system for the RPM or ISO file, or an online location for the RPM. For example: /home/isadmin/sl1-powerflow-2.1.0-1.x86_64.rpm

  7. After the upgrade script completes, perform the following steps to verify the upgrade:

  • Review the docker-compose.yml file and ensure that all environmental changes are in place.
  • If the docker-compose.yml file is ready to be deployed, you can re-deploy the PowerFlow stack.
  • After the PowerFlow stack is up and running, run the healthcheck action with the powerflowcontrol (pfctl) command-line utility to verify that you had a healthy deployment.
  • If needed, run the autoheal action with the powerflowcontrol (pfctl) utility to automatically fix any remaining inconsistencies after the upgrade.

    For more information about the powerflowcontrol (pfctl) utility, see Using the powerflowcontrol (pfctl) Command-line Utility.
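
    Following the pfctl syntax shown earlier in this manual, the healthcheck and autoheal invocations typically take the following form; the host and credentials are placeholders, so confirm the exact usage against the pfctl documentation:

    pfctl --host <pf-node-ip> '<username>:<password>' cluster-action --action healthcheck

    pfctl --host <pf-node-ip> '<username>:<password>' cluster-action --action autoheal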

  8. To view updates to each service and the service versions for services throughout the whole stack, type the following at the command line:

    docker service ls

  9. As needed, update the PowerFlow system from Basic Authentication to OAuth 2.0. For more information, see Configuring Authentication with PowerFlow.

  10. As needed, set up licensing for PowerFlow. For more information, see Licensing PowerFlow.

If you are upgrading a clustered PowerFlow environment from 1.8.x to 2.0.0 or later, see Updating Cluster Settings when Upgrading from 1.8.x to 2.0.0 or Later.

Manually Upgrading from Version 1.8.x

Instead of running the upgrade script, you can manually upgrade to the latest version from 1.8.x by following the detailed instructions below. 

If you are upgrading from a version before 1.8.3, be sure to review the release notes for the older version for any relevant update considerations before upgrading. For example, there is a small port change that you might need to apply if you are upgrading a customized cluster from a version of PowerFlow before version 1.8.3.

Step 1. Upgrading Host Packages and Python 3.6

To access the host packages online:

  1. To make sure that all repositories can access the required host-level packages, enable the necessary repositories by running the following commands on the PowerFlow system:

    sudo yum install yum-utils

    sudo yum-config-manager --enable ol7_latest

    sudo yum-config-manager --enable ol7_optional_latest

  2. Run the following commands to update and install the host-level packages, and to upgrade to Python 3.6:

    sudo yum remove python34-pip python34-setuptools python3

    sudo yum --setopt=obsoletes=0 install python36-pip python36 python36-setuptools python36-devel openssl-devel gcc make kernel

    sudo yum update

  3. Continue the upgrade process by upgrading to Oracle 7.3 or later.

If you need to upgrade the host packages offline, without Internet access, you can mount the latest PowerFlow ISO file onto the system and create a yum repository configuration that points to the local mount point in /etc/yum.repos.d. After the ISO is mounted, you can import the latest GNU Privacy Guard (GPG) key used by the repository.

To access the host packages offline:

  1. Mount the PowerFlow ISO onto the system:

    mount -o loop /dev/cdrom /mnt/tmpISMount

  2. After you mount the ISO, add a new repository file to access the ISO as if it were a yum repository. Create a /etc/yum.repos.d/localiso.repo file with the following contents:

    [localISISOMount]

    name=Locally mounted IS ISO for packages

    enabled=1

    baseurl=file:///mnt/tmpISMount

    gpgcheck=0

    After you create and save this file, the Linux system can install packages from the PowerFlow ISO.

  3. Optionally, you can import the latest GNU Privacy Guard (GPG) key to verify the packages by running the following commands:

    rpm --import /mnt/tmpISMount/repo_keys/RPM-GPG-KEY-Oracle

    rpm --import /mnt/tmpISMount/repo_keys/RPM-GPG-KEY-Docker-ce

    rpm --import /mnt/tmpISMount/repo_keys/RPM-GPG-KEY-EPEL-7

  4. If you cannot install Docker or Python offline, delete the other repository references by running the following command (ScienceLogic recommends that you back up those files first):

    rm -rf /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/public-yum-ol7.repo

  5. Run the following commands to update and install the host-level packages, and to upgrade to Python 3.6:

    sudo yum remove python34-pip python34-setuptools python3

    sudo yum --setopt=obsoletes=0 install python36-pip python36 python36-setuptools python36-devel openssl-devel gcc make kernel

    sudo yum update

  6. Continue the upgrade process by upgrading to Oracle 7.3 or later.

Step 2. Upgrading to Oracle 7.3 or Later

ScienceLogic recommends that you update the version of Oracle Linux running on the PowerFlow system to 64-bit version 7.3 or later.

If you want to upgrade Oracle Linux to 7.6, see https://docs.oracle.com/en/operating-systems/oracle-linux/7/relnotes7.6/.

To upgrade to Oracle 7.3 or later:

  1. To check the current version of Oracle Linux on your PowerFlow system, run the following command on the PowerFlow system:

    cat /etc/oracle-release

  2. To upgrade Oracle Linux, choose one of the following procedures:

    • Use a public Oracle Linux repository and run the yum update command.
    • Mount the latest PowerFlow ISO to the system and install the latest packages from the ISO.
  3. Continue the upgrade process by updating Docker.

Step 3. Upgrading to Docker 18.09.2 or later

PowerFlow systems before version 2.0.0 included Docker 18.06. If you are running a version of PowerFlow before version 2.0.0, you will need to update to Docker 18.09.2 or later to be able to upgrade to PowerFlow version 2.0.0 or later. For more information about the security updates included in Docker 18.09.2, see https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736.

Before upgrading Docker, ScienceLogic recommends that you review the following information from the Docker product manual: https://docs.docker.com/ee/upgrade/.

Run the following process on each Docker Swarm node, starting with the manager nodes.

For clustered configurations, see the information in Installing Docker in Clustered Configurations before running the upgrade steps below.

To upgrade to Docker 18.09.2 or later:

  1. Review the steps in the Docker product manual: https://docs.docker.com/ee/docker-ee/oracle/#install-with-a-package.

  2. To install docker-ce, run the following command on the PowerFlow instance:

    sudo yum install -y \
      https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.3.7-3.1.el7.x86_64.rpm \
      https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.5-3.el7.x86_64.rpm \
      https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-19.03.5-3.el7.x86_64.rpm

    You will need to update the Docker versions in this command if there are more recent versions on the Docker Download page: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/.

    You might need to remove spaces from the code that you copy and paste from this manual. For example, in instances such as the yum command, above, line breaks were added to long lines of code to ensure proper pagination in the document.

  3. To install docker-ce offline using the IS 2.0.x ISO, run the following commands:

    rpm -e --nodeps docker-ce

    sudo yum install -y /mnt/tmpISMount/Packages/containerd.io-1.3.7-3.1.el7.x86_64.rpm

    sudo yum install -y /mnt/tmpISMount/Packages/docker-ce-cli-19.03.5-3.el7.x86_64.rpm

    sudo yum install -y /mnt/tmpISMount/Packages/docker-ce-19.03.5-3.el7.x86_64.rpm

    sudo systemctl enable docker

    sudo systemctl start docker

If the node is a member of a cluster, wait a few minutes; the node will automatically rejoin the swarm cluster and re-deploy the services running on that node. Wait until all services are operational, and then proceed to upgrade the next node. For more information, see Installing Docker in Clustered Configurations.

  4. Continue the upgrade process by installing the PowerFlow RPM.

Installing Docker in Clustered Configurations

Follow the best practices for upgrading a cluster described in the Docker product manual: https://docs.docker.com/ee/upgrade/#cluster-upgrade-best-practices.

You should upgrade all manager nodes before upgrading worker nodes. Upgrading manager nodes sequentially is recommended if live workloads are running in the cluster during the upgrade. After you upgrade the manager nodes, you should upgrade worker nodes, and then the Swarm cluster upgrade is complete.

Docker recommends that you drain manager nodes of any services running on those nodes. If a live migration is expected, all workloads must be running on swarm workers, not swarm managers, or the manager being upgraded must be completely drained.

Also, a new Python Package Index (PyPI) service was added to the PowerFlow stack. When deploying PowerFlow in a cluster setup, and not using network-aware volumes, the PyPI server must be "pinned" to a specific node with constraints. Pinning the PyPI server to a single node ensures that its persistent volume containing the Synchronization PowerPacks will always be available to PowerFlow.
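
A minimal sketch of such a pinning constraint in the docker-compose file, using the node.hostname expression shown in the cluster-settings section below (the service definition is abbreviated and illustrative):

services:
  pypiserver:
    deploy:
      placement:
        constraints:
          - node.hostname == <node hostname from docker node ls>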

Step 4. Installing the PowerFlow RPM

To update PowerFlow using the RPM:

  1. Download the PowerFlow RPM and copy the RPM file to the PowerFlow system.

  2. Log in as isadmin with the appropriate (root) password. You must be root to upgrade using the RPM file.

  3. Type the following at the command line:

    sudo rpm -Uvh full_path_of_rpm

    where full_path_of_rpm is the full path and name of the RPM file.

  4. Run the pull_start_iservices.sh script to deploy and initialize 2.x.x:

    /opt/iservices/scripts/pull_start_iservices.sh

    Do not run the pull_start_iservices.sh script if you are using PowerFlow in a clustered environment.

  5. To view updates to each service and the service versions for services throughout the whole stack, type the following at the command line:

    docker service ls

  6. Verify that each service now uses the new version of PowerFlow.

  7. As needed, update the PowerFlow system from Basic Authentication to OAuth 2.0. For more information, see Configuring Authentication with PowerFlow.

  8. As needed, set up licensing for PowerFlow. For more information, see Licensing PowerFlow.

If you are upgrading a clustered PowerFlow environment from 1.8.x to 2.0.0 or later, see Updating Cluster Settings when Upgrading from 1.8.x to 2.0.0 or Later.

Uploading Custom Dependencies to the PyPI Server with the iscli Tool

You can use the PowerFlow command-line tool (iscli) to upload custom dependencies to the PowerFlow local Python Package Index (PyPI) server:

  1. Copy the Python package to the pypiserver container.

  2. Exec into the container and run the following commands:

    devpi login isadmin

    devpi use http://127.0.0.1:3141/isadmin/dependencies

    devpi upload <location of your package dependencies>
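
For steps 1 and 2, a hypothetical command sequence, assuming the PyPI service runs as pypiserver within the iservices stack (the container lookup and package name are illustrative):

# Copy the package into the running pypiserver container
docker cp <package>.tar.gz $(docker ps -q -f name=iservices_pypiserver):/tmp/

# Open a shell inside the container
docker exec -it $(docker ps -q -f name=iservices_pypiserver) /bin/sh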

Updating Cluster Settings when Upgrading from 1.8.x to 2.0.0 or Later

After upgrading PowerFlow cluster nodes from 1.8.x to 2.0.0 or later, you need to verify the following details in the docker-compose file:

  1. Verify that the pypiserver is pinned to the node that contains the Synchronization PowerPacks: node.hostname == <node hostname from docker node ls>. Pinning keeps the pypiserver service from moving from one node to another, which would cause you to lose the persisted Synchronization PowerPack storage. pypiserver is not a critical service, so it is not replicated.

  2. Verify that the dexserver service is replicated three times.

You can use the healthcheck action with the powerflowcontrol (pfctl) command-line utility to validate the docker-compose file. The healthcheck action will show a message if the pypiserver or dexserver services are not well-configured in the docker-compose file. You can fix these issues manually, or you can fix them using the autoheal action with the powerflowcontrol utility. The utility corrects the docker-compose file and copies it to all the nodes in the cluster environment.

NOTE: When using the version of the powerflowcontrol command-line utility that comes with PowerFlow version 2.1.0 or later, the autocluster action validates and fixes the pypiserver and dexserver services definitions in the docker-compose file.

For more information, see Using the powerflowcontrol (pfctl) Command-line Utility.

Troubleshooting Upgrade Issues

The following topics describe issues that might occur after the upgrade to version 2.0.0 or later, and how to address those issues.

Cannot mount the virtual environment, or the virtual environment is not accessible

If the Docker container does not properly mount the virtual environment, or the virtual environment is not accessible to the service, you might need to remove and re-deploy the service to resolve the issue.

To roll back to a version before PowerFlow 2.0.0

After a schedule is accessed or modified on the 2.0.0 or later PowerFlow API or scheduler, that schedule will not be accessible again in a 1.x version. If you upgrade to 2.0.0 from 1.x and you want to go back to the 1.x release, you must delete the schedule and recreate the schedule in 1.x (if the schedule was modified in 2.0.0).

Cannot access PowerFlow or an Internal Server Error occurs

PowerFlow version 2.0.0 and later use a new type of authentication session. This change might cause problems if your browser attempts to load the PowerFlow user interface using a "stale" cache from version 1.8.4. If you have issues accessing the user interface, or if you see an "Internal server error" message when you log in, be sure to clear the local cache of your browser.

After upgrading, the syncpack_steprunner fails to run

This error tends to happen when the syncpack_steprunner is deployed, but the database is not yet updated with the indexes necessary for the Synchronization PowerPack processes to query the database. In most deployments, the indexes are created automatically. If the indexes are not automatically created, which might happen in a clustered configuration, you can resolve this issue by manually creating the indexes.

In this situation, if you check the logs, you will most likely see the following message:

couchbase.exceptions.HTTPError: <RC=0x3B[HTTP Operation failed. Inspect status code for details], HTTP Request failed. Examine 'objextra' for full result, Results=1, C Source=(src/http.c,144), OBJ=ViewResult<rc=0x3B[HTTP Operation failed. Inspect status code for details], value={'requestID': '57ad959d-bafb-46a1-9ede-f80f692b0dd7', 'errors': [{'code': 4000, 'msg': 'No index available on keyspace content that matches your query. Use CREATE INDEX or CREATE PRIMARY INDEX to create an index, or check that your expected index is online.'}], 'status': 'fatal', 'metrics': {'elapsedTime': '5.423085ms', 'executionTime': '5.344487ms', 'resultCount': 0, 'resultSize': 0, 'errorCount': 1}}, http_status=404, tracing_context=0, tracing_output=None>, Tracing Output={":nokey:0": null}>

To address this issue, wait a few minutes for the index to be populated. If you are still getting an error after the database has been running for a few minutes, you can manually update the indexes by running the following command:

initialize_couchbase -s

Creating a primary index is only for troubleshooting, and primary indexes should not be left on the system.
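
If you do create a primary index manually while troubleshooting, the standard N1QL statements are shown below; the keyspace name content comes from the error message above, and you should drop the index when you finish troubleshooting:

CREATE PRIMARY INDEX ON content;

DROP PRIMARY INDEX ON content;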

Licensing PowerFlow

Before users can access all of the features of version 2.0.0 or later of PowerFlow, the Administrator user must license the PowerFlow instance through the ScienceLogic Support site. For more information about accessing PowerFlow files at the ScienceLogic Support site, see the following Knowledge Base article: SL1 PowerFlow Download and Licensing.

When you log in to the PowerFlow system, a notification appears at the bottom right of the screen that states how much time is left in your PowerFlow license. The notification displays with a green background if your license is current, yellow if you have ten days or less in your license, and red if your license has expired. You need to click the Close icon to close this notification.

You can also track your licensing information on the About page (username menu > About). You can still log into a system with an expired license, but you cannot create or schedule PowerFlow applications.

Neither the administrator nor any other user can access certain production-level capabilities until the administrator licenses the instance. For example, users cannot create schedules or upload PowerFlow applications and steps that are not part of a Synchronization PowerPack until PowerFlow has been licensed.

If you are not deploying PowerFlow on a production or pre-production environment, you can skip the licensing process.

If you are licensing a PowerFlow High Availability cluster, you can run the following licensing process on any node in the cluster. The node does not have to be the leader, and the licensing process does not have to be run on all nodes in the Swarm.

Licensing a PowerFlow System

To license a PowerFlow system:

  1. Run the following command on your PowerFlow system to generate the .iskey license file:

    iscli --license --customer "<user_name>" --email <user_email>

    where <user_name> is the first and last name of the user, and <user_email> is the user's email address. For example:

    iscli --license --customer "John Doe" --email jdoe@sciencelogic.com

  2. Run an ls command to locate the new license file: customer_key.iskey.

  3. Using WinSCP or another utility, copy the .iskey license file to your local machine.

  4. Go to the PowerFlow License Request page at the ScienceLogic Support site: https://support.sciencelogic.com/s/integration-service-license-request.

  5. For Step 2 of the "Generate License File" process, select the PowerFlow record you want to license.

    You already covered Step 1 of the "Generate License File" process in steps 1-3 of this procedure.

  6. Scroll down to Step 3 of the "Generate License File" process and upload the .iskey license file you created in steps 1-3 of this procedure.

  7. Click Upload Files.

  8. After uploading the license file, click Generate PowerFlow License. A new Licensing page appears.

  9. Click the .crt file in the Files pane to download the new .crt license file.

  10. Using WinSCP or another file-transfer utility, copy the .crt license file to your PowerFlow system.

  11. Upload the .crt license file to the PowerFlow server by running the following command on that server:

    iscli -l -u -f ./<license_name>.crt -H <IP_address> -U <user_name> -p <user_password>

    where <license_name> is the system-generated name for the .crt file, <IP_address> is the IP address of the PowerFlow system, <user_name> is the user name, and <user_password> is the user password. For example:

    iscli -l -u -f ./aCx0x000000CabNCAS.crt -H 10.2.33.1 -U isadmin -p passw0rd

ScienceLogic determines the duration of the license key, not the customer.

If you have any issues licensing your PowerFlow system, please contact your ScienceLogic Customer Success Manager (CSM) or open a new Service Request case under the "Integration Service" category.

Licensing Solution Types

The licensing for the PowerFlow platform is divided into three solution types:

  • Standard: This solution lets you import and install Synchronization PowerPacks published by ScienceLogic and ScienceLogic Professional Services, and to run and schedule PowerFlow applications from those Synchronization PowerPacks. You cannot customize or create PowerFlow applications or steps with this solution type. Features that are not available display in gray text in the user interface.

  • Advanced: This solution contains all of the Standard features, and you can also build your own Synchronization PowerPacks and upload custom applications and steps using the command-line interface. You can create PowerFlow applications using the PowerFlow command-line interface, but you cannot create and edit applications or steps using the PowerFlow builder in the user interface.

  • Premium: This solution contains all of the Advanced features, and you can also use the PowerFlow builder, the low-code/no-code, drag-and-drop interface, to create and edit PowerFlow applications and steps.

If you are upgrading from PowerFlow version 2.x.x to version 2.2.0 and you want to upload custom content (such as steps and applications) for Synchronization PowerPacks, or if you want to use the PowerFlow builder, you will need to upgrade your license to the Advanced or Premium solution. Please note that this licensing update does not impact any solutions that are already installed on the PowerFlow system, and you can continue to run and schedule existing content as needed. For more information, see ScienceLogic Pricing.

A yellow text box appears in the PowerFlow user interface when the license is close to expiring, displaying how many days are left before the license expires. The license status and expiration date also display on the About page in the PowerFlow user interface.

An unlicensed system will not be able to create PowerFlow applications, steps, or schedules. Unlicensed systems will only be able to run applications that are installed manually through Synchronization PowerPacks.

Features that are locked by licensing solution type are grayed out. If you click on a grayed-out feature, the user interface will display a notification prompting you to upgrade your license.

Configuring a Proxy Server

To configure PowerFlow to use a proxy server:

  1. Either go to the console of the PowerFlow system or use SSH to access the PowerFlow server.
  2. Log in as isadmin with the appropriate password.
  3. Using a text editor like vi, edit the file /opt/iservices/scripts/docker-compose-override.yml.

    PowerFlow uses a docker-compose-override.yml file to persistently store user-specific configurations for containers, such as proxy settings, replica settings, additional node settings, and deploy constraints. The user-specific changes are kept in this file so that they can be re-applied when the /opt/iservices/scripts/docker-compose.yml file is completely replaced on an RPM upgrade, ensuring that no user-specific configurations are lost.

  4. In the environment section of the steprunner service, add the following lines:

    services:
      steprunner:
        environment:
          https_proxy: "<proxy_host>"
          http_proxy: "<proxy_host>"
          no_proxy: ".isnet"

    If you need to exclude more than one location from the proxy, you can use the no_proxy setting to specify all of the locations, separated by commas and surrounded by quotation marks. For example: no_proxy: ".isnet,10.1.1.100,10.1.1.101"

    If you want to access external pypi packages while using a proxy, be sure to include pypi.org and files.pythonhosted.org in this section to ensure that the proxy allows those locations.

  5. In the environment section of the syncpack_steprunner service, add the following lines:

    services:
      syncpack_steprunner:
        environment:
          https_proxy: "<proxy_host>"
          http_proxy: "<proxy_host>"
          no_proxy: ".isnet"

    If you want to access external pypi packages while using a proxy, be sure to include pypi.org and files.pythonhosted.org in this section to ensure that the proxy allows those locations.

  6. Save the settings in the file and then run the /opt/iservices/scripts/compose_override.sh script.

    The compose_override.sh script validates that the configured docker-compose.yml and docker-compose-override.yml files are syntactically correct. If the settings are correct, the script applies them to the existing docker-compose.yml file that is used to deploy the stack.

  7. Re-deploy the steprunners to use this change by typing the following commands:

    docker service rm iservices_steprunner

    docker stack deploy -c /opt/iservices/scripts/docker-compose.yml iservices
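
Taken together, a minimal sketch of the proxy settings in the docker-compose-override.yml file might look like the following, where <proxy_host> is a placeholder for your proxy URL:

    services:
      steprunner:
        environment:
          https_proxy: "<proxy_host>"
          http_proxy: "<proxy_host>"
          no_proxy: ".isnet"
      syncpack_steprunner:
        environment:
          https_proxy: "<proxy_host>"
          http_proxy: "<proxy_host>"
          no_proxy: ".isnet"

After saving this file, run the compose_override.sh script and re-deploy the steprunners as described in the steps above.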

Changing the PowerFlow System Password

The PowerFlow system uses two primary passwords. For consistency, both passwords are the same after you install PowerFlow, but you can change them to separate passwords as needed.

PowerFlow uses the following passwords:

  • The PowerFlow Administrator (isadmin) user password. This is the password that you set during the PowerFlow ISO installation process, and it is used only by the default local Administrator user (isadmin). You use this password to log in to the PowerFlow user interface and to verify API requests and database actions. This password is set for the Linux host isadmin user and is also stored in the /etc/iservices/is_pass file, which is mounted into the PowerFlow stack as a Docker secret. Because it is mounted as a secret, all necessary containers are aware of this password in a secure manner. Alternatively, you can enable third-party authentication, such as LDAP or AD, and authenticate with credentials other than isadmin. However, you will need to set the user policies for those LDAP users first with the default isadmin user. For more information, see Managing Users in PowerFlow.
  • The Linux Host OS SSH password. This is the password you use to SSH in to the host as the isadmin user. You can change this password using the standard Linux passwd command or another credential management application. You can also disable this Linux user and add your own user if you want. The PowerFlow containers and applications do not use or know this Linux login password, and this password does not need to be the same between nodes in a cluster. This is a standard Linux Host OS password.

To change the PowerFlow Administrator (isadmin) user password:

  1. You can change the mounted isadmin password secret (which is used to authenticate via API by default) and the Couchbase credentials on the PowerFlow stack by running the ispasswd script on any node running PowerFlow in the stack:

    /opt/iservices/scripts/ispasswd

  2. Follow the prompts to reset the password. The password must be at least six characters and no more than 24 characters, and all special characters are supported.

    Running the ispasswd script automatically changes the password for all PowerFlow application actions that require credentials for the isadmin user.

  3. If you have multiple nodes, copy the /etc/iservices/is_pass file, which was just updated by the ispasswd script, to all other manager nodes in the cluster. You need to copy this password file across all nodes in case you deploy from a different node than the node where you changed the password. The need to manually copy the password to all nodes will be removed in a future release of PowerFlow.
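
    For example, assuming a second manager node reachable at <node2_IP> that accepts SSH connections as isadmin (both placeholders are illustrative), you could copy the updated file with a command like the following:

    scp /etc/iservices/is_pass isadmin@<node2_IP>:/etc/iservices/is_pass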

If a PowerFlow user makes multiple incorrect login attempts, PowerFlow locks out the user. To unlock the user, run the following command: unlock_user -u <username>

Configuring Security Settings

This topic explains how to change the HTTPS certificate used by PowerFlow, and it also describes password and encryption key security.

Changing the HTTPS Certificate

The PowerFlow user interface only accepts communications over HTTPS. By default, HTTPS is configured using an internal, self-signed certificate.

You can specify the HTTPS certificate to use in your environment by mounting the following two files in the user interface (gui) service:

  • /etc/iservices/is_key.pem
  • /etc/iservices/is_cert.pem

The SSL certificate for the PowerFlow system only requires the HOST_ADDRESS field to be defined in the certificate. That certificate and key must be identical across all nodes. If needed, you can also add non-HOST_ADDRESS IPs to the Subject Alternative Name field to prevent an insecure warning when visiting the non-HOST_ADDRESS IP.

If you are using a load balancer, the certificates installed on the load balancer should use and provide the hostname for the load balancer, not the PowerFlow nodes. The SSL certificates should always match the IP or hostname that exists in the HOST_ADDRESS setting in /etc/iservices/isconfig.yml. If you are using a load balancer, the HOST_ADDRESS must also be the IP address for the load balancer.

If you are using a clustered configuration for PowerFlow, you will need to copy the key and certificate to the same location on each node.

To specify the HTTPS certificate to use in your environment:

  1. Copy the key and certificate to the PowerFlow host.

  2. Modify the /opt/iservices/scripts/docker-compose-override.yml file and mount a volume to the gui service. The following code is an example of the volume specification:

    volumes: 
      - "<path to IS key>:/etc/iservices/is_key.pem"
      - "<path to IS certificate>:/etc/iservices/is_cert.pem"
  3. Run the following script to validate and apply the change:

    /opt/iservices/scripts/compose_override.sh

  4. Re-deploy the gui service by running the following commands:

    docker service rm iservices_gui

    /opt/iservices/scripts/pull_start_iservices.sh
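
For reference, a minimal sketch of the gui entry in the docker-compose-override.yml file might look like the following, assuming you copied the key and certificate to /etc/iservices on each host (the host-side paths are illustrative):

    services:
      gui:
        volumes:
          - "/etc/iservices/is_key.pem:/etc/iservices/is_key.pem"
          - "/etc/iservices/is_cert.pem:/etc/iservices/is_cert.pem"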

Using Password and Encryption Key Security

When you installed the PowerFlow platform, you specified the PowerFlow root password. This root password is also the default isadmin password:

  • The root/admin password is saved in a root read-only file here: /etc/iservices/is_pass
  • A backup password file is also saved in a root read-only file here: /opt/iservices/backup/is_pass

The user-created root password is also the default PowerFlow password for Couchbase (port 8091) and all API communications. The PowerFlow platform generates a unique encryption key for every platform installation:

  • The encryption key exists in a root read-only file here: /etc/iservices/encryption_key
  • A backup encryption key file is also saved in a root read-only file here: /opt/iservices/backup/encryption_key

This encryption key is different from the HTTPS certificate key discussed in the previous topic.

You can use the encryption key to encrypt all internal passwords and user-specified data. You can encrypt any value in a configuration by specifying "encrypted": true when you POST that configuration setting to the API. There is also an option in the PowerFlow user interface to select encrypted. Encrypted values use the same randomly generated encryption key.
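
As an illustration, a configuration variable posted to the API might include the flag as in the following sketch; the variable name and value shown here are hypothetical:

    {
      "encrypted": true,
      "name": "sl1_password",
      "value": "<value_to_encrypt>"
    }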

User-created passwords and encryption keys are exposed to the Docker containers using Docker secrets (https://docs.docker.com/engine/swarm/secrets/) to ensure secure handling of information between containers.

The encryption key must be identical between two PowerFlow systems if you plan to migrate from one to another. The encryption key must be identical between High Availability or Disaster Recovery systems as well.

PowerFlow supports all special characters in passwords.

NOTE: For detailed security information about the configuration of Docker Enterprise, see the SL1 PowerFlow: System Security Plan for Docker Enterprise document.

Configuring PowerFlow for Scalability

SL1 PowerFlow version 2.2.0 introduces the enable_state_backend setting in the environments section of the docker-compose.yml file. The setting is set to true (enabled) by default. By writing task metadata to the Couchbase database, this setting provides a more consistent, scalable location for storing task metadata, enables more rapid queries for task states, and prevents task state logs from being ejected from the system, which would lose the state.

This setting addresses the following scalability issues:

  • Having to increase the redis memory limit to prevent early ejection of logs and task states (logs for a previously run PowerFlow application always return pending, even though the application already ran).

  • Slow lookups when querying for large-scale integration results (slow load times for Device Sync).

  • Having to increase the contentapi memory limit to prevent timeouts when querying for large sets of results.

Because PowerFlow version 2.2.0 and later now writes task metadata to the Couchbase database, the user experience of working with previously run integrations and data is improved. However, this new setting also results in a slightly increased performance impact on Couchbase.

If you are using a new deployment of PowerFlow version 2.2.0 or later, no action is needed; you can install version 2.2.0 and the enable_state_backend setting in the environments section is set to true (enabled) by default. You can then scale up as needed.

If you are using a version of PowerFlow before version 2.2.0, or if you are upgrading to version 2.2.0, ScienceLogic recommends that you assess the current utilization of your PowerFlow system, because there will be a greater impact on the Couchbase database by default. If you find that the database, or a database node, is running at maximum CPU (80% or greater), you should be prepared to increase the CPUs allocated to the Couchbase nodes, or deploy an additional database node to accommodate the additional load.

Alternatively, if you do not want the benefits of this feature and you cannot afford to increase CPU in a large-scale environment, you can disable it by setting enable_state_backend: false on both the contentapi and steprunner services in the docker-compose.yml file. Disabling this feature stops PowerFlow from storing task metadata in the Couchbase database, which removes the additional overhead on Couchbase, but you will not experience the benefits listed above.
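
For example, a minimal sketch of the relevant entries in the docker-compose.yml file with the feature disabled (only the affected lines are shown):

    services:
      contentapi:
        environment:
          enable_state_backend: false
      steprunner:
        environment:
          enable_state_backend: false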

This release significantly improves performance related to task status querying, reducing the API memory requirements and increasing the responsiveness of the PowerFlow user interface. When a PowerFlow system queries for the state of a task, instead of pulling all relevant data from that task, the system only pulls the metadata that is needed at that time.

Signs that the Database Needs More CPU

The following behaviors are indications that your database needs more CPU:

  • The following error appears in the dexserver logs: "Storage health check failed: create auth request: insert auth request: operation has timed out"
  • Frequent timeout errors in logs
  • Unable to consistently authenticate and log in to the PowerFlow user interface; for example, one request succeeds, but the next several requests return you to the login page
  • API requests are failing with 500 errors
  • Couchbase CPU utilization is frequently greater than 80%

To ensure that the database is not doing more work than necessary, ScienceLogic recommends the following actions to offset the additional database impact:

  • Make sure that all of the PowerFlow applications you run are activated and installed from a Synchronization PowerPack. Synchronization PowerPack content allows PowerFlow to read code directly from virtual environments, which saves queries to the database.
  • Reduce or remove the "replica" count settings for the logs bucket, as shown in the sketch after this list. While two replicas are important for Synchronization PowerPack content, because that bucket holds your configuration and integration data, replicas are less important for logs. Replicating bucket data between nodes can be expensive, especially for a highly active bucket like logs. For this reason, ScienceLogic recommends that you have a maximum of one replica for the logs bucket, or disable replicas on logs completely.
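
One way to reduce the replica count is with the couchbase-cli tool; the following is a hedged sketch that assumes you run it from a node hosting the Couchbase service and that the isadmin credentials are also the Couchbase credentials, as described earlier in this section:

    couchbase-cli bucket-edit -c localhost:8091 -u isadmin -p <password> --bucket logs --bucket-replica 1

You can also make this change in the Couchbase Administrator user interface at port 8091; a rebalance may be required for the new replica count to take effect.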

Timeout and Retry Configuration Variables

There are three configuration variables you can use to configure the number of retries for saving the application logs, step logs, and the status of both in Couchbase. To improve access to Couchbase when saving the application and step logs, you can edit the default values of the following configuration variables in the docker-compose.yml file:

  • logs_timeout_retries. The number of retries for saving the logs. Set this configuration variable as an env_var in the docker-compose.yml file for the workers if a different number of retries is needed. The default is 3.
  • logs_delay_retries. The delay between retries. The default is 2.
  • state_timeout_retries. The number of retries for Celery. You can change this value using an env_var in the docker-compose.yml file for the workers if a different number of retries is needed. The default is 3.
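
For example, a sketch of how these variables might be set as environment variables for the workers in the docker-compose.yml file; the values shown are illustrative, not recommendations:

    services:
      steprunner:
        environment:
          logs_timeout_retries: 5
          logs_delay_retries: 2
          state_timeout_retries: 5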

Configuring Additional Elements of PowerFlow

If you have multiple workers running on the same PowerFlow system, you might want to limit the amount of memory allocated for each worker. This helps prevent memory leaks, and also prevents one worker from using too many resources and starving the other workers. You can apply these limits in two ways:

  • Set a hard memory limit in Docker (this is the default)
  • Set a soft memory limit in the worker environment

Setting a Hard Memory Limit in Docker

Setting a memory limit for the worker containers in your docker-compose.yml file sets a hard limit. If you set a memory limit for the workers in the docker-compose file and a worker exceeds the limit, the container is terminated via SIGKILL.

If the currently running task caused memory usage to go above the limit, that task might not be completed, and the worker container is terminated in favor of a new worker. This setting helps to prevent a worker from endlessly running and consuming all memory on the PowerFlow system.

You can configure the hard memory limit in the steprunner service of the docker-compose.yml file:

    deploy:
      resources:
        limits:
          memory: 2G

Setting a Soft Memory Limit in the Worker Environment

You can also set the memory limit at the worker application level rather than at the Docker level. Setting the memory limit at the application level differs from the hard memory limit in Docker in that a worker that exceeds the specified memory limit is not immediately terminated via SIGKILL.

Instead, if a worker exceeds the soft memory limit, the worker waits until the currently running task is completed to recycle itself and start a new process. As a result, tasks will complete if a worker crosses the memory limit, but if a task is running infinitely with a memory leak, that task might consume all memory on the host.

The soft memory limit is less safe from memory leaks than the hard memory limit.

You can configure the soft memory limit with the worker environment variables. The value is in KiB (1 KiB = 1024 bytes). Also, each worker instance contains three processes for running tasks. The memory limit applies to each individual process, and not to the container as a whole. For example, a 2 GB memory limit for the container would translate to 2 GB divided by three, or about 700 MB for each worker process:

    steprunner:
      image: repository.auto.sciencelogic.local:5000/is-worker:1.8.1
      environment:
        additional_worker_args: ' --max-memory-per-child 700000'

PowerFlow Management Endpoints

This section provides technical details about managing PowerFlow. The following information is also available in the PowerPacks in Using SL1 to Monitor SL1 PowerFlow.

Flower API

Celery Flower is a web-based tool for monitoring PowerFlow tasks and workers. Flower lets you see task progress, details, and worker status.

The following Flower API endpoints return data about the Flower tasks, queues, and workers. The tasks endpoint returns data about task status, runtime, exceptions, and application names. You can filter this endpoint to retrieve a subset of information, and you can combine filters to return a more specific data set.

/flower/api/tasks. Retrieve a list of all tasks.

/flower/api/tasks?app_id={app_id}. Retrieve a list of tasks filtered by app_id.

/flower/api/tasks?app_name={app_name}. Retrieve a list of tasks filtered by app_name.

/flower/api/tasks?started_start=1539808543&started_end=1539808544. Retrieve a list of all tasks received within a time range.

/flower/api/tasks?state=FAILURE|SUCCESS. Retrieve a list of tasks filtered by state.

/flower/api/workers. Retrieve a list of all queues and workers.
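
For example, a hedged sketch of querying for failed tasks with curl; depending on your configuration, you may need the -k option for a self-signed certificate, and you may need to supply credentials (for example, with curl's -u option):

    curl -k "https://<hostname_of_PowerFlow_system>/flower/api/tasks?state=FAILURE"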

To view this information in the Flower user interface, navigate to <hostname_of_PowerFlow_system>/flower/dashboard.

For more information, see the Flower API Reference.

If you use the ScienceLogic: PowerFlow PowerPack to collect this task information, the PowerPack will create events in SL1 if a Flower task fails. For more information, see Using SL1 to Monitor PowerFlow.

Couchbase API

Couchbase Server is open-source database software that can be used for building scalable, interactive, and high-performance applications. Built using NoSQL technology, Couchbase Server can be used in either a standalone or cluster configuration.

The following image shows the Couchbase user interface, which you can access at port 8091:

The following Couchbase API endpoints return data about the Couchbase service. The pools endpoint represents the Couchbase cluster. In the case of PowerFlow, each node is a Docker service, and buckets represent the document-based data containers. These endpoints return configuration and statistical data about each of their corresponding Couchbase components.

<hostname_of_PowerFlow_system>:8091/pools/default. Retrieve a list of pools and nodes.

<hostname_of_PowerFlow_system>:8091/pools/default/buckets. Retrieve a list of buckets.
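
For example, a hedged sketch of querying the pools endpoint with curl, assuming the default isadmin credentials (the root password is the default Couchbase password, as described earlier in this manual):

    curl -u isadmin:<password> http://<hostname_of_PowerFlow_system>:8091/pools/default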

To view this information in the Couchbase Administrator user interface, navigate to <hostname_of_PowerFlow_system>:8091.

For more information, see the Couchbase API Reference.

You can also use the Couchbase PowerPack to collect this information. For more information, see Using SL1 to Monitor PowerFlow.

RabbitMQ

RabbitMQ is an open-source message-broker software that originally implemented the Advanced Message Queuing Protocol and has since been extended with a plug-in architecture to support Streaming Text Oriented Messaging Protocol, Message Queuing Telemetry Transport, and other protocols. 

The following image shows the RabbitMQ user interface, which you can access at port 15672:

Docker Statistics

You can collect Docker information by using SSH to connect to the Docker socket. You cannot currently retrieve Docker information by using the API.

To collect Docker statistics:

  1. Use SSH to connect to the PowerFlow instance.

  2. Run the following command:

    curl --unix-socket /var/run/docker.sock http://docker<PATH>

    where <PATH> is one of the following values:

  • /info
  • /containers/json
  • /images/json
  • /swarm
  • /nodes
  • /tasks
  • /services
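
    For example, to retrieve the list of containers:

    curl --unix-socket /var/run/docker.sock http://docker/containers/json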

You can also use the Docker PowerPack to collect this information. For more information, see Using SL1 to Monitor PowerFlow.