Using the powerflowcontrol (pfctl) Command-line Utility


This section describes how to use the powerflowcontrol (pfctl) command-line utility to run automatic cluster healthcheck and autoheal actions that will verify the configuration of your PowerFlow cluster or a single PowerFlow node. The powerflowcontrol utility also includes an autocluster action that performs multiple administrator-level actions on either the node or the cluster. You can use this action to automate the configuration of a three-node cluster.

The powerflowcontrol command-line utility was called iservicecontrol in previous releases of SL1 PowerFlow. You can use either "iservicecontrol" or "pfctl" in commands, but "iservicecontrol" will eventually be deprecated in favor of "pfctl".



What is the powerflowcontrol (pfctl) Utility?

The powerflowcontrol (pfctl) command-line utility included in PowerFlow contains automatic cluster healthcheck and autoheal actions that will verify the configuration of your cluster or single node. The utility also includes an autocluster action that performs multiple administrator-level actions on either the node or the cluster.

The powerflowcontrol utility is included in PowerFlow version 2.0.0 and later. If you are using an older version of PowerFlow, you can download the latest version of the utility from the ScienceLogic Support site.

To download the latest version of the powerflowcontrol (pfctl) command-line utility:

  1. Go to the ScienceLogic Support site at https://support.sciencelogic.com/s/.
  2. Click the Product Downloads tab and select PowerFlow. The PowerFlow page appears.
  3. Click the link to the current release. The Release Version page appears.
  4. In the Release Files section, click the link for the powerflowcontrol (pfctl) command-line utility. A Release File page appears.
  5. Click Download File at the bottom of the Release File page.

The powerflowcontrol command-line utility requires that port 22 be open on all host nodes.

You can use key-based authentication instead of username and password authentication for the powerflowcontrol command-line utility.

The powerflowcontrol command-line utility was updated to allow any user who meets the following requirements to run it. The default isadmin user already meets these requirements, so this update is relevant only if your PowerFlow environment uses custom users or processes.

The user requirements for working with powerflowcontrol include the following:

  • The user must belong to the iservices group

  • The user must belong to the docker group

  • The user must belong to the systemd-journal group, or have permission to view journalctl logs (to check for errors in Docker services)

  • The user must have sudo permission (to set PowerFlow configuration file group ownership)

If the isadmin (host) password contains a special character, such as an "@" or "#" symbol, you must enclose the credentials in single quotes in iservicecontrol or pfctl commands, such as 'user:password'. For example: pfctl --host 10.10.10.100 'isadmin:testing@is' --host 10.10.10.102 'isadmin:testing@is' --host 10.10.10.105 'isadmin:testing@is' autocluster
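As a quick sanity check, you can confirm that a single-quoted credential string reaches a command unchanged. This is a minimal sketch; the credentials are the placeholder values from the example above:

```shell
# Single quotes keep the shell from interpreting special characters
# before pfctl sees the argument.
# 'isadmin:testing@is' is the placeholder credential from the example above.
creds='isadmin:testing@is'
printf '%s\n' "$creds"
```

The same quoting applies to each host:credential pair on the pfctl command line.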

For a list of all of the actions you can run on a single node, SSH to the PowerFlow server and run the following command:

powerflowcontrol node-action --help

For a list of all of the actions you can run on a clustered system, run the following command:

powerflowcontrol cluster-action --help

healthcheck and autoheal

The powerflowcontrol (pfctl) command-line utility performs multiple administrator-level actions in a clustered PowerFlow environment. The powerflowcontrol utility contains automatic cluster healthcheck and autoheal capabilities that you can use to prevent issues with your PowerFlow environment:

  • The healthcheck action executes various commands to verify configurations, proxies, internal connectivity, the queue cluster, the database cluster, indexes, NTP settings, Docker versions across the cluster, and more. The healthcheck action also checks for previously reported troubleshooting issues.
  • The autoheal action automatically takes corrective action on your cluster.

After deploying any clusters in a PowerFlow system, or if you are troubleshooting an existing cluster, you should first run the healthcheck action to generate immediate diagnostics of the entire cluster and all services and containers associated with the cluster. If the healthcheck action finds any issues, you can run the autoheal action to attempt to address those issues.

healthcheck

The following commands show the formatting for a healthcheck action for a single node, followed by an example:

pfctl --host <host> <username>:<password> node-action --action healthcheck

pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action healthcheck

The following commands show the formatting for a healthcheck action for a clustered environment, followed by an example:

pfctl --host <host> <username>:<password> --host <host> <username>:<password> --host <host> <username>:<password> cluster-action --action healthcheck

pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action healthcheck

As a best practice, run the healthcheck action once a day on your PowerFlow system to identify and address any potential issues before those issues impact operations.
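One way to follow this best practice is to schedule the healthcheck with cron. The schedule, log path, hosts, and credentials below are illustrative placeholders, not values the product requires:

```
# Illustrative crontab entry (crontab -e): run the cluster healthcheck daily
# at 03:00 and append the results to a log file.
0 3 * * * pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action healthcheck >> /var/log/pfctl-healthcheck.log 2>&1
```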

autoheal

The following commands show the formatting for an autoheal action for a single node, followed by an example:

pfctl --host <host> <username>:<password> node-action --action autoheal

pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action autoheal

The following commands show the formatting for an autoheal action for a clustered environment, followed by an example:

pfctl --host <host> <username>:<password> --host <host> <username>:<password> --host <host> <username>:<password> cluster-action --action autoheal

pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action autoheal

Example Output

The following section lists example healthcheck output:

verify db host for cluster 10.2.11.222...........[OK]
check dex connectivity 10.2.11.222...............[OK]
check rabbit cluster count 10.2.11.222...........[OK]
check rabbit cluster alarms 10.2.11.222..........[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_steprunner not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_contentapi not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_steprunner not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_contentapi not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
/etc/iservices/isconfig.yml does not match between 10.2.11.222 and 10.2.11.232
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
/opt/iservices/scripts/docker-compose.yml does not match between 10.2.11.222 and 10.2.11.244
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
check cpu 10.2.11.222............................[OK]
check disk 10.2.11.222...........................[OK]
check memory 10.2.11.222.........................[Failed]
check cpu 10.2.11.232............................[OK]
check disk 10.2.11.232...........................[OK]
check memory 10.2.11.232.........................[OK]
check cpu 10.2.11.244............................[OK]
check disk 10.2.11.244...........................[OK]
check memory 10.2.11.244.........................[OK]
Utilization warnings in the cluster:
{'10.2.11.222': ['There is less than 2000mb memory available']}
verify ntp sync 10.2.11.222......................[OK]
verify ntp sync 10.2.11.232......................[OK]
verify ntp sync 10.2.11.244......................[OK]
check replica count logs 10.2.11.222.............[OK]
check replica count content 10.2.11.222..........[Failed]
Identified missing replicas on some buckets: ['Replica count for bucket: content is not the expected 2']
verify pingable addr 10.2.11.222.................[OK]
verify pingable addr 10.2.11.232.................[OK]
verify pingable addr 10.2.11.244.................[OK]
get exited container count 10.2.11.222...........[OK]
get exited container count 10.2.11.232...........[OK]
get exited container count 10.2.11.244...........[OK]
6 exited (stale) containers found cluster-wide
verify node indexes 10.2.11.222..................[Failed]
Some nodes are missing required indexes. Here are the nodes with the missing indexes: 
Missing the following indexes: {'couchbase.isnet': ['idx_casbin'], 'couchbase-worker2.isnet': 
['idx_content_configuration']}
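If you redirect healthcheck output to a file, you can scan it for failed checks with standard tools. A minimal sketch, using a few lines taken from the example output above:

```shell
# Write a few sample lines (copied from the example output above) to a file,
# then count the checks that reported [Failed].
cat > healthcheck.log <<'EOF'
check disk 10.2.11.222...........................[OK]
check memory 10.2.11.222.........................[Failed]
verify node indexes 10.2.11.222..................[Failed]
EOF
grep -c '\[Failed\]' healthcheck.log
```

Here grep reports 2 failed checks; against a full healthcheck run, the same command gives a quick pass/fail summary.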

Using powerflowcontrol healthcheck on the docker-compose file

You can also validate the docker-compose file with the powerflowcontrol healthcheck action. The action will show a message if pypiserver or dexserver services are not configured properly in the docker-compose file. You can fix these settings manually or with the powerflowcontrol autoheal action, which corrects the docker-compose file and copies it to all the nodes in the clustered environment.

When using version 1.3.0 or later of the powerflowcontrol (pfctl) command-line utility, the autocluster action validates and fixes the pypiserver and dexserver services definitions in the docker-compose file.

The healthcheck action in the powerflowcontrol command-line utility for PowerFlow clusters checks the Docker version on each host to ensure that the version is the same across the cluster.

autocluster

You can use the powerflowcontrol (pfctl) command-line utility to perform multiple administrator-level actions on your PowerFlow cluster. You can use the autocluster action with the powerflowcontrol command to automate the configuration of a three-node cluster.

If you are using another cluster configuration, the deployment process should be manual, because the powerflowcontrol utility only supports the automated configuration of a three-node cluster.

The autocluster action completely resets and removes all data from the system. When you run this action, you will be prompted to verify that you want to run the action and delete all data.

To automate the configuration of a three-node cluster, run the following command:

pfctl --host <PowerFlow_host1> <username>:<password> --host <PowerFlow_host2> <username>:<password> --host <PowerFlow_host3> <username>:<password> autocluster

For example:

pfctl --host 192.11.1.1 isadmin:passw0rd --host 192.11.1.2 isadmin:passw0rd --host 192.11.1.3 isadmin:passw0rd autocluster

Running this command will configure your PowerFlow three-node cluster without any additional manual steps required.

You can use the generate_haproxy_config cluster-action in the powerflowcontrol (pfctl) utility to create an HAProxy configuration template that lets you easily set up an HAProxy load balancer for a three-node cluster. For example: pfctl cluster-action --action generate_haproxy_config
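For reference, a minimal HAProxy configuration for TCP passthrough to a three-node cluster might look like the following. This is an illustrative sketch with placeholder IP addresses, not the template that generate_haproxy_config produces:

```
# Illustrative only: round-robin TCP passthrough to three PowerFlow nodes.
frontend powerflow_front
    bind *:443
    mode tcp
    default_backend powerflow_back

backend powerflow_back
    mode tcp
    balance roundrobin
    server node1 10.2.11.222:443 check
    server node2 10.2.11.232:443 check
    server node3 10.2.11.244:443 check
```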

upgrade

This topic explains how to use the upgrade action in the powerflowcontrol utility to upgrade a clustered environment from PowerFlow version 1.8.x to 2.x.x. The upgrade action is only for upgrading a cluster from a 1.8.x installation, and should not be run otherwise.

The powerflowcontrol utility cannot be installed on a 1.8.4 PowerFlow system because the utility is not compatible with Python 2.7. To use the utility on a 1.8.4 PowerFlow system, run the is_upgrade_to_v2.sh script on one of the cluster nodes. The script updates that node to Python 3.x; you can then download and install the powerflowcontrol utility on the upgraded node and upgrade the other nodes from it.

If you have an environment with Python 3.6 or later available, the powerflowcontrol package can be installed on that system (a local environment, a virtual machine, or another PowerFlow system). That environment must have an SSH connection to the PowerFlow nodes that you want to upgrade. Also, be aware that only certain actions, such as the upgrade action, can be run from an external system; most powerflowcontrol actions run only inside the cluster nodes.

To run the upgrade action in a clustered environment:

  1. Back up your PowerFlow data. For more information, see Backing up Data.
  2. Run the upgrade action with the powerflowcontrol utility:

    Decide whether the upgrade will be performed offline or online, and have ready the URL of the PowerFlow RPM file or the local path to the PowerFlow RPM or ISO file.

    For an offline upgrade, run the following command on the PowerFlow instance:

    pfctl --host <swarm-node1-ip> <is-username>:<is-password> --host <swarm-node2-ip> <is-username>:<is-password> node-action --action upgrade --upgrade_args offline <PowerFlow-iso-local-path>

    where <PowerFlow-iso-local-path> is a local path such as /home/isadmin/sl1-powerflow-2.1.2.iso.

    For an online upgrade, run the following command on the PowerFlow instance:

    pfctl --host <swarm-node1-ip> <is-username>:<is-password> --host <swarm-node2-ip> <is-username>:<is-password> node-action --action upgrade --upgrade_args online <PowerFlow-rpm-url-or-local-path>

  3. Manually open the firewall ports you need, or use the powerflowcontrol open_firewall_ports action.
  4. Synchronize NTP in all the nodes. For more information, see Preparing the PowerFlow System for High Availability.
  5. Verify that NTP is synchronized on all the nodes manually, or use the powerflowcontrol node action to verify it.
  6. To deploy PowerFlow, run the autocluster action using the powerflowcontrol utility if your clustered environment contains three nodes.

    If you are using another cluster configuration, the deployment process should be manual, because the powerflowcontrol utility only supports the automated configuration of a three-node cluster.

  7. Run the healthcheck cluster action using the powerflowcontrol utility to check that the cluster environment was deployed correctly.
  8. Run the autoheal cluster action using the powerflowcontrol utility to fix any errors found during the healthcheck action run. You should only run the autoheal action if the healthcheck action found errors.
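The commands in steps 2, 7, and 8 repeat the same --host <ip> <user>:<pass> arguments for every node. A small shell loop can assemble that argument list once; the IP addresses and credentials below are placeholders, and the final command is printed for review rather than executed:

```shell
# Build the repeated --host arguments from a list of node IPs, then print
# the resulting cluster-action command so you can inspect it before running.
hosts="10.2.11.222 10.2.11.232 10.2.11.244"
args=""
for h in $hosts; do
  args="$args --host $h isadmin:password"
done
echo "pfctl$args cluster-action --action healthcheck"
```

Swap healthcheck for autoheal (or any other cluster action) to reuse the same argument list.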

open_firewall_ports

To open firewall ports for a single node, SSH to the PowerFlow server and run the following command:

pfctl --host <is_host> isadmin:<password> node-action --action open_firewall_ports

Many of the other powerflowcontrol actions use the same format as the open_firewall_ports action, above. The only change you need to make for those commands is to replace the name of the action at the end of the command. For example: pfctl --host <is_host> isadmin:<password> node-action --action pull_latest_images

password

To encrypt a password using the powerflowcontrol (pfctl) command-line utility, SSH to the PowerFlow server and run the following command:

pfctl password encrypt

To view the current node password unencrypted, run the following command:

pfctl password decrypt

This command displays the decrypted password on standard output; it does not alter the contents of /etc/iservices/is_pass.

This command is used locally when the decrypted password is needed for tasks that are started remotely. On systems where the password is encrypted, decryption uses the contents of the encryption_key file. Remote nodes must also run the same version or later of the powerflowcontrol (pfctl) command-line utility, because the command is executed locally on each node.