The powerflowcontrol command-line utility was called iservicecontrol in previous releases of SL1 PowerFlow. You can use either "iservicecontrol" or "pfctl" in commands, but "iservicecontrol" will eventually be deprecated in favor of "pfctl".
The following video explains how to use the powerflowcontrol (pfctl) command-line utility:
What is the powerflowcontrol (pfctl) Utility?
The powerflowcontrol (pfctl) command-line utility included in PowerFlow contains automatic cluster healthcheck and autoheal actions that will verify the configuration of your cluster or single node. The utility also includes an autocluster action that performs multiple administrator-level actions on either the node or the cluster.
The powerflowcontrol utility is included in the latest release of PowerFlow. If you need a more recent version of the utility than the one included with your release, you can download the latest version from the ScienceLogic Support site at https://support.sciencelogic.com/s/.
To install the powerflowcontrol utility:
1. Go to the ScienceLogic Support site at https://support.sciencelogic.com/s/.
2. Click the tab and select PowerFlow. The PowerFlow page appears.
3. Click the link for the current release. The Release Version page appears.
4. In the Release Files section, click the link for the version of PowerFlow Control you want to download. The Release File Details page appears.
5. Click the button to download the .whl file for the powerflowcontrol utility.
6. Using WinSCP or another file-transfer utility, copy the .whl file to a directory on the PowerFlow system.
7. Go to the console of the PowerFlow system or use SSH to access the PowerFlow system.
8. To install the utility, run the following command:
sudo pip3 install iservicecontrol-x.x.x-py3-none-any.whl
where x.x.x is the pfctl version number.
9. To check the version number of the utility, run the following command:
pip3 show iservicecontrol
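Run end to end from an administrator workstation, the steps above reduce to a short sequence. This is a sketch only: the wheel filename, IP address, and user below are placeholders to replace with the values for your download and your PowerFlow system.

```shell
# Copy the downloaded wheel to the PowerFlow system, then install it over SSH.
# Filename, IP address, and user are placeholders for your environment.
scp iservicecontrol-x.x.x-py3-none-any.whl isadmin@10.2.11.222:/tmp/
ssh isadmin@10.2.11.222 \
  'sudo pip3 install /tmp/iservicecontrol-x.x.x-py3-none-any.whl && pip3 show iservicecontrol'
```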
The powerflowcontrol command-line utility requires port 22 on all host nodes.
You can use key-based authentication instead of username and password authentication for the powerflowcontrol command-line utility.
The powerflowcontrol command-line utility was updated to let any user run the powerflowcontrol utility. The default isadmin user already meets these requirements, and this update is relevant only if your PowerFlow environment uses custom users or processes.
The user requirements for working with powerflowcontrol include the following:
- The user must belong to the iservices group
- The user must belong to the docker group
- The user must belong to the systemd-journal group, or have permission to view journalctl logs (to check for errors in Docker services)
- The user must have sudo permission (to set PowerFlow configuration file group ownership)
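For a custom user, the requirements above can be applied with standard Linux commands. The sketch below assumes a hypothetical user named pfuser already exists and that the host uses the group names listed above, plus the wheel group for sudo permission (the usual arrangement on Oracle Linux-based PowerFlow hosts):

```shell
# Add a hypothetical custom user to the groups pfctl requires. Requires root.
sudo usermod -aG iservices,docker,systemd-journal pfuser
# Grant sudo permission via the wheel group (assumption; your host may use a
# sudoers entry instead).
sudo usermod -aG wheel pfuser
# Verify the memberships took effect.
id pfuser
```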
If the isadmin (host) password contains a special character, such as an "@" or "#" symbol, the credential pair must be enclosed in single quotes in pfctl commands, such as 'user:password'. For example: pfctl --host 10.10.10.100 'isadmin:testing@is' --host 10.10.10.102 'isadmin:testing@is' --host 10.10.10.105 'isadmin:testing@is' autocluster
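The quoting requirement can be checked in isolation: single quotes make the shell hand the credential pair to pfctl exactly as typed, with no expansion. The password below is a stand-in:

```shell
# Single quotes keep characters such as '@' or '#' from being interpreted by
# the shell, so pfctl receives the credential pair verbatim.
CRED='isadmin:testing@is'   # stand-in credentials, not a real password
printf '%s\n' "$CRED"
```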
For a detailed list of all of the actions you can run on a single node, SSH to the PowerFlow server and run the following command:
pfctl node-action --help
For a detailed list of all of the actions you can run on a clustered system, run the following command:
pfctl cluster-action --help
Starting with version 2.7.2 of the pfctl utility, you can run the following command to view updated and expanded help text:
pfctl --help
healthcheck and autoheal
The powerflowcontrol (pfctl) command-line utility performs multiple administrator-level actions in a clustered PowerFlow environment. The powerflowcontrol utility contains automatic cluster healthcheck and autoheal capabilities that you can use to prevent issues with your PowerFlow environment:
- The healthcheck action executes various commands to verify configurations, proxies, internal connectivity, the queue cluster, the database cluster, indexes, NTP settings, Docker versions on all cluster nodes, and more. The healthcheck action also covers previously reported troubleshooting issues.
- The autoheal action automatically takes corrective action on your cluster.
After deploying any clusters in a PowerFlow system, or if you are troubleshooting an existing cluster, you should first run the healthcheck action to generate immediate diagnostics of the entire cluster and all services and containers associated with the cluster. If the healthcheck action finds any issues, you can run the autoheal action to attempt to address those issues.
healthcheck
The following commands show the formatting for a healthcheck action for a single node, followed by an example:
pfctl --host <pf_host_ip_address> <username>:<password> node-action --action healthcheck
pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action healthcheck
The following commands show the formatting for a healthcheck action for a clustered environment, followed by an example:
pfctl --host <pf_host_ip_address> <username>:<password> --host <pf_host_ip_address> <username>:<password> --host <pf_host_ip_address> <username>:<password> cluster-action --action healthcheck
pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action healthcheck
As a best practice, run the healthcheck action once a day on your PowerFlow system to identify and address any potential issues with the system before those issues impact operations.
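One way to schedule that daily run is a cron entry on a management host. Everything in this fragment is an assumption to adapt: the 06:00 schedule, the install path, the address, the credentials, and the log file location.

```shell
# Hypothetical crontab entry: run the healthcheck every day at 06:00 and
# append the results to a log file. Adjust path, host, and credentials.
0 6 * * * /usr/local/bin/pfctl --host 10.2.11.222 'isadmin:password' node-action --action healthcheck >> /var/log/pfctl-healthcheck.log 2>&1
```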
autoheal
The following commands show the formatting for an autoheal action for a single node, followed by an example:
pfctl --host <host> <username>:<password> node-action --action autoheal
pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action autoheal
The following commands show the formatting for an autoheal action for a clustered environment, followed by an example:
pfctl --host <host> <username>:<password> --host <host> <username>:<password> --host <host> <username>:<password> cluster-action --action autoheal
pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action autoheal
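Because cluster-action commands repeat the --host pair once per node, a small wrapper can assemble the argument list from a host list. This is a sketch with placeholder addresses and passwords; echo prints the assembled command instead of executing pfctl.

```shell
# Assemble the repeated --host arguments for a pfctl cluster action from a
# list of "<ip> <user:password>" pairs (placeholders shown).
hosts=("10.2.11.222 isadmin:isadmin222"
       "10.2.11.232 isadmin:isadmin232"
       "10.2.11.244 isadmin:isadminpass")
args=()
for h in "${hosts[@]}"; do
  args+=(--host $h)   # unquoted on purpose: splits into "<ip>" "<user:password>"
done
# echo shows the command that would run; replace echo with nothing to execute.
echo pfctl "${args[@]}" cluster-action --action autoheal
```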
Example Output
The following section lists example healthcheck output:
verify db host for cluster 10.2.11.222...........[OK]
check dex connectivity 10.2.11.222...............[OK]
check rabbit cluster count 10.2.11.222...........[OK]
check rabbit cluster alarms 10.2.11.222..........[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_steprunner not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_contentapi not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_steprunner not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_contentapi not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
/etc/iservices/isconfig.yml does not match between 10.2.11.222 and 10.2.11.232
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
/opt/iservices/scripts/docker-compose.yml does not match between 10.2.11.222 and 10.2.11.244
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
check cpu 10.2.11.222............................[OK]
check disk 10.2.11.222...........................[OK]
check memory 10.2.11.222.........................[Failed]
check cpu 10.2.11.232............................[OK]
check disk 10.2.11.232...........................[OK]
check memory 10.2.11.232.........................[OK]
check cpu 10.2.11.244............................[OK]
check disk 10.2.11.244...........................[OK]
check memory 10.2.11.244.........................[OK]
Utilization warnings in the cluster: {'10.2.11.222': ['There is less than 2000mb memory available']}
verify ntp sync 10.2.11.222......................[OK]
verify ntp sync 10.2.11.232......................[OK]
verify ntp sync 10.2.11.244......................[OK]
check replica count logs 10.2.11.222.............[OK]
check replica count content 10.2.11.222..........[Failed]
Identified missing replicas on some buckets: ['Replica count for bucket: content is not the expected 2']
verify pingable addr 10.2.11.222.................[OK]
verify pingable addr 10.2.11.232.................[OK]
verify pingable addr 10.2.11.244.................[OK]
get exited container count 10.2.11.222...........[OK]
get exited container count 10.2.11.232...........[OK]
get exited container count 10.2.11.244...........[OK]
6 exited (stale) containers found cluster-wide
verify node indexes 10.2.11.222..................[Failed]
Some nodes are missing required indexes. Here are the nodes with the missing indeces: Missing the following indexes: {'couchbase.isnet': ['idx_casbin'], 'couchbase-worker2.isnet': ['idx_content_configuration']}
Using powerflowcontrol healthcheck on the docker-compose file
You can also validate the docker-compose file with the powerflowcontrol healthcheck action. The action will show a message if pypiserver or dexserver services are not configured properly in the docker-compose file. You can fix these settings manually or with the powerflowcontrol autoheal action, which corrects the docker-compose file and copies it to all the nodes in the clustered environment.
When using version 1.3.0 or later of the powerflowcontrol (pfctl) command-line utility, the autocluster action validates and fixes the pypiserver and dexserver services definitions in the docker-compose file.
The healthcheck action in the powerflowcontrol command-line utility for PowerFlow clusters will check the Docker version for each cluster to ensure that the Docker version is the same in all the hosts.
autocluster
You can use the powerflowcontrol (pfctl) command-line utility to perform multiple administrator-level actions on your PowerFlow cluster. You can use the autocluster action with the powerflowcontrol command to automate the configuration of a three-node cluster.
If you are using another cluster configuration, you must deploy it manually, because the powerflowcontrol utility only supports the automated configuration of a three-node cluster.
The autocluster action will completely reset and remove all data from the system. When you run this action, you are prompted to confirm that you want to run the action and delete all data.
To automate the configuration of a three-node cluster, run the following command:
pfctl --host <pf_host1> <username>:<password> --host <pf_host2> <username>:<password> --host <pf_host3> <username>:<password> autocluster
For example:
pfctl --host 192.11.1.1 isadmin:passw0rd --host 192.11.1.2 isadmin:passw0rd --host 192.11.1.3 isadmin:passw0rd autocluster
Running this command will configure your PowerFlow three-node cluster without any additional manual steps required.
You can use the generate_haproxy_config cluster-action in the powerflowcontrol (pfctl) utility to create an HAProxy configuration template that lets you easily set an HAProxy load balancer for a three-node cluster. For example: pfctl cluster-action --action generate_haproxy_config
collect_pf_logs
To collect additional logs for troubleshooting, SSH to the PowerFlow server and run the following command:
pfctl --host <pf_host_IP_address> <username>:<password> node-action --action collect_pf_logs
You can also gather Couchbase data by running the following command:
pfctl --host <pf_host_ip_address> <username>:<password> --json couchbase --cli_type statsDirectory --bucket_name logs
Many of the other powerflowcontrol actions use the same format as the collect_pf_logs action, above. The only change you need to make for those commands is to replace the name of the action at the end of the command. For example: pfctl --host <pf_host_IP_address> isadmin:<password> node-action --action pull_latest_images
open_firewall_ports
To open firewall ports for a single node, SSH to the PowerFlow server and run the following command:
pfctl --host <pf_host_IP_address> isadmin:<password> node-action --action open_firewall_ports
password
To encrypt a password using the powerflowcontrol (pfctl) command-line utility, SSH to the PowerFlow server and run the following command:
pfctl password encrypt
To view the current node password unencrypted, run the following command:
pfctl password decrypt
This command prints the decrypted password to standard output only; it does not alter the contents of /etc/iservices/is_pass.
This command is used locally when the decrypted password is needed for certain tasks that are, in turn, started remotely. On systems where the password is encrypted, decryption uses the contents of the encryption_key file. Remote nodes must also run the same or a later version of the powerflowcontrol (pfctl) command-line utility, because the command executes locally on each node.
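Because the decrypted value goes to stdout, it is straightforward to capture for a follow-on local task. This is a sketch, assuming pfctl is installed and the node password is configured:

```shell
# Capture the decrypted node password from stdout into a shell variable for a
# follow-on local task; the file on disk is left untouched.
PF_PASS="$(pfctl password decrypt)"
```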