The powerflowcontrol command-line utility was called iservicecontrol in previous releases of SL1 PowerFlow. You can use either "iservicecontrol" or "pfctl" in commands, but "iservicecontrol" will eventually be deprecated in favor of "pfctl".
The following video explains how to use the powerflowcontrol (pfctl) command-line utility:
What is the powerflowcontrol (pfctl) Utility?
The powerflowcontrol (pfctl) command-line utility included in PowerFlow contains automatic cluster healthcheck and autoheal actions that will verify the configuration of your cluster or single node. The utility also includes an autocluster action that performs multiple administrator-level actions on either the node or the cluster.
The powerflowcontrol utility is included in the latest release of PowerFlow. If you need a more recent version of the utility than the one included with your release, you can download the latest version from the ScienceLogic Support site at https://support.sciencelogic.com/s/.
The powerflowcontrol command-line utility requires that port 22 be open on all host nodes.
You can use key-based authentication instead of username and password authentication for the powerflowcontrol command-line utility.
If the isadmin (host) password contains a special character, such as an "@" or "#" symbol, you must escape the password in pfctl commands by wrapping the credentials in single quotes, such as 'user:password'. For example: pfctl --host 10.10.10.100 'isadmin:testing@is' --host 10.10.10.102 'isadmin:testing@is' --host 10.10.10.105 'isadmin:testing@is' autocluster
The powerflowcontrol command-line utility was updated to let any user run the powerflowcontrol utility. The default isadmin user already meets these requirements, and this update is relevant only if your PowerFlow environment uses custom users or processes.
User Requirements for using the powerflowcontrol (pfctl) utility
The user requirements for working with powerflowcontrol include the following:
- The user must belong to the iservices group.
- The user must belong to the docker group.
- The user must belong to the systemd-journal group, or have permission to view journalctl logs (to check for errors in Docker services).
- The user must have sudo permission (to set PowerFlow configuration file group ownership).
The pfctl utility does not require sudo permission to execute cluster and node actions. However, if you run pfctl once as sudo, the files it creates become root-owned, and you will need to continue using sudo to modify them. ScienceLogic recommends that you interact with pfctl without sudo, using a non-root user (like isadmin) that is part of the iservices group. To reset the file ownership, clear out the files from /tmp and re-run the pfctl utility without sudo, as a user that belongs to the iservices group.
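To illustrate the recovery step above, the following sketch removes the root-owned temporary files and re-runs the utility as a non-root user. The exact file names under /tmp and the host credentials are assumptions for illustration:

```shell
# Sketch: recover after accidentally running pfctl as sudo.
# Assumption: the root-owned working files live under /tmp; the exact
# file names vary by pfctl version.
sudo rm -rf /tmp/pfctl* /tmp/iservicecontrol*

# Re-run pfctl without sudo, as a user in the iservices group
# (host and credentials are placeholders):
pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action healthcheck
```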
Installing the powerflowcontrol (pfctl) utility
To download and install the powerflowcontrol utility:
- Go to the ScienceLogic Support site at https://support.sciencelogic.com/s/.
- Click the Product Downloads tab and select PowerFlow. The PowerFlow page appears.
- Click the link for the current release. The Release Version page appears.
- In the Release Files section, click the link for the version of PowerFlow Control you want to download. The Release File Details page appears.
- Click the button to download the .whl file for the powerflowcontrol utility.
- Using WinSCP or another file-transfer utility, copy the .whl file to a directory on the PowerFlow system.
- Go to the console of the PowerFlow system or use SSH to access the PowerFlow system.
- To install the utility, run the following command:
sudo pip3 install iservicecontrol-x.x.x-py3-none-any.whl
where x.x.x is the pfctl version number.
- To check the version number of the utility, run the following command:
pip3 show iservicecontrol
Getting Help with the powerflowcontrol (pfctl) utility
For a detailed list of all of the actions you can run on a single node, SSH to the PowerFlow server and run the following command:
pfctl node-action --help
For a detailed list of all of the actions you can run on a clustered system, run the following command:
pfctl cluster-action --help
To view updated and expanded help text, run the following command:
pfctl --help
To check the installed pfctl version, run the following command:
pfctl --version
healthcheck and autoheal
The powerflowcontrol (pfctl) command-line utility performs multiple administrator-level actions in a clustered PowerFlow environment. The powerflowcontrol utility contains automatic cluster healthcheck and autoheal capabilities that you can use to prevent issues with your PowerFlow environment:
- The healthcheck action executes various commands to verify configurations, proxies, internal connectivity, queue cluster, database cluster, indexes, NTP settings, Docker versions on all clusters, and more. The healthcheck action also covers issues identified in previously reported troubleshooting cases.
- The autoheal action automatically takes corrective action on your cluster.
After deploying any clusters in a PowerFlow system, or if you are troubleshooting an existing cluster, you should first run the healthcheck action to generate immediate diagnostics of the entire cluster and all services and containers associated with the cluster. If the healthcheck action finds any issues, you can run the autoheal action to attempt to address those issues.
You can view the current PowerFlow version and the installed pfctl version if you add --json at the start of the healthcheck command.
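For example, based on the option placement described above, a JSON healthcheck call might look like the following sketch (the host and credentials are placeholders):

```shell
# Sketch: run a node healthcheck with JSON output; the JSON output
# includes the PowerFlow version and the installed pfctl version.
pfctl --json --host 10.2.11.222 isadmin:isadmin222 node-action --action healthcheck
```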
healthcheck
The following commands show the formatting for a healthcheck action for a single node, followed by an example:
pfctl --host <pf_host_ip_address> <username>:<password> node-action --action healthcheck
pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action healthcheck
The following commands show the formatting for a healthcheck action for a clustered environment, followed by an example:
pfctl --host <pf_host_ip_address> <username>:<password> --host <pf_host_ip_address> <username>:<password> --host <pf_host_ip_address> <username>:<password> cluster-action --action healthcheck
pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action healthcheck
As a best practice, run the healthcheck action once a day on your PowerFlow system to identify and address any potential issues before they impact operations.
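One way to run the healthcheck daily is with a cron schedule. The following crontab entry is an illustrative sketch; the run time, hosts, credentials, and log path are all assumptions:

```shell
# Illustrative crontab entry (edit with: crontab -e, as the isadmin user).
# Runs the cluster healthcheck every day at 06:00 and appends the output
# to a log file for review.
0 6 * * * pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action healthcheck >> /var/log/pfctl-healthcheck.log 2>&1
```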
Additional Features with the healthcheck Action
Starting with version 2.7.4 of the pfctl utility, the healthcheck node-actions and cluster-actions include the following features:
- check_debug_run. Checks if you have run any debug-level runs of PowerFlow applications in the past day and provides a notification if you have.
- check_schedule_debug_enable. Checks if you have scheduled any debug-level runs of PowerFlow applications and provides a notification if you have.
autoheal
The following commands show the formatting for an autoheal action for a single node, followed by an example:
pfctl --host <host> <username>:<password> node-action --action autoheal
pfctl --host 10.2.11.222 isadmin:isadmin222 node-action --action autoheal
The following commands show the formatting for an autoheal action for a clustered environment, followed by an example:
pfctl --host <host> <username>:<password> --host <host> <username>:<password> --host <host> <username>:<password> cluster-action --action autoheal
pfctl --host 10.2.11.222 isadmin:isadmin222 --host 10.2.11.232 isadmin:isadmin232 --host 10.2.11.244 isadmin:isadminpass cluster-action --action autoheal
Example Output
The following is example healthcheck output:
verify db host for cluster 10.2.11.222...........[OK]
check dex connectivity 10.2.11.222...............[OK]
check rabbit cluster count 10.2.11.222...........[OK]
check rabbit cluster alarms 10.2.11.222..........[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.222..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_steprunner not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_contentapi not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_steprunner not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.232..............[Skipped - iservices_contentapi not found on 10.2.11.232]
verify cmd in container 10.2.11.232..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
verify cmd in container 10.2.11.244..............[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
/etc/iservices/isconfig.yml does not match between 10.2.11.222 and 10.2.11.232
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
/opt/iservices/scripts/docker-compose.yml does not match between 10.2.11.222 and 10.2.11.244
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
get file hash 10.2.11.222........................[OK]
get file hash 10.2.11.232........................[OK]
get file hash 10.2.11.244........................[OK]
check cpu 10.2.11.222............................[OK]
check disk 10.2.11.222...........................[OK]
check memory 10.2.11.222.........................[Failed]
check cpu 10.2.11.232............................[OK]
check disk 10.2.11.232...........................[OK]
check memory 10.2.11.232.........................[OK]
check cpu 10.2.11.244............................[OK]
check disk 10.2.11.244...........................[OK]
check memory 10.2.11.244.........................[OK]
Utilization warnings in the cluster: {'10.2.11.222': ['There is less than 2000mb memory available']}
verify ntp sync 10.2.11.222......................[OK]
verify ntp sync 10.2.11.232......................[OK]
verify ntp sync 10.2.11.244......................[OK]
check replica count logs 10.2.11.222.............[OK]
check replica count content 10.2.11.222..........[Failed]
Identified missing replicas on some buckets: ['Replica count for bucket: content is not the expected 2']
verify pingable addr 10.2.11.222.................[OK]
verify pingable addr 10.2.11.232.................[OK]
verify pingable addr 10.2.11.244.................[OK]
get exited container count 10.2.11.222...........[OK]
get exited container count 10.2.11.232...........[OK]
get exited container count 10.2.11.244...........[OK]
6 exited (stale) containers found cluster-wide
verify node indexes 10.2.11.222..................[Failed]
Some nodes are missing required indexes. Here are the nodes with the missing indeces: Missing the following indexes: {'couchbase.isnet': ['idx_casbin'], 'couchbase-worker2.isnet': ['idx_content_configuration']}
Using powerflowcontrol healthcheck on the docker-compose file
You can also validate the docker-compose file with the powerflowcontrol healthcheck action. The action will show a message if pypiserver or dexserver services are not configured properly in the docker-compose file. You can fix these settings manually or with the powerflowcontrol autoheal action, which corrects the docker-compose file and copies it to all the nodes in the clustered environment.
When using version 1.3.0 or later of the powerflowcontrol (pfctl) command-line utility, the autocluster action validates and fixes the pypiserver and dexserver services definitions in the docker-compose file.
The healthcheck action in the powerflowcontrol command-line utility for PowerFlow clusters checks the Docker version on each host to ensure that the version is the same across all hosts in the cluster.
autocluster
You can use the powerflowcontrol (pfctl) command-line utility to perform multiple administrator-level actions on your PowerFlow cluster. You can use the autocluster action with the powerflowcontrol command to automate the configuration of a three-node cluster.
If you are using another cluster configuration, the deployment process should be manual, because the powerflowcontrol utility only supports the automated configuration of a three-node cluster.
The autocluster action completely resets and removes all data from the system. When you run this action, you are prompted to verify that you want to run the action and delete all data.
To automate the configuration of a three-node cluster, run the following command:
pfctl --host <pf_host1> <username>:<password> --host <pf_host2> <username>:<password> --host <pf_host3> <username>:<password> autocluster
For example:
pfctl --host 192.11.1.1 isadmin:passw0rd --host 192.11.1.2 isadmin:passw0rd --host 192.11.1.3 isadmin:passw0rd autocluster
Running this command will configure your PowerFlow three-node cluster without any additional manual steps required.
You can use the generate_haproxy_config cluster-action in the powerflowcontrol (pfctl) utility to create an HAProxy configuration template that lets you easily set an HAProxy load balancer for a three-node cluster. For example: pfctl cluster-action --action generate_haproxy_config
apply_<n>GB_override, verify_<n>GB_override
The actions in this topic are available in the powerflowcontrol (pfctl) utility version 2.7.4 and later.
You can use the following cluster-actions to apply and verify 16 GB, 32 GB, and 64 GB overrides to SaaS PowerFlow systems only. These actions let you control the memory allocation of the PowerFlow nodes and ensure full replication of all services in any failover scenario. In addition, when you run these actions on a SaaS PowerFlow system, the docker-compose.yml file is updated with deployment configurations specific to a SaaS environment.
- apply_16GB_override and verify_16GB_override. These settings support approximately 25,000 to 30,000 devices, depending on the relationship depth of the devices.
- apply_32GB_override and verify_32GB_override. These settings support up to approximately 70,000 devices.
- apply_64GB_override and verify_64GB_override.
The following command is an example of a pfctl command to apply the 32 GB override:
pfctl --host 10.10.10.100 'isadmin:testing@is' --host 10.10.10.102 'isadmin:testing@is' --host 10.10.10.105 'isadmin:testing@is' cluster-action --action apply_32GB_override
When you run the override actions listed above, the updates are applied automatically to the PowerFlow server as well as to the docker-compose.yml file. You do not need to redeploy the whole stack.
For more information, see Recommended Memory Allocation of PowerFlow Nodes.
check_docker_service_update_status
The check_docker_service_update_status action is available in the powerflowcontrol (pfctl) utility version 2.7.4 and later.
The check_docker_service_update_status action iterates over all the running services in PowerFlow and checks the status of the Docker service after running a docker service update command. You can run this action as a node-action or a cluster-action.
For example:
pfctl --config config.yml cluster-action --action check_docker_service_update_status
In addition, you can use the --update-parallelism option with the docker service update command, with a value of 0, to update all replicas of a service at once.
Use the following format for the docker service update command:
docker service update --update-parallelism <uint> <configurations_to_update> <service_name>
where <uint> is the number of replicas that you want to update in parallel. Use a value of 0 to update all replicas at once. For example:
docker service update --update-parallelism 0 iservices_couchbase-worker --env-add AUTO_REBALANCE=true
collect_pf_logs
To collect additional logs for troubleshooting, SSH to the PowerFlow server and run the following command:
pfctl --host <pf_host_IP_address> <username>:<password> node-action --action collect_pf_logs
You can also gather Couchbase data by running the following command:
pfctl --host <pf_host_ip_address> <username>:<password> --json couchbase --cli_type statsDirectory --bucket_name logs
Many of the other powerflowcontrol actions use the same format as the collect_pf_logs action, above. The only change you need to make for those commands is to replace the name of the action at the end of the command. For example: pfctl --host <pf_host_IP_address> isadmin:<password> node-action --action pull_latest_images
open_firewall_ports
To open firewall ports for a single node, SSH to the PowerFlow server and run the following command:
pfctl --host <pf_host_IP_address> isadmin:<password> node-action --action open_firewall_ports
password
To encrypt a password using the powerflowcontrol (pfctl) command-line utility, SSH to the PowerFlow server and run the following command:
pfctl password encrypt
To view the current node password unencrypted, run the following command:
pfctl password decrypt
This command displays the decrypted password on standard output. It does not alter the contents of /etc/iservices/is_pass; it only decrypts the value to stdout.
The pfctl utility uses this command locally when the decrypted password is needed for tasks that are, in turn, started remotely. On systems where the password is encrypted, the password is decrypted using the contents of the encryption_key file. Remote nodes must also run the same version (or later) of the powerflowcontrol (pfctl) command-line utility, because the command is executed locally on each node.
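Because the decrypted value is written to stdout, a script can capture it in a variable. This is a sketch of one possible use, not a documented workflow; it assumes pfctl is on the PATH and the current user can read the password file:

```shell
# Sketch: capture the decrypted node password for use in a local script.
# Assumption: pfctl is installed and /etc/iservices/is_pass is readable.
PF_PASSWORD="$(pfctl password decrypt)"
```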