New installations of SL1 Extended Architecture are available only on SaaS deployments.
Use the following menu options to navigate the SL1 user interface:
- To view a pop-out list of menu options, click the menu icon.
- To view a page containing all of the menu options, click the Advanced menu icon.
Workflow
The following sections describe the steps to plan and deploy an SL1 update.
If you would like assistance planning an upgrade path that minimizes downtime, contact your Customer Success Manager.
The workflow for upgrading SL1 is:
- Plan the update.
- Schedule maintenance windows.
- Review pre-upgrade best practices for SL1.
- Back up SSL certificates.
- Set the timeout for PhoneHome Watchdog.
- Adjust the timeout for slow connections.
- Run the system status script on the Database Server or All-In-One before upgrading.
- Upgrade the SL1 Distributed Architecture using the System Update tool (System > Tools > Updates).
- Remove SL1 appliances from maintenance mode.
- Upgrade the Extended Architecture.
- Upgrade MariaDB, if needed.
- Reboot SL1 appliances, if needed.
- Restore SSL certificates.
- Reset the timeout for PhoneHome Watchdog.
- Update the default PowerPacks.
- Configure Subscription Billing (one time only). For details, see .
For details on all steps in this list except step 10, see the section on Upgrading SL1.
Prerequisites
- ScienceLogic recommends that for production systems, each Compute Cluster contains six (6) Compute Nodes. Lab systems can continue to use Compute Clusters that include only three (3) Compute Nodes.
- The Storage Cluster requires a node (possibly an additional one) to act as the Storage Manager.
- Perform the installation steps in the Installation manual to install these additional nodes (for the Compute Cluster and the Storage Cluster) before upgrading your existing nodes.
- Ensure that all nodes in the SL1 Extended Architecture can access the internet.
- You must use the same password for the em7admin account during ISO installation of the Database Server and ISO installation of the appliances in the SL1 Extended Architecture.
To perform the upgrade, you must have a ScienceLogic customer account that allows you access to the Harbor repository page on the ScienceLogic Support Site. To verify your access, go to https://registry.scilo.tools/harbor/. For more information about obtaining Harbor login credentials, contact your Customer Success Manager.
Resizing the Disks on the Compute Node
The Kafka Messaging service requires additional disk space on each Compute Node. Before upgrading, ensure that each disk on each existing Compute Node in the Compute Node cluster is at least 350 GB.
If each disk on each existing Compute Node is not at least 350 GB, perform the following steps on each Compute Node:
- Resize the hard disk via your hypervisor to at least 350 GB.
- Note the name of the disk that you expanded in your hypervisor.
- Power on the virtual machine.
- Either go to the console of the Compute Node or use SSH to access the Compute Node.
- Open a shell session on the server.
- Log in with the system password for the Compute Node.
- At the shell prompt, enter the following, where disk_size is your hard disk size from step 1:
sudo lsblk | grep <disk_size>
- Note the name of the disk that you expanded in your hypervisor.
- At the shell prompt, enter the following, where disk_name is the name of the disk you want to expand:
sudo fdisk /dev/<disk_name>
- Enter p to print the partition table.
- Enter n to add a new partition.
- Enter p to make the new partition the primary partition.
- Select the default values for partition number, first sector, and last sector.
- Enter w to save these changes.
- Restart the VM.
- At the shell prompt, enter:
sudo fdisk -l
- Notice that another partition is now present.
- To initialize the new partition as a physical volume, enter the following at the shell prompt:
sudo pvcreate <partition_name>
- To add the physical volume to the existing volume group, enter the following at the shell prompt:
sudo vgextend em7vg <partition_name>
- To verify that the volume group has grown to the expected size, enter the following at the shell prompt:
sudo vgdisplay | grep "VG Size"
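The LVM steps above can be collected into a single script. The sketch below is illustrative only: DISK and PART are hypothetical device names that you must replace with the values from your own system, and DRY_RUN=1 makes the script print the commands instead of executing them so you can review them first.

```shell
#!/bin/sh
# Sketch of the post-resize LVM steps; DISK and PART are hypothetical
# placeholders -- substitute the device names from your hypervisor.
DISK=/dev/sdb
PART=/dev/sdb1
DRY_RUN=1

run() {
  # In dry-run mode, print the command; otherwise execute it with sudo.
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    sudo "$@"
  fi
}

run pvcreate "$PART"          # initialize the new partition as a physical volume
run vgextend em7vg "$PART"    # add the physical volume to the em7vg volume group
run vgdisplay                 # confirm the new VG Size
```

Set DRY_RUN=0 only after the printed commands match the partition you actually created.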
Installing ORAS
If you have not already installed OCI Registry as Storage (ORAS), you will need to do so before you can upgrade the SL1 Extended Architecture.
To do so:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
- Run the following commands:
cd sl1x-deploy
sudo su
curl -LO https://github.com/oras-project/oras/releases/download/v0.12.0/oras_0.12.0_linux_amd64.tar.gz
mkdir -p oras-install/
tar -zxf oras_0.12.0_*.tar.gz -C oras-install/
mv oras-install/oras /usr/bin/
rm -rf oras_0.12.0_*.tar.gz oras-install/
exit
Obtaining Your Harbor Credentials
You will need to know your Harbor username and CLI secret when you upgrade the SL1 Extended Architecture. To obtain these credentials:
- Log in to Harbor at: https://registry.scilo.tools/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects
- Click .
- Click .
- Log in with the username and credentials that you use to access the ScienceLogic Support site (support.sciencelogic.com).
- Click the username in the upper right and select User Profile.
- On the User Profile page:
- Note the username.
- Click the pages icon next to the CLI secret field to copy the CLI secret to your clipboard.
- Exit the browser session.
Upgrading to 12.2.x
Before upgrading to SL1 12.2.0 or later, you must already be running SL1 on Oracle Linux 8 (OL8). If you are on a version of SL1 prior to 12.1.1 and running on OL7, you must first upgrade to SL1 12.1.1 or 12.1.2 and then migrate to OL8 before you can upgrade to SL1 12.2.x. For an overview of potential upgrade paths and their required steps, see the appropriate 12.2.x SL1 release notes.
Upgrading from 12.1.1 to 12.2.x
To upgrade the SL1 Extended Architecture to 12.2.x from 12.1.1, follow these steps:
-
If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 9.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
-
In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
-
Back up the following files:
/home/em7admin/sl1x-deploy/sl1x-inv.yml
/home/em7admin/sl1x-deploy/output-files/cluster.yml
/home/em7admin/sl1x-deploy/output-files/cluster.rkestate
/home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
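One way to script this backup step is sketched below; the backup_files helper and the destination path are assumptions for illustration, not part of the product.

```shell
# Hypothetical helper: copy each listed file to a backup directory,
# warning about any file that does not exist.
backup_files() {
  dest="$1"; shift
  mkdir -p "$dest"
  for f in "$@"; do
    if [ -f "$f" ]; then
      cp "$f" "$dest"/
    else
      echo "warning: $f not found, skipping"
    fi
  done
}

# On the Management Node this might be invoked as:
# backup_files "/home/em7admin/deploy-backup-$(date +%Y%m%d%H%M%S)" \
#     /home/em7admin/sl1x-deploy/sl1x-inv.yml \
#     /home/em7admin/sl1x-deploy/output-files/cluster.yml \
#     /home/em7admin/sl1x-deploy/output-files/cluster.rkestate \
#     /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
```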
-
Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
-
Check for any failed charts:
helm ls | awk '/FAILED/'
-
If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
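To see what the awk filter in these two commands selects, here is a standalone demonstration against canned sample output; the release names are made up for illustration:

```shell
# Canned sample of `helm ls` output (illustrative release names only).
sample='NAME       NAMESPACE  REVISION  STATUS
good-app   default    1         deployed
bad-app    default    1         FAILED'

# The same awk program used above: keep lines containing FAILED and
# print the first column (the release name to delete).
failed=$(printf '%s\n' "$sample" | awk '/FAILED/ { print $1 }')
echo "$failed"   # prints: bad-app
```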
-
Exit the Ansible shell session:
exit
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
-
Enter the username you used to log in to the browser-based session of Harbor.
-
Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.2
cd sl1x-deploy
-
Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
-
Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
-
Make sure that the sl1_version value is set to the latest service version for the 12.2 code line.
-
Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
-
Save your changes and exit the file (:wq).
-
Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
-
Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
-
Run the following deploy command at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:
docker-compose -f docker-compose.external.yml run --rm deploy cn
-
Update the SL1 Extended system services:
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
-
Update the Storage Manager:
docker-compose -f docker-compose.external.yml run --rm deploy sm
-
Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
-
Update your classic SL1 appliances. For more information, see the section on Updating SL1.
ScienceLogic recommends that you back up these files at regular intervals.
Upgrading from 12.1.0 (OL8) to 12.2.x
To upgrade the SL1 Extended Architecture to 12.2.x from 12.1.0 instances running on Oracle Linux 8 (OL8), follow these steps:
- Complete preupgrade steps.
- Upgrade with Scylla or disable Scylla.
- Upgrade the SL1 Distributed Architecture.
Step 1: Preupgrade
-
If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
-
In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Exit out of the sl1x-deploy directory and download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.2
cd sl1x-deploy
-
Copy the inventory template file to the sl1x-inv.yml file:
cp sl1x-inv-template.yml sl1x-inv.yml
-
Edit the sl1x-inv.yml file to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest service version for the 12.2.x code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml file, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
-
Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
Step 2: Upgrade with Scylla or Disable the Scylla Cluster
On-premises SL1 users have the following options with regard to the Scylla cluster:
- Option 1: Upgrade with Scylla. This option upgrades RKE and Kubernetes on the Compute Nodes and updates the system services while continuing to utilize Scylla.
- Option 2: Disable Scylla. This option is available for users who do not utilize SL1's machine learning-based anomaly detection feature.
Option 1: Upgrade with Scylla
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
To upgrade RKE and Kubernetes on the Compute Nodes, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy cn
-
To update the SL1 Extended Architecture system services, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy app
Option 2: Disable Scylla
If you do not utilize SL1's machine learning-based anomaly detection service, you have the option to remove existing Scylla databases from your Storage Nodes. This serves to lower resource utilization and cost. After disabling Scylla from a Storage Node, you can then opt to delete that Storage Node.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Open a text editor for the sl1x-inv.yml file:
vi /home/em7admin/sl1x-deploy/sl1x-inv.yml
-
Edit the file:
all:
vars:
install_aiml: false
enableNonScyllaPipeline: true
enableLegacyScyllaPipeline: false
-
In that same file, remove the Storage Node and Storage Manager IP addresses from the list. For example, you would remove the following lines:
sn:
hosts:
#10.2.253.90: # ip of storage node 1
#10.2.253.91: # ip of storage node 2
#10.2.253.92: # ip of storage node 3
vars:
# roles/sn-scylla
scylla_admin_username: em7admin # scylla admin username
scylla_admin_password: <password> # scylla admin password
sm:
hosts:
#10.2.253.82: # ip of sm
-
Save your changes and exit the file (:wq).
-
To upgrade RKE and Kubernetes on the Compute Nodes, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy cn
-
To remove services, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
-
To deploy the updated services with the non-Scylla configuration, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy app
Step 3: Upgrade the SL1 Distributed Architecture
Update your classic SL1 appliances. For more information, see the section on Updating SL1.
Upgrading to 12.1.2
Upgrading from 12.1.1 (OL8) to 12.1.2 (OL8)
To upgrade the SL1 Extended Architecture to 12.1.2 running on Oracle Linux 8 (OL8) from 12.1.1 running on OL8, follow these steps:
- Complete preupgrade steps.
- Upgrade or disable the Scylla cluster.
- Upgrade the SL1 Distributed Architecture.
Step 1: Preupgrade
-
If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Exit out of sl1x-deploy and download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1.2
cd sl1x-deploy
-
Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
-
Open the vi text editor to edit the sl1x-inv.yml file:
vi sl1x-inv.yml
Do not remove colons when editing this file.
-
Change the sl1_version to 12.1.2.
-
Supply values in all the fields that are applicable to your system and then save your changes and exit the file (:wq).
-
Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
Step 2: Upgrade with Scylla or Disable the Scylla Cluster
On-premises SL1 users have the following options with regard to the Scylla cluster:
- Option 1: Upgrade with Scylla. This option upgrades RKE and Kubernetes on the Compute Nodes and updates the system services while continuing to utilize Scylla.
- Option 2: Disable Scylla. This option is available for users who do not utilize SL1's machine learning-based anomaly detection feature.
Procedures for these options are described in this section.
Option 1: Upgrade with Scylla
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
To upgrade RKE and Kubernetes on the Compute Nodes, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy cn
-
To update the SL1 Extended Architecture system services, run the following command:
docker-compose -f docker-compose.external.yml run --rm deploy app
Option 2: Disable Scylla
If you do not utilize SL1's machine learning-based anomaly detection service, you have the option to remove existing Scylla databases from your Storage Nodes. This serves to lower resource utilization and cost. After disabling Scylla from a Storage Node, you can then opt to delete that Storage Node.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Open a text editor for the sl1x-inv.yml file:
vi /home/em7admin/sl1x-deploy/sl1x-inv.yml
Do not remove colons when editing this file.
-
Edit the file:
all:
vars:
install_aiml: false
enableNonScyllaPipeline: true
enableLegacyScyllaPipeline: false
-
Save your changes and exit the file (:wq).
-
To upgrade RKE and Kubernetes on the Compute Nodes, remove services, and then deploy updated services with the non-Scylla configuration, run the following commands:
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
docker-compose -f docker-compose.external.yml run --rm deploy app
-
Re-open the text editor for the sl1x-inv.yml file:
vi /home/em7admin/sl1x-deploy/sl1x-inv.yml
Do not remove colons when editing this file.
-
In the sl1x-inv.yml file, remove the Storage Node and Storage Manager hosts from the list. For example, after editing the file, that section might look like this, with no hosts listed:
sn:
hosts:
vars:
scylla_admin_username: em7admin
scylla_admin_password: <password>
sm:
hosts:
vars:
scylla_manager_db_user: em7admin
scylla_manager_db_password: <password>
-
Save your changes and exit the file (:wq).
Step 3. Upgrade the SL1 Distributed Architecture
Update your classic SL1 appliances. For more information, see the section on Updating SL1.
Upgrading from 11.2.x, 11.3.x, 12.1.0.x, or 12.1.1 (OL7) to 12.1.2 (OL8)
To upgrade the SL1 Extended Architecture to 12.1.2 running on Oracle Linux 8 (OL8) from 11.2.x, 11.3.x, 12.1.0.x, or 12.1.1 instances running on Oracle Linux 7 (OL7), follow these steps:
- Complete preupgrade steps.
- Upgrade or disable the Scylla cluster.
- Upgrade the SL1 Distributed Architecture.
- Upgrade the Compute Node clusters.
- Upgrade the Management Node.
Step 1: Preupgrade
-
If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Open the vi text editor to edit the sl1x-inv.yml file:
vi sl1x-inv.yml
Do not remove colons when editing this file.
-
Change the sl1_version to 12.1.2.
-
Supply values in all the fields that are applicable to your system and then save your changes and exit the file (:wq).
-
Set the docker-compose image to iac-sl1x:12.1.2:
vi /home/em7admin/sl1x-deploy/docker-compose.external.yml
image: registry.scilo.tools/sciencelogic/iac-sl1x:12.1.2
-
Save your changes and exit the file (:wq).
-
Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
Step 2: Upgrade or Disable the Scylla Cluster
On-premises SL1 users have two options for upgrading the Scylla cluster, or the option to disable Scylla:
- Option 1: Rolling upgrade. This option is recommended for most deployments.
- Option 2: Backup and restore. This option requires AWS S3 access and is recommended for smaller deployments and lab environments.
- Option 3: Disable Scylla. This option is available for users who do not utilize SL1's machine learning-based anomaly detection feature.
Procedures for these options are described in this section.
Option 1: Rolling Upgrade
This option for upgrading Scylla is recommended for most SL1 deployments.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Remove the first Scylla node from the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[0]
-
Re-ISO the first Scylla node with the SL1 12.1.2 OL8 ISO. These Scylla node IPs can be found in the sl1x-inv.yml file. The following is an example:
sn:
hosts:
10.2.253.90: # ip of storage node 1
10.2.253.91: # ip of storage node 2
10.2.253.92: # ip of storage node 3
vars:
# roles/sn-scylla
scylla_admin_username: em7admin # scylla admin username
scylla_admin_password: <password> # scylla admin password
sm:
hosts:
10.2.253.82: # ip of sm
-
Re-add the first Scylla node to the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[0]
docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[0]
-
Confirm that the node was added successfully:
docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[0]
If you receive a message informing you that the task has failed because the new node has not yet joined the cluster, wait at least 15 minutes for the node to join and then run the command again. Larger clusters might require additional time. Continue checking every 15 minutes until the command is successful.
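That wait-and-retry cycle can be automated; the sketch below is an assumption for illustration (retry_until_joined is a hypothetical wrapper, and the real cluster check command is shown only as a comment).

```shell
# Hypothetical retry loop: run a check command until it succeeds,
# sleeping between attempts (900 s = 15 minutes).
retry_until_joined() {
  interval="$1"; shift
  until "$@"; do
    echo "node has not joined yet; retrying in ${interval}s"
    sleep "$interval"
  done
}

# On the Management Node the check might be wrapped like this:
# retry_until_joined 900 docker-compose -f docker-compose.external.yml \
#     run --rm deploy sn-cluster-check --limit sn[0]
retry_until_joined 1 true && echo "node joined"
```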
-
Remove the second and third Scylla nodes from the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[1]
docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[2]
For large amounts of data, remove the nodes one at a time.
-
Re-ISO the second and third Scylla nodes with the SL1 12.1.2 OL8 ISO.
-
Re-ISO the Storage Manager node with the SL1 12.1.2 OL8 ISO.
-
Re-add the second and third Scylla nodes to the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[1],sn[2]
docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[1],sn[2]
-
Confirm that the nodes were added correctly:
docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[1]
docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[2]
-
Deploy the Storage Manager:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sm
docker-compose -f docker-compose.external.yml run --rm deploy sm
Option 2: Backup and Restore
This option requires AWS S3 access and is recommended for smaller deployments and lab environments.
Before beginning this procedure, you will need the following:
-
A Scylla AWS S3 bucket
You will need an IAM role to access the bucket. For more information on configuring this role, see Scylla's documentation.
- An active Scylla cluster
- The Terraform state (tfstate) of the previous deployment
-
Disable the Streamer service.
-
Scale down the service so SL1 agents can collect data and store it locally until the Storage Node/Storage Manager upgrade process completes. To do so, use SSH to access the Management Node and run the following command in an Ansible shell session:
kubectl scale --replicas=0 deployment.apps/streamer
-
Exit the Ansible shell session and edit the sl1x-inv.yml file to include variables for the S3 bucket:
scylla_backup_bucket: scilo-scylla-backup
scylla_backup_bucket_region: us-east-1
access_key: #######
secret_key: #######
-
Back up Scylla data:
cd /home/ec2-user/
docker-compose -f docker-compose.external.yml run --rm deploy backup-scylla-ol8
-
During the execution, take note of the output of this task:
TASK [sciencelogic.sl1x_sn.sn-scylla : Output Host IDs] *********************************************************************************************
changed: [10.152.1.250]
TASK [sciencelogic.sl1x_sn.sn-scylla : debug] *******************************************************************************************************
ok: [10.152.1.250] => {
"host_ids.stdout_lines": [
"Datacenter: dc",
"==============",
"Status=Up/Down",
"|/ State=Normal/Leaving/Joining/Moving",
"-- Address Load Tokens Owns Host ID Rack",
"UN 10.152.5.250 9.05 MB 256 ? a6a4758a-5eb4-4382-99fb-b30e8841e68c r2",
"UN 10.152.3.250 9.09 MB 256 ? d73d1ebb-acdb-47ad-81dc-b675a1ac5234 r1",
"UN 10.152.1.250 9.08 MB 256 ? 10de9ae4-4c39-42c2-9ee0-6864244a4240 r0",
"",
"Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless"
]
}
-
SSH into the first Storage Node and get a snapshot tag:
scylla-manager-agent download-files -L s3:scilo-scylla-backup --list-snapshots
sm_20230214123551UTC
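When more than one snapshot tag is listed, you typically want the most recent one. Because the tags embed a UTC timestamp, a plain lexical sort finds it; the tags below are illustrative samples, not real output:

```shell
# Illustrative snapshot tags; the embedded timestamp format means a
# plain lexical sort is also a chronological sort.
tags='sm_20230101120000UTC
sm_20230214123551UTC'

latest=$(printf '%s\n' "$tags" | sort | tail -n 1)
echo "$latest"   # prints: sm_20230214123551UTC
```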
-
Re-ISO the Storage Node/Storage Manager nodes with the SL1 12.1.2 OL8 ISO:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn,sm
-
SSH into the Management Node and finish the Storage Node/Storage Manager deployment:
cd /home/ec2-user/
docker-compose -f docker-compose.external.yml run --rm deploy sn
docker-compose -f docker-compose.external.yml run --rm deploy sm
-
Edit the sl1x-inv.yml file to add the following variables, based on steps 5 and 6:
all:
vars:
#scylla backup and restore config
scylla_backup_bucket: scilo-scylla-backup
scylla_backup_bucket_region: us-east-1
access_key: ************
secret_key: ************
# snapshot_tag specifies the Scylla Manager snapshot tag you want to restore.
snapshot_tag: sm_20230214123551UTC
# host_id specifies a mapping from the clone cluster node IP to the source cluster host IDs.
host_id:
10.152.1.250: 10de9ae4-4c39-42c2-9ee0-6864244a4240
10.152.3.250: d73d1ebb-acdb-47ad-81dc-b675a1ac5234
10.152.5.250: a6a4758a-5eb4-4382-99fb-b30e8841e68c
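The host_id mapping above can be generated from the "Output Host IDs" task output noted earlier in this procedure. The awk sketch below, an illustration rather than a supported tool, pairs each node's address (field 2) with its host ID (field 7), using the sample lines from that output:

```shell
# Sample "UN ..." rows from the Output Host IDs task shown earlier.
status='UN 10.152.5.250 9.05 MB 256 ? a6a4758a-5eb4-4382-99fb-b30e8841e68c r2
UN 10.152.3.250 9.09 MB 256 ? d73d1ebb-acdb-47ad-81dc-b675a1ac5234 r1
UN 10.152.1.250 9.08 MB 256 ? 10de9ae4-4c39-42c2-9ee0-6864244a4240 r0'

# Emit a YAML-style host_id mapping: node IP -> host ID.
mapping=$(printf '%s\n' "$status" | awk '/^UN/ { printf "  %s: %s\n", $2, $7 }')
printf 'host_id:\n%s\n' "$mapping"
```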
-
Run the restore playbook:
docker-compose -f docker-compose.external.yml run --rm deploy restore-scylla-ol8
-
Re-enable the Streamer service.
-
After upgrading the Storage Node/Storage Manager, you can increase the scale for the Streamer service:
kubectl scale --replicas=3 deployment.apps/streamer
Option 3: Disable Scylla
If you do not utilize SL1's machine learning-based anomaly detection service, you have the option to remove existing Scylla databases from your Storage Nodes. This serves to lower resource utilization and cost. After disabling Scylla from a Storage Node, you can then opt to delete that Storage Node.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Open a text editor for the sl1x-inv.yml file:
vi /home/em7admin/sl1x-deploy/sl1x-inv.yml
-
Edit the file:
all:
vars:
install_aiml: false
enableNonScyllaPipeline: true
enableLegacyScyllaPipeline: false
-
In that same file, remove the Storage Node and Storage Manager IP addresses from the list. For example, you would remove the following lines:
sn:
hosts:
#10.2.253.90: # ip of storage node 1
#10.2.253.91: # ip of storage node 2
#10.2.253.92: # ip of storage node 3
vars:
# roles/sn-scylla
scylla_admin_username: em7admin # scylla admin username
scylla_admin_password: <password> # scylla admin password
sm:
hosts:
#10.2.253.82: # ip of sm
-
Save your changes and exit the file (:wq).
Step 3. Upgrade the SL1 Distributed Architecture
Update your classic SL1 appliances. For more information, see the section on Updating SL1.
Step 4. Upgrade the Compute Node Cluster
The process for upgrading your Compute Node (CN) cluster varies slightly based on whether you have a six-node cluster or a three-node cluster. Both options are described in this section.
Option 1: Six-node Clusters
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Run the backup procedure:
docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 6+nodes
-
Re-ISO the CN worker nodes to the SL1 12.1.2 OL8 ISO.
You can find the IP addresses for the worker nodes in the sl1x-inv.yml file.
-
Set up SSH keys to the worker nodes and restore their data:
rm -rf /home/em7admin/.ssh/known_hosts
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit worker
docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 6+nodes
-
Re-ISO the CN master nodes to the SL1 12.1.2 OL8 ISO.
You can find the IP addresses for the master nodes in the sl1x-inv.yml file.
-
If configured, re-ISO the load balancers to the SL1 12.1.2 OL8 ISO.
-
Set up SSH keys to the master nodes and redeploy the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master,lb
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app
-
Check for publisher/subscription .yml files in the input-files directory. These files are used if you have Publisher services enabled; once the .yml files are deployed, the Publisher pods should be deployed as well.
ls /home/em7admin/sl1x-deploy/input-files/subscriptions
Apply the datamodel .yml files first, then the subscriptions.
Option 2: Three-node Clusters
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Run the backup procedure:
docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 3nodes
-
Re-ISO the first two master nodes listed in the sl1x-inv.yml file to the SL1 12.1.2 OL8 ISO.
-
Set up SSH keys to the two master nodes and restore their data:
echo > ~/.ssh/known_hosts
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[0],master[1]
docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 3nodes
-
Re-ISO the last master node listed in the sl1x-inv.yml file to the SL1 12.1.2 OL8 ISO.
-
If configured, re-ISO the load balancers to the SL1 12.1.2 OL8 ISO.
-
Set up SSH keys to the last master node and redeploy the cluster:
echo > ~/.ssh/known_hosts
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[2],lb
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
-
Ensure pods are running:
docker-compose -f docker-compose.external.yml run --rm deploy shell
kubectl get pods
-
Check for publisher/subscription .yml files in the input-files directory. These files are used if you have Publisher services enabled; once the .yml files are deployed, the Publisher pods should be deployed as well.
ls /home/em7admin/sl1x-deploy/input-files/subscriptions
Apply the datamodel .yml files first, then the subscriptions.
Step 5. Upgrade the Management Node
Do not upgrade the Management Node until your SL1 Database Server, Administration Portal, Storage Node, Storage Manager, Compute Node, and load balancers are upgraded to 12.1.2 OL8.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Run the backup procedure:
cd /home/em7admin/
cp .bash_history sl1x-deploy/input-files/
tar cvf sl1x-deploy.tgz sl1x-deploy
-
Copy the compressed file to a secure machine. For example:
scp em7admin@<MN_IP>:sl1x-deploy.tgz sl1x-deploy.tgz
-
Re-ISO the Management Node to the SL1 12.1.2 OL8 ISO.
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Pull and run the mn-transformation.sh script, then exit the SSH session to apply the script changes:
oras pull registry.scilo.tools/sciencelogic/mn-transformation:MN-Trans-OL8
mv mn-transformation.sh /tmp/
sudo sh /tmp/mn-transformation.sh
exit
-
Copy the compressed file back to the Management Node. For example:
scp sl1x-deploy.tgz em7admin@<MN_IP>:/home/em7admin/sl1x-deploy.tgz
-
SSH back into your Management Node and restore the sl1x-deploy folder and the bash history file:
cd /home/em7admin/
tar xf sl1x-deploy.tgz -C ./
cp /home/em7admin/sl1x-deploy/input-files/.bash_history /home/em7admin/
-
Your management node is now configured and can manage the cluster. To test, run the following command to see the kubectl pod output:
docker-compose -f docker-compose.external.yml run --rm deploy shell
INFO:__main__:Running with Parameters: Namespace(ansible_args=[], command='shell', force_root=False)
ansible@74c0d0905aa7:/ansible$ kubectl get pods
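The backup and restore halves of this procedure are a plain tar round trip. The following throwaway demonstration (on illustrative /tmp paths, not the real Management Node paths) shows the same pattern end to end:

```shell
# Illustrative only: exercise the backup/restore round trip on /tmp.
mkdir -p /tmp/mn-demo/sl1x-deploy/input-files
echo "demo history" > /tmp/mn-demo/sl1x-deploy/input-files/.bash_history
cd /tmp/mn-demo
tar cf sl1x-deploy.tgz sl1x-deploy     # back up the deploy folder
rm -rf sl1x-deploy                     # stands in for the re-ISO wiping the node
tar xf sl1x-deploy.tgz -C ./           # restore after the re-ISO
cat sl1x-deploy/input-files/.bash_history
# prints: demo history
```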
Upgrading to 12.1.1
Upgrading from 11.2.x, 11.3.x, or 12.1.0.x (OL7) to 12.1.1 (OL8)
To upgrade the SL1 Extended Architecture to 12.1.1 running on Oracle Linux 8 (OL8) from 11.2.x, 11.3.x, or 12.1.0.x instances running on Oracle Linux 7 (OL7), follow these steps:
- Complete preupgrade steps.
- Upgrade or disable the Scylla cluster.
- Upgrade the SL1 Distributed Architecture.
- Upgrade the Compute Node clusters.
- Upgrade the Management Node.
Step 1: Preupgrade
-
If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Set the SL1 version to 12.1.1 in the sl1x-inv.yml file:
vi /home/em7admin/sl1x-deploy/sl1x-inv.yml
Change the sl1_version value to 12.1.1.
Do not remove colons when editing this file.
-
Set the docker-compose image to iac-sl1x:12.1.1:
vi /home/em7admin/sl1x-deploy/docker-compose.external.yml
image: registry.scilo.tools/sciencelogic/iac-sl1x:12.1.1
-
Save your changes and exit the file (:wq).
-
Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
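Because sl1x-inv.yml is YAML, a colon dropped while editing (the warning in the step above) can break the deploy in non-obvious ways. The following rough heuristic (not a YAML parser; the sample file is illustrative) flags non-comment, non-list lines that lost their colon:

```shell
# Write a small sample inventory, then scan it for lines missing a colon.
cat > /tmp/sample-inv.yml <<'EOF'
all:
  vars:
    sl1_version: 12.1.1
EOF
awk 'NF && $1 !~ /^[#-]/ && !/:/ { print "line " NR " missing colon: " $0; bad=1 }
     END { if (!bad) print "no missing colons found" }' /tmp/sample-inv.yml
# prints: no missing colons found
```

On the Management Node, point the awk command at /home/em7admin/sl1x-deploy/sl1x-inv.yml after editing.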
Step 2: Upgrade or Disable the Scylla Cluster
On-premises SL1 users have two options for upgrading the Scylla cluster, as well as the option to disable Scylla:
- Option 1: Rolling upgrade. This option is recommended for most deployments.
- Option 2: Backup and restore. This option requires AWS S3 access and is recommended for smaller deployments and lab environments.
- Option 3: Disable Scylla. This option is available for users who do not utilize SL1's machine learning-based anomaly detection feature.
Procedures for these options are described in this section.
Option 1: Rolling Upgrade
This option for upgrading Scylla is recommended for most SL1 deployments.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Remove the first Scylla node from the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[0]
-
Re-ISO the first Scylla node with the SL1 12.1.1 OL8 ISO. These Scylla node IPs can be found in the sl1x-inv.yml file. The following is an example:
sn:
hosts:
10.2.253.90: # ip of storage node 1
10.2.253.91: # ip of storage node 2
10.2.253.92: # ip of storage node 3
vars:
# roles/sn-scylla
scylla_admin_username: em7admin # scylla admin username
scylla_admin_password: <password> # scylla admin password
sm:
hosts:
10.2.253.82: # ip of sm
-
Re-add the first Scylla node to the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[0]
docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[0]
-
Confirm that the node was added successfully:
docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[0]
If you receive a message informing you that the task has failed because the new node has not yet joined the cluster, wait at least 15 minutes for the node to join and then run the command again. Larger clusters might require additional time. Continue checking every 15 minutes until the command is successful.
-
Remove the second and third Scylla nodes from the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[1]
docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[2]
For large amounts of data, remove the nodes one at a time.
-
Re-ISO the second and third Scylla nodes with the SL1 12.1.1 OL8 ISO.
-
Re-ISO the Storage Manager node with the SL1 12.1.1 OL8 ISO.
-
Re-add the second and third Scylla nodes to the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[1],sn[2]
docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[1],sn[2]
-
Confirm that the nodes were added correctly:
docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[1]
docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[2]
-
Deploy the Storage Manager:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sm
docker-compose -f docker-compose.external.yml run --rm deploy sm
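The cluster-check step above may need to be repeated every 15 minutes until the node joins. A hypothetical convenience wrapper (the helper name and placeholder command are illustrative, not part of the SL1 tooling):

```shell
# Re-run a command at a fixed interval until it succeeds.
retry_until_ok() {
  interval="$1"; shift
  until "$@"; do
    echo "Not ready yet; retrying in ${interval}s..."
    sleep "$interval"
  done
}

# Usage: substitute the sn-cluster-check invocation for `true` below.
retry_until_ok 900 true && echo "cluster check passed"
# prints: cluster check passed
```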
Option 2: Backup and Restore
This option requires AWS S3 access and is recommended for smaller deployments and lab environments.
Before beginning this procedure, you will need the following:
-
A Scylla AWS S3 bucket
You will need an IAM role to access the bucket. For more information on configuring this role, see Scylla's documentation.
- An active Scylla cluster
- The Terraform state (tfstate) of the previous deployment
-
Disable the Streamer service.
-
Scale down the service so SL1 agents can collect data and store it locally until the Storage Node/Storage Manager upgrade process completes. To do so, use SSH to access the Management Node and run the following command in an Ansible shell session:
kubectl scale --replicas=0 deployment.apps/streamer
-
Exit the Ansible shell session and edit the sl1x-inv.yml file to include variables for the S3 bucket:
scylla_backup_bucket: scilo-scylla-backup
scylla_backup_bucket_region: us-east-1
access_key: #######
secret_key: #######
-
Back up Scylla data:
cd /home/ec2-user/
docker-compose -f docker-compose.external.yml run --rm deploy backup-scylla-ol8
-
During the execution, take note of the output of this task:
TASK [sciencelogic.sl1x_sn.sn-scylla : Output Host IDs] *********************************************************************************************
changed: [10.152.1.250]
TASK [sciencelogic.sl1x_sn.sn-scylla : debug] *******************************************************************************************************
ok: [10.152.1.250] => {
"host_ids.stdout_lines": [
"Datacenter: dc",
"==============",
"Status=Up/Down",
"|/ State=Normal/Leaving/Joining/Moving",
"-- Address Load Tokens Owns Host ID Rack",
"UN 10.152.5.250 9.05 MB 256 ? a6a4758a-5eb4-4382-99fb-b30e8841e68c r2",
"UN 10.152.3.250 9.09 MB 256 ? d73d1ebb-acdb-47ad-81dc-b675a1ac5234 r1",
"UN 10.152.1.250 9.08 MB 256 ? 10de9ae4-4c39-42c2-9ee0-6864244a4240 r0",
"",
"Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless"
]
}
-
SSH into the first Storage Node and get a snapshot tag:
scylla-manager-agent download-files -L s3:scilo-scylla-backup --list-snapshots
sm_20230214123551UTC
-
Re-ISO the Storage Node/Storage Manager nodes with the SL1 12.1.1 OL8 ISO:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn,sm
-
SSH into the Management Node and finish the Storage Node/Storage Manager deployment:
cd /home/ec2-user/
docker-compose -f docker-compose.external.yml run --rm deploy sn
docker-compose -f docker-compose.external.yml run --rm deploy sm
-
Edit the sl1x-inv.yml file to add the following variables, based on steps 5 and 6:
all:
vars:
#scylla backup and restore config
scylla_backup_bucket: scilo-scylla-backup
scylla_backup_bucket_region: us-east-1
access_key: ************
secret_key: ************
# snapshot_tag specifies the Scylla Manager snapshot tag you want to restore.
snapshot_tag: sm_20230214123551UTC
# host_id specifies a mapping from the clone cluster node IPs to the source cluster host IDs.
host_id:
10.152.1.250: 10de9ae4-4c39-42c2-9ee0-6864244a4240
10.152.3.250: d73d1ebb-acdb-47ad-81dc-b675a1ac5234
10.152.5.250: a6a4758a-5eb4-4382-99fb-b30e8841e68c
-
Run the restore playbook:
docker-compose -f docker-compose.external.yml run --rm deploy restore-scylla-ol8
-
Re-enable the Streamer service.
-
After upgrading the Storage Node/Storage Manager, you can increase the scale for the Streamer service:
kubectl scale --replicas=3 deployment.apps/streamer
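The host_id block added to sl1x-inv.yml pairs each node IP with the Host ID from the backup task output. Those pairs can be pulled out of the UN status lines with a short awk filter (a sketch; the sample lines mirror the output format shown above):

```shell
# Extract "ip: host-id" pairs from nodetool-style status lines.
sample='UN 10.152.5.250 9.05 MB 256 ? a6a4758a-5eb4-4382-99fb-b30e8841e68c r2
UN 10.152.3.250 9.09 MB 256 ? d73d1ebb-acdb-47ad-81dc-b675a1ac5234 r1'
echo "$sample" | awk '/^UN/ { print $2 ": " $7 }'
# prints:
# 10.152.5.250: a6a4758a-5eb4-4382-99fb-b30e8841e68c
# 10.152.3.250: d73d1ebb-acdb-47ad-81dc-b675a1ac5234
```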
Option 3: Disable Scylla
If you do not use SL1's machine learning-based anomaly detection service, you can remove the existing Scylla databases from your Storage Nodes to lower resource utilization and cost. After disabling Scylla on a Storage Node, you can then opt to delete that Storage Node.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Open a text editor for the sl1x-inv.yml file:
vi /home/em7admin/sl1x-deploy/sl1x-inv.yml
-
Edit the file:
all:
vars:
install_aiml: false
enableNonScyllaPipeline: true
enableLegacyScyllaPipeline: false
-
In that same file, comment out the Storage Node and Storage Manager IP addresses, as shown in the following example:
sn:
hosts:
#10.2.253.90: # ip of storage node 1
#10.2.253.91: # ip of storage node 2
#10.2.253.92: # ip of storage node 3
vars:
# roles/sn-scylla
scylla_admin_username: em7admin # scylla admin username
scylla_admin_password: <password> # scylla admin password
sm:
hosts:
#10.2.253.82: # ip of sm
-
Save your changes and exit the file (:wq).
Step 3. Upgrade the SL1 Distributed Architecture
Update your classic SL1 appliances. For more information, see the section on Updating SL1.
Step 4. Upgrade the Compute Node Cluster
The process for upgrading your Compute Node (CN) cluster varies slightly based on whether you have a six-node cluster or a three-node cluster. Both options are described in this section.
Option 1: Six-node Clusters
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Run the backup procedure:
docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 6+nodes
-
Re-ISO the CN worker nodes to the SL1 12.1.1 OL8 ISO.
You can find the IP addresses for the worker nodes in the sl1x-inv.yml file.
-
Set up SSH keys to the worker nodes and restore their data:
rm -rf /home/em7admin/.ssh/known_hosts
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit worker
docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 6+nodes
-
Re-ISO the CN master nodes to the SL1 12.1.1 OL8 ISO.
You can find the IP addresses for the master nodes in the sl1x-inv.yml file.
-
If configured, re-ISO the load balancers to the SL1 12.1.1 OL8 ISO.
-
Set up SSH keys to the master nodes and redeploy the cluster:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master,lb
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app
-
Check for publisher and subscriptions .yml files in the input files directory. These files are used if you have Publisher services enabled; once the .yml files are deployed, the Publisher pods are deployed as well.
ls /home/em7admin/sl1x-deploy/input-files/subscriptions
Apply the datamodel .yml files first, and then the subscriptions .yml files.
Option 2: Three-node Clusters
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Run the backup procedure:
docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 3nodes
-
Re-ISO the first two master nodes listed in the sl1x-inv.yml file to the SL1 12.1.1 OL8 ISO.
-
Set up SSH keys to the two master nodes and restore their data:
echo > ~/.ssh/known_hosts
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[0],master[1]
docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 3nodes
-
Re-ISO the last master node listed in the sl1x-inv.yml file to the SL1 12.1.1 OL8 ISO.
-
If configured, re-ISO the load balancers to the SL1 12.1.1 OL8 ISO.
-
Set up SSH keys to the last master node and redeploy the cluster:
echo > ~/.ssh/known_hosts
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[2],lb
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
-
Ensure pods are running:
docker-compose -f docker-compose.external.yml run --rm deploy shell
kubectl get pods
-
Check for publisher and subscriptions .yml files in the input files directory. These files are used if you have Publisher services enabled; once the .yml files are deployed, the Publisher pods are deployed as well.
ls /home/em7admin/sl1x-deploy/input-files/subscriptions
Apply the datamodel .yml files first, and then the subscriptions .yml files.
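The "ensure pods are running" check above can be scripted to flag anything that is not healthy. A sketch over sample kubectl get pods output (pod names and states are illustrative):

```shell
# Print any pod whose STATUS is not Running or Completed.
sample="NAME         READY   STATUS    RESTARTS   AGE
streamer-0   1/1     Running   0          5m
api-1        0/1     Pending   0          5m"
echo "$sample" | awk 'NR>1 && $3 !~ /Running|Completed/ { print $1 " is " $3 }'
# prints: api-1 is Pending
```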
Step 5. Upgrade the Management Node
Do not upgrade the Management Node until your SL1 Database Server, Administration Portal, Storage Node, Storage Manager, Compute Node, and load balancers are upgraded to 12.1.1 OL8.
-
Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
-
Run the backup procedure:
cd /home/em7admin/
cp .bash_history sl1x-deploy/input-files/
tar cvf sl1x-deploy.tgz sl1x-deploy
-
Copy the compressed file to a secure machine. For example:
scp em7admin@<MN_IP>:sl1x-deploy.tgz sl1x-deploy.tgz
-
Re-ISO the Management Node to the SL1 12.1.1 OL8 ISO.
-
Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
-
Pull and run the mn-transformation.sh script, then exit the SSH session to apply the script changes:
oras pull registry.scilo.tools/sciencelogic/mn-transformation:MN-Trans-OL8
mv mn-transformation.sh /tmp/
sudo sh /tmp/mn-transformation.sh
exit
-
Copy the compressed file back to the Management Node. For example:
scp sl1x-deploy.tgz em7admin@<MN_IP>:/home/em7admin/sl1x-deploy.tgz
-
SSH back into your Management Node and restore the sl1x-deploy folder and the bash history file:
cd /home/em7admin/
tar xf sl1x-deploy.tgz -C ./
cp /home/em7admin/sl1x-deploy/input-files/.bash_history /home/em7admin/
-
Your management node is now configured and can manage the cluster. To test, run the following command to see the kubectl pod output:
docker-compose -f docker-compose.external.yml run --rm deploy shell
INFO:__main__:Running with Parameters: Namespace(ansible_args=[], command='shell', force_root=False)
ansible@74c0d0905aa7:/ansible$ kubectl get pods
Upgrading to 12.1.0.x
Upgrading from 11.2.x or 11.3.x to 12.1.0.x
To upgrade the SL1 Extended Architecture to 12.1.0.x from 11.2.x or 11.3.x:
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest service version for the 12.1.0 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Complete the upgrade by running the full deployment:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x --skip-tags maxconnections
Alternatively, you can deploy each platform node individually by running the following commands in series:
docker-compose -f docker-compose.external.yml run --rm deploy sn
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
docker-compose -f docker-compose.external.yml run --rm deploy sm
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
- Update your classic SL1 appliances. For more information, see the section on Updating SL1.
Upgrading from 11.1.x to 12.1.0.x
To upgrade the SL1 Extended Architecture from 11.1.x to 12.1.0.x:
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1
cd sl1x-deploy
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest service version for the 12.1.0.x code line.
- Add the variable deployment: onprem.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Complete the upgrade by running the full deployment:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x --skip-tags maxconnections
Alternatively, you can deploy each platform node individually by running the following commands in series:
docker-compose -f docker-compose.external.yml run --rm deploy sn
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
docker-compose -f docker-compose.external.yml run --rm deploy sm
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
- Update your classic SL1 appliances. For more information, see the section on Updating SL1.
Upgrading to 11.3.x
Upgrading from 11.3.x to the Latest Version of 11.3.x
To upgrade the SL1 Extended Architecture from 11.3.0 to 11.3.1:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Check for any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest service version for the 11.3 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
- Run the following deploy command at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:
docker-compose -f docker-compose.external.yml run --rm deploy cn
- Update the SL1 Extended system services:
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
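The failed-chart cleanup in this procedure relies on awk selecting helm ls rows containing FAILED and printing the first column (the release name). It can be sanity-checked offline against sample output (release names are illustrative):

```shell
sample="NAME      REVISION   STATUS
good-app  1          DEPLOYED
bad-app   2          FAILED"
echo "$sample" | awk '/FAILED/ { print $1 }'
# prints: bad-app
```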
Upgrading from 11.2.x to 11.3.x
To upgrade the SL1 Extended Architecture from the 11.2.x line to 11.3.x:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Check for any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.3 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
- Run the following deploy commands at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:
docker-compose -f docker-compose.external.yml run --rm deploy rke-preupgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
docker-compose -f docker-compose.external.yml run --rm deploy rke-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy rke-postupgrade --skip-tags ten
The deploy rke-upgrade and deploy rke-postupgrade commands can be run only once; do not run them a second time.
- Re-enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Run the following command:
kubectl get nodes
- Verify that all node versions listed are upgraded to RKE2 and Kubernetes v1.22. For example, you might see v1.22.9+rke2r2 listed as the version.
- Exit out of the Ansible shell session:
exit
- Update the SL1 Extended system services:
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
- Re-enter the Ansible shell and run the following command:
kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml delete deployment rke2-ingress-nginx-defaultbackend -n kube-system
This command ensures that all old resources are deleted. The output is either a confirmation that the resource was deleted or the message Error from server (NotFound); either result is expected.
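The node-version verification in this procedure can also be scripted. A sketch over sample kubectl get nodes output (node names are illustrative) that reports any node still below v1.22:

```shell
sample="NAME   STATUS   ROLES                  AGE   VERSION
cn-1   Ready    control-plane,master   10d   v1.22.9+rke2r2
cn-2   Ready    worker                 10d   v1.22.9+rke2r2"
echo "$sample" | awk 'NR>1 { if ($5 !~ /^v1\.22/) { print $1 " still on " $5; bad=1 } }
                      END { if (!bad) print "all nodes on v1.22" }'
# prints: all nodes on v1.22
```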
Upgrading from 11.1.x to 11.3.x
To upgrade the SL1 Extended Architecture from the 11.1.x line to 11.3.x:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Check for any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.3 code line.
- Make sure that the deployment value is: deployment: on-prem.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
- Run the following deploy commands at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:
docker-compose -f docker-compose.external.yml run --rm deploy rke-preupgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
docker-compose -f docker-compose.external.yml run --rm deploy rke-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy rke-postupgrade --skip-tags ten
The deploy rke-upgrade and deploy rke-postupgrade commands can be run only once; do not run them a second time.
- Re-enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Run the following command:
kubectl get nodes
- Verify that all node versions listed are upgraded to RKE2 and Kubernetes v1.22. For example, you might see v1.22.9+rke2r2 listed as the version.
- Exit out of the Ansible shell session:
exit
- At the shell prompt, run the following deploy commands to update the SL1 Extended system services:
docker-compose -f docker-compose.external.yml run --rm deploy sn
docker-compose -f docker-compose.external.yml run --rm deploy sm
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
- Re-enter the Ansible shell and run the following command:
kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml delete deployment rke2-ingress-nginx-defaultbackend -n kube-system
This command ensures that all old resources are deleted. The output is either a confirmation that the resource was deleted or the message Error from server (NotFound); either result is expected.
Upgrading from 10.2.x to 11.3.x
To upgrade the SL1 Extended Architecture from the 10.2.x line to 11.3.x:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.3 code line.
- Make sure that the deployment value is set to on-prem (deployment: on-prem).
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
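Before moving on, the two settings called out above can be sanity-checked from the shell. A sketch, assuming the keys appear at the start of a line as in the template; the `check_inv` helper is hypothetical, not part of the deploy tooling:

```shell
# Hypothetical check: confirm sl1x-inv.yml carries the two values the steps
# above require (an sl1_version entry and deployment: on-prem).
check_inv() {
  f="$1"
  grep -q '^sl1_version:' "$f" || { echo "missing sl1_version"; return 1; }
  grep -q '^deployment: *on-prem' "$f" || { echo "deployment is not on-prem"; return 1; }
  echo "sl1x-inv.yml looks OK"
}
```

Run it as `check_inv sl1x-inv.yml` from the sl1x-deploy directory.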
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered on the ISO menu.
- Run the cn-helm-upgrade service:
docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade
- Run the following deploy commands at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:
docker-compose -f docker-compose.external.yml run --rm deploy rke-preupgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
docker-compose -f docker-compose.external.yml run --rm deploy rke-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy rke-postupgrade --skip-tags eleven
You can run the deploy rke-upgrade and deploy rke-postupgrade commands only once.
- Re-enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Run the following command:
kubectl get nodes
- Verify that all node versions listed are upgraded to RKE2 and Kubernetes v1.22. For example, you might see v1.22.9+rke2r2 listed as the version.
- Exit the Ansible shell session:
exit
- At the shell prompt, run the following deploy commands to update the SL1 Extended system services:
docker-compose -f docker-compose.external.yml run --rm deploy sn
docker-compose -f docker-compose.external.yml run --rm deploy sm
docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
- Re-enter the Ansible shell and run the following commands:
kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml delete deployment rke2-ingress-nginx-defaultbackend -n kube-system
kubectl patch job migration-agent-addons-remove --type=strategic --patch '{"spec":{"suspend":true}}' -n kube-system
These commands ensure that all old resources are deleted. The output of each command will be either a resource deleted message or Error from server (NotFound); either is expected.
Upgrading to 11.2.x
Upgrading from 11.2.x to the Latest Version of 11.2.x
To upgrade the SL1 Extended Architecture from the 11.2.x line to a later release in the 11.2.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.2
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.2 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the deploy commands at the shell prompt:
docker-compose -f docker-compose.external.yml run --rm deploy cn
docker-compose -f docker-compose.external.yml run --rm deploy app
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
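The two failed-chart steps above (list, then delete only if anything matched) can be combined into one guarded pass. A sketch; the `delete_failed` helper is illustrative and takes the delete command as an argument so it can be dry-run:

```shell
# Hypothetical helper: read a chart listing (as printed by `helm ls`) on
# stdin and run the given delete command on each FAILED entry.
delete_failed() {
  deleter="$*"   # e.g. "helm delete" (or "echo" for a dry run)
  awk '/FAILED/ { print $1 }' | while read -r chart; do
    $deleter "$chart"
  done
}
```

Inside the Ansible shell this would be invoked as `helm ls | delete_failed helm delete`; when nothing is FAILED, the deleter is never called.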
Upgrading from 11.1.x to 11.2.x
To upgrade the SL1 Extended Architecture from the 11.1.x line to the 11.2.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.2
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.2 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the deploy commands at the shell prompt:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
Upgrading from 10.2.x to 11.2.x
To upgrade the SL1 Extended Architecture from the 10.2.x line to the 11.2.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete --purge $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.2
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.2 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the cn-helm-upgrade service and the app-purge service:
docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
- Run the deploy commands at the shell prompt:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
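The backup step that opens each of these procedures can be scripted. A sketch that copies the four state files into a timestamped directory; the script and directory layout are illustrative, and any file that is absent is simply skipped:

```shell
# Hypothetical backup sketch: copy the deploy state files into a
# timestamped directory, skipping any that do not exist.
backup_dir="$HOME/sl1x-backup-$(date +%Y%m%d-%H%M%S)"
mkdir -p "$backup_dir"
for f in \
  /home/em7admin/sl1x-deploy/sl1x-inv.yml \
  /home/em7admin/sl1x-deploy/output-files/cluster.yml \
  /home/em7admin/sl1x-deploy/output-files/cluster.rkestate \
  /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
do
  if [ -f "$f" ]; then cp "$f" "$backup_dir/"; fi
done
echo "backed up to $backup_dir"
```

Running this on a schedule (for example from cron) would satisfy the recommendation to back these files up at regular intervals.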
Upgrading from 10.1.x to 11.2.x
To upgrade the SL1 Extended Architecture from the 10.1.x line to the 11.2.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Delete the following Helm charts:
helm delete --purge sl1-cn-registration
helm delete --purge model-registry
helm delete --purge aiml-redis-inputcache
kubectl patch pvc redis-data-model-registry-redis-master-0 -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc redis-data-aiml-redis-inputcache-master-0 -p '{"metadata":{"finalizers":null}}'
kubectl delete pvc redis-data-model-registry-redis-master-0 --force --cascade=true
kubectl delete pvc redis-data-aiml-redis-inputcache-master-0 --force --cascade=true
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete --purge $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 11.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.2
cd sl1x-deploy
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.2 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture.
- Save your changes and exit the file (:wq).
- Pull the Docker image that is referenced in the docker-compose file:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the cn-helm-upgrade service and the app-purge service:
docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
- Run the deploy command:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
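The PVC cleanup in the chart-deletion step above (clear the finalizers, then force-delete) is a common pattern for removing stuck PersistentVolumeClaims. A parameterized sketch; the `cleanup_pvc` helper is illustrative, and the KUBECTL variable can be overridden (e.g. `KUBECTL=echo`) for a dry run:

```shell
# Hypothetical helper: for each named PVC, clear finalizers and force-delete,
# mirroring the kubectl patch/delete pairs above. Set KUBECTL=echo to dry-run.
cleanup_pvc() {
  kc="${KUBECTL:-kubectl}"
  for pvc in "$@"; do
    $kc patch pvc "$pvc" -p '{"metadata":{"finalizers":null}}'
    $kc delete pvc "$pvc" --force --cascade=true
  done
}
```

Inside the Ansible shell the equivalent of the steps above would be `cleanup_pvc redis-data-model-registry-redis-master-0 redis-data-aiml-redis-inputcache-master-0`.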
Upgrading to 11.1.x
Upgrading from 11.1.x to the Latest Version of 11.1.x
To upgrade the SL1 Extended Architecture from the 11.1.x line to a later version of the 11.1.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.1
cd sl1x-deploy
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.1 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture.
- Save your changes and exit the file (:wq).
- Pull the Docker image:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the deploy commands:
docker-compose -f docker-compose.external.yml run --rm deploy kafka-purge
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
Upgrading from 10.2.x to 11.1.x
To upgrade the SL1 Extended Architecture from the 10.2.x line to the 11.1.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- In the Ansible shell, delete the following deprecated Helm charts:
helm delete --purge sl1-cn-registration
- Delete any other failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete --purge $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 11.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.1
cd sl1x-deploy
- Rename the old inventory file:
mv sl1x-inv.yml jfrog-sl1x-inv.yml
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.1 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture.
- Save your changes and exit the file (:wq).
- Update the Docker image:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the cn-helm-upgrade service and the app-purge service:
docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
- Run the deploy commands at the shell prompt:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
Upgrading from 10.1.x to 11.1.x
To upgrade the SL1 Extended Architecture from the 10.1.x line to the 11.1.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Inside the Ansible shell, delete the following Helm charts:
helm delete --purge sl1-cn-registration
helm delete --purge model-registry
helm delete --purge aiml-redis-inputcache
kubectl patch pvc redis-data-model-registry-redis-master-0 -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc redis-data-aiml-redis-inputcache-master-0 -p '{"metadata":{"finalizers":null}}'
kubectl delete pvc redis-data-model-registry-redis-master-0 --force --cascade=true
kubectl delete pvc redis-data-aiml-redis-inputcache-master-0 --force --cascade=true
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete --purge $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.1
cd sl1x-deploy
- Rename the old inventory file:
mv sl1x-inv.yml jfrog-sl1x-inv.yml
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.1 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture.
- Save your changes and exit the file (:wq).
- Update the Docker image:
docker-compose -f docker-compose.external.yml pull
- Update credentials on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the cn-helm-upgrade service and the app-purge service:
docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
- Run the deploy command:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
Upgrading from 8.14.x to 11.1.x
To upgrade the SL1 Extended Architecture from the 8.14.x line to the 11.1.x line:
- Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
- In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
cd sl1x-deploy
- Back up the following files:
- /home/em7admin/sl1x-deploy/sl1x-inv.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.yml
- /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
- /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml
ScienceLogic recommends that you back up these files at regular intervals.
- Run the following command to enter the Ansible shell on the Docker container:
docker-compose -f docker-compose.external.yml run --rm deploy shell
- Make a note of the HOSTS value from running the following command:
kubectl get ing responder-ingress
- In the Ansible shell, delete the following Helm charts:
helm delete --purge sl1-cn-registration
helm delete --purge sl1-streamer
helm delete --purge sls-api-storeconfig
helm delete --purge avail-store
helm delete --purge da-postprocessing-service
helm delete --purge bundle-manager
- Delete any failed charts:
helm ls | awk '/FAILED/'
- If the above command results in any output, run the following command:
helm delete --purge $(helm ls | awk '/FAILED/ { print $1 }')
- Exit the Ansible shell session:
exit
- Monitor the queues until they are drained:
check https://<HOSTS>/api/queues/list/?api_key=asdfQ345sdf
where:
HOSTS is the value from step 5.
Refresh that page until all queues have a value of 0 (zero).
- If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 11.
- If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
- Log in to Harbor repository:
oras login registry.scilo.tools/sciencelogic/
- Enter the username you used to log in to the browser-based session of Harbor.
- Enter the password (CLI Secret) that you saved from the browser-based session of Harbor.
- Download the deployment files:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:10.1
cd sl1x-deploy
- Rename the old inventory file:
mv sl1x-inv.yml sl1x-inv.yml.8.14
- Copy the inventory template file to the file named sl1x-inv.yml:
cp sl1x-inv-template.yml sl1x-inv.yml
- Edit the file sl1x-inv.yml to match your SL1 Extended system:
vi sl1x-inv.yml
Do not remove colons when editing this file.
- Make sure that the sl1_version value is set to the latest version in the 11.1 code line.
- Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture.
- Save your changes and exit the file (:wq).
- Download the latest deployment files and update credentials on all nodes:
docker-compose -f docker-compose.external.yml pull
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Update the SL1 Extended system compute nodes at the shell prompt:
docker-compose -f docker-compose.external.yml run --rm deploy cn
- Navigate to the sl1x-deploy directory and download and extract the latest templates:
cd /home/em7admin/
oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.1
cd sl1x-deploy
- Update libraries and credentials on all nodes:
docker-compose -f docker-compose.external.yml pull
docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass
When prompted, enter the System Password that you entered in the ISO menu.
- Run the cn-helm-upgrade service and the app-purge service:
docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade
docker-compose -f docker-compose.external.yml run --rm deploy app-purge
- Run the deploy command:
docker-compose -f docker-compose.external.yml run --rm deploy sl1x
- Update security packages on all nodes:
docker-compose -f docker-compose.external.yml run --rm deploy package-updates
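The refresh-until-zero check in the queue-drain step above can be reduced to a small predicate over the reported queue depths. A sketch; fetching and parsing the /api/queues/list response (for example with curl) is left out, and the `all_zero` helper is illustrative rather than part of the SL1 tooling:

```shell
# Hypothetical predicate: succeed only when every queue depth passed in is 0,
# i.e. all queues are drained.
all_zero() {
  for n in "$@"; do
    [ "$n" -eq 0 ] || return 1
  done
  return 0
}
```

A polling loop would fetch the queue listing from https://<HOSTS>/api/queues/list/, extract the depth numbers, and sleep-and-retry until `all_zero` succeeds.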