Upgrading SL1 Extended Architecture

This section provides detailed steps for performing an upgrade on SL1 Extended Architecture.

New installations of SL1 Extended Architecture are available only on SaaS deployments.

Use the following menu options to navigate the SL1 user interface:

  • To view a pop-out list of menu options, click the menu icon.
  • To view a page containing all of the menu options, click the Advanced menu icon.

Workflow

The following sections describe the steps to plan and deploy an SL1 update.

If you would like assistance planning an upgrade path that minimizes downtime, contact your Customer Success Manager.

The workflow for upgrading SL1 is:

  1. Plan the update.
  2. Schedule maintenance windows.
  3. Review pre-upgrade best practices for SL1.
  4. Back up SSL certificates.
  5. Set the timeout for PhoneHome Watchdog.
  6. Adjust the timeout for slow connections.
  7. Run the system status script on the Database Server or All-In-One before upgrading.
  8. Upgrade the SL1 Distributed Architecture using the System Update tool (System > Tools > Updates).
  9. Remove SL1 appliances from maintenance mode.
  10. Upgrade the Extended Architecture.
  11. Upgrade MariaDB, if needed.
  12. Reboot SL1 appliances, if needed.
  13. Restore SSL certificates.
  14. Reset the timeout for PhoneHome Watchdog.
  15. Update the default PowerPacks.
  16. Configure Subscription Billing (one time only). For details, see the section on configuring Subscription Billing.

For details on all steps in this list except step 10, see the section on Upgrading SL1.

Prerequisites

  • ScienceLogic recommends that each Compute Cluster in a production system contain six (6) Compute Nodes. Lab systems can continue to use Compute Clusters that include only three (3) Compute Nodes.
  • The Storage Cluster requires a node (possibly an additional one) to act as the Storage Manager.
  • Perform the installation steps in the Installation manual to install these additional nodes (for the Compute Cluster and the Storage Cluster) before upgrading your existing nodes.
  • Ensure that all nodes in the SL1 Extended Architecture can access the internet.
  • You must use the same password for the em7admin account during ISO installation of the Database Server and ISO installation of the appliances in the SL1 Extended Architecture.

To perform the upgrade, you must have a ScienceLogic customer account that allows you access to the Harbor repository page on the ScienceLogic Support Site. To verify your access, go to https://registry.scilo.tools/harbor/. For more information about obtaining Harbor login credentials, contact your Customer Success Manager.
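If you want to confirm that the registry is reachable from the Management Node itself, a quick probe such as the following can help (a minimal sketch; it assumes curl is available, as it is in the ORAS installation steps later in this section):

    # Expect an HTTP status code (for example, 200) if the Harbor registry is reachable
    curl -s -o /dev/null -w "%{http_code}\n" https://registry.scilo.tools/harbor/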

Resizing the Disks on the Compute Node

The Kafka Messaging service requires additional disk space on each Compute Node. Before upgrading, ensure that each disk on each existing Compute Node in the Compute Node cluster is at least 350 GB.
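To check the current disk sizes on a Compute Node before resizing, you can run a quick query like the following (a minimal sketch using standard lsblk options):

    # List each physical disk with its size and type; every Compute Node disk should report at least 350G
    sudo lsblk -d -o NAME,SIZE,TYPE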

If each disk on each existing Compute Node is not at least 350 GB, perform the following steps on each Compute Node:

  1. Resize the hard disk via your hypervisor to at least 350 GB.
  2. Note the name of the disk that you expanded in your hypervisor.
  3. Power on the virtual machine.
  4. Either go to the console of the Compute Node or use SSH to access the Compute Node.
  5. Open a shell session on the server.
  6. Log in with the system password for the Compute Node.
  7. At the shell prompt, enter:

    sudo lsblk | grep <disk_size>

    where:

    disk_size is your hard disk size from step 1.

  8. Note the device name of the expanded disk in the output.
  9. At the shell prompt, enter:

    sudo fdisk /dev/<disk_name>

    where:

    disk_name is the name of the disk you want to expand.

  10. Enter p to print the partition table.
  11. Enter n to add a new partition.
  12. Enter p to make the new partition the primary partition.
  13. Select the default values for partition number, first sector, and last sector.
  14. Enter w to save these changes.
  15. Restart the VM.
  16. At the shell prompt, enter:

    sudo fdisk -l

  17. Notice that another partition is now present.
  18. To initialize the new partition as a physical volume, enter the following at the shell prompt:

    sudo pvcreate <partition_name>

  19. To add the physical volume to the existing volume group, enter the following at the shell prompt:

    sudo vgextend em7vg <partition_name>

  20. To verify and confirm that the volume group has grown to the expected size, enter the following at the shell prompt:

    sudo vgdisplay | grep "VG Size"

Installing ORAS

If you have not already installed OCI Registry as Storage (ORAS), you will need to do so before you can upgrade the SL1 Extended Architecture.

To do so:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  2. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:
  3. cd sl1x-deploy

  4. Run the following commands:
  5. sudo su

    curl -LO https://github.com/oras-project/oras/releases/download/v0.12.0/oras_0.12.0_linux_amd64.tar.gz

    mkdir -p oras-install/

    tar -zxf oras_0.12.0_*.tar.gz -C oras-install/

    mv oras-install/oras /usr/bin/

    rm -rf oras_0.12.0_*.tar.gz oras-install/

    exit
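To confirm that the installation succeeded, you can check that the binary is on your PATH (assuming the v0.12.0 release installed above):

    # Print the installed ORAS version; expect 0.12.0
    oras version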

Obtaining Your Harbor Credentials

You will need to know your Harbor username and CLI secret when you upgrade the SL1 Extended Architecture. To obtain these credentials:

  1. Log in to Harbor at: https://registry.scilo.tools/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects
  2. Click Login via OIDC Provider.
  3. Click Customer Login.
  4. Log in with the username and credentials that you use to access the ScienceLogic Support site (support.sciencelogic.com).
  5. Click the username in the upper right and select User Profile.
  6. On the User Profile page:
    • Note the username.
    • Click the pages icon next to the CLI secret field to copy the CLI secret to your clipboard.
  7. Exit the browser session.
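When you later run oras login, you can supply these credentials inline rather than waiting for the interactive prompts (the -u and -p flags are standard ORAS options; the placeholders below stand in for your Harbor username and CLI secret):

    # Log in to Harbor non-interactively; replace the placeholders with your own values
    oras login -u <username> -p <CLI_secret> registry.scilo.tools/sciencelogic/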

Upgrading to 12.3.x

Before upgrading to SL1 12.3.0 or later, you must already be running SL1 on Oracle Linux 8 (OL8). If you are on a version of SL1 prior to 12.2.0 and running on OL7, you must first upgrade to SL1 12.1.1 or 12.1.2 and then migrate to OL8 before you can upgrade to SL1 12.3.x. For an overview of potential upgrade paths and their required steps, see the appropriate 12.3.x SL1 release notes.

To upgrade the SL1 Extended Architecture to 12.3.x from 12.1.x or 12.2.x instances running on Oracle Linux 8 (OL8), follow these steps:

  1. Complete preupgrade steps.
  2. Disable Scylla.
  3. Upgrade the SL1 Extended Architecture.
  4. Upgrade the SL1 Distributed Architecture.

Step 1: Preupgrade

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.

  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the system password you defined in the ISO menu.

  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Exit out of the sl1x-deploy directory and download the deployment files:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.3.x

    cd sl1x-deploy

  6. Copy the inventory template file to the sl1x-inv.yml file:

    cp sl1x-inv-template.yml sl1x-inv.yml

  7. Edit the sl1x-inv.yml file to match your SL1 Extended system:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

    • Make sure that the sl1_version value is sl1_version: 12.3.x.
    • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml file, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
    • Save your changes and exit the file (:wq).
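As an illustration of the first bullet above, the version line in a completed sl1x-inv.yml might look like the following minimal fragment (the surrounding structure is assumed from the inventory template; all other fields are omitted here):

    all:
      vars:
        # must match the release you are deploying
        sl1_version: 12.3.x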
  8. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

Step 2: Disable the Scylla Cluster

To disable the Scylla cluster:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Open a text editor for the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

  3. Edit the file to add the following additional variables:

    all:
      vars:
        install_aiml: false
        enableNonScyllaPipeline: true
        enableLegacyScyllaPipeline: false

  4. Run the following command to remove services that used the previous configuration:

    docker-compose -f docker-compose.external.yml run --rm deploy app-purge

  5. In the sl1x-inv.yml file, remove the Storage Node and Storage Manager IP addresses from the hosts lists. For example, after editing the file, that section might look like this, with no hosts listed:

    sn:
      hosts:
      vars:
        scylla_admin_username: em7admin
        scylla_admin_password: <Scylla password>

    sm:
      hosts:
      vars:
        scylla_manager_db_user: em7admin
        scylla_manager_db_password: <Scylla password>

  6. Save your changes and exit the file (:wq).

Step 3: Upgrade the SL1 Extended Architecture

To upgrade the SL1 Extended Architecture:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. To upgrade the Management Node services, run the following script:

    sudo bash package-update.sh

  3. To upgrade RKE and Kubernetes on the Compute Nodes, run the following command:

    docker-compose -f docker-compose.external.yml run --rm deploy cn

  4. To update the SL1 Extended Architecture system services, run the following command:

    docker-compose -f docker-compose.external.yml run --rm deploy app
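After the deploy command completes, you can verify that the system services came up by listing the pods, using the same deploy shell used elsewhere in this manual:

    # Open the Ansible shell in the deploy container, then list the Kubernetes pods
    docker-compose -f docker-compose.external.yml run --rm deploy shell
    kubectl get pods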

Step 4: Upgrade the SL1 Distributed Architecture

Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Upgrading to 12.2.x

Before upgrading to SL1 12.2.0 or later, you must already be running SL1 on Oracle Linux 8 (OL8). If you are on a version of SL1 prior to 12.2.0 and running on OL7, you must first upgrade to SL1 12.1.1 or 12.1.2 and then migrate to OL8 before you can upgrade to SL1 12.2.x. For an overview of potential upgrade paths and their required steps, see the appropriate 12.2.x SL1 release notes.

To upgrade the SL1 Extended Architecture to 12.2.x from 12.1.x instances running on Oracle Linux 8 (OL8), follow these steps:

  1. Complete preupgrade steps.
  2. Disable Scylla.
  3. Upgrade the SL1 Extended Architecture.
  4. Upgrade the SL1 Distributed Architecture.

Step 1: Preupgrade

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.

  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the system password you defined in the ISO menu.

  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Exit out of the sl1x-deploy directory and download the deployment files:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.2.x

    cd sl1x-deploy

  6. Copy the inventory template file to the sl1x-inv.yml file:

    cp sl1x-inv-template.yml sl1x-inv.yml

  7. Edit the sl1x-inv.yml file to match your SL1 Extended system:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

    • Make sure that the sl1_version value is sl1_version: 12.2.x.
    • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml file, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
    • Save your changes and exit the file (:wq).
  8. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull
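To confirm that the image downloaded successfully, you can list it locally (a quick sanity check; the iac-sl1x image name follows the pattern used elsewhere in this manual):

    # Show the deploy image that docker-compose just pulled
    docker images | grep iac-sl1x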

Step 2: Disable the Scylla Cluster

To disable the Scylla cluster:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Open a text editor for the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

  3. Edit the file to add the following additional variables:

    all:
      vars:
        install_aiml: false
        enableNonScyllaPipeline: true
        enableLegacyScyllaPipeline: false

  4. Run the following command to remove services that used the previous configuration:

    docker-compose -f docker-compose.external.yml run --rm deploy app-purge

  5. In the sl1x-inv.yml file, remove the Storage Node and Storage Manager IP addresses from the hosts lists. For example, after editing the file, that section might look like this, with no hosts listed:

    sn:
      hosts:
      vars:
        scylla_admin_username: em7admin
        scylla_admin_password: <Scylla password>

    sm:
      hosts:
      vars:
        scylla_manager_db_user: em7admin
        scylla_manager_db_password: <Scylla password>

  6. Save your changes and exit the file (:wq).

Step 3: Upgrade the SL1 Extended Architecture

To upgrade the SL1 Extended Architecture:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. To upgrade the Management Node services, run the following script:

    sudo bash package-update.sh

  3. To upgrade RKE and Kubernetes on the Compute Nodes, run the following command:

    docker-compose -f docker-compose.external.yml run --rm deploy cn

  4. To update the SL1 Extended Architecture system services, run the following command:

    docker-compose -f docker-compose.external.yml run --rm deploy app

Step 4: Upgrade the SL1 Distributed Architecture

Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Upgrading to 12.1.2

Upgrading from 12.1.1 (OL8) to 12.1.2 (OL8)

To upgrade the SL1 Extended Architecture to 12.1.2 running on Oracle Linux 8 (OL8) from 12.1.1 running on OL8, follow these steps:

  1. Complete preupgrade steps.
  2. Upgrade or disable the Scylla cluster.
  3. Upgrade the SL1 Distributed Architecture.

Step 1: Preupgrade

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.

  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Exit out of sl1x-deploy and download the deployment files:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1.2

    cd sl1x-deploy

  6. Copy the inventory template file to the file named sl1x-inv.yml:

    cp sl1x-inv-template.yml sl1x-inv.yml

  7. Open the vi text editor to edit the sl1x-inv.yml file:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

  8. Change the sl1_version to 12.1.2.

  9. Supply values in all the fields that are applicable to your system and then save your changes and exit the file (:wq). 
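Before pulling the image in the next step, you can double-check the version line you just set (a simple grep against the inventory file edited above):

    # Confirm that the inventory now pins the expected release
    grep sl1_version sl1x-inv.yml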

  10. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

Step 2: Upgrade with Scylla or Disable the Scylla Cluster

On-premises SL1 users have the following options with regard to the Scylla cluster:

  • Option 1: Upgrade with Scylla. This option upgrades RKE and Kubernetes on the Compute Nodes and updates the system services while continuing to utilize Scylla.
  • Option 2: Disable Scylla. This option is available for users who do not utilize SL1's machine learning-based anomaly detection feature.

Procedures for these options are described in this section.

Option 1: Upgrade with Scylla

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. To upgrade RKE and Kubernetes on the Compute Nodes, run the following command:

    docker-compose -f docker-compose.external.yml run --rm deploy cn

  3. To update the SL1 Extended Architecture system services, run the following command:

    docker-compose -f docker-compose.external.yml run --rm deploy app

Option 2: Disable Scylla

If you do not utilize SL1's machine learning-based anomaly detection service, you have the option to remove existing Scylla databases from your Storage Nodes. This serves to lower resource utilization and cost. After disabling Scylla from a Storage Node, you can then opt to delete that Storage Node.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Open a text editor for the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

    Do not remove colons when editing this file.

  3. Edit the file:

    all:
      vars:
        install_aiml: false
        enableNonScyllaPipeline: true
        enableLegacyScyllaPipeline: false

  4. Save your changes and exit the file (:wq).

  5. To upgrade RKE and Kubernetes on the Compute Nodes, remove services, and then deploy updated services with the non-Scylla configuration, run the following commands:

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app-purge

    docker-compose -f docker-compose.external.yml run --rm deploy app

  6. Re-open the text editor for the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

    Do not remove colons when editing this file.

  7. In the sl1x-inv.yml file, remove the Storage Node and Storage Manager hosts from the list. For example, after editing the file, that section might look like this, with no hosts listed:

    sn:
      hosts:
      vars:
        scylla_admin_username: em7admin
        scylla_admin_password: <password>

    sm:
      hosts:
      vars:
        scylla_manager_db_user: em7admin
        scylla_manager_db_password: <password>

  8. Save your changes and exit the file (:wq).

Step 3. Upgrade the SL1 Distributed Architecture

Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Upgrading from 11.2.x, 11.3.x, 12.1.0.x, or 12.1.1 (OL7) to 12.1.2 (OL8)

To upgrade the SL1 Extended Architecture to 12.1.2 running on Oracle Linux 8 (OL8) from 11.2.x, 11.3.x, 12.1.0.x, or 12.1.1 instances running on Oracle Linux 7 (OL7), follow these steps:

  1. Complete preupgrade steps.
  2. Upgrade or disable the Scylla cluster.
  3. Upgrade the SL1 Distributed Architecture.
  4. Upgrade the Compute Node clusters.
  5. Upgrade the Management Node.

Step 1: Preupgrade

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.

  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Open the vi text editor to edit the sl1x-inv.yml file:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

  6. Change the sl1_version to 12.1.2.

  7. Supply values in all the fields that are applicable to your system and then save your changes and exit the file (:wq). 

  8. Set the docker-compose image to iac-sl1x:12.1.2:

    vi /home/em7admin/sl1x-deploy/docker-compose.external.yml

    image: registry.scilo.tools/sciencelogic/iac-sl1x:12.1.2

  9. Save your changes and exit the file (:wq).

  10. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

Step 2: Upgrade or Disable the Scylla Cluster

On-premises SL1 users have two options for upgrading the Scylla cluster, plus a third option to disable Scylla entirely:

  • Option 1: Rolling Upgrade. Recommended for most SL1 deployments.
  • Option 2: Backup and Restore. Requires AWS S3 access; recommended for smaller deployments and lab environments.
  • Option 3: Disable Scylla. For users who do not utilize SL1's machine learning-based anomaly detection feature.

Procedures for these options are described in this section.

Option 1: Rolling Upgrade

This option for upgrading Scylla is recommended for most SL1 deployments.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Remove the first Scylla node from the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[0]

  3. Re-ISO the first Scylla node with the SL1 12.1.2 OL8 ISO. These Scylla node IPs can be found in the sl1x-inv.yml file. The following is an example:

    sn:
      hosts:
        10.2.253.90: # ip of storage node 1
        10.2.253.91: # ip of storage node 2
        10.2.253.92: # ip of storage node 3
      vars:
        # roles/sn-scylla
        scylla_admin_username: em7admin # scylla admin username
        scylla_admin_password: <password> # scylla admin password

    sm:
      hosts:
        10.2.253.82: # ip of sm

  4. Re-add the first Scylla node to the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[0]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[0]

  5. Confirm that the node was added successfully:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[0]

    If you receive a message informing you that the task has failed because the new node has not yet joined the cluster, wait at least 15 minutes for the node to join and then run the command again. Larger clusters might require additional time. Continue checking every 15 minutes until the command is successful.

  6. Remove the second and third Scylla nodes from the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[1]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[2]

    For large amounts of data, remove the nodes one at a time.

  7. Re-ISO the second and third Scylla nodes with the SL1 12.1.2 OL8 ISO.

  8. Re-ISO the Storage Manager node with the SL1 12.1.2 OL8 ISO.

  9. Re-add the second and third Scylla nodes to the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[1],sn[2]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[1],sn[2]

  10. Confirm that the nodes were added correctly:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[1]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[2]

  11. Deploy the Storage Manager:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sm

    docker-compose -f docker-compose.external.yml run --rm deploy sm

Option 2: Backup and Restore

This option requires AWS S3 access and is recommended for smaller deployments and lab environments.

Before beginning this procedure, you will need the following:

  • A Scylla AWS S3 bucket

    You will need an IAM role to access the bucket. For more information on configuring this role, see Scylla's documentation.

  • An active Scylla cluster
  • The Terraform state (tfstate) of the previous deployment
  1. Disable the Streamer service.

  2. Scale down the service so SL1 agents can collect data and store it locally until the Storage Node/Storage Manager upgrade process completes. To do so, use SSH to access the Management Node and run the following command in an Ansible shell session:

    kubectl scale --replicas=0 deployment.apps/streamer
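To confirm that the scale-down took effect before proceeding, you can check the deployment from the same Ansible shell (a quick sanity check using standard kubectl output):

    # The READY column should report 0/0 once the Streamer is scaled down
    kubectl get deployment.apps/streamer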

  3. Exit the Ansible shell session and edit the sl1x-inv.yml file to include variables for the S3 bucket:

    scylla_backup_bucket: scilo-scylla-backup
    scylla_backup_bucket_region: us-east-1
    access_key: #######
    secret_key: #######

  4. Back up Scylla data:

    cd /home/ec2-user/

    docker-compose -f docker-compose.external.yml run --rm deploy backup-scylla-ol8

  5. During the execution, take note of the output of this task:

    TASK [sciencelogic.sl1x_sn.sn-scylla : Output Host IDs] *********************************************************************************************

    changed: [10.152.1.250]

    TASK [sciencelogic.sl1x_sn.sn-scylla : debug] *******************************************************************************************************

    ok: [10.152.1.250] => {

    "host_ids.stdout_lines": [

    "Datacenter: dc",

    "==============",

    "Status=Up/Down",

    "|/ State=Normal/Leaving/Joining/Moving",

    "-- Address Load Tokens Owns Host ID Rack",

    "UN 10.152.5.250 9.05 MB 256 ? a6a4758a-5eb4-4382-99fb-b30e8841e68c r2",

    "UN 10.152.3.250 9.09 MB 256 ? d73d1ebb-acdb-47ad-81dc-b675a1ac5234 r1",

    "UN 10.152.1.250 9.08 MB 256 ? 10de9ae4-4c39-42c2-9ee0-6864244a4240 r0",

    "",

    "Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless"

    ]

    }

  6. SSH into the first Storage Node and get a snapshot tag:

    scylla-manager-agent download-files -L s3:scilo-scylla-backup --list-snapshots

    sm_20230214123551UTC

  7. Re-ISO the Storage Node/Storage Manager nodes with the SL1 12.1.2 OL8 ISO, and then set up SSH keys to them:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn,sm

  8. SSH into the Management Node and finish the Storage Node/Storage Manager deployment:

    cd /home/ec2-user/

    docker-compose -f docker-compose.external.yml run --rm deploy sn

    docker-compose -f docker-compose.external.yml run --rm deploy sm

  9. Edit the sl1x-inv.yml file to add the following variables, based on steps 5 and 6:

    all:
      vars:
        # scylla backup and restore config
        scylla_backup_bucket: scilo-scylla-backup
        scylla_backup_bucket_region: us-east-1
        access_key: ************
        secret_key: ************
        # snapshot_tag specifies the Scylla Manager snapshot tag you want to restore.
        snapshot_tag: sm_20230214123551UTC
        # host_id specifies a mapping from the clone cluster node IPs to the source
        # cluster host IDs.
        host_id:
          10.152.1.250: 10de9ae4-4c39-42c2-9ee0-6864244a4240
          10.152.3.250: d73d1ebb-acdb-47ad-81dc-b675a1ac5234
          10.152.5.250: a6a4758a-5eb4-4382-99fb-b30e8841e68c

  10. Run the restore playbook:

    docker-compose -f docker-compose.external.yml run --rm deploy restore-scylla-ol8

  11. Re-enable the Streamer service.

  12. After upgrading the Storage Node/Storage Manager, you can increase the scale for the Streamer service:

    kubectl scale --replicas=3 deployment.apps/streamer

Option 3: Disable Scylla

If you do not utilize SL1's machine learning-based anomaly detection service, you have the option to remove existing Scylla databases from your Storage Nodes. This serves to lower resource utilization and cost. After disabling Scylla from a Storage Node, you can then opt to delete that Storage Node.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Open a text editor for the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

  3. Edit the file:

    all:
      vars:
        install_aiml: false
        enableNonScyllaPipeline: true
        enableLegacyScyllaPipeline: false

  4. In that same file, remove (or comment out) the Storage Node and Storage Manager IP addresses from the hosts lists. For example, after editing, that section might look like this:

    sn:
      hosts:
        #10.2.253.90: # ip of storage node 1
        #10.2.253.91: # ip of storage node 2
        #10.2.253.92: # ip of storage node 3
      vars:
        # roles/sn-scylla
        scylla_admin_username: em7admin # scylla admin username
        scylla_admin_password: <password> # scylla admin password

    sm:
      hosts:
        #10.2.253.82: # ip of sm

  5. Save your changes and exit the file (:wq).

Step 3. Upgrade the SL1 Distributed Architecture

Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Step 4. Upgrade the Compute Node Cluster

The process for upgrading your Compute Node (CN) cluster varies slightly based on whether you have a six-node cluster or a three-node cluster. Both options are described in this section.

Option 1: Six-node Clusters

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Run the backup procedure:

    docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 6+nodes

  3. Re-ISO the CN worker nodes to the SL1 12.1.2 OL8 ISO.

    You can find the IP addresses for the worker nodes in the sl1x-inv.yml file.

  4. Set up SSH keys to the worker nodes and restore their data:

    rm -rf /home/em7admin/.ssh/known_hosts

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit worker

    docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 6+nodes

  5. Re-ISO the CN master nodes to the SL1 12.1.2 OL8 ISO.

    You can find the IP addresses for the master nodes in the sl1x-inv.yml file.

  6. If configured, re-ISO the load balancers to the SL1 12.1.2 OL8 ISO.

  7. Set up SSH keys to the master nodes and redeploy the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master,lb

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app

  8. Check for publisher/subscription .yml files inside the input files. These are used if you have Publisher services enabled. Once the .yml files are deployed, the Publisher pods should be deployed as well. Apply the datamodel .yml files first, and then the subscriptions:

    ls /home/em7admin/sl1x-deploy/input-files/subscriptions

Option 2: Three-node Clusters

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Run the backup procedure:

    docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 3nodes

  3. Re-ISO the first two master nodes listed in the sl1x-inv.yml file to the SL1 12.1.2 OL8 ISO.

  4. Set up SSH keys to the two master nodes and restore their data:

    echo > ~/.ssh/known_hosts

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[0],master[1]

    docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 3nodes

  5. Re-ISO the last master node listed in the sl1x-inv.yml file to the SL1 12.1.2 OL8 ISO.

  6. If configured, re-ISO the load balancers to the SL1 12.1.2 OL8 ISO.

  7. Set up SSH keys to the last master node and redeploy the cluster:

    echo > ~/.ssh/known_hosts

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[2],lb

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

  8. Ensure pods are running:

    docker-compose -f docker-compose.external.yml run --rm deploy shell

    kubectl get pods

  9. Check for publisher/subscription .yml files inside the input files. These are used if you have Publisher services enabled. Once the .yml files are deployed, the Publisher pods should be deployed as well. Apply the datamodel .yml files first, and then the subscriptions:

    ls /home/em7admin/sl1x-deploy/input-files/subscriptions

Step 5. Upgrade the Management Node

Do not upgrade the Management Node until your SL1 Database Server, Administration Portal, Storage Node, Storage Manager, Compute Node, and load balancers are upgraded to 12.1.2 OL8.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Run the backup procedure:

    cd /home/em7admin/

    cp .bash_history sl1x-deploy/input-files/

    tar cvf sl1x-deploy.tgz sl1x-deploy

  3. Copy the compressed file to a secure machine. For example:

    scp em7admin@<MN_IP>:sl1x-deploy.tgz sl1x-deploy.tgz
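To guard against a corrupted copy, you can compare checksums on both machines before and after the transfer (sha256sum is part of coreutils and should be available on both ends):

    # Run on the Management Node, then again on the secure machine; the hashes should match
    sha256sum sl1x-deploy.tgz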

  4. Re-ISO the Management Node to the SL1 12.1.2 OL8 ISO.

  5. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  6. Pull and run the mn-transformation.sh script, then exit the SSH session to apply the script changes:

    oras pull registry.scilo.tools/sciencelogic/mn-transformation:MN-Trans-OL8

    mv mn-transformation.sh /tmp/

    sudo sh /tmp/mn-transformation.sh

    exit

  7. Copy the compressed file back to the Management Node. For example:

    scp sl1x-deploy.tgz em7admin@<MN_IP>:/home/em7admin/sl1x-deploy.tgz

  8. SSH back into your Management Node and restore the sl1x-deploy folder and the bash history file:

    cd /home/em7admin/

    tar xf sl1x-deploy.tgz -C ./

    cp /home/em7admin/sl1x-deploy/input-files/.bash_history /home/em7admin/

  9. Your Management Node is now configured and can manage the cluster. To test, run the following command to see the kubectl pod output:

    docker-compose -f docker-compose.external.yml run --rm deploy shell

    INFO:__main__:Running with Parameters: Namespace(ansible_args=[], command='shell', force_root=False)

    ansible@74c0d0905aa7:/ansible$ kubectl get pods

Upgrading to 12.1.1

Upgrading from 11.2.x, 11.3.x, or 12.1.0.x (OL7) to 12.1.1 (OL8)

To upgrade the SL1 Extended Architecture to 12.1.1 running on Oracle Linux 8 (OL8) from 11.2.x, 11.3.x, or 12.1.0.x instances running on Oracle Linux 7 (OL7), follow these steps:

  1. Complete preupgrade steps.
  2. Upgrade or disable the Scylla cluster.
  3. Upgrade the SL1 Distributed Architecture.
  4. Upgrade the Compute Node clusters.
  5. Upgrade the Management Node.

Step 1: Preupgrade

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.

  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Set the SL1 version to 12.1.1 in the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

    Change the sl1_version value to 12.1.1.

    Do not remove colons when editing this file.

  6. Set the docker-compose image to iac-sl1x:12.1.1:

    vi /home/em7admin/sl1x-deploy/docker-compose.external.yml

    image: registry.scilo.tools/sciencelogic/iac-sl1x:12.1.1

  7. Save your changes and exit the file (:wq).

  8. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

Step 2: Upgrade or Disable the Scylla Cluster

On-premises SL1 users have two options for upgrading the Scylla cluster, plus a third option to disable Scylla entirely:

  • Option 1: Rolling Upgrade. Recommended for most SL1 deployments.
  • Option 2: Backup and Restore. Requires AWS S3 access; recommended for smaller deployments and lab environments.
  • Option 3: Disable Scylla. For users who do not utilize SL1's machine learning-based anomaly detection feature.

Procedures for these options are described in this section.

Option 1: Rolling Upgrade

This option for upgrading Scylla is recommended for most SL1 deployments.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Remove the first Scylla node from the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[0]

  3. Re-ISO the first Scylla node with the SL1 12.1.1 OL8 ISO. These Scylla node IPs can be found in the sl1x-inv.yml file. The following is an example:

    sn:
      hosts:
        10.2.253.90: # ip of storage node 1
        10.2.253.91: # ip of storage node 2
        10.2.253.92: # ip of storage node 3
      vars:
        # roles/sn-scylla
        scylla_admin_username: em7admin # scylla admin username
        scylla_admin_password: <password> # scylla admin password

    sm:
      hosts:
        10.2.253.82: # ip of sm

  4. Re-add the first Scylla node to the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[0]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[0]

  5. Confirm that the node was added successfully:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[0]

    If you receive a message informing you that the task has failed because the new node has not yet joined the cluster, wait at least 15 minutes for the node to join and then run the command again. Larger clusters might require additional time. Continue checking every 15 minutes until the command is successful.

  6. Remove the second and third Scylla nodes from the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[1]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-remove --limit sn[2]

    For large amounts of data, remove the nodes one at a time.

  7. Re-ISO the second and third Scylla nodes with the SL1 12.1.1 OL8 ISO.

  8. Re-ISO the Storage Manager node with the SL1 12.1.1 OL8 ISO.

  9. Re-add the second and third Scylla nodes to the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn[1],sn[2]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-restore --limit sn[1],sn[2]

  10. Confirm that the nodes were added correctly:

    docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[1]

    docker-compose -f docker-compose.external.yml run --rm deploy sn-cluster-check --limit sn[2]

  11. Deploy the Storage Manager:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sm

    docker-compose -f docker-compose.external.yml run --rm deploy sm

Option 2: Backup and Restore

This option requires AWS S3 access and is recommended for smaller deployments and lab environments.

Before beginning this procedure, you will need the following:

  • A Scylla AWS S3 bucket

    You will need an IAM role to access the bucket. For more information on configuring this role, see Scylla's documentation.

  • An active Scylla cluster
  • The Terraform state (tfstate) of the previous deployment
  1. Disable the Streamer service.

  2. Scale down the service so SL1 agents can collect data and store it locally until the Storage Node/Storage Manager upgrade process completes. To do so, use SSH to access the Management Node and run the following command in an Ansible shell session:

    kubectl scale --replicas=0 deployment.apps/streamer

  3. Exit the Ansible shell session and edit the sl1x-inv.yml file to include variables for the S3 bucket:

    scylla_backup_bucket: scilo-scylla-backup
    scylla_backup_bucket_region: us-east-1
    access_key: #######
    secret_key: #######

  4. Back up Scylla data:

    cd /home/ec2-user/

    docker-compose -f docker-compose.external.yml run --rm deploy backup-scylla-ol8

  5. During the execution, take note of the output of this task:

    TASK [sciencelogic.sl1x_sn.sn-scylla : Output Host IDs] *********************************************************************************************

    changed: [10.152.1.250]

    TASK [sciencelogic.sl1x_sn.sn-scylla : debug] *******************************************************************************************************

    ok: [10.152.1.250] => {

    "host_ids.stdout_lines": [

    "Datacenter: dc",

    "==============",

    "Status=Up/Down",

    "|/ State=Normal/Leaving/Joining/Moving",

    "-- Address Load Tokens Owns Host ID Rack",

    "UN 10.152.5.250 9.05 MB 256 ? a6a4758a-5eb4-4382-99fb-b30e8841e68c r2",

    "UN 10.152.3.250 9.09 MB 256 ? d73d1ebb-acdb-47ad-81dc-b675a1ac5234 r1",

    "UN 10.152.1.250 9.08 MB 256 ? 10de9ae4-4c39-42c2-9ee0-6864244a4240 r0",

    "",

    "Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless"

    ]

    }

  6. SSH into the first Storage Node and get a snapshot tag:

    scylla-manager-agent download-files -L s3:scilo-scylla-backup --list-snapshots

    sm_20230214123551UTC

  7. Re-ISO the Storage Node/Storage Manager nodes with the SL1 12.1.1 OL8 ISO, and then set up SSH keys to them:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit sn,sm

  8. SSH into the Management Node and finish the Storage Node/Storage Manager deployment:

    cd /home/ec2-user/

    docker-compose -f docker-compose.external.yml run --rm deploy sn

    docker-compose -f docker-compose.external.yml run --rm deploy sm

  9. Edit the sl1x-inv.yml file to add the following variables, based on steps 5 and 6:

    all:
      vars:
        # scylla backup and restore config
        scylla_backup_bucket: scilo-scylla-backup
        scylla_backup_bucket_region: us-east-1
        access_key: ************
        secret_key: ************
        # snapshot_tag specifies the Scylla Manager snapshot tag you want to restore.
        snapshot_tag: sm_20230214123551UTC
        # host_id specifies a mapping from the clone cluster node IPs to the source
        # cluster host IDs.
        host_id:
          10.152.1.250: 10de9ae4-4c39-42c2-9ee0-6864244a4240
          10.152.3.250: d73d1ebb-acdb-47ad-81dc-b675a1ac5234
          10.152.5.250: a6a4758a-5eb4-4382-99fb-b30e8841e68c

  10. Run the restore playbook:

    docker-compose -f docker-compose.external.yml run --rm deploy restore-scylla-ol8

  11. Re-enable the Streamer service.

  12. After upgrading the Storage Node/Storage Manager, you can increase the scale for the Streamer service:

    kubectl scale --replicas=3 deployment.apps/streamer

Option 3: Disable Scylla

If you do not utilize SL1's machine learning-based anomaly detection service, you have the option to remove existing Scylla databases from your Storage Nodes. This serves to lower resource utilization and cost. After disabling Scylla from a Storage Node, you can then opt to delete that Storage Node.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Open a text editor for the sl1x-inv.yml file:

    vi /home/em7admin/sl1x-deploy/sl1x-inv.yml

  3. Edit the file:

    all:
      vars:
        install_aiml: false
        enableNonScyllaPipeline: true
        enableLegacyScyllaPipeline: false

  4. In that same file, remove (or comment out) the Storage Node and Storage Manager IP addresses from the hosts lists. For example, after editing, that section might look like this:

    sn:
      hosts:
        #10.2.253.90: # ip of storage node 1
        #10.2.253.91: # ip of storage node 2
        #10.2.253.92: # ip of storage node 3
      vars:
        # roles/sn-scylla
        scylla_admin_username: em7admin # scylla admin username
        scylla_admin_password: <password> # scylla admin password

    sm:
      hosts:
        #10.2.253.82: # ip of sm

  5. Save your changes and exit the file (:wq).

Step 3. Upgrade the SL1 Distributed Architecture

Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Step 4. Upgrade the Compute Node Cluster

The process for upgrading your Compute Node (CN) cluster varies slightly based on whether you have a six-node cluster or a three-node cluster. Both options are described in this section.

Option 1: Six-node Clusters

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Run the backup procedure:

    docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 6+nodes

  3. Re-ISO the CN worker nodes to the SL1 12.1.1 OL8 ISO.

    You can find the IP addresses for the worker nodes in the sl1x-inv.yml file.

  4. Set up SSH keys to the worker nodes and restore their data:

    rm -rf /home/em7admin/.ssh/known_hosts

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit worker

    docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 6+nodes

  5. Re-ISO the CN master nodes to the SL1 12.1.1 OL8 ISO.

    You can find the IP addresses for the master nodes in the sl1x-inv.yml file.

  6. If configured, re-ISO the load balancers to the SL1 12.1.1 OL8 ISO.

  7. Set up SSH keys to the master nodes and redeploy the cluster:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master,lb

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app

  8. Check for publisher/subscription .yml files inside the input files. These are used if you have Publisher services enabled. Once the .yml files are deployed, the Publisher pods should be deployed as well. Apply the datamodel .yml files first, and then the subscriptions:

    ls /home/em7admin/sl1x-deploy/input-files/subscriptions

Option 2: Three-node Clusters

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Run the backup procedure:

    docker-compose -f docker-compose.external.yml run --rm deploy rke-backup --tags 3nodes

  3. Re-ISO the first two master nodes listed in the sl1x-inv.yml file to the SL1 12.1.1 OL8 ISO.

  4. Set up SSH keys to the two master nodes and restore their data:

    echo > ~/.ssh/known_hosts

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[0],master[1]

    docker-compose -f docker-compose.external.yml run --rm deploy rke-restore --tags 3nodes

  5. Re-ISO the last master node listed in the sl1x-inv.yml file to the SL1 12.1.1 OL8 ISO.

  6. If configured, re-ISO the load balancers to the SL1 12.1.1 OL8 ISO.

  7. Set up SSH keys to the last master node and redeploy the cluster:

    echo > ~/.ssh/known_hosts

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass --limit master[2],lb

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

  8. Ensure pods are running:

    docker-compose -f docker-compose.external.yml run --rm deploy shell

    kubectl get pods

  9. Check for publisher/subscription .yml files inside the input files. These are used if you have Publisher services enabled. Once the .yml files are deployed, the Publisher pods should be deployed as well. Apply the datamodel .yml files first, and then the subscriptions:

    ls /home/em7admin/sl1x-deploy/input-files/subscriptions

Step 5. Upgrade the Management Node

Do not upgrade the Management Node until your SL1 Database Server, Administration Portal, Storage Node, Storage Manager, Compute Node, and load balancers are upgraded to 12.1.1 OL8.

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.

  2. Run the backup procedure:

    cd /home/em7admin/

    cp .bash_history sl1x-deploy/input-files/

    tar cvf sl1x-deploy.tgz sl1x-deploy

  3. Copy the compressed file to a secure machine. For example:

    scp em7admin@<MN_IP>:sl1x-deploy.tgz sl1x-deploy.tgz

  4. Re-ISO the Management Node to the SL1 12.1.1 OL8 ISO.

  5. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  6. Pull and run the mn-transformation.sh script, then exit the SSH session to apply the script changes:

    oras pull registry.scilo.tools/sciencelogic/mn-transformation:MN-Trans-OL8

    mv mn-transformation.sh /tmp/

    sudo sh /tmp/mn-transformation.sh

    exit

  7. Copy the compressed file back to the Management Node. For example:

    scp sl1x-deploy.tgz em7admin@<MN_IP>:/home/em7admin/sl1x-deploy.tgz

  8. SSH back into your Management Node and restore the sl1x-deploy folder and the bash history file:

    cd /home/em7admin/

    tar xf sl1x-deploy.tgz -C ./

    cp /home/em7admin/sl1x-deploy/input-files/.bash_history /home/em7admin/

  9. Your Management Node is now configured and can manage the cluster. To test, run the following command to see the kubectl pod output:

    docker-compose -f docker-compose.external.yml run --rm deploy shell

    INFO:__main__:Running with Parameters: Namespace(ansible_args=[], command='shell', force_root=False)

    ansible@74c0d0905aa7:/ansible$ kubectl get pods

Upgrading to 12.1.0.x

Upgrading from 11.2.x or 11.3.x to 12.1.0.x

To upgrade the SL1 Extended Architecture to 12.1.0.x from 11.2.x or 11.3.x:

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Download the deployment files:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1

    cd sl1x-deploy

  6. Copy the inventory template file to the file named sl1x-inv.yml:

    cp sl1x-inv-template.yml sl1x-inv.yml

  7. Edit the file sl1x-inv.yml to match your SL1 Extended system:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

    • Make sure that the sl1_version value is set to the latest service version for the 12.1.0 code line.
    • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml file, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
    • Save your changes and exit the file (:wq).
  8. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

  9. Complete the upgrade by running the full deployment:

    docker-compose -f docker-compose.external.yml run --rm deploy sl1x --skip-tags maxconnections

    Alternatively, you can deploy each platform node individually by running the following commands in series:

    docker-compose -f docker-compose.external.yml run --rm deploy sn

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

    docker-compose -f docker-compose.external.yml run --rm deploy sm

  10. Update security packages on all nodes:

    docker-compose -f docker-compose.external.yml run --rm deploy package-updates

  11. Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Upgrading from 11.1.x to 12.1.0.x

To upgrade the SL1 Extended Architecture from 11.1.x to 12.1.0.x:

  1. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 4.
  2. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  3. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  4. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  5. Download the deployment files:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1

    cd sl1x-deploy

  6. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

  7. Update credentials on all nodes:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass

  8. Download the deployment files again:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:12.1

    cd sl1x-deploy

  9. Copy the inventory template file to the file named sl1x-inv.yml:

    cp sl1x-inv-template.yml sl1x-inv.yml

  10. Edit the file sl1x-inv.yml to match your SL1 Extended system:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

    • Make sure that the sl1_version value is set to the latest service version for the 12.1.0.x code line.
    • Add the variable deployment: onprem.
    • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml file, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
    • Save your changes and exit the file (:wq).
  11. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

  12. Complete the upgrade by running the full deployment:

    docker-compose -f docker-compose.external.yml run --rm deploy sl1x --skip-tags maxconnections

    Alternatively, you can deploy each platform node individually by running the following commands in series:

    docker-compose -f docker-compose.external.yml run --rm deploy sn

    docker-compose -f docker-compose.external.yml run --rm deploy cn

    docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

    docker-compose -f docker-compose.external.yml run --rm deploy sm

  13. Update security packages on all nodes:

    docker-compose -f docker-compose.external.yml run --rm deploy package-updates

  14. Update your classic SL1 appliances. For more information, see the section on Updating SL1.

Upgrading to 11.3.x

Upgrading from 11.3.x to the Latest Version of 11.3.x

To upgrade the SL1 Extended Architecture from 11.3.0 to 11.3.1:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  2. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

    cd sl1x-deploy

  3. Back up the following files:

    • /home/em7admin/sl1x-deploy/sl1x-inv.yml
    • /home/em7admin/sl1x-deploy/output-files/cluster.yml
    • /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
    • /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml

    ScienceLogic recommends that you back up these files at regular intervals.

  4. Run the following command to enter the Ansible shell on the Docker container:

    docker-compose -f docker-compose.external.yml run --rm deploy shell

  5. Check for any failed charts:

    helm ls | awk '/FAILED/'

  6. If the above command produces any output, run the following command to delete the failed charts:

    helm delete $(helm ls | awk '/FAILED/ { print $1 }')

  7. Exit the Ansible shell session:

    exit

  8. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
  9. If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the system password you defined in the ISO menu.
  10. Log in to the Harbor repository:

    oras login registry.scilo.tools/sciencelogic/

  11. Download the deployment files:

    cd /home/em7admin/

    oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3

    cd sl1x-deploy

  12. Copy the inventory template file to the file named sl1x-inv.yml:

    cp sl1x-inv-template.yml sl1x-inv.yml

  13. Edit the file sl1x-inv.yml to match your SL1 Extended system:

    vi sl1x-inv.yml

    Do not remove colons when editing this file.

    • Make sure that the sl1_version value is the latest service version for the 11.3 code line.
    • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml file, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
    • Save your changes and exit the file (:wq).
  14. Pull the Docker image that is referenced in the docker-compose file:

    docker-compose -f docker-compose.external.yml pull

  15. Update credentials on all nodes:

    docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass

  16. Run the following deploy command at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:

    docker-compose -f docker-compose.external.yml run --rm deploy cn

  17. Update the SL1 Extended system services:

    docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

  18. Update security packages on all nodes:

    docker-compose -f docker-compose.external.yml run --rm deploy package-updates

Upgrading from 11.2.x to 11.3.x

To upgrade the SL1 Extended Architecture from the 11.2.x line to 11.3.x:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  2. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

cd sl1x-deploy

  3. Back up the following files:
  • /home/em7admin/sl1x-deploy/sl1x-inv.yml
  • /home/em7admin/sl1x-deploy/output-files/cluster.yml
  • /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
  • /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml

ScienceLogic recommends that you back up these files at regular intervals.

  4. Run the following command to enter the Ansible shell on the Docker container:

docker-compose -f docker-compose.external.yml run --rm deploy shell

  5. Check for any failed Helm charts:

helm ls | awk '/FAILED/'

  6. If the previous command returns any output, delete the failed charts:

helm delete $(helm ls | awk '/FAILED/ { print $1 }')

  7. Exit the Ansible shell session:

exit

  8. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
  9. If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  10. Log in to the Harbor repository:

oras login registry.scilo.tools/sciencelogic/

  11. Download the deployment files:

cd /home/em7admin/

oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3

cd sl1x-deploy

  12. Copy the inventory template file to the file named sl1x-inv.yml:

cp sl1x-inv-template.yml sl1x-inv.yml

  13. Edit the file sl1x-inv.yml to match your SL1 Extended system:

vi sl1x-inv.yml

Do not remove colons when editing this file.

  • Make sure that the sl1_version value is set to the latest version in the 11.3 code line.
  • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
  • Save your changes and exit the file (:wq).
  14. Pull the Docker image that is referenced in the docker-compose file:

docker-compose -f docker-compose.external.yml pull

  15. Update credentials on all nodes:

docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass

  16. Run the following deploy commands at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:

docker-compose -f docker-compose.external.yml run --rm deploy rke-preupgrade

docker-compose -f docker-compose.external.yml run --rm deploy app-purge

docker-compose -f docker-compose.external.yml run --rm deploy rke-upgrade

docker-compose -f docker-compose.external.yml run --rm deploy rke-postupgrade --skip-tags ten

You can run the deploy rke-upgrade and deploy rke-postupgrade commands only once.

  17. Re-enter the Ansible shell on the Docker container:

docker-compose -f docker-compose.external.yml run --rm deploy shell

  18. Run the following command:

kubectl get nodes

  19. Verify that all node versions listed are upgraded to RKE2 and Kubernetes v1.22. For example, you might see v1.22.9+rke2r2 listed as the version.
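For reference, output similar to the following illustrative example (node names, roles, and ages will differ on your system) indicates a successful upgrade:

NAME   STATUS   ROLES                       AGE   VERSION
cn1    Ready    control-plane,etcd,master   92d   v1.22.9+rke2r2
cn2    Ready    control-plane,etcd,master   92d   v1.22.9+rke2r2
cn3    Ready    control-plane,etcd,master   92d   v1.22.9+rke2r2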
  20. Exit the Ansible shell session:

exit

  21. Update the SL1 Extended system services:

docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

  22. Update security packages on all nodes:

docker-compose -f docker-compose.external.yml run --rm deploy package-updates

  23. Re-enter the Ansible shell and run the following command:

kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml delete deployment rke2-ingress-nginx-defaultbackend -n kube-system

This command ensures that all old resources are deleted. The output will be either a confirmation that the deployment was deleted or Error from server (NotFound); both results are expected.
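To confirm that the deployment was removed, an optional follow-up check such as the following should return Error from server (NotFound):

kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml get deployment rke2-ingress-nginx-defaultbackend -n kube-system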

Upgrading from 11.1.x to 11.3.x

To upgrade the SL1 Extended Architecture from the 11.1.x line to 11.3.x:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  2. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

cd sl1x-deploy

  3. Back up the following files:
  • /home/em7admin/sl1x-deploy/sl1x-inv.yml
  • /home/em7admin/sl1x-deploy/output-files/cluster.yml
  • /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
  • /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml

ScienceLogic recommends that you back up these files at regular intervals.

  4. Run the following command to enter the Ansible shell on the Docker container:

docker-compose -f docker-compose.external.yml run --rm deploy shell

  5. Check for any failed Helm charts:

helm ls | awk '/FAILED/'

  6. If the previous command returns any output, delete the failed charts:

helm delete $(helm ls | awk '/FAILED/ { print $1 }')

  7. Exit the Ansible shell session:

exit

  8. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
  9. If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  10. Log in to the Harbor repository:

oras login registry.scilo.tools/sciencelogic/

  11. Download the deployment files:

cd /home/em7admin/

oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3

cd sl1x-deploy

  12. Copy the inventory template file to the file named sl1x-inv.yml:

cp sl1x-inv-template.yml sl1x-inv.yml

  13. Edit the file sl1x-inv.yml to match your SL1 Extended system:

vi sl1x-inv.yml

Do not remove colons when editing this file.

  • Make sure that the sl1_version value is set to the latest version in the 11.3 code line.
  • Make sure that the deployment value is: deployment: on-prem.
  • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
  • Save your changes and exit the file (:wq).
  14. Pull the Docker image that is referenced in the docker-compose file:

docker-compose -f docker-compose.external.yml pull

  15. Update credentials on all nodes:

docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass

  16. Run the following deploy commands at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:

docker-compose -f docker-compose.external.yml run --rm deploy rke-preupgrade

docker-compose -f docker-compose.external.yml run --rm deploy app-purge

docker-compose -f docker-compose.external.yml run --rm deploy rke-upgrade

docker-compose -f docker-compose.external.yml run --rm deploy rke-postupgrade --skip-tags ten

You can run the deploy rke-upgrade and deploy rke-postupgrade commands only once.

  17. Re-enter the Ansible shell on the Docker container:

docker-compose -f docker-compose.external.yml run --rm deploy shell

  18. Run the following command:

kubectl get nodes

  19. Verify that all node versions listed are upgraded to RKE2 and Kubernetes v1.22. For example, you might see v1.22.9+rke2r2 listed as the version.
  20. Exit the Ansible shell session:

exit

  21. At the shell prompt, run the following deploy commands to update the SL1 Extended system services:

docker-compose -f docker-compose.external.yml run --rm deploy sn

docker-compose -f docker-compose.external.yml run --rm deploy sm

docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

  22. Update security packages on all nodes:

docker-compose -f docker-compose.external.yml run --rm deploy package-updates

  23. Re-enter the Ansible shell and run the following command:

kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml delete deployment rke2-ingress-nginx-defaultbackend -n kube-system

This command ensures that all old resources are deleted. The output will be either a confirmation that the deployment was deleted or Error from server (NotFound); both results are expected.

Upgrading from 10.2.x to 11.3.x

To upgrade the SL1 Extended Architecture from the 10.2.x line to 11.3.x:

  1. Use SSH to access the Management Node. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  2. In the Management Node, navigate to the sl1x-deploy directory. To do this, enter the following at the shell prompt:

cd sl1x-deploy

  3. Back up the following files:
  • /home/em7admin/sl1x-deploy/sl1x-inv.yml
  • /home/em7admin/sl1x-deploy/output-files/cluster.yml
  • /home/em7admin/sl1x-deploy/output-files/cluster.rkestate
  • /home/em7admin/sl1x-deploy/output-files/kube_config_cluster.yml

ScienceLogic recommends that you back up these files at regular intervals.

  4. Run the following command to enter the Ansible shell on the Docker container:

docker-compose -f docker-compose.external.yml run --rm deploy shell

  5. Check for any failed Helm charts:

helm ls | awk '/FAILED/'

  6. If the previous command returns any output, delete the failed charts:

helm delete $(helm ls | awk '/FAILED/ { print $1 }')

  7. Exit the Ansible shell session:

exit

  8. If you have not already done so, you must install ORAS and obtain your Harbor credentials, which you will need for step 10.
  9. If needed, use SSH to access the Management Node again. Open a shell session on the server. Log in with the System Password you defined in the ISO menu.
  10. Log in to the Harbor repository:

oras login registry.scilo.tools/sciencelogic/

  11. Download the deployment files:

cd /home/em7admin/

oras pull registry.scilo.tools/sciencelogic/sl1x-deploy:11.3

cd sl1x-deploy

  12. Copy the inventory template file to the file named sl1x-inv.yml:

cp sl1x-inv-template.yml sl1x-inv.yml

  13. Edit the file sl1x-inv.yml to match your SL1 Extended system:

vi sl1x-inv.yml

Do not remove colons when editing this file.

  • Make sure that the sl1_version value is set to the latest version in the 11.3 code line.
  • Make sure that the deployment value is: deployment: on-prem.
  • Supply values in all the fields that are applicable. For details on the sl1x-inv.yml, see the manual Installing SL1 Extended Architecture, which can be obtained by contacting ScienceLogic Support.
  • Save your changes and exit the file (:wq).
  14. Pull the Docker image that is referenced in the docker-compose file:

docker-compose -f docker-compose.external.yml pull

  15. Update credentials on all nodes:

docker-compose -f docker-compose.external.yml run --rm deploy ssh-keys --ask-pass

When prompted, enter the System Password that you entered on the ISO menu.

  16. Run the cn-helm-upgrade service:

docker-compose -f docker-compose.external.yml run --rm deploy cn-helm-upgrade

  17. Run the following deploy commands at the shell prompt to upgrade RKE and Kubernetes on the Compute Nodes:

docker-compose -f docker-compose.external.yml run --rm deploy rke-preupgrade

docker-compose -f docker-compose.external.yml run --rm deploy app-purge

docker-compose -f docker-compose.external.yml run --rm deploy rke-upgrade

docker-compose -f docker-compose.external.yml run --rm deploy rke-postupgrade --skip-tags eleven

You can run the deploy rke-upgrade and deploy rke-postupgrade commands only once.

  18. Re-enter the Ansible shell on the Docker container:

docker-compose -f docker-compose.external.yml run --rm deploy shell

  19. Run the following command:

kubectl get nodes

  20. Verify that all node versions listed are upgraded to RKE2 and Kubernetes v1.22. For example, you might see v1.22.9+rke2r2 listed as the version.
  21. Exit the Ansible shell session:

exit

  22. At the shell prompt, run the following deploy commands to update the SL1 Extended system services:

docker-compose -f docker-compose.external.yml run --rm deploy sn

docker-compose -f docker-compose.external.yml run --rm deploy sm

docker-compose -f docker-compose.external.yml run --rm deploy app --skip-tags maxconnections

  23. Update security packages on all nodes:

docker-compose -f docker-compose.external.yml run --rm deploy package-updates

  24. Re-enter the Ansible shell and run the following commands:

kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml delete deployment rke2-ingress-nginx-defaultbackend -n kube-system

kubectl patch job migration-agent-addons-remove --type=strategic --patch '{"spec":{"suspend":true}}' -n kube-system

These commands ensure that all old resources are deleted. The output of the delete command will be either a confirmation that the deployment was deleted or Error from server (NotFound); both results are expected.
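Optionally, to verify that the migration-agent-addons-remove job was suspended, a check such as the following illustrative command should print true (or return NotFound if the job no longer exists):

kubectl --kubeconfig=/ansible/output-files/kube_config_cluster.yml get job migration-agent-addons-remove -n kube-system -o jsonpath='{.spec.suspend}'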