Additional Configurations

Skylar Automated RCA On Prem allows for additional configurations to enable advanced features. Below is a list of these features and the steps needed to configure them within the Skylar Automated RCA Install Chart.

Enabling OpenAI Models

Skylar Automated RCA supports leveraging OpenAI models to augment and enhance the summaries and titles of Root Cause reports. Skylar Automated RCA currently supports the OpenAI and Azure model providers.

Skylar Automated RCA supports the following OpenAI models:

  • Davinci
  • GPT 3.5 Turbo
  • GPT 4
  • GPT 4 32k

To leverage these models, you will need to create and set up OpenAI services from one of the above providers. Skylar Automated RCA supports multiple model configurations, using the following JSON format:

[
  {
    "name":  "gpt-3-davinci",
    "model": "gpt-3-davinci",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": true,
    "provider": "azure"
  },
  {
    "name":  "gpt-35-turbo",
    "model": "gpt-35-turbo",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": false,
    "provider": "azure"
  },
  {
    "name":  "gpt-4",
    "model": "gpt-4",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": false,
    "provider": "azure"
  },
  {
    "name":  "gpt-4-32k",
    "model": "gpt-4-32k",
    "key":   "<KEY>",
    "url":   "<URL>",
    "default": false,
    "provider": "azure"
  }
]
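Before loading the configuration into Kubernetes, it can help to sanity-check the JSON. The sketch below is a minimal example; the field requirements and the single-default assumption are inferred from the sample configuration above, not from a published schema:

```python
import json

# Fields present in each entry of the sample configuration.
REQUIRED_FIELDS = {"name", "model", "key", "url", "default", "provider"}

def validate_models(raw: str) -> list:
    """Parse and sanity-check an AI_NLP_MODELS configuration string."""
    models = json.loads(raw)
    if not isinstance(models, list) or not models:
        raise ValueError("configuration must be a non-empty JSON array")
    for entry in models:
        missing = REQUIRED_FIELDS - set(entry)
        if missing:
            raise ValueError(
                f"entry {entry.get('name', '?')} is missing fields: {sorted(missing)}"
            )
    defaults = [m["name"] for m in models if m["default"]]
    if len(defaults) != 1:
        raise ValueError(f"expected exactly one default model, found {defaults}")
    return models

# Example: validate a two-model configuration inline.
sample = """
[
  {"name": "gpt-35-turbo", "model": "gpt-35-turbo", "key": "<KEY>",
   "url": "<URL>", "default": true, "provider": "azure"},
  {"name": "gpt-4", "model": "gpt-4", "key": "<KEY>",
   "url": "<URL>", "default": false, "provider": "azure"}
]
"""
models = validate_models(sample)
```

To check the file you create in step 1 below, read its contents and pass them to `validate_models`.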

Prerequisites

  • You have completed all assumptions and prerequisites from the installation.
  • You have created an account in one of the supported OpenAI Providers.
  • You have onboarded one or more supported models in your provider and have the appropriate URL and API keys.

Installation

  1. Save the above JSON configuration into a JSON file on a machine with access to your Kubernetes cluster. For this example, we will save the file as ai-nlp-models.json.

  2. Create a configmap in the namespace that you are deploying your zebrium-onprem application into, using the following command:

    kubectl create configmap -n example ai-nlp-models --from-file ai-nlp-models.json

    In this example, we name our configmap ai-nlp-models and deploy it into the namespace example. When the configmap is created, the contents of the file are stored in the configmap under a key corresponding to the filename, so in this example the key is ai-nlp-models.json. You can verify this by running the following command:

    kubectl describe configmap -n example ai-nlp-models

  3. Update your helm override file and include the following section:

    zebrium-core:
      additionalEnvs:
      - name: AI_NLP_MODELS
        valueFrom:
          configMapKeyRef:
            name: ai-nlp-models
            key: ai-nlp-models.json

    In this section, we set the new environment variable AI_NLP_MODELS to the value of the configmap we created in step 2. Be sure to update the name and key references to the appropriate values from step 2.

  4. Add any more [configurations](#additionalConfigurations) or continue with the installation process.
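Once the chart is deployed, zebrium-core receives the configmap contents through the AI_NLP_MODELS environment variable. As a rough illustration of how such a variable can be consumed (this selection logic is a hypothetical sketch, not Skylar's actual implementation), a process might pick the default model, or a specific one by name, like so:

```python
import json
import os

def pick_model(name=None):
    """Return the named model config, or the default entry if no name is given."""
    models = json.loads(os.environ["AI_NLP_MODELS"])
    if name is not None:
        return next(m for m in models if m["name"] == name)
    return next(m for m in models if m["default"])

# Simulate the environment that step 3 wires up from the configmap.
os.environ["AI_NLP_MODELS"] = json.dumps([
    {"name": "gpt-35-turbo", "model": "gpt-35-turbo", "key": "<KEY>",
     "url": "<URL>", "default": True, "provider": "azure"},
    {"name": "gpt-4", "model": "gpt-4", "key": "<KEY>",
     "url": "<URL>", "default": False, "provider": "azure"},
])

default = pick_model()        # the entry with "default": true
gpt4 = pick_model("gpt-4")    # look up a specific model by name
```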

Setting NLP Provider Limits

The OpenAI and Azure NLP providers offer Usage Limit settings that allow you to:

  1. Set a monthly budget, such as $300 USD per month.
  2. Set an email notification threshold, such as $150 USD per month.

It is strongly recommended that you set these values on the NLP provider account to ensure that you stay within a well-defined and limited budget.