Step Development

A custom step should focus on performing a single task and should return a result. If an exception is caught within the step, it must be re-raised, or another exception should be raised that provides information about the issue. If a step does not raise an exception, the framework assumes the step completed successfully. Swallowing exceptions makes issues difficult to track down, because you may end up inspecting the wrong step.
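As a sketch of this pattern, the hypothetical parse_payload step below (a plain function; registration decorators are covered under Step Types) re-raises with added context instead of swallowing the failure:

```python
import json


def parse_payload(result):
    """Hypothetical step: parse the previous step's result as JSON."""
    try:
        return json.loads(result)
    except ValueError as err:
        # Re-raise with context that names the failing step, instead of
        # returning None and letting a later step fail mysteriously.
        raise ValueError("parse_payload: result is not valid JSON: {}".format(err))
```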

A custom step has no pre-defined function signature or format. Its flexibility enables a step developer to request only the Standard Parameters they need. Depending on the step type, additional parameters may be required.

Note

Standard Parameters differ from decorator parameters. Decorator parameters are used when registering the step with the Snippet Framework, while Standard Parameters are requested through the function signature.
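To illustrate the distinction with a sketch (the register_processor below is a stand-in that mimics registration; the real decorator is covered under Step Types): decorator parameters configure how the step is registered, while the function signature requests Standard Parameters:

```python
# Stand-in for the real registration decorator, for illustration only.
def register_processor(name=None, required_args=None):
    def wrap(func):
        # name and required_args are decorator parameters: consumed at
        # registration time, never passed to the step itself.
        func.step_name = name or func.__name__
        return func
    return wrap


# result and step_args are Standard Parameters: requested via the signature
# and supplied by the framework at execution time.
@register_processor(name="take_first", required_args=["count"])
def take_first(result, step_args):
    return result[: step_args["count"]]
```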

Considerations

Before developing a step, ensure that there is not already a step or a way to chain steps together to accomplish the task. This helps maintain a healthy set of steps without overlap and reduces the chances of outdated code being propagated to different Dynamic Applications and/or SL1 stacks.

Ensure to use security best practices when developing a step. For example, avoid logging any secure information (such as credentials).

There are two cases in which a ScienceLogic Library must be created:

  1. When creating a step that is intended to be used by many Dynamic Applications, it is recommended to place the code in a ScienceLogic Library to eliminate the need to copy, paste, and potentially update it in multiple locations.

  2. When a step leverages a third-party wheel/library that is not included in the execution environment, that library must be included in a ScienceLogic Library.

Note

Custom ScienceLogic Library development is not covered by this document. Refer to documentation for creating custom ScienceLogic Libraries.

Avoiding De-Duplication Conflicts

When creating a custom step, it is important to determine what makes the step unique. By defining a step's uniqueness, the Snippet Framework can optimize execution by avoiding repeated processing of the same set of data. This identifier is referred to as the request_id, and anything with the same request_id is treated as the same operation.

The default request_id is <step_name>_<step_args>, which may or may not be sufficient to determine uniqueness for the step. If the step uses configuration outside of step_args, a custom request_id may be required. A custom request_id generator should return a string that identifies the step's uniqueness.

For example, if you are running an SSH command you would expect to get the same results (within the same collection timeframe) if the following are the same between collections:

  • IP Address / Hostname

  • User

  • Command

def ssh_req_id(credential, step_args):
    # Assume step_args is a dict, and if not, step_args is the
    # entire command
    try:
        command = step_args["command"]
    except TypeError:
        command = step_args
    return "{}:{}:{}".format(
        credential.fields["cred_host"],
        credential.fields["cred_user"],
        command,
    )

@register_requestor(get_req_id=ssh_req_id)
def ssh_example(credential, step_args):
    pass

Writing to Device Logs

A custom step has the ability to write messages to the Device Logs, which are visible in SL1. This is done by raising the exception silo.apps.errors.DeviceError with the first parameter being the log message.

from silo.apps.errors import DeviceError

raise DeviceError("Device Log message goes here")

Note

A device log message is limited to 1024 characters.
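Since the limit applies to the message, a step may want to truncate explicitly before raising. A minimal sketch, with DeviceError stubbed out so the snippet is self-contained:

```python
class DeviceError(Exception):
    """Stand-in for silo.apps.errors.DeviceError (illustration only)."""


MAX_DEVICE_LOG = 1024


def raise_device_log(message):
    # Truncate deliberately so the cut-off point is under our control
    # rather than relying on the platform's limit.
    raise DeviceError(message[:MAX_DEVICE_LOG])
```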

Standard Parameters

When creating a custom step, it’s important to understand the parameters available to it. The following parameters can be used by all step types.

Available Step Parameters

step_args (object)

    Provides the ability to pass arguments from the snippet argument into the step.

collection (Collection)

    Object that contains the following attributes related to the Collection:

      • DynamicApp: Configuration related to the Dynamic Application

        • freq (int): Dynamic Application Frequency

        • id (int): Dynamic Application ID

        • gmtime (int): Timestamp for the collection

        • guid (str): Dynamic Application GUID

        • name (str): Name of the Dynamic Application

      • group: Group ID for the collection

      • obj_id: Object ID for the collection

      • argument: Snippet Argument for the collection

      • type: Class Type for the collection

credential (CredentialObject)

    Object that contains the decrypted information for the aligned credential. It has the following attributes:

      • id: Credential ID

      • name: Credential Name

      • cred_type_name: Name of the credential type. If the credential type (credential.fields["cred_type"]) is universal, the name will be the credential display name. The credential types are:

        • 1: SNMP

        • 2: Database

        • 3: SOAP/XML

        • 4: LDAP/AD

        • 5: Basic/Snippet

        • 6: SSH/Key

        • 7: PowerShell

        • 8: Universal Credential Display Name

      • fields: All other credential information as a dictionary. Each field in the credential has a corresponding key within this dictionary.

debug (callable)

    Function used for writing context-aware debug logs to the current collection and all de-duplicated collections.

metadata (object)

    Metadata related to the collection. Reassigning this value will not update the metadata; to update the metadata, use set_metadata. It is possible to update the existing reference without using set_metadata, but doing so is prone to issues and not recommended.

request_id (string)

    Request ID of the current step. The Request ID is a unique path to the step.

result (object)

    Finished object from the previous step.

set_metadata (callable)

    Sets the metadata to the value provided, e.g. set_metadata(new_metadata). If a step requests both metadata and set_metadata, the metadata variable will not reflect information set via set_metadata within that step; the next step can request metadata and receive the most up-to-date value.
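The metadata / set_metadata behavior described above can be sketched with a tiny stand-in harness (FakePipeline and my_step are illustrative, not framework APIs):

```python
class FakePipeline:
    """Stand-in harness illustrating metadata vs. set_metadata."""

    def __init__(self):
        self._metadata = {"seen": 0}

    def run_step(self, step):
        # The framework hands the step a snapshot plus a setter; the
        # snapshot is not refreshed after set_metadata is called within
        # the same step.
        return step(dict(self._metadata), self._set_metadata)

    def _set_metadata(self, value):
        self._metadata = value


def my_step(metadata, set_metadata):
    updated = dict(metadata)
    updated["seen"] = metadata["seen"] + 1
    set_metadata(updated)
    # The local `metadata` snapshot is unchanged; only the next step
    # that requests metadata sees the update.
    return metadata["seen"]
```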

Step Types

To register a step into the Snippet Framework, you must decorate the callable with the decorator that applies to your step type. For example, if you wanted to register a Processor you would use the following:

@register_processor
def count_items(result):
    pass

Note

Python decorators are wrappers for functions that enable additional functionality. Refer to PEP-318 for more details.

The Snippet Framework has different types of steps that perform different operations. This allows steps to be more focused. The types of steps are as follows:

  • Requestor - Retrieves the required data from the datasource.

  • Processor - Performs an action on the result, such as parsing, transforming, or formatting the data.

  • Cacher - Attempts to fast-forward through already completed steps from cache. If no cache is found, a cache element will be saved at this step.

  • RequestMoreData - Enables looping within the Snippet Framework to collect additional information. An example of this would be pagination of a REST endpoint.

  • Syntax - Enables the user to create their own logic for how to parse Snippet Arguments into the Snippet Framework for execution.

Requestor

A Requestor defines how to retrieve information from a single source type. A Requestor has access to all the Standard Parameters. Due to the uniqueness of a Requestor, a request_id generator should be written. Refer to Avoiding De-Duplication Conflicts for more information.

The returned value from the Requestor should be the result.

When registering a Requestor, the following information can be specified in the decorator:

silo.low_code.register_requestor(*args, **kwargs)

Decorator for registering Requestors

Parameters:
  • get_req_id (callable) – Function used for generating the request id. This is a very important function, and its operation must be understood when developing a requestor; the proper choice is critical to building a highly performant Dynamic Application. If one is not provided, the request_id will be <step_name>_<step_args_in_string_form>, and you must ensure this provides the correct level of uniqueness. This function can utilize any of the Standard Parameters.

  • name (str) – Overrides the pythonic name for the step. If a name is not specified, the pythonic name of the callable is used. For example, if you have def cool_beans(result): ... the name would be cool_beans.

  • metadata (dict) – Metadata related to the step

  • rewind (callable) –

    Callable function that updates the step_args to start the rewinding process. This function can utilize any of the Standard Parameters and an additional argument:

    • data_request: RequestMoreData exception that was previously raised.

  • generate_agent_config (callable) – Function used to generate agent configs. The function must return a ProtocolConfig. This function can utilize any of the Standard Parameters.

  • manipulate_agent_data (callable) –

    Function used to perform any pre-processing on the agent results before providing it to the pipeline. This function can utilize any of the Standard Parameters and additional arguments:

    • file_timestamp: Time of the collection

    • pid: ProcessID of the JVM being monitored

    • jvm_name: Name of the JVM being monitored

  • validate_request (callable) – Function used to validate a request prior to execution. If an issue is detected, an exception must be raised and the ResultContainer will not be executed. This function can utilize any of the Standard Parameters.

  • required_args (list) – List of required top-level arguments for the step. If these arguments are not present in the step_args, the Snippet Framework will not attempt to execute the collection and will log a warning message instead.

  • arg_required (bool) – If an argument is required for a step. This denotes a step must have an argument, regardless of the type. This can be used in conjunction with required_args if your step accepts a dictionary or a single value.

Returns:

Original object

Return type:

object

Examples

Returning the Step Args

The following example will return the step argument as the result.

@register_requestor
def static_value(step_args):
    return step_args

Returning the Step Args and Updating Metadata

The following example will return the step argument as the result and update the metadata.

@register_requestor
def static_value(metadata, set_metadata, step_args):
    if not isinstance(metadata, dict):
        metadata = {}
    metadata["update"] = True
    set_metadata(metadata)
    return step_args

Return Step Args and Include Port Check

The following example will return the step argument as the result. A validation check also ensures the correct credential is aligned.

def port_check(credential):
    try:
        if int(credential.fields["cred_port"]) != 443:
            raise Exception("This requestor only supports port 443")
    except ValueError:
        raise Exception(
            "Invalid value specified for port. Expected int, but received {}".format(
                type(credential.fields["cred_port"])
            )
        )

@register_requestor(
    validate_request=port_check
)
def static_value(step_args):
    return step_args

Reading a File

The following example will show developing a custom Requestor that reads from a file. The file will be specified as step_args.

@register_requestor(
    required_args=["file"],
)  # Decorator that registers this step as a requestor
def read_file(step_args):  # Defines the step name and parameters to use
    # Opens and reads the file indicated in the step_args
    with open(step_args["file"], "r") as file:
        return file.read()

HTTP Request

The following example shows a basic HTTP Requestor. It uses several decorator parameters to ensure proper execution:

  • validate_request: Validates the credential type is Basic/Snippet

  • get_req_id: specifies request uniqueness

  • required_args: states the URI is mandatory

The code appears as follows:

import hashlib
import requests

def cred_check(credential):
    if credential.fields.get("cred_type") != 5:
        raise Exception("Credential type is incorrect. Use Basic/Snippet.")


def generate_request_id(credential, step_args, debug):
    key = credential.fields["cred_user"] + credential.fields["cred_host"]
    for arg in sorted(step_args):
        key = key + "|" + str(step_args[arg])
    hash_obj = hashlib.sha256()
    hash_obj.update(key.encode("utf-8"))
    return hash_obj.hexdigest()


@register_requestor(
    required_args={"uri"}, validate_request=cred_check, get_req_id=generate_request_id
)
def https_simple(step_args, credential, debug):
    # Adds specified arg to the previous result
    url = "https://" + credential.fields["cred_host"] + "/" + step_args["uri"]
    debug("URL: " + url)
    auth = (credential.fields["cred_user"], credential.fields["cred_pwd"])
    response = requests.get(url, verify=False, auth=auth)
    # If the response was successful, no Exception will be raised
    response.raise_for_status()
    return response.content

Processor

A Processor should perform a single operation on a result. A Processor has access to all the Standard Parameters.

When creating custom Processor steps, an important concept is the difference between a Parser and a Selector. A Parser should convert a data structure into a consumable format for a Selector. By separating these two concepts, you can produce steps that are more reusable than a single step that performs both actions.
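As a sketch of that split (with a stand-in registration decorator; json_parse and select_key are hypothetical step names), the Parser turns raw text into a structure and the Selector pulls one value out of it:

```python
import json


def register_processor(func):
    """Stand-in for the real registration decorator (illustration only)."""
    return func


@register_processor
def json_parse(result):
    # Parser: convert raw text into a consumable structure.
    return json.loads(result)


@register_processor
def select_key(result, step_args):
    # Selector: pull a single value out of an already-parsed structure.
    return result[step_args["key"]]
```

Because the halves are separate, json_parse can feed any selector, and select_key works on data produced by any parser.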

When registering a Processor, the following information can be specified in the decorator:

silo.low_code.register_processor(*args, **kwargs)

Decorator to register a processor

Parameters:
  • get_req_id (callable) –

    Function used for generating the request id. If one is not provided, the request_id will be <step_name>_<step_args_in_string_form>. This function can utilize any of the Standard Parameters and additional arguments:

    • reg_info: Registration information for the step

    • caller: Callable assigned to the step

  • name (str) – Name for the step that will be referenced in the snippet argument. If a name is not specified, the pythonic name of the callable is used. For example, if you have def cool_beans(result): ... the name would be cool_beans.

  • metadata (dict) – Metadata related to the step

  • required_args (list) – List of required top-level arguments for the step. If these arguments are not present in the step_args, the Snippet Framework will not attempt to execute the collection and will log a warning message instead.

  • arg_required (bool) – If an argument is required for a step. This denotes a step must have an argument, regardless of the type. This can be used in conjunction with required_args if your step accepts a dictionary or a single value.

Returns:

Original object

Return type:

object

Examples

Count items in a List

The following example shows how to create a Processor that counts the number of items in a list.

@register_processor
def count_items(result):  # Only the result is passed into this function
    return len(result)  # Returns the number of items in the result

Wrapper Around a Custom Library

The following example shows how to create a Processor that wraps a custom library, jc.

import jc


@register_processor(
    name="jc",
    required_args=["parser_name"],
)
def jcparser(result, step_args):
    """Run jc against the result
    :param str result: Result from the previous step
    :param object step_args: Argument supplied to the step
    :rtype: object
    """
    parser_mod_name = jcparser_get_parser_mod_name(step_args)
    jc_kwargs = step_args if isinstance(step_args, dict) else {}
    return jc.parse(parser_mod_name, result, **jc_kwargs)


def jcparser_get_parser_mod_name(step_args):
    """Get the parser name from the configuration
    :param object step_args: Step arguments supplied to the step
    :rtype: str
    """
    if isinstance(step_args, str):
        parser_mod_name = step_args
    else:
        parser_mod_name = step_args.pop("parser_name")
    if parser_mod_name not in jcparser_get_supported_parsers():
        raise FrameworkError(
            "jc or the Snippet Framework does not support parser {}".format(parser_mod_name)
        )
    return parser_mod_name


def jcparser_get_supported_parsers():
    """Determine all supported jc parsers
    The Snippet Framework cannot consume streaming parsers, so they
    are removed from the available list.
    :rtype: list
    """
    try:
        return [x for x in jc.parser_mod_list() if not x.endswith("_s")]
    except ImportError:
        return []

Cacher

A Cacher can store the current result or, if the cache already exists, perform a fast-forward operation before executing. A Cacher does not have the ability to modify the result. A Cacher can reuse the results from a previous collection / Dynamic Application. A Cacher has access to all the Standard Parameters.

A Cacher can optionally specify a read callable, which allows the Snippet Framework to fast-forward to the step after the Cacher. This can be specified in the registration decorator utilizing the keyword parameter read.

When registering a Cacher, the following information can be specified in the decorator:

silo.low_code.register_cacher(*args, **kwargs)

Decorator to register a cacher

All arguments are optional during registration. A Cacher can request any of the Standard Parameters and an additional argument:

  • step_cache: CacheManager from silo-apps for interacting with cache associated to the step

Parameters:
  • get_req_id (callable) – Function used for generating the request id. If one is not provided, the request_id will be <step_name>_<step_args_in_string_form>. This function can utilize any of the Standard Parameters.

  • name (str) – Name for the step that will be referenced in the snippet argument. If a name is not specified, the pythonic name of the callable is used. For example, if you have def cool_beans(result): ... the name would be cool_beans.

  • metadata (dict) – Metadata related to the step

  • read (callable) – Callable function that performs a cache read for fast-forwarding. This function can utilize any of the Standard Parameters.

  • required_args (list) – List of required top-level arguments for the step. If these arguments are not present in the step_args, the Snippet Framework will not attempt to execute the collection and will log a warning message instead.

  • arg_required (bool) – If an argument is required for a step. This denotes a step must have an argument, regardless of the type. This can be used in conjunction with required_args if your step accepts a dictionary or a single value.

Returns:

Original object

Return type:

object

Example

Writing based on a provided key

This sample Cacher will write the current data to the specified key. If a key is not specified, the request_id will be used instead.

def get_key(step_args, request_id):
    try:
        key = step_args.get("key", request_id)
    except AttributeError:
        key = request_id
    return key


def cache_read(step_args, request_id, step_cache):
    return step_cache.read(get_key(step_args, request_id))


@register_cacher(read=cache_read)
def cache_write(result, step_args, request_id, step_cache):
    step_cache.write(get_key(step_args, request_id), result)

RequestMoreData

RequestMoreData enables the Snippet Framework to perform a loop to collect additional data. It must be used in conjunction with a Requestor that supports rewind.

A RequestMoreData step should raise an exception that inherits from silo.low_code.RequestMoreData, or return nothing. The raised exception should carry information known to the previous Requestor that informs it how to perform the new request.

If an exception is not raised, the following step will receive an OrderedDict containing all collected results. If the exception is raised, the current index (from set_index or the current request_id) and the current result are inserted into the OrderedDict. If the same index is used twice in a row, the Snippet Framework identifies this as a repeating loop, ends the RequestMoreData cycle, and continues to the next step.

Note

An OrderedDict is accessed the same way as a normal dict but the insertion order is preserved.
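For example, keys are accessed normally while iteration follows insertion order:

```python
from collections import OrderedDict

results = OrderedDict()
results["static_value:1"] = 1
results["static_value:2"] = 2

# Access works like a normal dict...
assert results["static_value:1"] == 1
# ...and iteration preserves insertion order.
assert list(results) == ["static_value:1", "static_value:2"]
```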

The RequestMoreData step can request the set_max_iterations callable which sets the maximum number of times the Snippet Framework will loop when gathering additional information.
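A minimal sketch of requesting set_max_iterations (RequestMoreData and register_rmd are stubbed here so the snippet stands alone; paged_rmd and its next_page field are hypothetical):

```python
class RequestMoreData(Exception):
    """Stand-in for silo.low_code.RequestMoreData (illustration only)."""

    def __init__(self, **kwargs):
        super().__init__()
        # The real exception carries data back to the Requestor's rewind;
        # here we just attach the keyword arguments as attributes.
        self.__dict__.update(kwargs)


def register_rmd(func):
    """Stand-in registration decorator."""
    return func


@register_rmd
def paged_rmd(result, set_max_iterations):
    # Raise the loop ceiling above the default of 50 for deep pagination.
    set_max_iterations(200)
    if result.get("next_page"):
        raise RequestMoreData(page=result["next_page"])
```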

When registering a RequestMoreData, the following information can be specified in the decorator:

silo.low_code.register_rmd(*args, **kwargs)

Decorator to register a RequestMoreData step

A RequestMoreData can request any of the Standard Parameters and additional arguments:

  • set_index: Callable for setting the index of the current iteration. This should be used if you need to easily identify the information in the following step.

  • set_max_iterations: Callable for setting the maximum number of iterations before the Snippet Framework stops performing the collections. The default value for the amount of iterations is 50.

Parameters:
  • get_req_id (callable) – Function used for generating the request id. If one is not provided, the request_id will be <step_name>_<step_args_in_string_form>. This function can utilize any of the Standard Parameters.

  • name (str) – Name for the step that will be referenced in the snippet argument. If a name is not specified, the pythonic name of the callable is used. For example, if you have def cool_beans(result): ... the name would be cool_beans.

  • metadata (dict) – Metadata related to the step

  • required_args (list) – List of required top-level arguments for the step. If these arguments are not present in the step_args, the Snippet Framework will not attempt to execute the collection and will log a warning message instead.

  • arg_required (bool) – If an argument is required for a step. This denotes a step must have an argument, regardless of the type. This can be used in conjunction with required_args if your step accepts a dictionary or a single value.

Returns:

Original object

Return type:

object

Example

Iterating over a static loop

This sample Requestor will return the step argument as the result. It also supports rewind functionality, iterating until the number is 5 or greater. Since we are not specifying the index, the index will be calculated from the request id for the step. Assuming the initial number provided is 0, RequestMoreData will execute 5 times (0, 1, 2, 3, 4). When it processes rmd_step on the fifth iteration, it will not raise RequestMoreData. The final result will be {'static_value:1': 1, 'static_value:2': 2, 'static_value:3': 3, 'static_value:4': 4, 'static_value:5_rmd_step_None': 5}. The final result key is different due to how the framework processes the place within the loop. If you require consistent naming, it is best to use set_index to set the name.

low_code:
    version: 2
    steps:
        - static_value: 0
        - rmd_step

from silo.low_code import RequestMoreData


@register_rmd
def rmd_step(result):
    if result < 5:
        raise RequestMoreData(result=result)

def range_increment(data_request):
    return data_request.result + 1


@register_requestor(
   rewind=range_increment
)
def static_value(step_args):
    return int(step_args)

This sample Requestor will return the step argument as the result. It also supports rewind functionality, where it will iterate until the number is 5 or greater, increasing by the current amount each iteration. This example also sets the index, which allows for easier lookups in the result if you add identifying information. Assuming the initial number provided is 1, RequestMoreData will execute 3 times (1, 2, 4). When it processes rmd_step on the fourth iteration, it will not raise RequestMoreData. The final value will be {'offset_1': 1, 'offset_2': 2, 'offset_4': 4, 'offset_8': 8}.

low_code:
    version: 2
    steps:
        - static_value: 1
        - rmd_step

from silo.low_code import RequestMoreData


@register_rmd
def rmd_step(result, set_index):
    set_index("offset_" + str(result))
    if result < 5:
        raise RequestMoreData(result=result, amount=result)

def range_increment(data_request):
    # amount is how much to increment by each iteration
    return data_request.result + data_request.amount


@register_requestor(
   rewind=range_increment
)
def static_value(step_args):
    return int(step_args)

Syntax

A Syntax step defines how to convert a Snippet Argument into an Execution Plan that the Snippet Framework can process and execute. A Syntax step can be either a Top-Level Syntax or Macro Syntax.

When registering a Syntax, the following information can be specified in the decorator:

silo.low_code.register_syntax(*args, **kwargs)

Decorator for registering a Top-Level or Macro Syntax

Register a Syntax step that will be validated once the Framework is initialized. All arguments are optional during registration.

A Top-Level Syntax is a step that converts a text-based Snippet Argument into an Execution Plan that the Snippet Framework can process. The most familiar example of this is the low_code Syntax, which transforms a YAML-based instruction set into its equivalent pythonic Execution Plan. Top-Level Syntaxes can only be used to parse a Snippet Argument - they must not be referenced within another Execution Plan, such as one generated by a Macro Syntax. A Top-Level Syntax step must return a dictionary containing the key execution that maps to the Execution Plan list. Optionally, it can also include a name key that maps to a string representing the identifier of the Collection.

A Top-Level Syntax can request any of the Standard Parameters and the following additional parameters:

  • snippet_arg (str): The Snippet Argument for the Collection (after substitution), with the Syntax identifying header

  • snippet_arg_content (str): The Snippet Argument for the Collection (after substitution), without the Syntax identifying header

When utilizing a Top-Level Syntax, the Snippet Argument must start with a Syntax Identifier (the name given to the Top-Level Syntax step) followed by a colon (:).

Example 1: Syntax Identifier

low_code:    (Syntax Identifier)
  version: 2
  steps:
    - ...

Example 2: Inline Syntax Identifier

new_syntax_identifier: {"test": True}    (Inline Syntax Identifier)

A Macro Syntax is a nestable Syntax step that creates a set of instructions known as an Execution Plan that expands automatically in-place inside the Execution Plan that contains (references) it. Macro Syntaxes cannot be used to parse a Snippet Argument - they must be referenced (nested) within another Execution Plan, such as one generated by a Top-Level Syntax. A Macro Syntax inherits step arguments (step_args) from the Execution Plan that contains it.

Unlike a Top-Level Syntax, a Macro Syntax cannot request snippet_arg or snippet_arg_content as parameters. This is because, by the time a Macro Syntax executes, the Snippet Argument has already been transformed into an Execution Plan by the Top-Level Syntax. However, a Macro Syntax can still request any of the Standard Parameters, as well as Substitution Parameters and any Custom Substitution keys defined by the user as parameters.

When utilizing a Macro Syntax, you must reference the Macro Syntax Identifier (the name given to the Macro Syntax step) in the Execution Plan created from its containing Syntax. Ex:

low_code:    (Top-Level Syntax Identifier)
  version: 2
  steps:
    - custom_macro    (Macro Syntax Identifier)
    - ...
Parameters:
  • get_req_id (callable) –

    Function used for generating the request id. If one is not provided, the request_id will be <step_name>_<step_args_in_string_form>. This function can utilize any of the Standard Parameters and additional parameters:

    • reg_info: Registration information for the step

    • caller: Callable assigned to the step

  • name (str) – Name for the Syntax that will be referenced in the snippet argument. If a name is not specified, the pythonic name of the callable is used. For example, if you have def cool_beans(result): ... the name would be cool_beans.

  • macro (bool) – Indicates if the Syntax step is a Macro or Top-Level Syntax. Setting this flag enables the additional functionality only available to Macro Syntax steps, such as substitution values as parameters.

  • metadata (dict) – Metadata related to the step. The macro flag can also be set here (metadata={"macro": True}) which achieves the same behavior as setting the macro flag directly in the decorator.

  • required_args (list) – (Macro only) List of required step arguments. If these arguments are not present in the step_args, the Snippet Framework will not attempt to expand the Macro and will throw an error instead.

  • arg_required (bool) – (Macro only) If any step argument is required. This denotes a Macro must have an argument, regardless of the type. This can be used in conjunction with required_args if your Macro accepts a dictionary or a single value.

Returns:

Original object

Return type:

object

Warning

Syntax steps of any kind must not execute blocking operations, such as I/O tasks or network requests, while defining an Execution Plan. Doing so will significantly hinder the performance of the Snippet Framework.

Top-Level Syntax

A Top-Level Syntax is a step that converts a text-based Snippet Argument into an Execution Plan that the Snippet Framework can process. Top-Level Syntaxes serve as the entry point for all Snippet processing - they are the first step executed when a Collection is processed and are responsible for parsing the entire Snippet Argument into a structured execution plan. A Top-Level Syntax step must return a dictionary containing the key execution that maps to the Execution Plan list. Optionally, it can also include a name key that maps to a string representing the identifier of the Collection.

The Execution Plan created by a Top-Level Syntax cannot include references to any Top-Level Syntaxes, but it can include references to Macro Syntaxes, which will be automatically expanded in-place during the Syntax Processing phase before the main pipeline execution begins. When the framework processes a Collection, it first identifies the Top-Level Syntax name from the Snippet Argument’s Syntax Identifier, then it executes that Syntax step to generate an initial Execution Plan, and finally it recursively expands any nested Macro Syntaxes found within that plan.

When utilizing a Top-Level Syntax, the Snippet Argument must start with a Syntax Identifier (the name given to the Top-Level Syntax step) followed by a colon (:), followed by the Snippet Argument content to be parsed.

The low_code Syntax is the most popular Top-Level Syntax. It transforms a YAML-based steps list into its equivalent pythonic Execution Plan. The following is an example of how the low_code Syntax step is processed into a structured dictionary containing an Execution Plan:

Snippet Argument:

low_code:
  id: example_syntax_collection
  version: 2
  steps:
    - static_value: '{"message": "Hello, World!"}'
    - json
    - jmespath:
        value: message

Processed Syntax:

{
    "name": "example_syntax_collection",
    "execution": [
        ('static_value', '{"message": "Hello, World!"}'),
        ('json', None),
        ('jmespath', {'value': 'message'})
    ]
}

Macro Syntax

A Macro Syntax is a nestable Syntax step that creates a reusable set of instructions known as an Execution Plan that can be expanded automatically in-place within other Syntax definitions, including other Macro Syntaxes. Macro Syntaxes cannot be used to parse a Snippet Argument - they must be referenced within another Execution Plan, such as one generated by a Top-Level Syntax. A Macro Syntax inherits step arguments (step_args) from the Execution Plan that contains (references) it. A Macro Syntax step must return an Execution Plan as a list.

Macro Syntaxes enable dynamic Execution Plan construction and promote reusability across different Snippet Argument formats. When the framework encounters a Macro Syntax reference during execution plan processing, it executes the Macro Syntax to generate a nested Execution Plan, then expands that plan in-place within the containing execution plan. This expansion process is recursive and supports up to 4 levels of nesting. The framework validates each expanded Execution Plan to ensure all referenced steps are properly registered and that the plan structure is valid.
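The recursive expansion described above can be sketched as follows. This is an illustration of the concept only: the `macros` registry and `expand_plan` helper are assumed stand-ins, not the framework's actual internals.

```python
def expand_plan(plan, macros, depth=0, max_depth=4):
    """Recursively expand Macro Syntax references in-place.

    `macros` maps macro names to functions that return nested Execution
    Plans; this is a simplified stand-in for the framework's registry.
    """
    if depth > max_depth:
        raise ValueError("Macro syntax expansion cannot exceed 4 levels of nesting")
    expanded = []
    for step_name, step_args in plan:
        if step_name in macros:
            # Generate the nested plan, then splice it in-place,
            # expanding any macros it contains in turn
            nested = macros[step_name](step_args)
            expanded.extend(expand_plan(nested, macros, depth + 1, max_depth))
        else:
            expanded.append((step_name, step_args))
    return expanded
```

Running this against the `get_message_wrapper` example later in this document would splice the macro's two steps in front of the `jmespath` step.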

A Macro Syntax step can also leverage substitution parameters that Top-Level Syntaxes cannot. These substitution parameters can be used to dynamically construct Execution Plans based on values specifically related to the current Collection.

The following is an example of how a nested Macro Syntax inside the Top-Level low_code Syntax is processed into an Execution Plan:

Snippet Argument:

low_code:
  version: 2
  steps:
    - get_message_wrapper
    - jmespath:
        value: message

Top-Level Execution Plan:

[
    ('get_message_wrapper', None),
    ('jmespath', {'value': 'message'})
]

Macro Syntax Definition:

@register_syntax(macro=True, name="get_message_wrapper")
def get_message_wrapper():
    return [
        ('static_value', '{"message": "hi"}'),
        ('json', None)
    ]

Final Execution Plan:

[
    ('static_value', '{"message": "hi"}'),
    ('json', None),
    ('jmespath', {'value': 'message'})
]

Execution Plan

An Execution Plan defines the sequential order and arguments of steps to be executed for a Collection. It serves as the bridge between human-readable Snippet Arguments and the Snippet Framework’s internal processing pipeline.

Structure and Format:

The Execution Plan is represented as a list of tuples, where each tuple contains exactly 2 elements:

  1. Step Name (str): The registered name of the step to execute

  2. Step Arguments (Any): The arguments to pass to the step (can be None, dict, str, int, etc.)

execution_plan = [
    ("static_value", '{"message": "Hello World"}'),
    ("json", None),
    ("simple_key", "message"),
    ("cache_write", {"key": "greeting_cache"}),
]

The framework processes this plan sequentially, where each step receives the output from the previous step as its result parameter, along with any specified step arguments.

Note

Step arguments cannot be objects, callables, or other non-serializable objects. They should all be simple data types (e.g., None, int, str, dict, list, etc.)
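The sequential hand-off can be pictured with a minimal runner and a few toy step implementations. These stand-ins are illustrative only; they are not the framework's real step functions or its execution engine.

```python
import json

def run_plan(execution_plan, steps, result=None):
    """Minimal sketch: each step receives the previous step's output as
    its result, plus its own step arguments."""
    for step_name, step_args in execution_plan:
        result = steps[step_name](result, step_args)
    return result

# Toy stand-ins for registered steps
toy_steps = {
    "static_value": lambda result, args: args,
    "json": lambda result, args: json.loads(result),
    "simple_key": lambda result, args: result[args],
}

plan = [
    ("static_value", '{"message": "Hello World"}'),
    ("json", None),
    ("simple_key", "message"),
]
# run_plan(plan, toy_steps) yields "Hello World"
```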

Generation and Processing:

Execution Plans are generated by Top-Level Syntax steps when they parse Snippet Arguments. The framework then processes these plans through several phases:

  1. Syntax Expansion: Any Macro Syntax references within the plan are recursively expanded in-place

  2. Validation: All step names are verified as registered and step arguments are validated

  3. Caching Analysis: Steps that support caching are identified for potential fast-forwarding

  4. Request ID Generation: Unique identifiers are created for each step execution
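As a conceptual illustration of the Request ID Generation phase (the framework's actual algorithm is internal and may differ), an identifier could be derived from the step name and its arguments so that identical operations share the same request_id:

```python
import hashlib
import json

def make_request_id(step_name, step_args):
    # Hypothetical sketch: serialize deterministically, then hash, so the
    # same (step, arguments) pair always yields the same identifier and
    # can be de-duplicated
    payload = json.dumps([step_name, step_args], sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```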

Nested Syntax Expansion:

When an Execution Plan contains references to Macro Syntaxes, the framework automatically expands them during processing:

# Initial execution plan from Top-Level Syntax
initial_plan = [
    ("static_value", "test_data"),
    ("custom_macro", "banana"),  # This is a Macro Syntax
    ("simple_key", "result"),
]

# After macro expansion (custom_macro generates its own execution plan)
expanded_plan = [
    ("static_value", "test_data"),
    ("static_value", '{"test": {"macro": "banana"}}'),  # From custom_macro
    ("json", None),                                     # From custom_macro
    ("simple_key", "result"),
]

Validation Requirements:

The framework enforces strict validation on Execution Plans to ensure reliable execution:

Structure Validation:
  • Must be a list containing tuples of exactly 2 elements

  • Step names must be strings

  • Step arguments must contain only simple data types (str, int, float, bool, None) and collections (list, dict)

Step Validation:
  • All referenced steps must be registered with the framework

  • Required step arguments must be present when specified by the step registration

Depth Validation:
  • Nested step arguments cannot exceed 15 levels of depth

  • Macro syntax expansion cannot exceed 4 levels of nesting
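The structure and depth rules above could be approximated with a validator like the following. This is a sketch of the assumed semantics, not the framework's actual implementation.

```python
SIMPLE_TYPES = (str, int, float, bool, type(None))

def validate_args(value, depth=0, max_depth=15):
    """Check that step arguments contain only simple data types and
    collections, and do not exceed 15 levels of nesting."""
    if depth > max_depth:
        raise ValueError("Nested step arguments cannot exceed 15 levels of depth")
    if isinstance(value, dict):
        for key, item in value.items():
            validate_args(key, depth + 1, max_depth)
            validate_args(item, depth + 1, max_depth)
    elif isinstance(value, (list, tuple)):
        for item in value:
            validate_args(item, depth + 1, max_depth)
    elif not isinstance(value, SIMPLE_TYPES):
        raise TypeError(f"Unsupported step argument type: {type(value).__name__}")

def validate_plan(plan, registered_steps):
    """Check the Execution Plan structure and that all steps are registered."""
    if not isinstance(plan, list):
        raise TypeError("Execution Plan must be a list")
    for entry in plan:
        if not (isinstance(entry, tuple) and len(entry) == 2):
            raise TypeError("Each entry must be a tuple of exactly 2 elements")
        step_name, step_args = entry
        if not isinstance(step_name, str):
            raise TypeError("Step names must be strings")
        if step_name not in registered_steps:
            raise ValueError(f"Step is not registered: {step_name}")
        validate_args(step_args)
```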

Example Usage in Syntax Development:

When developing a Top-Level Syntax, your step must return a dictionary containing an execution key:

@register_syntax
def custom_format_parser(snippet_arg_content):
    """Parse custom format into execution plan"""
    # Parse the snippet argument content
    config = parse_custom_format(snippet_arg_content)

    # Build execution plan as a list of tuples
    execution_plan = [
        ("static_value", config.get("data")),
        ("json", None),
        ("simple_key", config.get("key"))
    ]

    return {
        "execution": execution_plan,
        "name": config.get("collection_name")  # Optional - Defaults to None
    }

For Macro Syntaxes, the step must return the Execution Plan list directly:

@register_syntax(macro=True)
def reusable_data_processor(step_args):
    """Macro syntax for common data processing pattern"""
    return [
        ("static_value", '{"data": "' + step_args + '"}'),
        ("json", None),
        ("simple_key", "data")
    ]

The Execution Plan serves as the foundation for all Snippet Framework operations, providing a standardized way to represent complex data processing workflows in a simple, sequential format.

Examples

Defining a Top-Level Syntax

The following example shows a simple Syntax step that converts a custom string format into an Execution Plan:

Snippet Argument:

simple_syntax:static_value={"message": "Hello, World!"}|||json

Top-Level Syntax Definition:

@register_syntax(name="simple_syntax")
def simple_syntax(snippet_arg_content):
    steps = snippet_arg_content.split("|||")
    execution_plan = []
    for step in steps:
        step_data = step.strip().split("=", 1)
        step_name = step_data[0]
        step_args = step_data[1] if len(step_data) == 2 else None
        execution_plan.append((step_name, step_args))

    return {
        "execution": execution_plan
    }

Execution Plan:

[
    ('static_value', '{"message": "Hello, World!"}'),
    ('json', None),
]

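The parser above can be exercised outside the framework by stubbing out the register_syntax decorator (the no-op stand-in below is hypothetical, for local testing only) to confirm the plan it produces:

```python
def register_syntax(func=None, **kwargs):
    # No-op stand-in for the framework decorator, for local testing only
    if func is None:
        return lambda f: f
    return func

@register_syntax(name="simple_syntax")
def simple_syntax(snippet_arg_content):
    steps = snippet_arg_content.split("|||")
    execution_plan = []
    for step in steps:
        step_data = step.strip().split("=", 1)
        step_name = step_data[0]
        step_args = step_data[1] if len(step_data) == 2 else None
        execution_plan.append((step_name, step_args))

    return {"execution": execution_plan}

plan = simple_syntax('static_value={"message": "Hello, World!"}|||json')["execution"]
# plan == [('static_value', '{"message": "Hello, World!"}'), ('json', None)]
```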
Defining a Macro Syntax

The following example shows how a Macro Syntax step can be used to consolidate a repeated set of steps into a single one:

Snippet Argument:

low_code:
  version: 2
  steps:
    - paginated_requestor:
        url: "https://appengine.googleapis.com/v1/apps/${project_id}/"
    - jmespath:
        value: values(@)[*].authDomain

Macro Syntax Definition:

@register_syntax(macro=True, name="paginated_requestor")
def paginated_requestor(step_args):
    return [
        ("http", step_args),
        ("json", None),
        ("token_paginator", None)
    ]

Execution Plan:

[
    ("http", {'url': 'https://appengine.googleapis.com/v1/apps/42/'}),
    ("json", None),
    ("token_paginator", None),
    ('jmespath', {'value': 'values(@)[*].authDomain'}),
]

Nesting Macro Syntaxes

The following example shows how Macro Syntaxes can be nested inside one another to reuse common steps:

Snippet Argument:

low_code:
  version: 2
  steps:
    - get_device_data
    - jmespath:
        value: devices[*].count

Macro Syntax Definitions:

@register_syntax(macro=True, name="get_http_result", required_args=["endpoint"])
def get_http_result(step_args, credential):
    url = f"{credential.fields['cred_url']}/{step_args['endpoint']}"
    return [
        ("http", {"url": url}),
        ("verify_result", None)
    ]


@register_syntax(macro=True, name="get_device_data")
def get_device_data():
    return [
        ("get_http_result", {"endpoint": "api/devices"}),
        ("json", None),
    ]

Execution Plan:

[
    ("http", {'url': 'localhost/api/devices'}),
    ("verify_result", None),
    ("json", None),
    ('jmespath', {'value': 'devices[*].count'}),
]

Dynamic Execution Plan Construction

The following example shows how a Macro Syntax step can leverage both provided and custom substitution parameters to create an Execution Plan that, when executed in the pipeline, results in a dynamic message:

Snippet Argument:

low_code:
  version: 2
  steps:
    - dynamic_message

Macro Syntax Definition:

@register_syntax(macro=True, name="dynamic_message")
def dynamic_message(is_happy, silo_did):
    default_val = "happy" if is_happy else "sad"
    return [
        ("static_value", f"This is a {default_val} day for device {silo_did}.")
    ]

custom_substitution = {"is_happy": True}

Execution Plan:

[
    ("static_value", "This is a happy day for device 1.")
]

Using the new Step

After the step has been created and tested, it must be registered with the Snippet Framework. If the step is written within the Snippet, it is registered automatically. However, if the step is included in a ScienceLogic Library, one of the following additional actions is required for the step to be added to the Snippet Framework:

  • Create a wheel that includes the correct entry point (preferred method)

  • Update the default snippet to include the import

Creating a wheel

A wheel is a standard Python package format used for distribution that provides the required metadata for installation. When creating a wheel, adding the entry_point sf_step enables the Snippet Framework to automatically register your step.

In this example, we will assume that there is a package, my_custom_package, that has defined __all__ within my_custom_package.__init__.py. Since all steps are imported when loading the package, the entry_point will use the top-level package. Below is an example of a snippet from setup.cfg that enables the auto-import.

[options.entry_points]
sf_step =
    my_custom_package = my_custom_package
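If the package is built from a pyproject.toml instead of a setup.cfg, the same entry point can be declared with the standard [project.entry-points] table (assuming a setuptools-based build):

```toml
[project.entry-points.sf_step]
my_custom_package = "my_custom_package"
```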

Advanced Features

Utilizing Metadata Across Steps

Metadata enables a step to store relevant information that can be referenced at a later time. This is useful when the information is not needed in the immediate result but a later step will consume it.

This example uses the Zillow API, which provides public housing data. It merges housing prices per county from one API call with rental prices per county from a different API call and then calculates ROI.

The first step takes the results from the API, which contain a large number of columns relating to the historical sales price of homes, keeps only the most recent data point (2023-08-31, defined in the snippet argument along with the other pieces of information), and stores it in the metadata. Then the Rental API is called to get all the rental information. Finally, the merge_data step loops through the sales data looking for a match in the rental data. When a match is found, a new record is created that contains data from both API calls along with the calculated ROI.

low_code:
  version: 2
  steps:
    - http:
        url: https://files.zillowstatic.com/research/public_csvs/zhvi/County_zhvi_uc_sfr_tier_0.33_0.67_sm_sa_month.csv?t=1695988357
    - jc: csv
    - reduce_data:
        filter_date: 2023-08-31
    - store_data: region_data
    - http:
        url: https://files.zillowstatic.com/research/public_csvs/zori/County_zori_uc_sfrcondomfr_sm_month.csv?t=1706113741
    - jc: csv
    - merge_data:
        data_key: region_data
        filter_date: 2023-08-31
    - jmespath:
        value: "[].{ROI: ROI, Location: join(', ', [County, State])}"
import datetime

@register_processor(required_args=["filter_date"])
def reduce_data(result, step_args):
    results = {}
    filter_date = step_args["filter_date"]
    if isinstance(filter_date, datetime.date):
        filter_date = filter_date.strftime("%Y-%m-%d")
    for region in result:
        region_data = {}
        price = region[filter_date].rsplit(".")[0]
        region_id = region["RegionID"]
        region_data["County"] = region["RegionName"]
        region_data["Price"] = int(price) if price else 0
        region_data["State"] = region["State"]
        results[region_id] = region_data
    return results


@register_processor(required_args=["data_key", "filter_date"])
def merge_data(result, step_args, metadata):
    results = []
    data_key = step_args["data_key"]
    filter_date = step_args["filter_date"]
    if isinstance(filter_date, datetime.date):
        filter_date = filter_date.strftime("%Y-%m-%d")
    # Retrieve data previously stored using store_data
    sales_data = metadata[data_key]
    for region in result:
        region_id = region["RegionID"]
        sale_data = sales_data[region_id]
        sale_price = sale_data["Price"]
        rent_price = region[filter_date].rsplit(".")[0]
        rent_price = int(rent_price) if rent_price else 0
        if sale_price and rent_price:
            roi_data = sale_data.copy()
            roi_data["RegionID"] = region_id
            roi_data["Rent"] = rent_price
            # Calculate ROI Percentage
            roi_data["ROI"] = round(12 * 100 * rent_price / sale_price, 2)
            results.append(roi_data)
    # Sort from the highest ROI to the lowest
    return sorted(results, key=lambda x: x.get("ROI"), reverse=True)

Mutable Step

The Snippet Framework reduces the amount of code executed during the de-duplication process by executing unique calls only once. When a divergence is detected, the Snippet Framework determines how to push the current data into the other divergent branches. To ensure that one branch does not interfere with another, a memory-intensive operation is performed to clone the objects for each branch. However, this cloning is not required when a step does not modify a protected attribute.

Protected Attributes

Name        Type         Description
----        ----         -----------
result      object       Current result within the pipeline
metadata    dict         Metadata associated with the collection
error_data  object       Information for errors that occurred
rmd_data    OrderedDict  Data related to RequestMoreData iterations

Marking a custom step with mutable=False skips the memory-intensive cloning, providing a memory and speed boost to the Snippet Framework. When a step is marked as not mutable, the Snippet Framework copies the reference of each protected attribute rather than cloning the attributes. To safely mark a step with mutable=False, the step must not modify a protected attribute.

mutable should be defined as a boolean during the registration process. If it is not specified, the step is assumed to be mutable.

An example of a step that should be marked as mutable:

@register_processor(
    metadata={
        "author": "ScienceLogic",
        "title": "Pop Metadata Key",
        "mutable": True
    },
)
def pop_metadata_key(step_args, metadata):
    # This step is mutable as `.pop` modifies the metadata reference
    return metadata.pop(step_args)

Since the pop method of a dict removes a given key, altering the metadata dict in memory, this step mutates the data and must remain marked as mutable (mutable=True).
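By contrast, a step that only reads a protected attribute can safely set mutable to False. The following sketch (using a hypothetical no-op stand-in for the register_processor decorator so it can run standalone) reads a metadata key without altering the dict:

```python
def register_processor(**kwargs):
    # No-op stand-in for the framework decorator, for illustration only
    return lambda func: func

@register_processor(
    metadata={
        "author": "ScienceLogic",
        "title": "Get Metadata Key",
        "mutable": False,
    },
)
def get_metadata_key(step_args, metadata):
    # Safe with mutable=False: .get reads the dict without modifying it,
    # so the shared metadata reference is never updated
    return metadata.get(step_args)
```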