Available Steps¶
Syntax¶
Low-Code¶
The low_code syntax is a YAML configuration format for specifying your collections. You explicitly provide the steps you want to run, in the order you need them to be executed. There are multiple versions of the low_code syntax.
Framework Name | low_code
All versions support specifying configuration. The configuration consists of all sub-elements under the step. For example, if you had a step cool_step and wanted to specify two key-value pairs, you would provide the following:
low_code:
  id: eastwood1
  version: 2
  steps:
    - cool_step:
        key1: value1
        key2: value2
Note
Notice that, in the example above, key1 and key2 are indented one level beyond cool_step. Setting them at the same indentation level as cool_step, as shown below, will result in an error.
low_code:
  id: eastwood1
  version: 2
  steps:
    - cool_step:
      key1: value1 # key1 is not properly indented
      key2: value2 # key2 is not properly indented
To provide a list in YAML, use the - symbol. For example, if you had a step cool_step and wanted to specify a list of two elements, you would provide the following:
low_code:
  id: eastwood1
  version: 2
  steps:
    - cool_step:
        - key1
        - key2
Version 2¶
Version 2 of the low_code Syntax provides more flexibility when defining the order of step execution. This version can utilize multiple versions of Requestors (if supported) and allows for steps to run before a Requestor executes.
Format¶
low_code:
  id: eastwood1
  version: 2
  steps:
    - static_value: '{"key": "value"}'
    - json
    - simple_key: "key"
id: Identification for the request.
Note
If this value is not specified, the Snippet Framework will automatically create one. This makes tracking easier when debugging a collection that does not require an ID.
version: Specify the version of the low_code Syntax.
steps: Order of the steps for the Snippet Framework to execute.
Version 1¶
Version 1 was the original low_code syntax. It allowed for a single Requestor and any number of processors. Because it lacks support for multiple Requestors, it is not preferred.
Format¶
low_code:
  id: my_request
  network:
    static_value: '{"key": "value"}'
  processing:
    - json
    - simple_key: "key"
id: Identification for the request.
Note
If this value is not specified, the Snippet Framework will automatically create one. This makes tracking easier when debugging a collection that does not require an ID.
version: Specify the version of the low_code Syntax. If not provided, it will default to 1.
network: Section for the data requestor step.
processing: Section for listing the steps required to transform your data to the desired output for SL1 to store.
PromQL¶
The PromQL syntax is a YAML configuration format for specifying the data to collect from Prometheus. You explicitly provide a PromQL query, the result type, an aggregation function if needed, and the labels to be used as indices.
Syntax name | promql
PromQL Format¶
promql:
  id: RequestID1
  query: promql_query
  result_type: type
id: Identification for the request.
Note
If this value is not specified, the Snippet Framework will automatically create one. Specifying the ID allows easier tracking when debugging.
query: PromQL query.
result_type: The type of the result.
Note
The result_type is an attribute of the result returned by a PromQL query, indicating the type of the result. The two possible options are vector and matrix. If this value is not specified, the toolkit will assume that the expected result type is vector. You should know in advance the result type your PromQL query will generate.
Additionally, you can specify which labels you want to use as indices by using the key labels.
promql:
  id: RequestID1
  query: prometheus_query
  result_type: type
  labels:
    - label1
    - label2
id: Identification for the request.
query: PromQL query.
result_type: Result type.
labels: The labels, in order, to use as indices.
Note
If the labels key is not provided, all the labels will be retrieved as you would get them in the Prometheus expression browser.
If the provided labels do not uniquely identify an index for a value, only the first retrieved value will be displayed and a log message will report the collision.
When you are using a PromQL query that will return a matrix result type, you will need to apply an aggregation function. A matrix result type represents a range of data points. To apply an aggregation function, you can use the following configuration.
promql:
  id: RequestID1
  query: prometheus_query
  result_type: matrix
  aggregation: function
id: Identification for the request.
query: PromQL query.
result_type: Use matrix as the result type.
aggregation: Aggregation function for a matrix result type.
The available options for aggregation functions are: mean, median, mode, min, max, and percentile.
If percentile is the specified aggregation function, you should also provide the percentile position by using the percentile key, an integer value between 1 and 99, as shown below.
promql:
  id: RequestID1
  query: prometheus_query
  result_type: matrix
  aggregation: percentile
  percentile: 95
id: Identification for the request.
query: PromQL query.
result_type: Use matrix as the result type.
aggregation: Use percentile as the aggregation function.
percentile: Percentile position.
Example of use¶
If we want to collect the Kafka Exporter metrics for the number of in-sync replicas for a topic partition, the PromQL query should be:
kafka_topic_partition_in_sync_replica
The response looks like this:
{
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="__consumer_offsets"}': '3',
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="apl.prod.agent.avail.trigger"}': '3',
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="apl.prod.app.agg.trigger"}': '3'
...
}
The Snippet Argument should look like this:
promql:
  id: InSyncReplicas
  query: kafka_topic_partition_in_sync_replica
  result_type: vector
If we want to index by topic, the Snippet Argument should look like this:
promql:
  id: InSyncReplicas
  query: kafka_topic_partition_in_sync_replica
  result_type: vector
  labels:
    - topic
The collected data is like this:
{
'{topic="__consumer_offsets"}': '3',
'{topic="apl.prod.agent.avail.trigger"}': '3',
'{topic="apl.prod.app.agg.trigger"}': '3'
...
}
The promql syntax takes the query, puts it in the http step as a param, and sends it to the Prometheus server as a REST API request. It then parses the response using the json step. Finally, it indexes the parsed response by the labels using the promql_selector step and applies an aggregation function if needed.
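For reference, the underlying request can be reproduced outside the framework against the Prometheus HTTP API; the following sketch assumes a placeholder host and port:
import requests

# Roughly the request the promql syntax issues under the hood; PROMETHEUS_HOST
# is a placeholder, and the endpoint handling inside the framework may differ.
response = requests.get(
    "http://PROMETHEUS_HOST:9090/api/v1/query",
    params={"query": "kafka_topic_partition_in_sync_replica"},
)
payload = response.json()  # this is what the json step hands to promql_selector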
Requestors¶
HTTP¶
The HTTP Data Requestor provides HTTP request functionality to gather information for the framework. The full URI is built from the credential and the arguments of the requestor. The credential contains the base URL and the port.
Step details:
Framework Name | http
Supported Credentials | Basic, SOAP/XML
Supported Fields of Basic Cred. |
Supported Fields of SOAP Cred. |
Parameters | uri, check_status_code, and any parameter accepted by requests.Session.request
Note
This step supports all the parameters accepted by requests.Session.request.
Note that the parameters mentioned above will override the credential. For example, if you define verify: False in the credential but verify: True in the step parameters, verify=True will be used in the request.
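As a rough illustration of that precedence, the final call resembles a requests.Session.request invocation in which the step parameters win; the base URL below is a placeholder for the value taken from the credential:
import requests

session = requests.Session()
base_url = "https://SL1_IP_ADDRESS"  # base URL and port come from the credential

# A step parameter such as verify: True overrides the credential's setting.
response = session.request("GET", base_url + "/api/account", verify=True)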
Example of use¶
To access the API of an SL1 System, you would use the URI:
https://SL1_IP_ADDRESS/api/account
The resource path for this example is:
/api/account
The SL1_IP_ADDRESS can be provided with the credential.
The output of this step:
{
"searchspec":{},
"total_matched":4,
"total_returned":4,
"result_set":
[
{
"URI":"/api/account/2",
"description":"AutoAdmin"
},
{
"URI":"/api/account/3",
"description":"AutoRegUser"
},
{
"URI":"/api/account/1",
"description":"em7admin"
},
{
"URI":"/api/account/4",
"description":"snadmin"
}
],
}
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - http:
        uri: "/api/account"
Response Status Code Checking¶
When the parameter check_status_code
is set
to True
(default), and the response’s status code
meets the following condition:
\(400 <= status code < 600\)
An exception will be raised, thus stopping the current collection.
When the parameter check_status_code
is set
to False
, no exception will be raised for any
status code value.
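A minimal sketch of that check, assuming the step simply raises on any 4xx or 5xx response (the actual exception type used by the framework is not shown here):
def check_status(status_code, check_status_code=True):
    # Raise for any 4xx or 5xx status code, stopping the current collection.
    if check_status_code and 400 <= status_code < 600:
        raise ValueError(f"Request failed with status code {status_code}")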
Pagination Support¶
A custom step is required that raises the exception silo.low_code_steps.rest.HTTPPageRequest to rewind execution back to the previous network requestor. This exception is specific to the http step and requires a dictionary as its first argument. The dictionary can either replace or update the existing action_arg dictionary passed to the http step.
If our Snippet Argument looked like this:
low_code:
  id: my_request
  version: 2
  steps:
    - http:
        uri: "/account"
    - pagination_trimmer
    - pagination_request:
        index: "request_key"
        replace: True
Our pagination_request step could look like this:
from silo.low_code_steps.rest import HTTPPageRequest

# register_processor and REQUEST_MORE_DATA_TYPE are provided by the Snippet Framework.
@register_processor(type=REQUEST_MORE_DATA_TYPE)
def pagination_request(result, action_arg):
    if result:
        # Replace action_arg so the http step re-runs with the next page's parameters.
        raise HTTPPageRequest({"uri": "/account", "params": result},
                              index=action_arg.get("index"),
                              replace=action_arg.get("replace", False))
This assumes that the result will contain the next pagination action argument. The step raises HTTPPageRequest and sets the new action_arg with the first positional parameter. With the kwarg replace set to True, the http step will receive a new action_arg.
Static Value¶
The Static Value Data Requestor is used to mock network responses from a device for testing purposes, or when a step needs a static value.
Step details:
Framework Name | static_value
Supported Credentials | N/A
Example of use¶
If we wanted to mock:
"Apple1,Ball1,Apple2,Ball2"
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - static_value: "Apple1,Ball1,Apple2,Ball2"
    - firstComponentExecution
    - secondComponentExecution
Processors¶
Aggregators¶
Aggregation Max¶
Aggregation max is one of the aggregation functions you can use for the matrix result type. It receives a dictionary mapping keys to lists of values and selects the maximum value of each list. The lists should consist of numbers only.
Example Usage¶
If the incoming data to the aggregation function is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the max aggregation function:
aggregation: max
The output of this step will be:
{
'{job="prometheus"}': 80,
'{job="node"}': 50,
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  aggregation: max
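Outside the framework, this aggregation reduces to a dictionary comprehension; the sketch below uses plain Python and also applies to the other list-based aggregation functions described in the following sections:
data = {
    '{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
    '{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}

# max aggregation; swap in statistics.mean, statistics.median, statistics.mode,
# or the built-in min for the other aggregation functions.
aggregated = {label: max(values) for label, values in data.items()}
# {'{job="prometheus"}': 80, '{job="node"}': 50}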
Aggregation Mean¶
Aggregation mean is one of the aggregation functions you can use for the matrix result type. It receives a dictionary mapping keys to lists of values and calculates the mean of each list. The lists should consist of numbers only.
Example Usage¶
If the incoming data to the step is:
{
'{job="prometheus", instance="localhost:9090"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
'{job="node", instance="localhost:9091"}': [2, 1, 5, 7, 3, 9, 4, 6, 8],
}
If we wanted to use the mean aggregation function:
aggregation: mean
The output of this step will be:
{
'{job="prometheus", instance="localhost:9090"}': 25,
'{job="node", instance="localhost:9091"}': 5,
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  aggregation: mean
Aggregation Median¶
Aggregation median is one of the aggregation functions you can use for the matrix result type. It receives a dictionary mapping keys to lists of values and calculates the median of each list. The lists should consist of numbers only.
Example Usage¶
If the incoming data to the step is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the median aggregation function:
aggregation: median
The output of this step will be:
{
'{job="prometheus"}': 39.5,
'{job="node"}': 19.0,
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  aggregation: median
Aggregation Min¶
Aggregation min is one of the aggregation functions you can use for the matrix result type. It receives a dictionary mapping keys to lists of values and selects the minimum value of each list. The lists should consist of numbers only.
Example Usage¶
If the incoming data to the step is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the min aggregation function:
aggregation: min
The output of this step will be:
{
'{job="prometheus"}': 7,
'{job="node"}': 10,
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  aggregation: min
Aggregation Mode¶
Aggregation mode is one of the aggregation functions you can use for the matrix result type. It receives a dictionary mapping keys to lists of values and calculates the mode of each list. The lists should consist of numbers only.
Example Usage¶
If the incoming data to the step is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the mode aggregation function:
aggregation: mode
The output of this step will be:
{
'{job="prometheus"}': 7,
'{job="node"}': 40,
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  aggregation: mode
Aggregation Percentile¶
Aggregation percentile is one of the aggregation functions you can use for the matrix result type. It receives a dictionary mapping keys to lists of values and calculates the nth percentile of each list. The lists should consist of numbers only.
Step details:
Framework Name |
Parameters | percentile
Example Usage¶
If the incoming data to the step is:
{
'up{instance="localhost:9090", job="prometheus"}': [2, 10, 5, 4],
'up{instance="localhost:9091", job="node"}': [12, 40, 7, 39, 71],
}
If we wanted to use the percentile aggregation function:
aggregation: percentile
percentile: 50
The output of this step will be:
{
'up{instance="localhost:9090", job="prometheus"}': 4.5,
'up{instance="localhost:9091", job="node"}': 39.0,
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  aggregation: percentile
  percentile: 50
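The outputs above match a linear-interpolation percentile, which can be verified with numpy (a sketch; whether the framework uses numpy internally is an assumption left open):
import numpy as np

data = {
    'up{instance="localhost:9090", job="prometheus"}': [2, 10, 5, 4],
    'up{instance="localhost:9091", job="node"}': [12, 40, 7, 39, 71],
}

# Linear-interpolation percentile, reproducing the 4.5 and 39.0 values above.
aggregated = {label: np.percentile(values, 50) for label, values in data.items()}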
Data Parsers¶
CSV¶
The CSV Data Parser converts a string into the format requested by the arguments.
Step details:
Framework Name | csv
Parameters | All the arguments inside:
Reference |
Example Usage¶
If the incoming data to the step is:
"A1,B1,C1
A2,B2,C2 A3,B3,C3 “
If we wanted to provide these input parameters:
"type": "list"
The output of this step will be:
[
["A1", "B1", "C1"],
["A2", "B2", "C2"],
["A3", "B3", "C3"],
]
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - csv:
        type: list
If the incoming data to the step is:
"First,Last,Email
Jonny,Lask,jl@silo.com Bobby,Sart,bs@silo.com Karen,Sift,ks@silo.com “
If we wanted to provide these input parameters:
"type": "dict"
The output of this step will be:
[
{"First": "Jonny", "Last": "Lask", "Email": "jl@silo.com"},
{"First": "Bobby", "Last": "Sart", "Email": "bs@silo.com"},
{"First": "Karen", "Last": "Sift", "Email": "ks@silo.com"},
]
The Snippet Argument should look like this:
low_code:
id: my_request
version: 2
steps:
- <network_request>
- csv:
type: dict
If the incoming data to the step is:
"booker12,9012,Rachel,Booker"
If we wanted to provide these input parameters:
"fieldnames": ["Username", "Identifier", "First name", "Last name"]
"type": "dict"
The output of this step will be:
[
{'Username': 'booker12', 'Identifier': '9012', 'First name': 'Rachel', 'Last name': 'Booker' }
]
The Snippet Argument should look like this:
low_code:
id: my_request
version: 2
steps:
- <network_request>
- csv:
fieldnames:
- Username
- Identifier
- First name
- Last name
type: dict
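For reference, the last result can be reproduced with Python's standard csv module; how the framework invokes the reader internally is an assumption:
import csv
import io

raw = "booker12,9012,Rachel,Booker"
reader = csv.DictReader(
    io.StringIO(raw),
    fieldnames=["Username", "Identifier", "First name", "Last name"],
)
rows = list(reader)
# [{'Username': 'booker12', 'Identifier': '9012',
#   'First name': 'Rachel', 'Last name': 'Booker'}]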
jc¶
JC is a third-party library that enables easy conversion of text output to python primitives. While this is primarily used for *nix parsing, it includes other notable parsers such as xml, ini, and yaml to name a few. The entire list of supported formats and their respective outputs can be found on their github.
Note
The Snippet Framework does not support streaming parsers (parsers that end in _s). These will not appear in the list of available parsers.
Step details:
Framework Name | jc
Parameters | parser_name
Reference |
Supplying additional key/value pairs within the action argument will pass this information through to the parser. For example, if you needed to run the parser example_parser and it took the option sample_argument, you would use the following:
jc:
  parser_name: example_parser
  sample_argument: value_pass_through
If no additional parameters need to be supplied, you can specify the parser_name as the action argument to reduce typing.
jc: example_parser
There are currently 127 parsers available in the installed version of jc (v1.23.2). The available parsers are as follows:
acpi, airport, arp, asciitable, asciitable_m, blkid, bluetoothctl, cbt, cef, certbot, chage, cksum, clf, crontab, crontab_u, csv, date, datetime_iso, df, dig, dir, dmidecode, dpkg_l, du, email_address, env, file, findmnt, finger, free, fstab, git_log, git_ls_remote, gpg, group, gshadow, hash, hashsum, hciconfig, history, hosts, id, ifconfig, ini, ini_dup, iostat, ip_address, iptables, iw_scan, iwconfig, jar_manifest, jobs, jwt, kv, last, ls, lsblk, lsmod, lsof, lspci, lsusb, m3u, mdadm, mount, mpstat, netstat, nmcli, ntpq, openvpn, os_prober, passwd, pci_ids, pgpass, pidstat, ping, pip_list, pip_show, plist, postconf, proc, ps, route, rpm_qi, rsync, semver, sfdisk, shadow, ss, ssh_conf, sshd_conf, stat, sysctl, syslog, syslog_bsd, systemctl, systemctl_lj, systemctl_ls, systemctl_luf, systeminfo, time, timedatectl, timestamp, toml, top, tracepath, traceroute, udevadm, ufw, ufw_appinfo, uname, update_alt_gs, update_alt_q, upower, uptime, url, ver, vmstat, w, wc, who, x509_cert, xml, xrandr, yaml, zipinfo, zpool_iostat, zpool_status
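Outside the framework, the same conversion can be reproduced with the jc library directly; the sample uptime line below is purely illustrative:
import jc

# Convert raw `uptime` output into Python primitives.
output = " 16:52:14 up 3 days,  4:49,  1 user,  load average: 1.00, 0.52, 0.29"
data = jc.parse("uptime", output)
print(data["uptime"])  # '3 days, 4:49'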
JSON¶
The JSON Data Parser converts a JSON string into a python dictionary.
Framework Name |
|
Example Usage¶
If the incoming data to the step is:
'{
  "project": "low_code",
  "tickets": {"t123": "selector work", "t321": "parser work"},
  "name": "Josh",
  "teams": ["rebel", "sprinteastwood"]
}'
The output of this step will be:
{
"name": "Josh",
"project": "low_code",
"teams": ["rebel", "sprinteastwood"],
"tickets": {"t123": "selector work", "t321": "parser work",},
}
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - json
Regex¶
The Regex step uses regular expressions to extract the required information and returns a dictionary.
Step details:
Framework Name | regex_parser
Parameters | regex, method, flags
Reference |
Example Usage¶
If the incoming data to the step is:
"some text where the regex will be applied"
If we wanted to provide these input parameters:
flags: ["I", "M"]
method: search
regex: "(.*)"
The output of this step will be:
{
"match": "some text where the regex will be applied",
"groups": ("some text where the regex will be applied",),
"span": (0, 41)
}
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - regex_parser:
        flags:
          - I
          - M
        method: search
        regex: "(.*)"
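The output mirrors what Python's re module produces for a search; the exact dictionary construction inside the step is an assumption in this sketch:
import re

text = "some text where the regex will be applied"
match = re.search("(.*)", text, re.I | re.M)
result = {
    "match": match.group(0),   # the full match
    "groups": match.groups(),  # captured groups as a tuple
    "span": match.span(),      # (start, end) offsets -> (0, 41)
}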
String Float¶
The String Float parser converts a string to a float. If the string contains multiple floats, it returns a list of them.
Framework Name | stringfloat
Example Usage¶
If the incoming data to the step is:
"1.1 , 2.2"
The output of this step will be:
[1.1, 2.2]
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - stringfloat
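A minimal sketch of the parsing logic, assuming a simple regular-expression scan for float literals (the framework's actual tokenization may differ):
import re

def string_float(text):
    # Collect every float-looking token; return a list only when there are several.
    floats = [float(token) for token in re.findall(r"[-+]?\d*\.\d+|\d+", text)]
    return floats if len(floats) > 1 else floats[0]

string_float("1.1 , 2.2")  # [1.1, 2.2]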
Paginators¶
paginator_offset_limit¶
The Paginator Offset Limit processor performs pagination over offset/limit-based APIs that handle GET requests. It raises the exception silo.low_code_steps.rest.HTTPPageRequest to rewind the execution to the previous HTTP network requestor.
Step details:
Framework Name | paginator_offset_limit
Parameters | limit, path
Example of use¶
To use this step, you also need to use the HTTP requestor and provide the corresponding parameters as shown below:
low_code:
  id: eastwood1
  version: 2
  steps:
    - http:
        uri: /api/device
        params:
          limit: 10
          offset: 0
    - json
    - paginator_offset_limit:
        limit: 5
        path: result_set
    - jmespath:
        index: true
        value: "values(@)|[].result_set[].{_index: URI, _value: description}"
The parameters for the Paginator Offset Limit are optional. If the limit is not provided, the limit defined in the HTTP requestor will remain unmodified. The value for this parameter must be greater than 0.
If the results are not directly in the root node, you need to provide a path to the results, the same way you would for the Simple Key selector.
This processor enables the rewind capability to return to the HTTP requestor. When there are no more entries to retrieve, the loop finishes. You will then need a step to combine the results; in the example above, the results were combined using the JMESPath step.
Let's say that the payload returned by the HTTP requestor looks like this:
{
'searchspec':{},
'total_matched':7,
'total_returned':3,
'result_set':[
{
'URI': '/api/device/4',
'description': 'AutoRegUser'
},
{
'URI': '/api/device/5',
'description': 'AutoAdmin'
},
{
'URI': '/api/device/8',
'description': 'toolkit device'
},
],
}
If we provide the following input parameters:
limit: 3
path: result_set
The step will raise the exception, and the rewind process will return to the HTTP requestor to get the next three entries.
The Snippet Argument should look like this:
- paginator_offset_limit:
    limit: 3
    path: result_set
When there are no more entries to retrieve, the step will return a dictionary with all the results, as you can see below:
{'offset_3': {'result_set': [{'URI': '/api/device/1',
'description': 'REST toolkit device'},
{'URI': '/api/device/2',
'description': 'snadmin'},
{'URI': '/api/device/3',
'description': 'em7admin'}],
'searchspec': {},
'total_matched': 7,
'total_returned': 3},
'offset_6': {'result_set': [{'URI': '/api/device/4',
'description': 'AutoRegUser'},
{'URI': '/api/device/5',
'description': 'AutoAdmin'},
{'URI': '/api/device/8',
'description': 'toolkit device'}],
'searchspec': {},
'total_matched': 7,
'total_returned': 3},
'offset_9': {'result_set': [{'URI': '/api/device/9',
'description': 'device'}],
'searchspec': {},
'total_matched': 7,
'total_returned': 1}}
Note
The Paginator Offset Limit has a default maximum of 100 iterations. More information about the maximum number of iterations can be found at Rewind / Collect more data.
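Schematically, each pass of the paginator advances the offset and rewinds to the HTTP requestor until the API is exhausted; the following sketch illustrates that logic with an assumed function signature, not the step's real implementation:
from silo.low_code_steps.rest import HTTPPageRequest  # import path from the docs above

def next_page(result, offset, limit):
    # Rewind to the HTTP requestor with an advanced offset while pages remain.
    if offset + limit < result["total_matched"]:
        raise HTTPPageRequest({"params": {"offset": offset + limit, "limit": limit}})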
Selectors¶
JMESPath¶
JMESPath is a query language for JSON data. The JMESPath step can be used on data after it has been JSON parsed. It uses a path expression as a parameter to specify the location of the desired element (or set of elements). Paths use dot notation.
The JMESPath step accepts one path expression. Additionally, the step can build custom indexable output; this requires the path expression to be a multiselect hash with defined _index and _value keys. See below for more information.
Step details:
Framework Name | jmespath
Parameters | index, value
Reference |
Example Usage¶
If the incoming data to the step is:
{
"data":
[
{
"id": 1,
"name": "item1",
"children": {
"cname": "myname1"
}
},
{
"id": 2,
"name": "item2",
"children": {
"cname": "myname2"
}
}
]
}
If we provide the following input parameters:
index: true
value: "data[].{_index: id, _value: children.cname}"
The output of this step will be:
[(1, 'myname1'), (2, 'myname2')]
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - json
    - jmespath:
        index: true
        value: "data[].{_index: id, _value: children.cname}"
If we provide the following input parameters:
index: false
value: "data[].children.cname"
The output of this step will be:
["myname1","myname2"]
The Snippet Argument should look like this:
low_code:
  id: my_request2
  version: 2
  steps:
    - <network_request>
    - json
    - jmespath:
        value: "data[].children.cname"
JSONPath¶
JSONPath is a query language for JSON data. The JSONPath selector can be used on any data once it has been parsed. It uses path expressions as parameters to specify a path to the element (or set of elements). Paths use dot notation.
The JSONPath selector can accept one or two paths. If one is given, it is the path to the data. If two are given, they provide the path to the index and the path to the data.
Step details:
Framework Name | jsonpath
Parameters | value, index
Reference |
Example Usage¶
If the incoming data to the step is:
{
"data":
[
{
"id": 1,
"name": "item1",
"children": {
"cname": "myname1"
}
},
{
"id": 2,
"name": "item2",
"children": {
"cname": "myname2"
}
}
]
}
If we wanted to provide these input parameters:
value: "$.data[*].children.cname"
index: "$.data[*].id"
The output of this step will be:
[(1, 'myname1'), (2, 'myname2')]
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - jsonpath:
        value: "$.data[*].children.cname"
        index: "$.data[*].id"
PromQL Matrix¶
The PromQL Matrix selector processes responses in the matrix format as returned by the expression browser of a Prometheus server. This step returns dictionaries where the keys are built from the labels and the values are the result of applying an aggregation operation. It also allows you to show only the labels of interest by providing a list of labels as part of the arguments.
Step details:
Framework Name |
Parameters | labels
Note
When querying metrics in Prometheus, you may get some special values such as NaN, +Inf, and -Inf. SL1 does not support these values. To ensure that your monitoring data is accurate and reliable, these values are automatically filtered out.
Example Usage for Matrix Result Type¶
If the incoming data to the step is:
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {"__name__": "up", "job": "prometheus", "instance": "localhost:9090"},
"values": [[1435781430.781, "1"], [1435781445.781, "1"], [1435781460.781, "1"]],
},
{
"metric": {"__name__": "up", "job": "node", "instance": "localhost:9091"},
"values": [[1435781430.781, "0"], [1435781445.781, "0"], [1435781460.781, "1"]],
},
],
},
}
The output of this step will be:
{
'up{instance="localhost:9090", job="prometheus"}': [1, 1, 1],
'up{instance="localhost:9091", job="node"}': [0, 0, 1],
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
If the incoming data to the step is:
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "prometheus_http_request_duration_seconds_count",
"container": "prometheus",
"endpoint": "http-web",
"handler": "/",
"instance": "10.42.0.145:9090",
"job": "prom-op-kube-prometheus-st-prometheus",
"namespace": "kube-system",
"pod": "prometheus-prom-op-kube-prometheus-st-prometheus-0",
"service": "prom-op-kube-prometheus-st-prometheus"
},
"values": [
[
1681818434.852,
"10"
],
[
1681818464.852,
"11"
],
[
1681818494.852,
"12"
],
[
1681818524.852,
"12"
]
]
},
{
"metric": {
"__name__": "prometheus_http_request_duration_seconds_count",
"container": "prometheus",
"endpoint": "http-web",
"handler": "/static/*filepath",
"instance": "10.42.0.145:9090",
"job": "prom-op-kube-prometheus-st-prometheus",
"namespace": "kube-system",
"pod": "prometheus-prom-op-kube-prometheus-st-prometheus-0",
"service": "prom-op-kube-prometheus-st-prometheus"
},
"values": [
[
1681818434.852,
"80"
],
[
1681818464.852,
"80"
],
[
1681818494.852,
"80"
],
[
1681818524.852,
"88"
]
]
}
]
}
}
If we wanted to provide these input parameters:
labels: ["handler"]
The output of this step will be:
{
'{handler="/"}':[10, 11, 12, 12],
'{handler="/static/*filepath"}': [80, 80, 80, 88],
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: matrix
  labels:
    - handler
PromQL Vector¶
The PromQL Vector selector processes responses in the same format as the expression browser of a Prometheus server. This step returns dictionaries where the keys are built from the labels. It also allows you to show only the labels of interest by providing a list of labels as part of the arguments.
Step details:
Framework Name |
Parameters | labels
Note
When querying metrics in Prometheus, you may get some special values such as NaN, +Inf, and -Inf. SL1 does not support these values. To ensure that your monitoring data is accurate and reliable, these values are automatically filtered out.
Example Usage for Vector Result Type¶
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {"consumergroup": "AIML_anomaly_detection.alert"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.autoselector"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.storage"},
"value": [1658874518.797, "3"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.train"},
"value": [1658874518.797, "1"],
},
{
"metric": {"consumergroup": "sl_event_storage"},
"value": [1658874518.797, "0"]
},
],
"resultType": "vector",
},
"status": "success",
}
If we wanted to provide these input parameters:
labels: ["consumergroup"]
The output of this step will be:
{
'{consumergroup="AIML_anomaly_detection.alert"}':"0",
'{consumergroup="AIML_anomaly_detection.autoselector"}':"0",
'{consumergroup="AIML_anomaly_detection.storage"}':"3",
'{consumergroup="AIML_anomaly_detection.train"}':"1",
'{consumergroup="sl_event_storage"}': "0",
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: vector
  labels:
    - consumergroup
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {"consumergroup": "AIML_anomaly_detection.alert"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.autoselector"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.storage"},
"value": [1658874518.797, "3"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.train"},
"value": [1658874518.797, "1"],
},
{
"metric": {"consumergroup": "sl_event_storage"},
"value": [1658874518.797, "0"],
},
],
"resultType": "vector",
},
"status": "success",
}
The output of this step will be:
{
'{consumergroup="AIML_anomaly_detection.alert"}':"0",
'{consumergroup="AIML_anomaly_detection.autoselector"}':"0",
'{consumergroup="AIML_anomaly_detection.storage"}':"3",
'{consumergroup="AIML_anomaly_detection.train"}':"1",
'{consumergroup="sl_event_storage"}': "0",
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: vector
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "__consumer_offsets",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "apl.prod.app.agg.trigger",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "9",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "swap.data",
},
"value": [1658944573.158, "3"],
},
],
"resultType": "vector",
},
"status": "success",
}
If we wanted to provide these input parameters:
labels: ["service_name", "topic"]
The output of this step will be:
{
'{service_name="kafka-service-metrics", topic="__consumer_offsets"}': "3",
'{service_name="kafka-service-metrics", topic="apl.prod.app.agg.trigger"}': "3",
'{service_name="kafka-service-metrics", topic="swap.data"}': "3",
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: vector
  labels:
    - service_name
    - topic
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "__consumer_offsets",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "apl.prod.app.agg.trigger",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "9",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "swap.data",
},
"value": [1658944573.158, "3"],
},
],
"resultType": "vector",
},
"status": "success",
}
The output of this step will be:
{
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="__consumer_offsets"}': "3",
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="apl.prod.app.agg.trigger"}': "3",
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="9", port_name="http-metrics", service_name="kafka-service-metrics", topic="swap.data"}': "3",
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: vector
If the incoming data to the step is:
{
    'data': {'result': [{'metric': {},
                         'value': [1659022840.388, '5.100745223340136']}],
             'resultType': 'vector'},
    'status': 'success',
}
The output of this step will be:
{"{}":"5.100745223340136"}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: vector
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "__consumer_offsets",
},
"value": [1658944573.158, "1"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "apl.prod.app.agg.trigger",
},
"value": [1658944573.158, "3"],
},
],
"resultType": "vector",
},
"status": "success",
}
If we wanted to provide these input parameters:
labels: ["port_name"]
The output of this step will be:
{
'{port_name="http-metrics"}': "1",
}
The Snippet Argument should look like this:
promql:
  id: my_request
  query: <query>
  result_type: vector
  labels:
    - port_name
Note
In the example above, the label port_name has the same value (http-metrics) for all elements. However, only the first entry will be returned. That is emphasized with an info log message like the one below:
The following labels were duplicated: {port_name="http-metrics"}. Only the first entry is being displayed.
Simple Key¶
The Simple Key selector works on dictionaries, lists, and similar structures. It returns the value at the specified key path, using periods as separators.
Step details:
Framework Name | simple_key
Format | path.to.value
Example Usage¶
If the incoming data to the step is:
{
"key": {
"subkey": {
"subsubkey": "subsubvalue",
"num": 12
}
}
}
If we wanted to provide these input parameters:
"key.subkey.num"
The output of this step will be:
12
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - simple_key: "key.subkey.num"
If the incoming data to the step is:
{
"key": {
1: {
"2": ["value0", "value1", "value2"]
}
}
}
If we wanted to provide these input parameters:
"key.1.2.0"
The output of this step will be:
"value0"
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - simple_key: "key.1.2.0"
If the incoming data to the step is:
[
{"id": "value0"},
{"id": "value1"}
]
If we wanted to provide these input parameters:
"id"
The output of this step will be:
["value0", "value1"]
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - simple_key: "id"
Ungrouped¶
Store Data¶
The store_data step allows a user to store the current result under a key of their choosing. This enables a pre-processed dataset to be used at a later time when it may be necessary to have the full result, for example when trimming data you do not need now but needing the whole payload to make a decision later.
This step does not update request_id, so it will not affect the automatic cache_key generated by the Snippet Framework.
Framework Name | store_data
key | storage_key
For example, if you wanted to store the current result into the key storage_key, you would use the following step definition:
store_data: storage_key
To access this data in a later step, you would use the following:
result_container.metadata["storage_key"]
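A sketch of what such a step amounts to; the function signature below is assumed for illustration:
def store_data(result, result_container, key):
    # Stash the current result under the chosen metadata key...
    result_container.metadata[key] = result
    # ...and pass it through unchanged so later steps are unaffected.
    return result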
Cachers¶
cache_writer¶
The cache_writer step enables user-defined caching (read and write) to SL1's DB instance.
Step details:
Framework Name | cache_writer
Parameters |
key | Metadata key that the DB will use for cache reads and writes (default: request_id)
reuse_for | Time in minutes that a cache entry remains valid to be read (default: 5)
cleanup_after | Time in minutes after which cache entries expire and are removed from the DB (default: 15)
Note
When the parameter reuse_for is set to 0 minutes, the cache_writer will not allow a fast-forward in the pipeline execution.
Note
When the parameter cleanup_after is set smaller than reuse_for, the cache_writer will fail to find valid cached data and will run through the step execution up to the point of cache_writer.
Example Usage¶
Below is an example where we want to make a network request, process the data (json->dict), and then select a subset of that data.
The Snippet Argument should look like this:
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - json
    - simple_key: "id"
Let's assume that the network request and JSON processing are steps that we would like to cache and possibly reuse in another collection and/or dynamic application. We will use a custom key, here, with a cache reuse time (reuse_for) of 5 minutes and a cleanup (cleanup_after) of the cache entries after 15 minutes.
low_code:
  id: my_request
  version: 2
  steps:
    - <network_request>
    - json
    - cache_writer:
        key: here
        reuse_for: 5
        cleanup_after: 15
    - simple_key: "id"
If there is a cache entry that is 5 minutes old or newer at the start of the collection cycle, the step will read the cached value and fast-forward to the simple_key step.