Steps
Steps are the fundamental building blocks used to create Dynamic Applications, and they are the most important aspect of the Snippet Framework. When building a Dynamic Application, you define a set of Steps that the Snippet Framework then executes to obtain the defined collection object.
The Snippet Framework executes these Steps in the order they are defined in the Snippet Arguments for each collection object in a Dynamic Application.
aggregation_max
The aggregation max is one of the aggregation functions you can use with the matrix result type. It receives a dictionary that maps each key to a list of values and selects the maximum value of each list. The lists should consist of numbers only.
Example Usage
If the incoming data to the aggregation function is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the max aggregation function:
aggregation: max
The output of this step will be:
{
'{job="prometheus"}': 80,
'{job="node"}': 50,
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
aggregation: max
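The transformation above can be sketched in plain Python. This is an illustration only, not the framework's implementation:

```python
# Illustrative sketch of the max aggregation: apply max() to each
# list in the incoming dictionary, keeping the keys unchanged.
def aggregate_max(series):
    return {key: max(values) for key, values in series.items()}

incoming = {
    '{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
    '{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}

print(aggregate_max(incoming))
# → {'{job="prometheus"}': 80, '{job="node"}': 50}
```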
aggregation_mean
The aggregation mean is one of the aggregation functions you can use with the matrix result type. It receives a dictionary that maps each key to a list of values and calculates the mean value of each list. The lists should consist of numbers only.
Example Usage
If the incoming data to the step is:
{
'{job="prometheus", instance="localhost:9090"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
'{job="node", instance="localhost:9091"}': [2, 1, 5, 7, 3, 9, 4, 6, 8],
}
If we wanted to use the mean aggregation function:
aggregation: mean
The output of this step will be:
{
'{job="prometheus", instance="localhost:9090"}': 25,
'{job="node", instance="localhost:9091"}': 5,
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
aggregation: mean
aggregation_median
The aggregation median is one of the aggregation functions you can use with the matrix result type. It receives a dictionary that maps each key to a list of values and calculates the median value of each list. The lists should consist of numbers only.
Example Usage
If the incoming data to the step is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the median aggregation function:
aggregation: median
The output of this step will be:
{
'{job="prometheus"}': 39.5,
'{job="node"}': 19.0,
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
aggregation: median
aggregation_min
The aggregation min is one of the aggregation functions you can use with the matrix result type. It receives a dictionary that maps each key to a list of values and selects the minimum value of each list. The lists should consist of numbers only.
Example Usage
If the incoming data to the step is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the min aggregation function:
aggregation: min
The output of this step will be:
{
'{job="prometheus"}': 7,
'{job="node"}': 10,
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
aggregation: min
aggregation_mode
The aggregation mode is one of the aggregation functions you can use with the matrix result type. It receives a dictionary that maps each key to a list of values and selects the mode value of each list. The lists should consist of numbers only.
Example Usage
If the incoming data to the step is:
{
'{job="prometheus"}': [12, 40, 7, 39, 71, 80, 7, 52],
'{job="node"}': [50, 40, 40, 30, 20, 18, 16, 14, 12, 10],
}
If we wanted to use the mode aggregation function:
aggregation: mode
The output of this step will be:
{
'{job="prometheus"}': 7,
'{job="node"}': 40,
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
aggregation: mode
aggregation_percentile
The aggregation percentile is one of the aggregation functions you can use with the matrix result type. It receives a dictionary that maps each key to a list of values and calculates the nth percentile value of each list. The lists should consist of numbers only.
Example Usage
If the incoming data to the step is:
{
'up{instance="localhost:9090", job="prometheus"}': [2, 10, 5, 4],
'up{instance="localhost:9091", job="node"}': [12, 40, 7, 39, 71],
}
If we wanted to use the percentile aggregation function:
aggregation: percentile
percentile: 50
The output of this step will be:
{
'up{instance="localhost:9090", job="prometheus"}': 4.5,
'up{instance="localhost:9091", job="node"}': 39.0,
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
aggregation: percentile
percentile: 50
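The documented outputs above are consistent with percentile calculation by linear interpolation between closest ranks. The sketch below reproduces them; the framework's exact interpolation method is an assumption here:

```python
import math

def percentile(values, p):
    """Linear-interpolation percentile (an assumption about the method)."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return float(s[int(k)])
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def aggregate_percentile(series, p):
    # Apply the percentile to each list, keeping keys unchanged.
    return {key: percentile(values, p) for key, values in series.items()}

incoming = {
    'up{instance="localhost:9090", job="prometheus"}': [2, 10, 5, 4],
    'up{instance="localhost:9091", job="node"}': [12, 40, 7, 39, 71],
}
print(aggregate_percentile(incoming, 50))
# → 4.5 and 39.0, matching the documented output
```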
cache_writer
The cache_writer step enables user-defined caching (read and write) to SL1’s DB instance.
Step details:
Framework Name: cache_writer
Parameters:
key: Metadata key that the DB will use for cache reads and writes (default: request_id)
reuse_for: Time in minutes that a cache entry remains valid to be read (default: 5)
cleanup_after: Time in minutes after which cache entries expire and should be removed from the DB (default: 15)
Note
When the parameter reuse_for is defined as 0 minutes, the cache_writer will not allow a fast-forward in the pipeline execution.
Note
When the parameter cleanup_after is defined to be smaller than reuse_for, the cache_writer will fail to find valid data and will run through the step execution up to the point of the cache_writer.
Example - Writing to Cache
Below is an example where we want to make a network request, process the data (JSON to dictionary), and then select a subset of that data.
The Snippet Argument should look like this:
low_code:
version: 2
steps:
- <network_request>
- json
- simple_key: "id"
Let’s assume that the network request and JSON processing are steps that we would like to cache and possibly reuse in another collection and/or Dynamic Application. This example uses a custom key, here, with a cache reuse time (reuse_for) of 5 minutes and a cleanup (cleanup_after) of cache entries after 15 minutes.
low_code:
version: 2
steps:
- <network_request>
- json
- cache_writer:
key: here
reuse_for: 5
cleanup_after: 15
- simple_key: "id"
If there is a cache entry that is 5 minutes old or newer at the start of the collection cycle, the step will read the cached value and fast-forward to the simple_key step.
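The interaction between reuse_for and cleanup_after can be sketched as a decision on a cache entry's age. This is a simplified illustration of the parameter semantics, not the actual DB logic, and the boundary behavior at exactly reuse_for or cleanup_after minutes is an assumption:

```python
# Simplified sketch: classify a cache entry by its age in minutes.
def cache_decision(age_minutes, reuse_for=5, cleanup_after=15):
    if age_minutes <= reuse_for:
        return "reuse"       # fast-forward past the cached steps
    if age_minutes >= cleanup_after:
        return "delete"      # expired; remove from the DB
    return "recollect"       # too old to reuse, not yet cleaned up

print(cache_decision(3))   # → reuse
print(cache_decision(10))  # → recollect
print(cache_decision(20))  # → delete
```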
csv
The csv step wraps the Python standard library’s csv module to parse CSV files. It returns the parsed data as a list of dictionaries or lists.
Note
The line terminator must be one of the following: \r\n, \r, \n, \n\r.
Note
The jc step provides a csv parser.
Step details:
Framework Name: csv
Parameters:
type: Set to dict to return each row as a dictionary (via csv.DictReader). When omitted, each row is returned as a list.
delimiter: This defines the delimiter used. The default value is a comma (,).
fieldnames: This is only needed when the first row does NOT contain the field names. In the first example below the first row has the field names; in the No Fieldnames example the field names are explicitly defined. Note that these values will take precedence when the first row contains labels.
restkey: string - This defines the fieldname that will be used when there are additional entries in a row. All additional entries will be placed in this field. The default is None.
restval: string - This defines the value that will be placed into fields when there are not enough items in the row. The default is None.
Reference: These Step Arguments are passed transparently to the library as defined in csv.DictReader, which should be considered the source of truth.
Example - Parsing a CSV Response
Dictionary Output
Consider the following Snippet Argument.
low_code:
id: my_request
version: 2
steps:
- static_value: "Username,Identifier,Onetime password,Recovery Code,First Name,Last Name,Dept,location\r\n
booker12,9012,12se74,rb9012,Rachel,Booker,Sales,Manchester\r\n
grey07,2070,04ap67,lg2070,Laura,Grey,Depot,London\r\n
johnson81,4081,30no86,cj4081,Craig,Johnson,Depot,London\r\n
jenkins46,9346,14ju73,mj9346,Mary,Jenkins,Engineering,Manchester\r\n
smith79,5079,09ja61,js5079,Jamie,Smith,Engineering,Manchester\r\n"
- csv:
type: dict
Output
[
OrderedDict(
[
("Username", " booker12"),
("Identifier", "9012"),
("Onetime password", "12se74"),
("Recovery Code", "rb9012"),
("First Name", "Rachel"),
("Last Name", "Booker"),
("Dept", "Sales"),
("location", "Manchester"),
]
),
OrderedDict(
[
("Username", " grey07"),
("Identifier", "2070"),
("Onetime password", "04ap67"),
("Recovery Code", "lg2070"),
("First Name", "Laura"),
("Last Name", "Grey"),
("Dept", "Depot"),
("location", "London"),
]
),
OrderedDict(
[
("Username", " johnson81"),
("Identifier", "4081"),
("Onetime password", "30no86"),
("Recovery Code", "cj4081"),
("First Name", "Craig"),
("Last Name", "Johnson"),
("Dept", "Depot"),
("location", "London"),
]
),
OrderedDict(
[
("Username", " jenkins46"),
("Identifier", "9346"),
("Onetime password", "14ju73"),
("Recovery Code", "mj9346"),
("First Name", "Mary"),
("Last Name", "Jenkins"),
("Dept", "Engineering"),
("location", "Manchester"),
]
),
OrderedDict(
[
("Username", " smith79"),
("Identifier", "5079"),
("Onetime password", "09ja61"),
("Recovery Code", "js5079"),
("First Name", "Jamie"),
("Last Name", "Smith"),
("Dept", "Engineering"),
("location", "Manchester"),
]
),
]
List Output
Below is the same Snippet Argument as above without the type specified.
low_code:
id: my_request
version: 2
steps:
- static_value: "Username,Identifier,Onetime password,Recovery Code,First Name,Last Name,Dept,location\r\n
booker12,9012,12se74,rb9012,Rachel,Booker,Sales,Manchester\r\n
grey07,2070,04ap67,lg2070,Laura,Grey,Depot,London\r\n
johnson81,4081,30no86,cj4081,Craig,Johnson,Depot,London\r\n
jenkins46,9346,14ju73,mj9346,Mary,Jenkins,Engineering,Manchester\r\n
smith79,5079,09ja61,js5079,Jamie,Smith,Engineering,Manchester\r\n"
- csv:
Output
[
[
"Username",
"Identifier",
"Onetime password",
"Recovery Code",
"First Name",
"Last Name",
"Dept",
"location",
],
[
" booker12",
"9012",
"12se74",
"rb9012",
"Rachel",
"Booker",
"Sales",
"Manchester",
],
[" grey07", "2070", "04ap67", "lg2070", "Laura", "Grey", "Depot", "London"],
[" johnson81", "4081", "30no86", "cj4081", "Craig", "Johnson", "Depot", "London"],
[
" jenkins46",
"9346",
"14ju73",
"mj9346",
"Mary",
"Jenkins",
"Engineering",
"Manchester",
],
[
" smith79",
"5079",
"09ja61",
"js5079",
"Jamie",
"Smith",
"Engineering",
"Manchester",
],
]
Using the Snippet Arguments above we can use the JMESPath step to select certain rows or fields for a Collection Object.
No Fieldnames
For the case where the first row does not contain the fieldnames, then the following Snippet Argument can be used:
low_code:
version: 2
steps:
- static_value: "
booker12,9012,12se74,rb9012,Rachel,Booker,Sales,Manchester\r\n
grey07,2070,04ap67,lg2070,Laura,Grey,Depot,London\r\n
johnson81,4081,30no86,cj4081,Craig,Johnson,Depot,London\r\n
jenkins46,9346,14ju73,mj9346,Mary,Jenkins,Engineering,Manchester\r\n
smith79,5079,09ja61,js5079,Jamie,Smith,Engineering,Manchester\r\n"
- csv:
type: dict
fieldnames:
- Username
- Identifier
- One-time password
- Recovery code
- first name
- last name
- department
- location
Output
[
OrderedDict(
[
("Username", " booker12"),
("Identifier", "9012"),
("One-time password", "12se74"),
("Recovery code", "rb9012"),
("first name", "Rachel"),
("last name", "Booker"),
("department", "Sales"),
("location", "Manchester"),
]
),
OrderedDict(
[
("Username", " grey07"),
("Identifier", "2070"),
("One-time password", "04ap67"),
("Recovery code", "lg2070"),
("first name", "Laura"),
("last name", "Grey"),
("department", "Depot"),
("location", "London"),
]
),
OrderedDict(
[
("Username", " johnson81"),
("Identifier", "4081"),
("One-time password", "30no86"),
("Recovery code", "cj4081"),
("first name", "Craig"),
("last name", "Johnson"),
("department", "Depot"),
("location", "London"),
]
),
OrderedDict(
[
("Username", " jenkins46"),
("Identifier", "9346"),
("One-time password", "14ju73"),
("Recovery code", "mj9346"),
("first name", "Mary"),
("last name", "Jenkins"),
("department", "Engineering"),
("location", "Manchester"),
]
),
OrderedDict(
[
("Username", " smith79"),
("Identifier", "5079"),
("One-time password", "09ja61"),
("Recovery code", "js5079"),
("first name", "Jamie"),
("last name", "Smith"),
("department", "Engineering"),
("location", "Manchester"),
]
),
]
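Because the step passes its arguments straight through to csv.DictReader, the No Fieldnames behavior can be reproduced with the standard library directly. This is a minimal sketch using a trimmed-down version of the sample data:

```python
import csv
import io

# Two rows of header-less data, as in the No Fieldnames example.
data = "booker12,9012,Rachel\r\ngrey07,2070,Laura\r\n"

# fieldnames supplies the header because the data itself has none;
# this mirrors the fieldnames argument of the csv step.
reader = csv.DictReader(
    io.StringIO(data),
    fieldnames=["Username", "Identifier", "first name"],
)
rows = list(reader)
print(rows[0]["Username"])  # → booker12
```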
format_build_string
The Format Build String step formats strings for lists of related values. The input data is a dictionary of value lists, and the output is the list of strings. The input dictionary is represented as a table with keys shown in columns, and the list items shown in the cells. The output is a list of strings with each string shown as a row in the table.
The Format Build String step is used when receiving a table’s data that provides a composite output for a collection object. For example, a process listing’s process name and process arguments are located in different fields; meaning, both fields can be combined into a single collection object that shows the executed command.
Step details:
Step: format_build_string
Incoming data type: dictionary
Return data type: list of strings
Configuration of arguments
The following argument can be configured for the Format Build String step:
keys_to_replace (str): Required. This argument sets the expected format of the result. Provide the keys surrounded by curly brackets ({}) and extra text if needed.
Below are three examples of the Format Build String step.
Incoming data to step:
{ "title": ["Mr.", "Ms."], "name": ["Joe", "Leah"], "year": [1999, 2018], "color": ["red", "blue"], }
Input arguments to step:
keys_to_replace: "{title}"
Output:
["Mr.", "Ms."]
The Snippet Argument appears as:
low_code:
version: 2
steps:
- <network_request>
- format_build_string:
keys_to_replace: "{title}"
Incoming data to step:
{ "title": ["Mr.", "Ms."], "name": ["Joe", "Leah"], "year": [1999, 2018], "color": ["red", "blue"], }
Input arguments to step:
keys_to_replace: "{title} {name}"
Output:
["Mr. Joe", "Ms. Leah"]
The Snippet Argument appears as:
low_code:
version: 2
steps:
- <network_request>
- format_build_string:
keys_to_replace: "{title} {name}"
Incoming data to step:
{ "title": ["Mr.", "Ms."], "name": ["Joe", "Leah"], "year": [1999, 2018], "color": ["red", "blue"], }
Input arguments to step:
keys_to_replace: "{name} {color} - since {year}"
Output:
["Joe red - since 1999", "Leah blue - since 2018"]
The Snippet Argument appears as:
low_code:
version: 2
steps:
- <network_request>
- format_build_string:
keys_to_replace: "{name} {color} - since {year}"
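The row-wise formatting the step performs is equivalent to zipping the value lists into rows and applying str.format. This is an illustrative sketch, not the framework's implementation:

```python
# Illustrative sketch: build one formatted string per "row" of the
# column-oriented input dictionary.
def build_strings(data, keys_to_replace):
    # Transpose the dict of lists into a list of per-row dicts.
    rows = [dict(zip(data.keys(), values)) for values in zip(*data.values())]
    return [keys_to_replace.format(**row) for row in rows]

data = {
    "title": ["Mr.", "Ms."],
    "name": ["Joe", "Leah"],
    "year": [1999, 2018],
    "color": ["red", "blue"],
}
print(build_strings(data, "{name} {color} - since {year}"))
# → ['Joe red - since 1999', 'Leah blue - since 2018']
```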
format_remove_unit
The Format Remove Unit step removes units from strings. The incoming string data must consist of a value and a unit, in that order, separated by a whitespace character.
Step details:
Step: format_remove_unit
Incoming data type: string or list of strings
Return data type: string or list of strings
Below are three examples of the Format Remove Unit step. In all of the following examples, the Snippet Argument should look like this:
low_code:
version: 2
steps:
- <network_request>
- format_remove_unit
Incoming data to step:
8192 KB
Output:
8192
Incoming data to step:
["12346", "kb"]
Output:
12346
Incoming data to step:
["12345 kb", "7890 kb"]
Output:
["12345", "7890"]
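For the first and third examples above, the behavior amounts to keeping the portion before the whitespace separator. The sketch below covers the string and list-of-"value unit"-strings cases; the step's handling of other input shapes (such as the ["12346", "kb"] case) may differ:

```python
# Sketch: strip the unit by keeping the first whitespace-separated token.
def remove_unit(data):
    if isinstance(data, str):
        return data.split()[0]
    return [item.split()[0] for item in data]

print(remove_unit("8192 KB"))                # → 8192
print(remove_unit(["12345 kb", "7890 kb"]))  # → ['12345', '7890']
```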
http
The HTTP Data Requestor creates and executes an HTTP/HTTPS call to a specified endpoint and returns the results. The endpoint can be defined in two ways:
uri: The host and port in the credential will be used as the base endpoint, and the uri defined in the step_args will be appended to it.
url: The endpoint defined as the url in the step_args will be called directly.
Step details:
Framework Name: http
Supported Credentials: Basic, SOAP/XML
Note
This step supports all the parameters mentioned in requests.Session.request except the hooks parameter. The parameters defined in the step will take precedence over what is defined in the credential. For example, if you define verify: False in the credential but verify: True in the step parameters, verify=True will be used in the request.
Note
Any parameters that are specified in both the step and the configuration will be combined as follows:
Dictionary: The two are merged together, with user-provided values taking priority.
Other: The step value replaces the configuration value.
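The combination rule in the note above can be sketched as a one-level merge in which step parameters win. This is an illustration of the described behavior, not the framework's actual merge code:

```python
# Sketch: combine credential-level and step-level parameters.
# Dictionaries are merged with step values taking priority; any
# other type is simply replaced by the step value.
def combine(credential_params, step_params):
    combined = dict(credential_params)
    for key, value in step_params.items():
        if isinstance(value, dict) and isinstance(combined.get(key), dict):
            combined[key] = {**combined[key], **value}
        else:
            combined[key] = value
    return combined

cred = {"verify": False, "headers": {"Accept": "application/json"}}
step = {"verify": True, "headers": {"X-Trace": "1"}}
print(combine(cred, step))
# → verify becomes True; both header entries are kept
```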
Note
When using this step in conjunction with HTTPS proxies to make requests to HTTP endpoints, SSL verification with proxy does not occur. This will be addressed in a future release.
Calling a Relative Endpoint
To access the API of an SL1 System, the full URL would be:
https://SL1_IP_ADDRESS/api/account
If the SL1_IP_ADDRESS is defined in the credential, the relative URI can be used instead:
/api/account
For this example, we are assuming the base endpoint (SL1_IP_ADDRESS) is defined in the credential, so we can call the uri endpoint like this:
low_code:
version: 2
steps:
- http:
uri: "/api/account"
The output of this step:
{
"searchspec":{},
"total_matched":4,
"total_returned":4,
"result_set":
[
{
"URI":"/api/account/2",
"description":"AutoAdmin"
},
{
"URI":"/api/account/3",
"description":"AutoRegUser"
},
{
"URI":"/api/account/1",
"description":"em7admin"
},
{
"URI":"/api/account/4",
"description":"snadmin"
}
]
}
Calling a Direct Endpoint
To call an HTTP endpoint directly, define the url in the step_args. For example, say we want to call an API to determine the top 20 most popular board games right now. The full URL would look something like this:
https://boardgamegeek.com/xmlapi2/hot?boardgame
We tell the http step to call that endpoint by setting it as the url. After making the call, we can use the jc step to parse the response and finally use the jmespath selector to select our values.
low_code:
version: 2
steps:
- http:
url: https://boardgamegeek.com/xmlapi2/hot?boardgame
- jc: xml
- jmespath:
value: items.item[:20].name
Response Status Code Checking
When the parameter check_status_code is set to True (the default) and the response’s status code satisfies 400 <= status code < 600, an exception will be raised, stopping the current collection. When check_status_code is set to False, no exception is raised for any status code value.
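The check described above amounts to the following guard. This sketch uses a generic RuntimeError; the framework raises its own exception type:

```python
# Sketch: fail the collection for any 4xx/5xx response when checking
# is enabled; otherwise let every status code through.
def check_status(status_code, check_status_code=True):
    if check_status_code and 400 <= status_code < 600:
        raise RuntimeError(f"HTTP request failed with status {status_code}")
    return status_code

check_status(200)                           # passes
check_status(404, check_status_code=False)  # passes: checking disabled
```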
Pagination Support
A custom step is required that raises the exception silo.low_code_steps.rest.HTTPPageRequest to rewind execution back to the previous network requestor. This exception is specific to the http step and takes a dictionary as its argument. The dictionary can either replace or update the existing step_args dictionary passed to the http step.
If our Snippet Argument looked like this:
low_code:
version: 2
steps:
- http:
uri: "/account"
- pagination_trimmer
- pagination_request:
index: "request_key"
replace: True
Our pagination_request step could look like this:
@register_processor(type=REQUEST_MORE_DATA_TYPE)
def pagination_request(result, step_args):
    if result:
        # Replacement of step_args: the exception carries the new
        # arguments for the http step.
        raise HTTPPageRequest(
            {"uri": "/account", "params": result},
            index=step_args.get("index"),
            replace=step_args.get("replace", False),
        )
This assumes that the result will contain the next pagination step arguments. The step raises HTTPPageRequest and sets the new step_args with the first positional parameter. With the kwarg replace set to True, the http step will receive the new step_args.
jc
JC is a third-party library that enables easy conversion of text output to Python primitives. While it is primarily used for *nix parsing, it includes other notable parsers such as xml, ini, and yaml, to name a few. The entire list of supported formats and their respective outputs can be found on their GitHub.
Note
The Snippet Framework does not support streaming parsers (parsers that end in _s). These will not appear in the list of available parsers.
Step details:
Framework Name: jc
Reference: There are currently 153 parsers available in the installed version of jc v1.25.3. The list of available parsers is as follows:
acpi, airport, apt_cache_show, apt_get_sqq, arp, asciitable, asciitable_m, blkid, bluetoothctl, cbt, cef, certbot, chage, cksum, clf, crontab, crontab_u, csv, curl_head, date, datetime_iso, debconf_show, df, dig, dir, dmidecode, dpkg_l, du, efibootmgr, email_address, env, ethtool, file, find, findmnt, finger, free, fstab, git_log, git_ls_remote, gpg, group, gshadow, hash, hashsum, hciconfig, history, host, hosts, http_headers, id, ifconfig, ini, ini_dup, iostat, ip_address, iptables, ip_route, iw_scan, iwconfig, jar_manifest, jobs, jwt, kv, kv_dup, last, ls, lsattr, lsb_release, lsblk, lsmod, lsof, lspci, lsusb, m3u, mdadm, mount, mpstat, needrestart, netstat, nmcli, nsd_control, ntpq, openvpn, os_prober, os_release, passwd, path, path_list, pci_ids, pgpass, pidstat, ping, pip_list, pip_show, pkg_index_apk, pkg_index_deb, plist, postconf, proc, ps, resolve_conf, route, rpm_qi, rsync, semver, sfdisk, shadow, srt, ss, ssh_conf, sshd_conf, stat, swapon, sysctl, syslog, syslog_bsd, systemctl, systemctl_lj, systemctl_ls, systemctl_luf, systeminfo, time, timedatectl, timestamp, toml, top, tracepath, traceroute, tune2fs, udevadm, ufw, ufw_appinfo, uname, update_alt_gs, update_alt_q, upower, uptime, url, ver, veracrypt, vmstat, w, wc, who, x509_cert, x509_csr, xml, xrandr, yaml, zipinfo, zpool_iostat, zpool_status
Step Arguments
Supplying additional key-value pairs in the step_args
will
pass them through to the parser. For example, if you needed to
run a parser example_parser_name
that expected an additional
argument such as split
, you would use the following:
jc:
parser_name: example_parser_name
split: ","
If no additional parameters need to be supplied, you can specify the parser_name directly as the step argument.
jc: example_parser_name
Example - Parsing a CLI Response
One of the commands that the jc step supports parsing for is iostat. We can use the following Snippet Argument to define iostat as the parser_name for the jc step and extract the current I/O statistics from the machine we SSH into:
low_code:
version: 2
steps:
- ssh:
command: iostat
- jc: iostat
- jmespath:
value: '[?type==`device`].{_index: device, _value: kb_read_s}'
index: true
Suppose we received the following result from running the iostat command:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
5.25 0.01 2.74 0.08 0.00 91.93
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 11.25 46.33 176.47 84813251 323042096
scd0 0.00 0.00 0.00 1 0
dm-0 1.34 38.36 2.15 70222779 3930730
dm-1 0.13 0.14 0.36 261788 665732
The jc parser will parse the above output into JSON, which we can then use to extract the desired information with a selector. In this example we use the jmespath selector to extract the kb_read_s value for each device.
[
{
"percent_user": 5.25,
"percent_nice": 0.01,
"percent_system": 2.74,
"percent_iowait": 0.08,
"percent_steal": 0.0,
"percent_idle": 91.93,
"type": "cpu"
},
{
"device": "sda",
"tps": 11.25,
"kb_read_s": 46.29,
"kb_wrtn_s": 176.45,
"kb_read": 84813635,
"kb_wrtn": 323281369,
"type": "device"
},
{
"device": "scd0",
"tps": 0.0,
"kb_read_s": 0.0,
"kb_wrtn_s": 0.0,
"kb_read": 1,
"kb_wrtn": 0,
"type": "device"
},
{
"device": "dm-0",
"tps": 1.34,
"kb_read_s": 38.33,
"kb_wrtn_s": 2.15,
"kb_read": 70222907,
"kb_wrtn": 3931900,
"type": "device"
},
{
"device": "dm-1",
"tps": 0.13,
"kb_read_s": 0.14,
"kb_wrtn_s": 0.36,
"kb_read": 261788,
"kb_wrtn": 665732,
"type": "device"
}
]
After using jmespath to select only device information, the final result would look like:
{
"dm-0": 38.33,
"dm-1": 0.14,
"scd0": 0.0,
"sda": 46.29
}
jmespath
The JMESPath step is the most performant step for selecting data. This step uses the third-party JMESPath library. JMESPath is a query language for JSON that is used to extract and transform elements from a JSON document.
It is important to understand the multi-select-hash as this is used to index the data. When multiple Collection Objects are placed in a group, it is recommended to always index the data explicitly. That is, each group of Collection Objects that are stored within a Dynamic Application should always use the index: True option and ensure that a unique value is used for the index.
Step details:
Framework Name: jmespath
Parameters:
index: True or False (default: False). This specifies whether the data is to be explicitly indexed. This should always be used whenever multiple Collection Objects are grouped.
value: string defining the path to the data (required)
Example - Selecting Attributes with jmespath
Selection
Consider the following Snippet Argument which fetches earthquake data detected within the last hour.
low_code:
version: 2
steps:
- http:
url: https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson
Output
{
"type": "FeatureCollection",
"metadata": {
"generated": 1706266177000,
"url": "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson",
"title": "USGS All Earthquakes, Past Hour",
"status": 200,
"api": "1.10.3",
"count": 12
},
"features": [
{
"type": "Feature",
"properties": {
"mag": 1.4,
"place": "21 km SE of Valdez, Alaska",
"time": 1706265719345,
"updated": 1706265802101,
"tz": "",
"url": "https://earthquake.usgs.gov/earthquakes/eventpage/ak02417669tq",
"detail": "https://earthquake.usgs.gov/earthquakes/feed/v1.0/detail/ak02417669tq.geojson",
"felt": "",
"cdi": "",
"mmi": "",
"alert": "",
"status": "automatic",
"tsunami": 0,
"sig": 30,
"net": "ak",
"code": "02417669tq",
"ids": ",ak02417669tq,",
"sources": ",ak,",
"types": ",origin,phase-data,",
"nst": "",
"dmin": "",
"rms": 0.52,
"gap": "",
"magType": "ml",
"type": "earthquake",
"title": "M 1.4 - 21 km SE of Valdez, Alaska"
},
"geometry": {
"type": "Point",
"coordinates": [-146.0674, 60.9902, 27.5]
},
"id": "ak02417669tq"
}
],
"bbox": [-150.6143, 33.3551667, 0.1, -116.4315, 64.4926, 115.9]
}
Suppose we wanted to show only the magnitudes of the earthquakes in this list. This can be accomplished with the JMESPath step and the following query:
low_code:
version: 2
steps:
- http:
url: https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson
- json
- jmespath:
value: features[].properties.mag
Output
[2.18000007, 0.9, 0.67, 1.4, 1.2, 0.71, 1.5, 0.76, 1]
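For readers unfamiliar with JMESPath, the query features[].properties.mag is equivalent to this plain-Python comprehension over the parsed document (shown here on a pared-down version of the GeoJSON response):

```python
# A pared-down version of the USGS GeoJSON document from above.
doc = {
    "features": [
        {"properties": {"mag": 1.4, "place": "21 km SE of Valdez, Alaska"}},
        {"properties": {"mag": 2.18000007, "place": "17 km W of Volcano, Hawaii"}},
    ]
}

# Equivalent of the JMESPath query: features[].properties.mag
mags = [feature["properties"]["mag"] for feature in doc["features"]]
print(mags)
# → [1.4, 2.18000007]
```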
Filtering
Using the same example Snippet Argument as above, we wish to only display magnitudes greater than 1.0. For this we will use the filtering feature of JMESPath.
low_code:
version: 2
steps:
- http:
url: https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson
- json
- jmespath:
value: features[?properties.mag > `1.0`]
Output
[
{
"type": "Feature",
"properties": {
"mag": 2.4,
"place": "10 km W of Point MacKenzie, Alaska",
"time": 1706268933977,
"updated": 1706269019706,
"tz": None,
"url": "https://earthquake.usgs.gov/earthquakes/eventpage/ak024176qd6d",
"detail": "https://earthquake.usgs.gov/earthquakes/feed/v1.0/detail/ak024176qd6d.geojson",
"felt": None,
"cdi": None,
"mmi": None,
"alert": None,
"status": "automatic",
"tsunami": 0,
"sig": 89,
"net": "ak",
"code": "024176qd6d",
"ids": ",ak024176qd6d,",
"sources": ",ak,",
"types": ",origin,phase-data,",
"nst": None,
"dmin": None,
"rms": 0.43,
"gap": None,
"magType": "ml",
"type": "earthquake",
"title": "M 2.4 - 10 km W of Point MacKenzie, Alaska",
},
"geometry": {"type": "Point", "coordinates": [-150.1724, 61.3705, 34.9]},
"id": "ak024176qd6d",
},
...
]
Reducing Payload Sizes
One common performance issue is working with a large payload from an API. In most cases, this data must be reduced to a few columns for further processing. We will again reuse the previous Snippet Argument and reduce the data to just the mag, time, and place columns. This query uses a multi-select-hash to save our fields of interest.
low_code:
version: 2
steps:
- http:
url: https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson
- json
- jmespath:
value: "features[].properties.{mag: mag, place: place, time: time}"
The output is a list of dictionaries with the fields that were requested.
[
{"mag": 1.6, "place": "10 km E of Willits, CA", "time": 1706276615270},
{"mag": 1.6, "place": "44 km NNW of Beluga, Alaska", "time": 1706276105156},
{"mag": 2.3, "place": "30 km NW of Karluk, Alaska", "time": 1706275723157},
{"mag": 1.8, "place": "42 km W of Tyonek, Alaska", "time": 1706275673972},
{"mag": 1.74000001, "place": "17 km W of Volcano, Hawaii", "time": 1706275545280},
{"mag": 1.9, "place": "4 km SSW of Salcha, Alaska", "time": 1706274091738},
{"mag": 1.12, "place": "7 km NE of Pala, CA", "time": 1706273608330},
]
Alternatively, we could also use a multi-select-list.
low_code:
version: 2
steps:
- http:
url: https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson
- json
- jmespath:
value: "features[].properties.[mag, place, time]"
The output is now a list of the three fields that were specified.
[
[1.1, "6 km SW of Salcha, Alaska", 1706276986399],
[1.6, "10 km E of Willits, CA", 1706276615270],
[1.6, "44 km NNW of Beluga, Alaska", 1706276105156],
[2.3, "30 km NW of Karluk, Alaska", 1706275723157],
[1.8, "42 km W of Tyonek, Alaska", 1706275673972],
[1.74000001, "17 km W of Volcano, Hawaii", 1706275545280],
[1.9, "4 km SSW of Salcha, Alaska", 1706274091738],
[1.12, "7 km NE of Pala, CA", 1706273608330],
]
Indexing
For the following examples the static_value step provides the equivalent converted JSON output.
[
{ "URI": "/api/account/2", "color": "red", "description": "AutoAdmin" },
{ "URI": "/api/account/3", "color": "yellow", "description": "AutoRegUser" },
{ "URI": "/api/account/1", "color": "green", "description": "user" },
{ "URI": "/api/account/4", "color": "blue", "description": "snadmin" }
]
This first Collection Object’s Snippet Argument selects the description for each URI.
low_code:
version: 2
steps:
- static_value:
- URI: "/api/account/2"
color: red
description: AutoAdmin
- URI: "/api/account/3"
color: yellow
description: AutoRegUser
- URI: "/api/account/1"
color: green
description: user
- URI: "/api/account/4"
color: blue
description: snadmin
- jmespath:
index: true
value: '[].{_index: URI, _value: description}'
Output
{
"/api/account/1": "user",
"/api/account/2": "AutoAdmin",
"/api/account/3": "AutoRegUser",
"/api/account/4": "snadmin"
}
The next Collection Object’s Snippet Argument selects the color for each URI.
low_code:
version: 2
steps:
- static_value:
- URI: "/api/account/2"
color: red
description: AutoAdmin
- URI: "/api/account/3"
color: yellow
description: AutoRegUser
- URI: "/api/account/1"
color: green
description: user
- URI: "/api/account/4"
color: blue
description: snadmin
- jmespath:
index: true
value: '[].{_index: URI, _value: color}'
Output
{
"/api/account/1": "green",
"/api/account/2": "red",
"/api/account/3": "yellow",
"/api/account/4": "blue"
}
In both Collection Objects, URI is used as the index. In order to properly ingest the data into SL1, we need to create a label object with the following Snippet Argument.
low_code:
version: 2
steps:
- static_value:
- URI: "/api/account/2"
color: red
description: AutoAdmin
- URI: "/api/account/3"
color: yellow
description: AutoRegUser
- URI: "/api/account/1"
color: green
description: user
- URI: "/api/account/4"
color: blue
description: snadmin
- jmespath:
index: true
value: '[].{_index: URI, _value: URI}'
Output
{
"/api/account/1": "/api/account/1",
"/api/account/2": "/api/account/2",
"/api/account/3": "/api/account/3",
"/api/account/4": "/api/account/4"
}
Make sure that all Collection Objects shown above are within the same group. Notice that the index is consistent across the Collection Objects. This ensures that the collected data is properly associated within SL1.
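Conceptually, index: true turns the list of {_index, _value} pairs produced by the multi-select-hash into a dictionary keyed by _index. The sketch below illustrates that shape change; it is not the framework's implementation:

```python
# Sketch: collapse [{"_index": ..., "_value": ...}, ...] into a dict.
def index_rows(rows):
    return {row["_index"]: row["_value"] for row in rows}

rows = [
    {"_index": "/api/account/2", "_value": "AutoAdmin"},
    {"_index": "/api/account/3", "_value": "AutoRegUser"},
]
print(index_rows(rows))
# → {'/api/account/2': 'AutoAdmin', '/api/account/3': 'AutoRegUser'}
```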
jmx
The JMX Requestor connects to a remote JMX device and can gather information about the exposed beans. The result of the step produces a list of dictionaries with the following key-value pairs:
[
{
"attribute": Attribute[/Sub-attribute],
"bean": Bean Name,
"port": JMX Listening Port,
"result": Query Value,
"type": Result Type,
}
]
Note
When executing through the agent, the type will always be either Boolean or Unknown.
Step details:
Framework Name | jmx
Supported Credentials |
Supported Fields (Basic/Snippet) |
Supported Fields (SOAP/XML) |
Step Parameters:
Parameter | Required | Type | Default | Description
---|---|---|---|---
beans | Required | list | | List of beans to query
Note
The agent does not support wildcards or specification of multiple beans.
Metadata
The following metadata is available when executing through the agent:
{
"jvm": {
"file_timestamp": file_timestamp,
"jvm_name": jvm_name,
"pid": pid,
}
}
Executing a JMX Query for a single bean
In this example, we will query a remote system running a Tomcat server that has JMX enabled. We will retrieve a single bean:
low_code:
version: 2
steps:
- jmx:
beans:
- Catalina:type=Host,host=localhost/stateName
The output of this step:
[
{
"attribute": "stateName",
"bean": "Catalina:type=Host,host=localhost",
"port": 3000,
"result": "STARTED",
"type": "String",
}
]
Executing a JMX Query for multiple beans
In this example, we will query a remote system running a Tomcat server that has JMX enabled. We will retrieve two specified beans:
low_code:
version: 2
steps:
- jmx:
beans:
- Catalina:type=Host,host=localhost/stateName
- Catalina:type=Host,host=localhost/modelerType
The output of this step:
[
{
"attribute": "stateName",
"bean": "Catalina:type=Host,host=localhost",
"port": 3000,
"result": "STARTED",
"type": "String",
},
{
"attribute": "modelerType",
"bean": "Catalina:type=Host,host=localhost",
"port": 3000,
"result": "org.apache.catalina.core.StandardHost",
"type": "String",
}
]
Executing a JMX Query with a wildcard
In this example, we will query a remote system running a Tomcat server that has JMX enabled. We will retrieve multiple beans based on a wildcard query:
low_code:
version: 2
steps:
- jmx:
beans:
- Catalina:type=Host,host=localhost/deploy*
The output of this step:
[
{
"attribute": "deployIgnore",
"bean": "Catalina:type=Host,host=localhost",
"port": 3000,
"result": None,
"type": "Null",
},
{
"attribute": "deployOnStartup",
"bean": "Catalina:type=Host,host=localhost",
"port": 3000,
"result": True,
"type": "Boolean",
}
]
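Because every jmx result is a list of dictionaries with a fixed set of keys, downstream steps can reshape it easily. The sketch below (illustrative Python, not framework code) indexes a wildcard result by attribute name:

```python
# A jmx step result: a list of dictionaries, one per attribute.
jmx_result = [
    {"attribute": "deployIgnore", "bean": "Catalina:type=Host,host=localhost",
     "port": 3000, "result": None, "type": "Null"},
    {"attribute": "deployOnStartup", "bean": "Catalina:type=Host,host=localhost",
     "port": 3000, "result": True, "type": "Boolean"},
]

# Index the list of result dictionaries by attribute name.
by_attribute = {entry["attribute"]: entry["result"] for entry in jmx_result}
```

The resulting dictionary maps each attribute to its queried value, which is a convenient shape for a follow-on selector step.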
json
The json step is used to convert a JSON string into a Python object. It is commonly used to transform results from http API calls into a format we can then use with a selector step like jmespath.
Framework Name | json
Example - Converting a JSON String
If the incoming data to the step is:
'{
"project": "low_code",
"tickets": { "t123": "selector work", "t321": "parser work" },
"name": "Josh", "teams": ["rebel_scrum", "sprint_eastwood"]
}'
The output of this step will be:
{
"name": "Josh",
"project": "low_code",
"teams": ["rebel_scrum", "sprint_eastwood"],
"tickets": {
"t123": "selector work",
"t321": "parser work"
}
}
The Snippet Argument should look like this:
low_code:
  version: 2
  steps:
    - static_value: '{
        "project": "low_code",
        "tickets": { "t123": "selector work", "t321": "parser work" },
        "name": "Josh", "teams": ["rebel_scrum", "sprint_eastwood"]
      }'
    - json
    - jmespath:
        value: project
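Under the hood this is equivalent to Python's `json.loads` followed by a key lookup. The sketch below reproduces the example above using only the standard library (the jmespath selection of `project` reduces to a plain dictionary access here):

```python
import json

raw = '''{
    "project": "low_code",
    "tickets": {"t123": "selector work", "t321": "parser work"},
    "name": "Josh", "teams": ["rebel_scrum", "sprint_eastwood"]
}'''

# The json step behaves like json.loads: JSON string in, Python object out.
data = json.loads(raw)

# The follow-on jmespath step (value: project) reduces to a key lookup here.
project = data["project"]
```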
jsonpath
JSONPath has been deprecated. Use JMESPath instead.
low_code
The low_code Syntax is a YAML configuration format for specifying your collections. You explicitly provide the steps you want to run, in the order you need them to be executed. There are multiple versions of the low_code Syntax.
Framework Name | low_code
All versions support specifying configuration. The configuration consists of all sub-elements under the step. For example, if you had a step cool_step and wanted to specify two key-value pairs, you would provide the following:
low_code:
  version: 2
  steps:
    - cool_step:
        key1: value1
        key2: value2
Note
Notice that, in the example above, key1 and key2 are indented one level beyond cool_step. Setting them at the same indentation level as cool_step, as shown below, will result in an error.
low_code:
  version: 2
  steps:
    - cool_step:
      key1: value1 # key1 is not properly indented
      key2: value2 # key2 is not properly indented
To provide a list in YAML, you will need to utilize the - symbol. For example, if you had a step cool_step and wanted to specify a list of two elements, you would provide the following:
low_code:
  version: 2
  steps:
    - cool_step:
        - key1
        - key2
Version 2
Version 2 of the low_code Syntax provides more flexibility when defining the order of step execution. This version can utilize multiple versions of Requestors (if supported) and allows for steps to run before a Requestor executes.
Format
low_code:
  version: 2
  steps:
    - static_value: '{"key": "value"}'
    - json
    - simple_key: "key"
id: Identification for the request.
Note
If this value is not specified, the Snippet Framework will automatically create one. This allows for easier tracking while debugging in cases where an ID is not required for the collection.
version: Specify the version of the low_code Syntax.
steps: Order of the steps for the Snippet Framework to execute.
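The execution model can be sketched in a few lines of Python: each step is a callable that receives the previous step's output, and the framework runs the steps in the declared order. The step functions below are illustrative stand-ins, not the framework's internal API:

```python
import json

# Illustrative stand-ins for three framework steps.
def static_value(_, config):
    # Ignores incoming data and returns the configured literal.
    return config

def json_step(data, _):
    # Parse a JSON string into a Python object.
    return json.loads(data)

def simple_key(data, config):
    # Select a single key from a dictionary.
    return data[config]

# Steps run in the order they are declared, each feeding the next.
pipeline = [
    (static_value, '{"key": "value"}'),
    (json_step, None),
    (simple_key, "key"),
]

result = None
for step, config in pipeline:
    result = step(result, config)
```

After the loop, `result` holds the value selected by the final step, mirroring how the Format example above resolves to "value".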
Version 1
Version 1 was the original low_code syntax. It allowed for a single Requestor and any number of processors. Because it lacks support for multiple Requestors, it is not preferred.
Format
low_code:
  network:
    static_value: '{"key": "value"}'
  processing:
    - json
    - simple_key: "key"
id: Identification for the request.
Note
If this value is not specified, the Snippet Framework will automatically create one. This allows for easier tracking while debugging in cases where an ID is not required for the collection.
version: Specify the version of the low_code Syntax. If not provided, it will default to 1.
network: Section for the data requester step.
processing: Section for listing the steps required to transform your data to the desired output for SL1 to store.
paginator_offset
The paginator_offset step works in conjunction with the http step to support offset- or page-based API queries.
Step details:
Framework Name | paginator_offset
Parameters |
Note
The Paginator Offset Limit has a default maximum number of iterations of 50.
Note
When specifying the limit within the step, it must be less than or equal to the limit provided in the http step params (step_limit <= http_limit). Specifying a limit greater than the value from the http step params will cause the paginator to not collect additional data.
Example - Offset Pagination
The following example shows the paginator working with the SL1 API. In this example, the limit is set to a low value to show the effects of pagination. There are 6 accounts on the target SL1, so 3 API calls will be made. The output of each API call appears as follows:
[{"URI": "/api/account/2", "description": "AutoAdmin"}, {"URI": "/api/account/3", "description": "AutoRegUser"}]
[{"URI": "/api/account/1", "description": "em7admin"}, {"URI": "/api/account/4", "description": "snadmin"}]
[{"URI": "/api/account/5", "description": "Test Account"}, {"URI": "/api/account/6", "description": "test person"}]
low_code:
  version: 2
  steps:
    - http:
        uri: /api/account
        params:
          limit: 2
          hide_filterinfo: true
    - json
    - paginator_offset:
        limit: 2
The paginator step then combines the result into an ordered dictionary as shown below:
OrderedDict(
    [
        ("offset_2", [{"URI": "/api/account/2", "description": "AutoAdmin"}, {"URI": "/api/account/3", "description": "AutoRegUser"}]),
        ("offset_4", [{"URI": "/api/account/1", "description": "em7admin"}, {"URI": "/api/account/4", "description": "snadmin"}]),
        ("offset_6", [{"URI": "/api/account/5", "description": "Test Account"}, {"URI": "/api/account/6", "description": "test person"}]),
    ]
)
Example - Overriding Offset
In this example, the API uses page instead of offset for its pagination technique. Let's assume that there are 3 pages that return results. The following requests would be made:
/api/something?limit=2
/api/something?page=2&limit=2
/api/something?page=3&limit=2
low_code:
  version: 2
  steps:
    - http:
        uri: /api/something
        params:
          limit: 2
    - json
    - paginator_offset:
        limit: 2
        offset_qs: page
        pagination_increment: 1
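The paginator's behavior can be sketched as a simple loop: request a page, advance the offset by the limit, and stop when a request returns no data. In the sketch below, `fetch_page` is a hypothetical stand-in for the http step, and the account data mirrors the offset example above:

```python
from collections import OrderedDict

# Six fake accounts standing in for the SL1 API's data.
ACCOUNTS = [
    {"URI": "/api/account/2", "description": "AutoAdmin"},
    {"URI": "/api/account/3", "description": "AutoRegUser"},
    {"URI": "/api/account/1", "description": "em7admin"},
    {"URI": "/api/account/4", "description": "snadmin"},
    {"URI": "/api/account/5", "description": "Test Account"},
    {"URI": "/api/account/6", "description": "test person"},
]

def fetch_page(offset, limit):
    # Hypothetical stand-in for the http step.
    return ACCOUNTS[offset:offset + limit]

limit, offset = 2, 0
pages = OrderedDict()
while True:
    page = fetch_page(offset, limit)
    if not page:
        break  # an empty page ends the pagination loop
    offset += limit
    pages["offset_%d" % offset] = page
```

The loop produces the same `offset_2` / `offset_4` / `offset_6` keys shown in the example output.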
parse_ifconfig
The Parse Ifconfig converts the response of the ifconfig command into a dictionary by using the ifconfig-parser module. The output dictionary contains Interface Configuration data, where the keys are the interface names.
Step details:
Step | parse_ifconfig
Incoming data type | string or list of strings
Return data type | dictionary
Below is an example of a Parse Ifconfig step.
Incoming data to step:
ens160    Link encap:Ethernet  HWaddr 00:50:56:85:73:0d
          inet addr:10.2.10.45  Bcast:10.2.10.255  Mask:255.255.255.0
          inet6 addr: fe80::916b:4b90:721d:a731/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5212713 errors:0 dropped:25051 overruns:0 frame:0
          TX packets:1291444 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2300868310 (2.3 GB)  TX bytes:266603934 (266.6 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:28287 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28287 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:8793361 (8.7 MB)  TX bytes:8793361 (8.7 MB)
Output:
{
    "ens160": {
        "name": "ens160", "type": "Ethernet", "mac_addr": "00:50:56:85:73:0d",
        "ipv4_addr": "10.2.10.45", "ipv4_bcast": "10.2.10.255", "ipv4_mask": "255.255.255.0",
        "ipv6_addr": "fe80::916b:4b90:721d:a731", "ipv6_mask": "64", "ipv6_scope": "Link",
        "state": "UP BROADCAST RUNNING MULTICAST", "mtu": "1500", "metric": "1",
        "rx_packets": "5212713", "rx_errors": "0", "rx_dropped": "25051", "rx_overruns": "0", "rx_frame": "0",
        "tx_packets": "1291444", "tx_errors": "0", "tx_dropped": "0", "tx_overruns": "0", "tx_carrier": "0",
        "rx_bytes": "2300868310", "tx_bytes": "266603934", "tx_collisions": "0",
    },
    "lo": {
        "name": "lo", "type": "Local Loopback", "mac_addr": None,
        "ipv4_addr": "127.0.0.1", "ipv4_bcast": None, "ipv4_mask": "255.0.0.0",
        "ipv6_addr": "::1", "ipv6_mask": "128", "ipv6_scope": "Host",
        "state": "UP LOOPBACK RUNNING", "mtu": "65536", "metric": "1",
        "rx_packets": "28287", "rx_errors": "0", "rx_dropped": "0", "rx_overruns": "0", "rx_frame": "0",
        "tx_packets": "28287", "tx_errors": "0", "tx_dropped": "0", "tx_overruns": "0", "tx_carrier": "0",
        "rx_bytes": "8793361", "tx_bytes": "8793361", "tx_collisions": "0",
    },
}

The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_ifconfig
parse_line
The Parse Line converts simple Unix command output into an addressable data structure, usually a dictionary.
Step details:
Step | parse_line
Incoming data type | string
Return data type | iterable
Configuration of arguments
There are two arguments that can be configured to return data for a collection object. Both arguments are optional; if neither is provided, the step returns its input value unchanged.
This includes:
Argument | Type | Default | Description
---|---|---|---
split_type | string | "" | Optional. This argument determines the split type. For this particular step, the only valid option is colon.
key | string | "" | Optional. This argument sets a key in the final result. The other possible value is from_output.
If you only provide the key argument with a given value, a dictionary with one item will be returned. The key of this dictionary is the value you provide in the key argument, and the value is the input of the step.
If you provide the split_type argument with the value colon and the key argument with the value from_output, a dictionary will be returned in which each line of the step's input becomes an element. The keys and values come from splitting each line on the ":" character.
These two argument values must be paired to get this behavior. If any other combination of split_type and key is provided, an empty dictionary will be the returned result.
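The argument handling described above can be sketched as follows (illustrative Python, not the framework's own implementation):

```python
# Illustrative sketch of the parse_line argument handling.
def parse_line(data, split_type="", key=""):
    if split_type == "colon" and key == "from_output":
        # One dictionary entry per input line, split on the ":" character.
        result = {}
        for line in data.splitlines():
            left, _, right = line.partition(":")
            result[left.strip()] = right.strip()
        return result
    if not split_type and key:
        # key only: one-item dictionary wrapping the input.
        return {key: data}
    if split_type or key:
        # Any other combination of values yields an empty dictionary.
        return {}
    return data  # no arguments: input passes through unchanged
```

Each branch corresponds to one of the behaviors described above, in the same order the documentation presents them.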
Below is an example where the arguments are not provided:
Incoming data to step:
Linux Debian 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
Output without a provided argument appears as:
Linux Debian 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - parse_line
Below are two examples where the key argument is provided; each returns a dictionary with a single item.
Incoming data to step:
5.10.0-8-amd64
Provided input to step:
key: kernel
Output appears as:
{"kernel": "5.10.0-8-amd64"}

The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_line:
        key: kernel
Incoming data to step:
"Tue Oct 21 17:26:30 EDT 2021"
Provided input to step:
split_type: ''
key: date
Output appears as:
{"date": "Tue Oct 21 17:26:30 EDT 2021"}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_line:
        split_type: ''
        key: date
Incoming data to step:
MemTotal: 8144576 kB
MemFree: 1316468 kB
MemAvailable: 3685676 kB
Buffers: 403324 kB
Cached: 2095104 kB
SwapCached: 0 kB
Active: 4581348 kB
Inactive: 1156620 kB
Active(anon): 3254544 kB
Inactive(anon): 116952 kB
Provided input to step:
split_type: colon
key: from_output

Output appears as:
{
    "MemTotal": "8144576 kB", "MemFree": "1316468 kB", "MemAvailable": "3685676 kB",
    "Buffers": "403324 kB", "Cached": "2095104 kB", "SwapCached": "0 kB",
    "Active": "4581348 kB", "Inactive": "1156620 kB",
    "Active(anon)": "3254544 kB", "Inactive(anon)": "116952 kB"
}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_line:
        split_type: colon
        key: from_output
parse_netstat
A Parse Netstat, like any of the Data Table Parsers, converts a table-like Unix command response into an addressable data dictionary.
The Parse Netstat works similarly to the Parse Table Row, but it is dedicated to parsing responses from the netstat command. It provides a dedicated port key derived from the Local Address column of the netstat response.
Step details:
Framework Name | parse_netstat
Incoming data type | list of strings or string
Return data type | dictionary
Configuration of arguments
Various arguments can be configured for the Parse Netstat step. This includes:
Argument | Type | Default
---|---|---
split_type | str | single_space
skip_header | bool | True
modify_headers | bool | True
separator | str | ""
headers | str | ""
headers_to_replace | dict | {}
Below is an example of a Parse Netstat step.
Incoming data to step:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 ::1:631 :::* LISTEN
Input arguments to step:
split_type: single_space
skip_header: True
modify_headers: True
separator: ""
headers: ""
Output:
{
    "proto": ["tcp", "tcp", "tcp", "tcp6", "tcp6"],
    "recv-q": ["0", "0", "0", "0", "0"],
    "send-q": ["0", "0", "0", "0", "0"],
    "local_address": ["127.0.1.1:53", "0.0.0.0:22", "127.0.0.1:631", ":::22", "::1:631"],
    "foreign_address": ["0.0.0.0:*", "0.0.0.0:*", "0.0.0.0:*", ":::*", ":::*"],
    "state": ["LISTEN", "LISTEN", "LISTEN", "LISTEN", "LISTEN"],
    "port": ["53", "22", "631", "22", "631"],
}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_netstat:
        split_type: single_space
        skip_header: True
        modify_headers: True
        separator: ""
        headers: ""
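The dedicated port key can be derived from the Local Address column with a right-hand split, which is what makes it work for IPv6 addresses as well. A minimal sketch (the framework performs this for you):

```python
# How a dedicated "port" value can be derived from the netstat
# Local Address column (sketch; the framework performs this for you).
local_address = ["127.0.1.1:53", "0.0.0.0:22", "127.0.0.1:631", ":::22", "::1:631"]

# Splitting from the right handles IPv6 addresses, where ":" also
# appears inside the address itself.
ports = [addr.rsplit(":", 1)[-1] for addr in local_address]
```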
parse_proc_net_snmp
The Parse Proc/net/snmp step turns the contents of the /proc/net/snmp file into an addressable data dictionary. The generated dictionary pairs the keys found on one line with the values found on the line that follows it.
Step details:
Framework Name | parse_proc_net_snmp
Incoming data type | string or list of strings
Return data type | dictionary
Below is an example of a Parse Proc/net/snmp step.
Incoming data to step:
Ip: Forwarding DefaultTTL InReceives InHdrErrors InAddrErrors ForwDatagrams
Ip: 2 64 2124746 0 61 0
Icmp: InMsgs InErrors InCsumErrors InDestUnreachs InTimeExcds InParmProbs InSrcQuenchs InRedirects
Icmp: 7474 0 0 98 0 0 0 0
IcmpMsg: InType3 InType8 OutType0 OutType3
IcmpMsg: 98 7376 7376 104
Tcp: RtoAlgorithm RtoMin RtoMax MaxConn ActiveOpens PassiveOpens AttemptFails EstabResets
Tcp: 1 200 120000 -1 2760 13547 55 597
Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors IgnoredMulti
Udp: 87737 104 0 10695 0 0 0 405592
UdpLite: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors InCsumErrors
UdpLite: 0 0 0 0 0 0 0
Output:
{
    "Ip": {"Forwarding": 2, "DefaultTTL": 64, "InReceives": 2124746, "InHdrErrors": 0, "InAddrErrors": 61, "ForwDatagrams": 0},
    "Icmp": {"InMsgs": 7474, "InErrors": 0, "InCsumErrors": 0, "InDestUnreachs": 98, "InTimeExcds": 0, "InParmProbs": 0, "InSrcQuenchs": 0, "InRedirects": 0},
    "IcmpMsg": {"InType3": 98, "InType8": 7376, "OutType0": 7376, "OutType3": 104},
    "Tcp": {"RtoAlgorithm": 1, "RtoMin": 200, "RtoMax": 120000, "MaxConn": -1, "ActiveOpens": 2760, "PassiveOpens": 13547, "AttemptFails": 55, "EstabResets": 597},
    "Udp": {"InDatagrams": 87737, "NoPorts": 104, "InErrors": 0, "OutDatagrams": 10695, "RcvbufErrors": 0, "SndbufErrors": 0, "InCsumErrors": 0, "IgnoredMulti": 405592},
    "UdpLite": {"InDatagrams": 0, "NoPorts": 0, "InErrors": 0, "OutDatagrams": 0, "RcvbufErrors": 0, "SndbufErrors": 0, "InCsumErrors": 0},
}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_proc_net_snmp
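The header/value pairing can be sketched in a few lines of Python: each "Proto:" header line is zipped with the value line that follows it (illustrative only, not the framework's implementation):

```python
# Illustrative sketch: each "<Proto>:" header line is paired with the
# "<Proto>:" value line that follows it.
lines = [
    "Icmp: InMsgs InErrors InCsumErrors",
    "Icmp: 7474 0 0",
    "Udp: InDatagrams NoPorts InErrors",
    "Udp: 87737 104 0",
]

parsed = {}
for header, values in zip(lines[::2], lines[1::2]):
    proto = header.split(":")[0]          # e.g. "Icmp"
    keys = header.split()[1:]             # counter names from the header line
    nums = [int(v) for v in values.split()[1:]]  # counter values from the next line
    parsed[proto] = dict(zip(keys, nums))
```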
parse_sectional_rows
Parse Sectional Rows converts a sectioned, table-format response into a data dictionary. For each line, the parser exhibits the following behavior:
If no colon (:) is present in the line, the last 12 characters will be removed and the remaining string will be transformed to lower case. This value will be used as a key in the output dictionary.
If a colon (:) is present, the data to the left will be the key, and the data to the right will be the value.
Step details:
Framework Name | parse_sectional_rows
Incoming data type | list of strings or dictionary
Return data type | dictionary
Configuration of arguments
The following argument can be configured for the Parse Sectional Rows step:
Argument | Type | Default | Description
---|---|---|---
key | string | "" | Optional. Use this argument when the input data is a dictionary and you need to parse an element (example 2).
Below are two examples of the Parse Sectional Rows step.
Incoming data to step:
[
    "Section1 Information",
    "1.0: First",
    "1.1: Second",
    "Section2 Information",
    "2.0: First",
    "2.1: Second",
]
If no arguments are provided, the output would be:
{
    "section1": {"1.0": "First", "1.1": "Second"},
    "section2": {"2.0": "First", "2.1": "Second"},
}

The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_sectional_rows
Incoming data to step:
{
    "data": [
        "Section1 Information",
        "1.0: First",
        "1.1: Second",
        "Section2 Information",
        "2.0: First",
        "2.1: Second",
    ]
}
Input arguments to step:
key: data
Output:
{
    "section1": {"1.0": "First", "1.1": "Second"},
    "section2": {"2.0": "First", "2.1": "Second"},
}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_sectional_rows:
        key: data
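The two parsing rules can be sketched as follows (illustrative Python; the 12 characters removed from a section header correspond to the trailing " Information" in the examples above):

```python
# Illustrative sketch of the two Parse Sectional Rows rules.
def parse_sectional_rows(lines):
    result, section = {}, None
    for line in lines:
        if ":" in line:
            # Colon rule: left side is the key, right side is the value.
            left, _, right = line.partition(":")
            result[section][left.strip()] = right.strip()
        else:
            # No colon: drop the last 12 characters (" Information" in
            # the example) and lower-case the rest to form the section key.
            section = line[:-12].lower()
            result[section] = {}
    return result

sections = parse_sectional_rows([
    "Section1 Information",
    "1.0: First",
    "1.1: Second",
])
```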
parse_split_response
Parse Split Response converts multi-line Unix command output into a list of strings. Each line of the command response appears as an element of the returned list in the step’s output.
Step details:
Step | parse_split_response
Incoming data type | string
Return data type | list of strings
Below are two examples of a parse split response step.
Incoming data to step:
nr_free_pages 359986
nr_zone_inactive_anon 29307
nr_zone_active_anon 769830
nr_zone_inactive_file 251253
nr_zone_active_file 356384
Output:
[
    "nr_free_pages 359986",
    "nr_zone_inactive_anon 29307",
    "nr_zone_active_anon 769830",
    "nr_zone_inactive_file 251253",
    "nr_zone_active_file 356384"
]
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_split_response
Incoming data to step:
Some
SSH
response
Output:
["Some", "SSH", "response"]
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_split_response
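The step's behavior is essentially Python's `str.splitlines`. A minimal sketch:

```python
# The step is essentially str.splitlines (sketch, not framework code).
response = "nr_free_pages 359986\nnr_zone_inactive_anon 29307"
lines = response.splitlines()
```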
parse_table_column
The Parse Table Column step turns the Unix command response into a list of addressable dictionaries. Each dictionary corresponds to a single line of the command response. Users also have the option to skip lines.
Step details:
Framework Name | parse_table_column
Incoming data type | string or list of strings
Return data type | list of dictionaries
Configuration of arguments
Various arguments are available for configuration in this step. These include:
Argument | Type | Default | Description
---|---|---|---
skip_lines_start | int | 0 | Optional. Useful for skipping lines that should not be included as input for parsing. It represents the number of lines skipped at the beginning.
skip_lines_end | int | 0 | Optional. Useful for skipping lines that should not be included as input for parsing. It represents the number of lines skipped at the end.
key_list | list | None |
columns | int | None | Required. This argument is the maximum number of columns the parser should use when parsing the input data. It needs to be equal to or greater than the number of actual columns in the command response. The column count starts at 0 (e.g., if you have 5 columns, your max would be 4). It is helpful if the table response has extra white space (which will be interpreted as empty columns) at the end.
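The skipping and column-splitting behavior can be sketched as follows (illustrative Python; the real step requires the columns argument, while this sketch defaults it for brevity):

```python
# Illustrative sketch of the Parse Table Column behavior.
def parse_table_column(lines, skip_lines_start=0, skip_lines_end=0,
                       columns=-1, key_list=None):
    end = len(lines) - skip_lines_end
    rows = []
    for line in lines[skip_lines_start:end]:
        # Split into at most columns + 1 fields so a trailing column
        # containing spaces stays intact.
        fields = line.split(None, columns)
        keys = key_list if key_list else range(len(fields))
        rows.append(dict(zip(keys, fields)))
    return rows

rows = parse_table_column(
    ["total 36",
     "-rw------- 1 em7admin em7admin 2706 Oct 21 19:00 nohup.out"],
    skip_lines_start=1, columns=8)
```

With `columns: 8`, the nine fields of the `ls -l` line map to keys 0 through 8, matching example 1 below.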
Below are three examples of the Parse Table Column step.
Incoming data to step:
total 36
-rw------- 1 em7admin em7admin 2706 Oct 21 19:00 nohup.out
-rw-r--r-- 1 em7admin em7admin 32 Nov 3 17:35 requirements.txt
Input arguments to step:
skip_lines_start: 1
skip_lines_end: 0
columns: 8
Output:
[
    {0: "-rw-------", 1: "1", 2: "em7admin", 3: "em7admin", 4: "2706", 5: "Oct", 6: "21", 7: "19:00", 8: "nohup.out"},
    {0: "-rw-r--r--", 1: "1", 2: "em7admin", 3: "em7admin", 4: "32", 5: "Nov", 6: "3", 7: "17:35", 8: "requirements.txt"},
]
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_table_column:
        skip_lines_start: 1
        skip_lines_end: 0
        columns: 8
Incoming data to step:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 ::1:631 :::* LISTEN
Input arguments to step:
skip_lines_start: 2
skip_lines_end: 1
columns: 5
Output:
[
    {0: "tcp", 1: "0", 2: "0", 3: "127.0.1.1:53", 4: "0.0.0.0:*", 5: "LISTEN"},
    {0: "tcp", 1: "0", 2: "0", 3: "0.0.0.0:22", 4: "0.0.0.0:*", 5: "LISTEN"},
    {0: "tcp", 1: "0", 2: "0", 3: "127.0.0.1:631", 4: "0.0.0.0:*", 5: "LISTEN"},
    {0: "tcp6", 1: "0", 2: "0", 3: ":::22", 4: ":::*", 5: "LISTEN"},
    {0: "tcp6", 1: "0", 2: "0", 3: "::1:631", 4: ":::*", 5: "LISTEN"},
]
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_table_column:
        skip_lines_start: 2
        skip_lines_end: 1
        columns: 5
Incoming data to step:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN
tcp6 0 0 ::1:631 :::* LISTEN
Input arguments to step:
skip_lines_start: 2
skip_lines_end: 0
key_list: ["protocol", "recv-q", "send-q", "local", "foreign", "state"]
columns: 5
Output:
[
    {'protocol': 'tcp', 'recv-q': '0', 'send-q': '0', 'local': '127.0.1.1:53', 'foreign': '0.0.0.0:*', 'state': 'LISTEN'},
    {'protocol': 'tcp', 'recv-q': '0', 'send-q': '0', 'local': '127.0.0.1:631', 'foreign': '0.0.0.0:*', 'state': 'LISTEN'},
    {'protocol': 'tcp6', 'recv-q': '0', 'send-q': '0', 'local': '::1:631', 'foreign': ':::*', 'state': 'LISTEN'}
]
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_table_column:
        skip_lines_start: 2
        skip_lines_end: 0
        key_list:
          - protocol
          - recv-q
          - send-q
          - local
          - foreign
          - state
        columns: 5
parse_table_row
A Parse Table Row converts the table-like Unix command response into an addressable data dictionary.
Step details:
Step | parse_table_row
Incoming data type | string or list of strings
Return data type | dictionary
Configuration of arguments
Many arguments can be configured for the Parse Table Row step. This includes:
Argument | Type | Default | Description
---|---|---|---
split_type | str | single_space | Optional. This argument determines the split type. For this particular step, the valid options are single_space, without_header, and custom_space (more details below).
skip_header | bool | False | Optional. This argument defines whether the parser needs to skip the first line of the command response when that line is not meaningful for the parsing (netstat example 1).
modify_headers | bool | True | Optional. This argument determines whether the parser needs to modify the header names. Values for the change need to be provided in the headers_to_replace argument. The following values change by default if they exist in the command response and the argument is set to True, even if you do not provide the headers_to_replace argument (example 2 and netstat example 1): Mounted on is replaced by mounted_on; IP address by ip_address; HW type by hw_type; HW address by hw_address; Local Address by local_address; Foreign Address by foreign_address.
separator | str | " " | Optional. This argument sets the string that will be used to split the command output when split_type is set to custom_space.
headers | str | "" | Optional. To be used when split_type is set to without_header so that you can apply your own set of headers to the table command response. It needs to be a string of keys separated by spaces.
headers_to_replace | dict | {} | Optional. Use this argument when modify_headers is set to True. Provide a dictionary where the keys are the current header names of the response and the values are the new header names you want to replace them with.
The split_type argument can take any of the following values:
- single_space - Parses the table-formatted command output into a dictionary by converting the first row/line of data into the keys of the dictionary; the value for each key is a list of all the column line items under that key/header (see example 1 and example 2). The parser transforms the keys into lower case. Note that the spaces in the header line define the keys; a single space character is enough to treat the following value as a different key. For this reason, it is necessary to replace header values that contain spaces by using the modify_headers and headers_to_replace arguments.
- without_header - Works like single_space, but should be used when the table-formatted command output does not have a header row. This provides an opportunity for you to apply your own headers with the headers argument.
- custom_space - Turns each line of the table into an element of the dictionary. The first item of each line in the command output is used as the key, and the remaining items in the row are set into a list (example 4).
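The single_space behavior can be sketched as follows (illustrative Python, not the framework's implementation): header fields become lower-case keys, and each subsequent row appends one value per key.

```python
# Illustrative sketch of the single_space split type.
def parse_single_space(lines):
    headers = [h.lower() for h in lines[0].split()]
    table = {h: [] for h in headers}
    for line in lines[1:]:
        # Split into at most len(headers) fields so the final column
        # (e.g. a full command line) may contain spaces.
        for header, value in zip(headers, line.split(None, len(headers) - 1)):
            table[header].append(value)
    return table

table = parse_single_space([
    "USER PID %CPU COMMAND",
    "user1 6458 0.0 grep --color=auto Z",
])
```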
Below are four examples of the Parse Table Row step.
Incoming data to step:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user1 6458 0.0 0.0 15444 924 pts/5 S+ 15:14 0:00 grep --color=auto Z
Input arguments to step:
split_type: single_space
skip_header: False
modify_headers: False
separator: ""
headers: ""
Output:
{
    "user": ["user1"], "pid": ["6458"], "%cpu": ["0.0"], "%mem": ["0.0"],
    "vsz": ["15444"], "rss": ["924"], "tty": ["pts/5"], "stat": ["S+"],
    "start": ["15:14"], "time": ["0:00"], "command": ["grep --color=auto Z"],
}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_table_row:
        split_type: single_space
        skip_header: False
        modify_headers: False
        separator: ""
        headers: ""
Incoming data to step:
Filesystem Type 1024-blocks Used Available Capacity Mounted on
udev devtmpfs 3032744 0 3032744 0% /dev
tmpfs tmpfs 611056 62508 548548 11% /run
/dev/sda1 ext4 148494760 7255552 133673080 6% /
tmpfs tmpfs 3055272 244 3055028 1% /dev/shm
tmpfs tmpfs 5120 0 5120 0% /run/lock
tmpfs tmpfs 3055272 0 3055272 0% /sys/fs/cgroup
tmpfs tmpfs 611056 84 610972 1% /run/user/1000
Input arguments to step:
split_type: single_space
skip_header: False
modify_headers: True
separator: ""
headers: ""
Output:
{
    "filesystem": ["udev", "tmpfs", "/dev/sda1", "tmpfs", "tmpfs", "tmpfs", "tmpfs"],
    "type": ["devtmpfs", "tmpfs", "ext4", "tmpfs", "tmpfs", "tmpfs", "tmpfs"],
    "1024-blocks": ["3032744", "611056", "148494760", "3055272", "5120", "3055272", "611056"],
    "used": ["0", "62508", "7255552", "244", "0", "0", "84"],
    "available": ["3032744", "548548", "133673080", "3055028", "5120", "3055272", "610972"],
    "capacity": ["0%", "11%", "6%", "1%", "0%", "0%", "1%"],
    "mounted_on": ["/dev", "/run", "/", "/dev/shm", "/run/lock", "/sys/fs/cgroup", "/run/user/1000"],
}
The Snippet Argument appears as:
low_code:
  version: 2
  steps:
    - <network_request>
    - parse_table_row:
        split_type: single_space
        skip_header: False
        modify_headers: True
        separator: ""
        headers: ""
Incoming data to step:
2 0 fd0 0 0 0 0 0 0 0 0 0 0 0
8 0 sda 530807 21 8777975 687626 13983371 161787 551438764 48053137 0 21892797 48721313
8 1 sda1 1804 0 11227 721 2049 0 4096 2906 0 3620 3626
8 2 sda2 528973 21 8764652 686683 13981322 161787 551434668 48050231 0 21890022 48728814
11 0 sr0 0 0 0 0 0 0 0 0 0 0 0
253 0 dm-0 16049 0 848912 45328 117397 0 714586 627210 0 387658 673185
253 1 dm-1 94 0 4456 35 0 0 0 0 0 25 35
253 2 dm-2 110 0 10380 337 406452 0 4416366 1593673 0 1530349 1594671
253 3 dm-3 191 0 14017 410 399562 0 3725067 1657647 0 1063574 1689139
253 4 dm-4 6125 0 167987 10596 2209975 0 65264306 7899194 0 2173858 7956596
253 5 dm-5 163 0 7111 661 926632 0 9161743 2889610 0 2705664 2892645
253 6 dm-6 143 0 2475 400 257 0 5509 708 0 996 1108
253 7 dm-7 505998 0 7706482 629324 10082835 0 468147091 34574715 0 14668657 35212163
Input arguments to step:
split_type: without_header
skip_header: False
modify_headers: False
separator: ""
headers: "major_number minor_mumber device_name reads_completed_successfully reads_merged sectors_read time_spent_reading(ms) writes_completed writes_merged sectors_written time_spent_writing(ms) I/Os_currently_in_progress time_spent_doing_I/Os(ms) weighted_time_spent_doing_I/Os(ms)"
Output:
{
    "major_number": ["2", "8", "8", "8", "11", "253", "253", "253", "253", "253", "253", "253", "253"],
    "minor_mumber": ["0", "0", "1", "2", "0", "0", "1", "2", "3", "4", "5", "6", "7"],
    "device_name": ["fd0", "sda", "sda1", "sda2", "sr0", "dm-0", "dm-1", "dm-2", "dm-3", "dm-4", "dm-5", "dm-6", "dm-7"],
    "reads_completed_successfully": ["0", "530807", "1804", "528973", "0", "16049", "94", "110", "191", "6125", "163", "143", "505998"],
    "reads_merged": ["0", "21", "0", "21", "0", "0", "0", "0", "0", "0", "0", "0", "0"],
    "sectors_read": ["0", "8777975", "11227", "8764652", "0", "848912", "4456", "10380", "14017", "167987", "7111", "2475", "7706482"],
    "time_spent_reading(ms)": ["0", "687626", "721", "686683", "0", "45328", "35", "337", "410", "10596", "661", "400", "629324"],
    "writes_completed": ["0", "13983371", "2049", "13981322", "0", "117397", "0", "406452", "399562", "2209975", "926632", "257", "10082835"],
    "writes_merged": ["0", "161787", "0", "161787", "0", "0", "0", "0", "0", "0", "0", "0", "0"],
    "sectors_written": ["0", "551438764", "4096", "551434668", "0", "714586", "0", "4416366", "3725067", "65264306", "9161743", "5509", "468147091"],
    "time_spent_writing(ms)": ["0", "48053137", "2906", "48050231", "0", "627210", "0", "1593673", "1657647", "7899194", "2889610", "708", "34574715"],
    "I/Os_currently_in_progress": ["0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0"],
    "time_spent_doing_I/Os(ms)": ["0", "21892797", "3620", "21890022", "0", "387658", "25", "1530349", "1063574", "2173858", "2705664", "996", "14668657"],
    "weighted_time_spent_doing_I/Os(ms)": ["0", "48721313", "3626", "48728814", "0", "673185", "35", "1594671", "1689139", "7956596", "2892645", "1108", "35212163"],
}
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - parse_table_row: split_type: without_header skip_header: False modify_headers: False separator: "" headers: "major_number minor_mumber device_name reads_completed_successfully reads_merged sectors_read time_spent_reading(ms) writes_completed writes_merged sectors_written time_spent_writing(ms) I/Os_currently_in_progress time_spent_doing_I/Os(ms) weighted_time_spent_doing_I/Os(ms)"
Incoming data to step:
cpu 55873781 59563877 9860209 722670590 35449 0 78511 0 0 0 cpu0 27983218 29082019 4926645 361957968 20859 0 24137 0 0 0 cpu1 27890563 30481858 4933564 360712621 14590 0 54374 0 0 0 intr 2141799923 22 257 0 0 0 0 0 0 1 0 0 0 15861 0 0 0 61 191591 ctxt 5821097301 btime 1571712300 processes 217415 procs_running 2 procs_blocked 0 softirq 888390539 0 370421671 854059 3231656 2258408 0 499710 374437477 0 136687558
Input arguments to step:
split_type: custom_space, skip_header: False, modify_headers: False, separator: " ", headers: "",
Output:
{ "cpu": ["55873781", "59563877", "9860209", "722670590", "35449", "0", "78511", "0", "0", "0"], "cpu0": ["27983218", "29082019", "4926645", "361957968", "20859", "0", "24137", "0", "0", "0",], "cpu1": ["27890563", "30481858", "4933564", "360712621", "14590", "0", "54374", "0", "0", "0",], "intr": ["2141799923", "22", "257", "0", "0", "0", "0", "0", "0", "1", "0", "0", "0", "15861", "0", "0", "0", "61", "191591",], "ctxt": ["5821097301"], "btime": ["1571712300"], "processes": ["217415"], "procs_running": ["2"], "procs_blocked": ["0"], "softirq": [ "888390539", "0", "370421671", "854059", "3231656", "2258408", "0", "499710", "374437477", "0", "136687558",], }
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - parse_table_row: split_type: custom_space skip_header: False modify_headers: False separator: "" headers: ""
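The custom_space example above can be sketched in Python. This is a minimal illustration, not the actual step implementation: each line's first token becomes the key and the remaining whitespace-separated tokens become the value list.

```python
# Minimal sketch of a "custom_space" split, for illustration only:
# the first token of each line is the key; the rest form the value list.
def parse_table_row_custom_space(text):
    table = {}
    for line in text.splitlines():
        tokens = line.split()  # split on any run of whitespace
        if tokens:
            table[tokens[0]] = tokens[1:]
    return table

stat = "ctxt 5821097301\nbtime 1571712300\nprocs_running 2"
print(parse_table_row_custom_space(stat))
# {'ctxt': ['5821097301'], 'btime': ['1571712300'], 'procs_running': ['2']}
```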
powershell
The PowerShell Requestor creates and executes calls against a remote Windows system. By default, it will attempt to convert the result to JSON by using the ConvertTo-Json Cmdlet. Any Cmdlet can be specified within the convert key. The returned value is a namedtuple, silo.low_code_steps.winrm.win_result(std_out=std_out, std_err=std_err, exit_code=exit_code).
Step details:
Framework Name | powershell |
Supported Credentials | PowerShell |
Supported Fields (PowerShell) | |
Step Parameters:
Parameter | Required | Type | Default | Description |
---|---|---|---|---|
command | Required | string | | Command to execute |
convert | Optional | string | | Cmdlet to execute as a pipe from the previous command, which enables easier parsing. To disable conversion, use false. |
stream | Optional | string | | Value to return from the request. |
Note
The result of the PowerShell command must be a primitive (string, integer, etc.). If the result is a PowerShell object, the result will be an Object, which cannot be parsed in the Snippet Framework.
Executing a PowerShell Command
In this example we will get the installed memory in MB through a PowerShell command.
low_code:
version: 2
steps:
- powershell:
command: "(Get-CimInstance Win32_PhysicalMemory | Measure-Object -Property capacity -Sum).sum / 1mb"
The output of this step:
win_result(std_out='4096\r\n', std_err='', exit_code=0)
Executing a PowerShell Script
In this example we will get the available memory in MB through a PowerShell script. The response will be the output of the script.
Note
If your script consumes multiple lines, you must inform YAML that the following lines should be treated as a single block. To accomplish this, use the pipe indicator, |.
low_code:
version: 2
steps:
- powershell:
command: |
$strComputer = $Host
Clear
$RAM = Get-WmiObject Win32_ComputerSystem
$MB = 1048576
"Installed Memory: " + [int]($RAM.TotalPhysicalMemory /$MB) + " MB"
The output of this step:
win_result(std_out='Installed Memory: 4095 MB\r\n', std_err='', exit_code=0)
Conversion Examples
In this first example, we will collect the ProductName and WindowsBuildLabEx from Get-ComputerInfo and perform no conversions on the data.
low_code:
version: 2
steps:
- powershell:
convert: false
command: Get-ComputerInfo | Select ProductName, WindowsBuildLabEx
The output of this step:
'\r\nProductName WindowsBuildLabEx \r\n----------- '
'----------------- \r\n 17763.1.amd64f'
're.rs5_release.180914-1434\r\n\r\n\r\n'
As you can see from the result, it returned a table, which is the default when performing a Select statement. However, this format is difficult to parse; it is better to convert the data to JSON or XML. The next example shows the data being returned as JSON, which allows it to be easily parsed by the Snippet Framework.
low_code:
version: 2
steps:
- powershell:
convert: Json
command: Get-ComputerInfo | Select ProductName, WindowsBuildLabEx
The output of this step:
win_result(std_out='{"ProductName":null,"WindowsBuildLabEx":"17763.1.amd64fre.rs5_release.18'}, std_err='', exit_code=0)
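Once ConvertTo-Json has run on the remote side, std_out contains plain JSON that can be parsed downstream. The win_result namedtuple below is a local stand-in built for illustration (the real one comes from silo.low_code_steps.winrm); only the json.loads call reflects what a consumer of the step's output would do.

```python
import json
from collections import namedtuple

# Local stand-in for silo.low_code_steps.winrm.win_result, for illustration.
win_result = namedtuple("win_result", ["std_out", "std_err", "exit_code"])

result = win_result(
    std_out='{"ProductName": null, "WindowsBuildLabEx": "17763.1.amd64fre.rs5_release.180914-1434"}',
    std_err="",
    exit_code=0,
)

# With convert: Json, std_out is a JSON document.
info = json.loads(result.std_out)
print(info["WindowsBuildLabEx"])  # 17763.1.amd64fre.rs5_release.180914-1434
```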
promql
The PromQL syntax is a YAML configuration format for specifying the data to collect from Prometheus. You explicitly provide a PromQL query, the result type, an aggregation function if needed, and the labels that will be taken as indices.
Syntax name | promql |
PromQL Format
promql:
id: RequestID1
query: promql_query
result_type: type
id: Identification for the request.
Note
If this value is not specified, the Snippet Framework will automatically create one. Specifying the ID allows easier tracking when debugging.
query: PromQL query.
result_type: The type of the result.
Note
The “result_type” is an attribute of the result returned by a PromQL query, indicating the type of the result. The two possible options are vector and matrix. If this value is not specified, the toolkit will assume that the expected result type is vector. You should know in advance the result type your PromQL query will generate.
Additionally, you can specify which labels you want to use as indices by using the labels key.
promql:
id: RequestID1
query: prometheus_query
result_type: type
labels:
- label1
- label2
id: Identification for the request.
query: PromQL query.
result_type: Result type.
labels: The labels in the order you would like to get as indices.
Note
If the labels key is not provided, all the labels will be retrieved as you would get them in the Prometheus expression browser.
In the case of providing labels that do not define the uniqueness of an index to identify a value, only the first retrieved value will be displayed and a log message will report the collision.
When you are using a PromQL query that will return a matrix result type, you will need to apply an aggregation function. A matrix result type represents a range of data points. To apply an aggregation function, you can use the following configuration.
promql:
id: RequestID1
query: prometheus_query
result_type: matrix
aggregation: function
id: Identification for the request.
query: PromQL query.
result_type: Use matrix as the result type.
aggregation: Aggregation function for a matrix result type.
The available options for aggregation functions are: mean, median, mode, min, max, and percentile.
If percentile is the aggregation function specified, you should also provide the percentile position by using the percentile key, an integer value between 1 and 99, as you can see below.
promql:
id: RequestID1
query: prometheus_query
result_type: matrix
aggregation: percentile
percentile: 95
id: Identification for the request.
query: PromQL query.
result_type: Use matrix as the result type.
aggregation: Use percentile as the aggregation function.
percentile: Percentile position.
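To make the percentile position concrete, here is a sketch of a nearest-rank percentile over a list of samples. The interpolation method the framework actually uses is not documented here, so treat this as an approximation rather than the real implementation.

```python
import math

# Illustrative nearest-rank percentile: the smallest sample such that at
# least `position` percent of the samples are less than or equal to it.
def percentile(values, position):
    if not 1 <= position <= 99:
        raise ValueError("percentile position must be between 1 and 99")
    ordered = sorted(values)
    rank = math.ceil(position / 100 * len(ordered))
    return ordered[rank - 1]

samples = [12, 40, 7, 39, 71, 80, 7, 52]
print(percentile(samples, 95))  # 80
```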
Example of use
If we want to collect the Kafka Exporter metrics for the number of in-sync replicas for a topic partition, the PromQL query should be:
kafka_topic_partition_in_sync_replica
The response is like this:
{
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="__consumer_offsets"}': '3',
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="apl.prod.agent.avail.trigger"}': '3',
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="apl.prod.app.agg.trigger"}': '3'
...
}
The Snippet Argument should look like this:
promql:
id: InSyncReplicas
query: kafka_topic_partition_in_sync_replica
result_type: vector
If we want to index by topic, the Snippet Argument should look like this:
promql:
id: InSyncReplicas
query: kafka_topic_partition_in_sync_replica
result_type: vector
labels:
- topic
The collected data is like this:
{
'{topic="__consumer_offsets"}': '3',
'{topic="apl.prod.agent.avail.trigger"}': '3',
'{topic="apl.prod.app.agg.trigger"}': '3'
...
}
The promql syntax takes the query, puts it in the http step as a param, and sends it to the Prometheus server as a REST API request. It then takes the response and parses it using the json step. Finally, it takes the parsed response, indexes it by the labels using the promql_selector step, and applies an aggregation function if needed.
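The chain can be pictured with standard tools. The endpoint below follows the public Prometheus HTTP API (/api/v1/query); the internal wiring of the http, json, and selector steps may differ, so this is only an illustration of the three stages.

```python
import json
from urllib.parse import urlencode

# Stage 1: the query is URL-encoded as a request parameter.
query = "up"
request_url = "http://prometheus:9090/api/v1/query?" + urlencode({"query": query})

# Stage 2: the json step parses the response body (canned here).
body = '{"status": "success", "data": {"resultType": "vector", "result": []}}'
response = json.loads(body)

# Stage 3: the selector then indexes response["data"]["result"] by labels.
print(request_url)                     # http://prometheus:9090/api/v1/query?query=up
print(response["data"]["resultType"])  # vector
```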
promql_matrix
The PromQL Matrix selector processes responses in the matrix format, as returned by the expression browser of a Prometheus server. This step returns dictionaries where the keys are built from the labels and the values are the result of applying an aggregation operation. It also allows you to show only the labels of interest by providing a list of labels as part of the arguments.
Step details:
Framework Name |
|
Parameters |
|
Note
When querying metrics in Prometheus, you may get some special values such as NaN, +Inf, and -Inf. SL1 does not support these values. To ensure that your monitoring data is accurate and reliable, these values are automatically filtered out.
Example Usage for Matrix Result Type
If the incoming data to the step is:
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {"__name__": "up", "job": "prometheus", "instance": "localhost:9090"},
"values": [[1435781430.781, "1"], [1435781445.781, "1"], [1435781460.781, "1"]],
},
{
"metric": {"__name__": "up", "job": "node", "instance": "localhost:9091"},
"values": [[1435781430.781, "0"], [1435781445.781, "0"], [1435781460.781, "1"]],
},
],
},
}
The output of this step will be:
{
'up{instance="localhost:9090", job="prometheus"}': [1, 1, 1],
'up{instance="localhost:9091", job="node"}': [0, 0, 1],
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
If the incoming data to the step is:
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "prometheus_http_request_duration_seconds_count",
"container": "prometheus",
"endpoint": "http-web",
"handler": "/",
"instance": "10.42.0.145:9090",
"job": "prom-op-kube-prometheus-st-prometheus",
"namespace": "kube-system",
"pod": "prometheus-prom-op-kube-prometheus-st-prometheus-0",
"service": "prom-op-kube-prometheus-st-prometheus"
},
"values": [
[
1681818434.852,
"10"
],
[
1681818464.852,
"11"
],
[
1681818494.852,
"12"
],
[
1681818524.852,
"12"
]
]
},
{
"metric": {
"__name__": "prometheus_http_request_duration_seconds_count",
"container": "prometheus",
"endpoint": "http-web",
"handler": "/static/*filepath",
"instance": "10.42.0.145:9090",
"job": "prom-op-kube-prometheus-st-prometheus",
"namespace": "kube-system",
"pod": "prometheus-prom-op-kube-prometheus-st-prometheus-0",
"service": "prom-op-kube-prometheus-st-prometheus"
},
"values": [
[
1681818434.852,
"80"
],
[
1681818464.852,
"80"
],
[
1681818494.852,
"80"
],
[
1681818524.852,
"88"
]
]
}
]
}
}
If we wanted to provide these input parameters:
labels: ["handler"]
The output of this step will be:
{
'{handler="/"}':[10, 11, 12, 12],
'{handler="/static/*filepath"}': [80, 80, 80, 88],
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: matrix
labels:
- handler
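The label indexing above can be sketched as follows. This is not the real step: it only shows how a matrix result might be indexed by a subset of labels, with sample values converted to floats and, as noted earlier, non-finite values (NaN, +Inf, -Inf) filtered out.

```python
import math

# Sketch of indexing a matrix result by a subset of labels.
def select_matrix(response, labels):
    out = {}
    for series in response["data"]["result"]:
        metric = series["metric"]
        key = "{" + ", ".join(f'{name}="{metric[name]}"' for name in labels) + "}"
        values = (float(sample) for _, sample in series["values"])
        # Drop NaN/Inf samples, which SL1 does not support.
        out[key] = [v for v in values if math.isfinite(v)]
    return out

response = {"data": {"result": [
    {"metric": {"handler": "/", "job": "prom"}, "values": [[1, "10"], [2, "11"]]},
    {"metric": {"handler": "/static/*filepath", "job": "prom"}, "values": [[1, "80"], [2, "88"]]},
]}}
print(select_matrix(response, ["handler"]))
# {'{handler="/"}': [10.0, 11.0], '{handler="/static/*filepath"}': [80.0, 88.0]}
```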
promql_vector
The PromQL Vector selector processes responses in a format similar to that of the expression browser in a Prometheus server. This step returns dictionaries where the keys are built from the labels. It also allows you to show only the labels of interest by providing a list of labels as part of the arguments.
Step details:
Framework Name |
|
Parameters |
|
Note
When querying metrics in Prometheus, you may get some special values such as NaN, +Inf, and -Inf. SL1 does not support these values. To ensure that your monitoring data is accurate and reliable, these values are automatically filtered out.
Example Usage for Vector Result Type
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {"consumergroup": "AIML_anomaly_detection.alert"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.autoselector"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.storage"},
"value": [1658874518.797, "3"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.train"},
"value": [1658874518.797, "1"],
},
{
"metric": {"consumergroup": "sl_event_storage"},
"value": [1658874518.797, "0"]
},
],
"resultType": "vector",
},
"status": "success",
}
If we wanted to provide these input parameters:
labels: ["consumergroup"]
The output of this step will be:
{
'{consumergroup="AIML_anomaly_detection.alert"}':"0",
'{consumergroup="AIML_anomaly_detection.autoselector"}':"0",
'{consumergroup="AIML_anomaly_detection.storage"}':"3",
'{consumergroup="AIML_anomaly_detection.train"}':"1",
'{consumergroup="sl_event_storage"}': "0",
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: vector
labels:
- consumergroup
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {"consumergroup": "AIML_anomaly_detection.alert"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.autoselector"},
"value": [1658874518.797, "0"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.storage"},
"value": [1658874518.797, "3"],
},
{
"metric": {"consumergroup": "AIML_anomaly_detection.train"},
"value": [1658874518.797, "1"],
},
{
"metric": {"consumergroup": "sl_event_storage"},
"value": [1658874518.797, "0"],
},
],
"resultType": "vector",
},
"status": "success",
}
The output of this step will be:
{
'{consumergroup="AIML_anomaly_detection.alert"}':"0",
'{consumergroup="AIML_anomaly_detection.autoselector"}':"0",
'{consumergroup="AIML_anomaly_detection.storage"}':"3",
'{consumergroup="AIML_anomaly_detection.train"}':"1",
'{consumergroup="sl_event_storage"}': "0",
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: vector
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "__consumer_offsets",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "apl.prod.app.agg.trigger",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "9",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "swap.data",
},
"value": [1658944573.158, "3"],
},
],
"resultType": "vector",
},
"status": "success",
}
If we wanted to provide these input parameters:
labels: ["service_name", "topic"]
The output of this step will be:
{
'{service_name="kafka-service-metrics", topic="__consumer_offsets"}': "3",
'{service_name="kafka-service-metrics", topic="apl.prod.app.agg.trigger"}': "3",
'{service_name="kafka-service-metrics", topic="swap.data"}': "3",
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: vector
labels:
- service_name
- topic
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "__consumer_offsets",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "apl.prod.app.agg.trigger",
},
"value": [1658944573.158, "3"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "9",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "swap.data",
},
"value": [1658944573.158, "3"],
},
],
"resultType": "vector",
},
"status": "success",
}
The output of this step will be:
{
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="__consumer_offsets"}': "3",
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="0", port_name="http-metrics", service_name="kafka-service-metrics", topic="apl.prod.app.agg.trigger"}': "3",
'kafka_topic_partition_in_sync_replica{instance="kafka-service-metrics.default.svc:9308", job="kubernetes-services", kubernetes_namespace="default", partition="9", port_name="http-metrics", service_name="kafka-service-metrics", topic="swap.data"}': "3",
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: vector
If the incoming data to the step is:
{
'data': {'result': [{'metric': {},
'value': [1659022840.388, '5.100745223340136']}],
'resultType': 'vector'},
'status': 'success'}
}
The output of this step will be:
{"{}":"5.100745223340136"}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: vector
If the incoming data to the step is:
{
"data": {
"result": [
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "__consumer_offsets",
},
"value": [1658944573.158, "1"],
},
{
"metric": {
"__name__": "kafka_topic_partition_in_sync_replica",
"instance": "kafka-service-metrics.default.svc:9308",
"job": "kubernetes-services",
"kubernetes_namespace": "default",
"partition": "0",
"port_name": "http-metrics",
"service_name": "kafka-service-metrics",
"topic": "apl.prod.app.agg.trigger",
},
"value": [1658944573.158, "3"],
},
],
"resultType": "vector",
},
"status": "success",
}
If we wanted to provide these input parameters:
labels: ["port_name"]
The output of this step will be:
{
'{port_name="http-metrics"}': "1",
}
The Snippet Argument should look like this:
promql:
id: my_request
query: <query>
result_type: vector
labels:
- port_name
Note
In the example above, the label port_name has the same value (http-metrics) for all elements. However, only the first entry will be returned. That is emphasized with an info log message like the one below:
The following labels were duplicated: {port_name="http-metrics"}. Only the first entry is being displayed.
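The first-entry-wins behavior can be sketched as below. This is not the real step: the real one emits an info log message on collision, represented here by a simple list of duplicated keys.

```python
# Sketch of label-based indexing for a vector result, keeping only the
# first entry when the chosen labels do not uniquely identify a value.
def select_vector(response, labels):
    out, collisions = {}, []
    for item in response["data"]["result"]:
        metric = item["metric"]
        key = "{" + ", ".join(f'{name}="{metric[name]}"' for name in labels) + "}"
        if key in out:
            collisions.append(key)
            continue  # first retrieved value wins
        out[key] = item["value"][1]
    return out, collisions

response = {"data": {"result": [
    {"metric": {"port_name": "http-metrics", "topic": "a"}, "value": [0, "1"]},
    {"metric": {"port_name": "http-metrics", "topic": "b"}, "value": [0, "3"]},
]}}
selected, dup = select_vector(response, ["port_name"])
print(selected)  # {'{port_name="http-metrics"}': '1'}
```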
regex_parser
The regex_parser step enables the use of regular expression processing to select data from a string. It wraps the Python standard library re module.
Note
All re methods are supported except for purge, compile, and escape. For methods that require additional parameters, check the re documentation to find usage and parameter names, which can be passed directly into the step as shown in example 2.
Step details:
Framework Name |
|
Parameters |
|
Reference |
Example - Using RegEx Methods
Search
static_value is returning a string where an IP address is within a block of text. The regex_parser’s search method is used to find the IP address.
low_code:
version: 2
steps:
- static_value: an IP 192.168.0.42 where the regex will be applied
- regex_parser:
flags:
- I
- M
method: search
regex: (192\.168\.0\.\d{1,3})
Output
{
'match': '192.168.0.42',
'groups': ('192.168.0.42',),
'span': (6, 18)
}
Substitution
static_value is returning a string where a block of text contains tab-separated data. Using regex_parser’s sub method, we will change tabs into commas.
low_code:
version: 2
steps:
- static_value: "Sepal length\tSepal width\tPetal length\tPetal width\tSpecies\n
5.1\t3.5\t1.4\t0.2\tI. setosa\n
4.9\t3.0\t1.4\t0.2\tI. setosa\n
4.7\t3.2\t1.3\t0.2\tI. setosa\n
4.6\t3.1\t1.5\t0.2\tI. setosa\n
5.0\t3.6\t1.4\t0.2\tI. setosa\n"
- regex_parser:
flags:
- I
- M
method: sub
regex: "\t"
repl: ","
count: 0
Output
{
'groups': '',
'match': 'Sepal length,Sepal width,Petal length,Petal width,Species\n'
' 5.1,3.5,1.4,0.2,I. setosa\n'
' 4.9,3.0,1.4,0.2,I. setosa\n'
' 4.7,3.2,1.3,0.2,I. setosa\n'
' 4.6,3.1,1.5,0.2,I. setosa\n'
' 5.0,3.6,1.4,0.2,I. setosa\n',
'span': ''
}
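Since the step wraps the standard re module, both examples above can be reproduced directly with it. The I and M flags map to re.IGNORECASE and re.MULTILINE.

```python
import re

flags = re.IGNORECASE | re.MULTILINE

# Search example: find the IP address in a block of text.
m = re.search(r"(192\.168\.0\.\d{1,3})",
              "an IP 192.168.0.42 where the regex will be applied", flags)
print(m.group(0), m.span())  # 192.168.0.42 (6, 18)

# Substitution example: change tabs into commas.
csv = re.sub(r"\t", ",", "Sepal length\tSepal width", count=0, flags=flags)
print(csv)  # Sepal length,Sepal width
```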
regex_select_table
The Regex Select Table step picks data by using a regular expression. Another argument, an index, can be provided. The index allows users to return more specific data.
Step details:
Step |
|
Incoming data type |
dictionary |
Return data type |
dictionary |
Configuration of arguments
Two arguments can be configured for the Regex Select Table step:
Argument |
Type |
Default |
Description |
---|---|---|---|
regex |
string |
Required. This argument is the regular expression used
to parse the data.
|
|
index |
int |
None |
Optional. Use this argument to get a specific indexed value.
|
Below are three examples of the Regex Select Table step.
Incoming data to step:
{ "cpu": ["170286", "1547", "212678", "10012284"], "cpu0": ["45640", "322", "54491", "2494773"], "cpu1": ["39311", "966", "51604", "2507064"], "ctxt": ["37326362"], "btime": ["1632486419"], "processes": ["8786"], }
Input arguments to step:
regex: btime
Output:
{"btime": "1632486419"}
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - regex_select_table: regex: btime
Incoming data to step:
{ "cpu": ["170286", "1547", "212678", "10012284"], "cpu0": ["45640", "322", "54491", "2494773"], "cpu1": ["39311", "966", "51604", "2507064"], "ctxt": ["37326362"], "btime": ["1632486419"], "processes": ["8786"], }
Input arguments to step:
regex: ^cpu\d*$
Output:
{ "cpu": ["170286", "1547", "212678", "10012284"], "cpu0": ["45640", "322", "54491", "2494773"], "cpu1": ["39311", "966", "51604", "2507064"] }
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - regex_select_table: regex: ^cpu\d*$
Incoming data to step:
{ "cpu": ["170286", "1547", "212678", "10012284"], "cpu0": ["45640", "322", "54491", "2494773"], "cpu1": ["39311", "966", "51604", "2507064"], "ctxt": ["37326362"], "btime": ["1632486419"], "processes": ["8786"], }
Input arguments to step:
regex: ^cpu\d*$ index: 2
Output:
{"cpu": "212678", "cpu0": "54491", "cpu1": "51604"}
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - regex_select_table: regex: ^cpu\d*$ index: 2
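The selection logic of the examples above can be sketched as follows. This is an illustration, not the real step: it keeps the keys matching the regex, and with index it returns the single value at that position. (The first example in the text suggests single-element lists may also be unwrapped; this sketch does not reproduce that detail.)

```python
import re

# Sketch of the Regex Select Table behavior.
def regex_select_table(table, regex, index=None):
    selected = {k: v for k, v in table.items() if re.search(regex, k)}
    if index is not None:
        return {k: v[index] for k, v in selected.items()}
    return selected

stats = {
    "cpu": ["170286", "1547", "212678", "10012284"],
    "cpu0": ["45640", "322", "54491", "2494773"],
    "ctxt": ["37326362"],
}
print(regex_select_table(stats, r"^cpu\d*$", index=2))
# {'cpu': '212678', 'cpu0': '54491'}
```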
select_table_item
The Select Table Item step selects data using indexes in an incoming list of dictionaries, where each dictionary represents a table row. When an integer index argument is provided, the corresponding value is returned for each row. When the index argument is a list, the values at each index are concatenated and returned as a single value.
Step details:
Step |
|
Incoming data type |
list of dictionaries |
Return data type |
list |
Configuration of arguments
The following argument can be configured for the Select Table Item step:
Argument |
Type |
Description |
---|---|---|
index |
int or list |
Required. If the argument is an int, it works as an index
to select the data. If the argument is a list, it joins the
values at each index and returns a single value.
|
Below are four examples of the Select Table Item step.
Incoming data to step:
[{1: "2.63", 2: "8.05"}]
Input arguments to step:
index: 1
Output:
["2.63"]
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - select_table_item: index: 1
Incoming data to step:
[{1: "2.63", 2: "8.05"}]
Input arguments to step:
index: [1, 2]
Output:
["2.63 8.05"]
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - select_table_item: index: - 1 - 2
Incoming data to step:
[{1: "2.63", 2: "8.05"}, {1: "1.85", 2: "1.95"}]
Input arguments to step:
index: [1]
Output:
["2.63 1.85"]
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - select_table_item: index: - 1
Incoming data to step:
[{0: "0.5", 1: "2.63", 2: "8.05"}, {0: "0.4", 1: "1.85", 2: "1.95"}]
Input arguments to step:
index: [0, 2]
Output:
["0.5 8.05", "0.4 1.95"]
The Snippet Argument appears as:
low_code: version: 2 steps: - <network_request> - select_table_item: index: - 0 - 2
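The indexing behavior can be sketched as below. This sketch matches examples 1, 2, and 4 above: an integer index selects one value per row, and a list index joins the values at those positions with a space. (Example 3's output suggests additional handling for single-element index lists that this sketch does not attempt to reproduce.)

```python
# Sketch of the Select Table Item step's index handling.
def select_table_item(rows, index):
    if isinstance(index, list):
        # Join the values at each index within a row into one string.
        return [" ".join(row[i] for i in index) for row in rows]
    return [row[index] for row in rows]

rows = [{0: "0.5", 1: "2.63", 2: "8.05"}, {0: "0.4", 1: "1.85", 2: "1.95"}]
print(select_table_item(rows, [0, 2]))  # ['0.5 8.05', '0.4 1.95']
```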
simple_key
The simple_key step can be used to select a value from a dictionary or a list. simple_key has been deprecated. See the examples below for migrating simple_key to JMESPath.
Note
The simple_key step cannot be used to select indexed data. Indexed data is required when grouping like sets of collection objects. This step should not be used for that use case.
Step details:
Framework Name |
|
Format |
path to the key |
Example - Selecting Attributes with simple_key
Nested Key
{
"key": {
"subkey": {
"subsubkey": "subsubvalue",
"num": 12
}
}
}
From the provided data above, we would like to select num. The Snippet Argument would be as follows.
low_code:
version: 2
steps:
- static_value:
key:
subkey:
subsubkey: subsubvalue
num: 12
- simple_key: "key.subkey.num"
Output
12
The equivalent jmespath
Snippet Argument is shown below.
low_code:
version: 2
steps:
- static_value:
key:
subkey:
subsubkey: subsubvalue
num: 12
- jmespath:
value: key.subkey.num
List Selection
{
"key": {
1: {
"2": ["value0", "value1", "value2"]
}
}
}
From the provided data, we would like to select value0. The Snippet Argument that selects the data is as follows.
low_code:
version: 2
steps:
- static_value:
key:
'1':
'2':
- value0
- value1
- value2
- simple_key: key.1.2.0
Output
"value0"
The equivalent jmespath
Snippet Argument is shown below.
low_code:
version: 2
steps:
- static_value:
key:
'1':
'2':
- value0
- value1
- value2
- jmespath:
value: key."1"."2"[0]
Object Selection
[
{"id": "value0", "cheese": "swiss"},
{"id": "value1", "cheese": "goat"}
]
From the provided data above, we would like to select only the id values. The Snippet Argument that selects the data is as follows.
low_code:
version: 2
steps:
- static_value:
- id: value0
cheese: swiss
- id: value1
cheese: goat
- simple_key: "id"
Output
["value0", "value1"]
The equivalent jmespath
Snippet Argument is shown below.
low_code:
version: 2
steps:
- static_value:
- id: value0
cheese: swiss
- id: value1
cheese: goat
- jmespath:
value: "[*].id"
SL1 API
Below is a Snippet Argument whose static_value step is mocking an SL1 REST API payload.
low_code:
version: 2
steps:
- static_value: '[{"URI":"\/api\/account\/2","description":"AutoAdmin"}
,{"URI":"\/api\/account\/3","description":"AutoRegUser"},
{"URI":"\/api\/account\/1","description":"user"},
{"URI":"\/api\/account\/4","description":"snadmin"}]'
- json:
- simple_key: "description"
Output
['AutoAdmin', 'AutoRegUser', 'user', 'snadmin']
The output of the simple_key step is only the description field. This could be assigned to a collection object in SL1 and would list out the descriptions.
Note
If both the URI and description fields were collected in two separate collection objects, then simple_key should not be used. See the JMESPath step instead.
The equivalent jmespath
Snippet Argument is shown below.
low_code:
version: 2
steps:
- static_value: '[{"URI":"\/api\/account\/2","description":"AutoAdmin"}
,{"URI":"\/api\/account\/3","description":"AutoRegUser"},
{"URI":"\/api\/account\/1","description":"user"},
{"URI":"\/api\/account\/4","description":"snadmin"}]'
- json:
- jmespath:
value: "[*].description"
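To make the migration examples concrete, here is a hypothetical re-implementation of simple_key's dot-path semantics: descend dicts by key, lists by numeric index, and fan out over a list of dicts. The function name and logic are illustrative, not the framework's actual code.

```python
# Hypothetical sketch of simple_key's dot-path traversal.
def simple_key(data, path):
    for part in path.split("."):
        if isinstance(data, list) and part.isdigit():
            data = data[int(part)]          # list selection by index
        elif isinstance(data, list):
            data = [item[part] for item in data]  # fan out over objects
        else:
            data = data[part]               # nested key selection
    return data

accounts = [{"URI": "/api/account/2", "description": "AutoAdmin"},
            {"URI": "/api/account/3", "description": "AutoRegUser"}]
print(simple_key(accounts, "description"))  # ['AutoAdmin', 'AutoRegUser']
```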
snmp
Framework Name |
|
Note
This step is not compatible with Bulk Dynamic Applications due to how cred_host is populated.
Credential Type |
Functionality |
---|---|
SNMP |
All fields within the credential are supported. |
Universal Credential |
Supports all functionality as the SNMP credential. The following fields are used by the requestor:
|
Step details:
Parameter |
Description |
---|---|
method |
Optional parameter that specifies the SNMP request method. Available options: get, get_multi, getbulk, walk. Default: walk |
oids |
List that specifies the OIDs that will be queried. Only the first value will be used for the get and walk methods. |
The SNMP Requestor enables the Snippet Framework to collect SNMP data. It supports the following SNMP operations:
get
Performs a request against a specific OID. If a table is found, no data is returned, as this method does not traverse tables. Returns a single value.
get_multi
Performs all requests as gets. This should only be used if the SNMP agent does not support PDU Packing. If the device supports PDU Packing, you should use getbulk. Returns a dictionary containing
{oid1: value1, oid2: value2...}
getbulk
Optimization where multiple get requests can be performed in a single handshake. This enables the requestor to be more performant. The device must support PDU Packing. Returns a dictionary containing
{oid1: value1, oid2: value2...}
walk
Walks an OID and traverses the table. Returns a dictionary containing
{oid1: value1, oid2: value2...}
Examples
Performing a SNMP get
In this example, we will query the sysDescr OID (.1.3.6.1.2.1.1.1.0).
low_code:
  version: 2
  steps:
    - snmp:
        method: get
        oids:
          - .1.3.6.1.2.1.1.1.0
Result: ScienceLogic EM7 G3 - All-In-One
Performing a SNMP get_multi
In this example, we are going to use the get_multi operation to query the first two ifDescr values. This operation should only be used when PDU Packing is not supported.
low_code:
  version: 2
  steps:
    - snmp:
        method: get_multi
        oids:
          - .1.3.6.1.2.1.2.2.1.2.1
          - .1.3.6.1.2.1.2.2.1.2.2
Result:
{".1.3.6.1.2.1.2.2.1.2.1": "lo", ".1.3.6.1.2.1.2.2.1.2.2": "ens160"}
Performing a SNMP getbulk
In this example, we are going to use the getbulk operation to query the first two ifDescr values. This operation should only be used when PDU Packing is supported.
low_code:
  version: 2
  steps:
    - snmp:
        method: getbulk
        oids:
          - .1.3.6.1.2.1.2.2.1.2.1
          - .1.3.6.1.2.1.2.2.1.2.2
Result:
{".1.3.6.1.2.1.2.2.1.2.1": "lo", ".1.3.6.1.2.1.2.2.1.2.2": "ens160"}
Performing a SNMP walk
In this example, we will query the ifDescr table (.1.3.6.1.2.1.2.2.1.2).
low_code:
  version: 2
  steps:
    - snmp:
        method: walk
        oids:
          - .1.3.6.1.2.1.2.2.1.2
Result:
{".1.3.6.1.2.1.2.2.1.2.1": "lo", ".1.3.6.1.2.1.2.2.1.2.2": "ens160"}
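Since walk (like get_multi and getbulk) returns a flat {oid: value} dictionary, a follow-on processing step often needs to recover the table index from each OID. Below is a minimal sketch of that pattern using the ifDescr walk result shown above; index_by_suffix is a hypothetical helper, not a framework API:

```python
# Walk result for the ifDescr table, as returned above
walk_result = {
    ".1.3.6.1.2.1.2.2.1.2.1": "lo",
    ".1.3.6.1.2.1.2.2.1.2.2": "ens160",
}

BASE_OID = ".1.3.6.1.2.1.2.2.1.2"

def index_by_suffix(result, base_oid):
    """Re-key a walk result by the OID suffix, i.e. the table index."""
    prefix = base_oid + "."
    return {oid[len(prefix):]: value
            for oid, value in result.items()
            if oid.startswith(prefix)}

print(index_by_suffix(walk_result, BASE_OID))
# {'1': 'lo', '2': 'ens160'}
```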
Performing a Dependent Collection
The SNMP Requestor can use a previous result to construct a new SNMP query.
This is done by the previous step setting the result as a
silo.low_code_steps.snmp.snmp_dependent_collection
object.
silo.low_code_steps.snmp.snmp_dependent_collection(method, oids)
    Information about the Dependent Collection
    Parameters:
        method (str) – SNMP method used to collect the data
        oids (Union[str, Tuple[str]]) – OID(s) to query
In this example we want to query information related to Avaya phones. To accomplish this, we must first query the LLDP MIB to determine which of the connected devices are Avaya phones. Once we know which indices are Avaya phones, we want to query the hostname for each device.
low_code:
  version: 2
  steps:
    - snmp:
        method: walk
        oids:
          - .1.0.8802.1.1.2.1.4.2.1.5
    - get_avaya_devices
    - format_hostname_oid
    - snmp
The first step, snmp, requests the sysObjId through the LLDP MIB for each connected device. The second step, get_avaya_devices, filters out any devices that do not respond with the Avaya sysObjId. The third step, format_hostname_oid, formats the OID for querying each device's hostname. The final step, snmp, queries the hostname OID for each device.
from silo.low_code_steps.snmp import snmp_dependent_collection

LLDP_HOSTNAME_OID = ".1.0.8802.1.1.2.1.4.1.1.9"
AVAYA_ROOT_OID = ".1.3.6.1.4.1.6889"

@register_processor
def get_avaya_devices(result):
    avaya_devices = []
    for oid, sysObjId in result.items():
        if AVAYA_ROOT_OID not in sysObjId:
            continue
        avaya_devices.append(oid)
    return avaya_devices

@register_processor
def format_hostname_oid(result):
    oid_list = []
    for oid in result:
        split_oid = oid.split(".")
        index = split_oid[14]
        ifIndex_long = ".".join(split_oid[12:14])
        oid_list.append(f"{LLDP_HOSTNAME_OID}.{ifIndex_long}.{index}")
    return snmp_dependent_collection("getbulk", oid_list)
Result:
{".1.0.8802.1.1.2.1.4.1.1.9.0.12.67": "hostname1", ".1.0.8802.1.1.2.1.4.1.1.9.0.14.63": "hostname2"}
ssh
The Secure Shell (SSH) Data Requestor allows users to communicate with remote devices through SSH requests, providing access to a command-line shell on a remote server over the network.
Because SSH provides strong encryption and authentication, commands can be executed securely. The protocol supports two methods of authentication: a username and password, or an SSH key. An SSH key is an access credential that serves the same purpose as a password but provides stronger security.
For password authentication, use the Username and Password fields. For SSH key authentication, use the Username and Private Key fields.
Step details include:
Step Name |
Package | silo.ssh_lc
Supported Credentials | SSH/Key Credential
Supported Fields of Credential |
Configuration of arguments
There are two arguments that can be configured to return data for a collection object:

Argument | Type | Default | Description
---|---|---|---
command | string | None | Required. Sets the command to execute.
standard_stream | string | stdout | Optional. Sets which standard stream is returned. The possible values are stdout (standard output, the stream to which the command writes its output data) and stderr (standard error, a second output stream used for error messages).
Below is an example of a command argument used to retrieve data about the CPU architecture of a Unix system.
lscpu
The Snippet Argument should look like this:
low_code:
  version: 2
  steps:
    - ssh:
        command: lscpu
The output of this example step would look similar to:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Model name: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2299.998
BogoMIPS: 4599.99
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-3
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx hypervisor lahf_lm ssbd ibrs ibpb stibp tsc_adjust arat md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities
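Because the ssh step returns the command's raw stdout as text, a follow-on step typically parses it. Below is a minimal sketch, not part of the framework, that splits lscpu-style "Key: value" lines into a dictionary; the sample text is an abbreviated copy of the output above:

```python
# Abbreviated lscpu output, as returned by the ssh step above
sample = """Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 4
Vendor ID: GenuineIntel"""

def parse_lscpu(text):
    """Split each 'Key: value' line of lscpu output into a dict entry."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

print(parse_lscpu(sample)["Architecture"])
# x86_64
```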
static_value
The static_value step is used to mock network responses. For example, instead of making an API request with the http step, the response can be mocked by using the static_value step and hardcoding the data.
Step details:
Framework Name |
Supported Credentials | N/A
Example - Mocking a Network Response
String Input
If we wanted to mock:
"Apple1,Ball1,Apple2,Ball2"
The Snippet Argument would look like this:
low_code:
  version: 2
  steps:
    - static_value: "Apple1,Ball1,Apple2,Ball2"
Output
'Apple1,Ball1,Apple2,Ball2'
Stringified JSON
By copying a stringified JSON response, you can avoid repeated REST API accesses when developing a Snippet Argument. Below is an example of a stringified JSON response used in conjunction with a json step.
low_code:
  version: 2
  steps:
    - static_value: "[
        {\"URI\":\"/api/account/2\",\"description\":\"AutoAdmin\"},
        {\"URI\":\"/api/account/3\",\"description\":\"AutoRegUser\"},
        {\"URI\":\"/api/account/1\",\"description\":\"user\"},
        {\"URI\":\"/api/account/4\",\"description\":\"snadmin\"}
      ]"
Output
'[
  {"URI": "/api/account/2", "description": "AutoAdmin"},
  {"URI": "/api/account/3", "description": "AutoRegUser"},
  {"URI": "/api/account/1", "description": "user"},
  {"URI": "/api/account/4", "description": "snadmin"}
]'
Adding the json step will convert the string into a list of dictionaries.
low_code:
  version: 2
  steps:
    - static_value: "[
        {\"URI\":\"/api/account/2\",\"description\":\"AutoAdmin\"},
        {\"URI\":\"/api/account/3\",\"description\":\"AutoRegUser\"},
        {\"URI\":\"/api/account/1\",\"description\":\"user\"},
        {\"URI\":\"/api/account/4\",\"description\":\"snadmin\"}
      ]"
    - json
Output
[
  {"URI": "/api/account/2", "description": "AutoAdmin"},
  {"URI": "/api/account/3", "description": "AutoRegUser"},
  {"URI": "/api/account/1", "description": "user"},
  {"URI": "/api/account/4", "description": "snadmin"}
]
Formatted Data
The static_value step also accepts YAML-expressed data structures, such as lists or dictionaries, as output. Below is the previous stringified JSON example Snippet Argument.
low_code:
  version: 2
  steps:
    - static_value: "[
        {\"URI\":\"/api/account/2\",\"description\":\"AutoAdmin\"},
        {\"URI\":\"/api/account/3\",\"description\":\"AutoRegUser\"},
        {\"URI\":\"/api/account/1\",\"description\":\"user\"},
        {\"URI\":\"/api/account/4\",\"description\":\"snadmin\"}
      ]"
    - json
Below is the equivalent YAML-expressed step argument for static_value.
low_code:
  version: 2
  steps:
    - static_value:
        - URI: /api/account/2
          description: AutoAdmin
        - URI: /api/account/3
          description: AutoRegUser
        - URI: /api/account/1
          description: user
        - URI: /api/account/4
          description: snadmin
The output of this Snippet Argument is a list of dictionaries. Notice that the json step is no longer needed.
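The equivalence of the two forms can be checked outside the framework with Python's standard json module: parsing the stringified payload yields the same list of dictionaries that the YAML form expresses directly. This is a sketch for illustration, assuming the framework's json step behaves like json.loads:

```python
import json

# The stringified JSON payload from the static_value example above
raw = ('[{"URI":"/api/account/2","description":"AutoAdmin"},'
       '{"URI":"/api/account/3","description":"AutoRegUser"},'
       '{"URI":"/api/account/1","description":"user"},'
       '{"URI":"/api/account/4","description":"snadmin"}]')

# The same data expressed directly, as the YAML form does
structured = [
    {"URI": "/api/account/2", "description": "AutoAdmin"},
    {"URI": "/api/account/3", "description": "AutoRegUser"},
    {"URI": "/api/account/1", "description": "user"},
    {"URI": "/api/account/4", "description": "snadmin"},
]

# Parsing the string recovers the structured form
assert json.loads(raw) == structured
```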
store_data
The store_data step allows a user to store the current result under a key of their choosing. This enables a pre-processed dataset to be used later, when the full result may be needed. For example, you might trim data you do not need now but still require the whole payload to make a decision in a later step.
This step does not update request_id so it will not affect the automatic cache_key generated by the Snippet Framework.
Framework Name |
key | storage_key
For example, if you wanted to store the current result into the key storage_key, you would use the following step definition:
store_data: storage_key
To access this data in a later step, you would use the following:
result_container.metadata["storage_key"]
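The store/retrieve pattern can be sketched with plain Python objects; ResultContainer below is a minimal stand-in for the framework's result container, not the real class:

```python
class ResultContainer:
    """Minimal stand-in for the framework's result container."""
    def __init__(self):
        self.metadata = {}

result_container = ResultContainer()

full_payload = {"needed": 1, "extra": "kept for a later decision"}

# store_data: storage_key -- stash the full, untrimmed result
result_container.metadata["storage_key"] = full_payload

# A later step can trim the working result ...
trimmed = {"needed": full_payload["needed"]}

# ... and still consult the stored copy when a decision needs the rest
stored = result_container.metadata["storage_key"]
print("extra" in stored)
# True
```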
winrm
The WinRM Requestor creates and executes calls against a remote Windows system. The returned value is a namedtuple, silo.low_code_steps.winrm.win_result(std_out=std_out, std_err=std_err, exit_code=exit_code).
Step details:
Framework Name | winrm
Supported Credentials | PowerShell
Supported Fields (PowerShell) |
Step Parameters:
Parameter | Required | Type | Default | Description
---|---|---|---|---
command | Required | string | | Command to execute
flags | Optional | list | | List of flags for the command
stream | Optional | string | | Value to return from the request. Available options are
Executing a WinRM command
In this example, we will query a remote system and determine which folders exist in the %APPDATA%\Python directory.
low_code:
  version: 2
  steps:
    - winrm:
        command: dir
        flags:
          - "%APPDATA%\\Python"
The output of this step:
win_result(std_out=' Volume in drive C has no label.\r\n Volume Serial Number is 16E1-6A59'
'\r\n\r\n Directory of C:\Users\Administrator\AppData\Roaming\Pyt'
'hon\r\n\r\n10/05/2023 10:57 AM <DIR> .\r\n10/05/2023 10:'
'57 AM <DIR> ..\r\n10/05/2023 10:57 AM <DIR> P'
'ython312\r\n 0 File(s) 0 bytes\r\n '
' 3 Dir(s) 42,964,643,840 bytes free\r\n', std_err='', exit_code=0)
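A later processing step receives this win_result namedtuple and can branch on its fields. Below is a minimal sketch with a hand-built namedtuple of the same shape; folders_from_dir is a hypothetical processor, and the stdout sample is simplified from the output above:

```python
from collections import namedtuple

# Same field names as silo.low_code_steps.winrm.win_result
win_result = namedtuple("win_result", ["std_out", "std_err", "exit_code"])

# Simplified stdout for illustration
result = win_result(std_out="Python312\r\n", std_err="", exit_code=0)

def folders_from_dir(res):
    """Hypothetical processor: fail on a nonzero exit code, else split stdout."""
    if res.exit_code != 0:
        raise RuntimeError(res.std_err or "command failed")
    return [line for line in res.std_out.splitlines() if line]

print(folders_from_dir(result))
# ['Python312']
```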