Usage

The Batch Upload API allows a set of related logs to be grouped together when uploading to Skylar Automated RCA. Compared to single-file uploads or the upload-status APIs, batch uploads provide a more controlled and organized way to send groups of information to Skylar Automated RCA. Multiple batch uploads can be underway concurrently.

The operational flow for batch uploads is:

  1. Make an API call to Skylar Automated RCA to begin a batch (begin_batch). On success, a unique batch ID is returned, which is used in subsequent steps while working with the batch. This API call creates the required Skylar Automated RCA state for a batch and must be the first operation for each new batch.
  2. Upload the logs associated with the batch, for example using ze or curl. Each upload sets the configuration variable ze_batch_id to notify Skylar Automated RCA that the logs are part of a batch; it must be set to the batch ID returned in step 1.
  3. When all files have been uploaded, make another API call to Skylar Automated RCA to end the batch upload phase (end_batch). This tells Skylar Automated RCA that all files for the batch have been uploaded and processing can begin.
  4. Check the state of the batch periodically (using the get_batch API) until processing has completed.
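The polling in step 4 can be sketched as a small shell loop. This is a minimal sketch, not part of the API: it assumes the get_batch response is JSON containing a top-level "state" field (as in the Example later on this page), and the host, token, and batch ID arguments are placeholders you must supply.

```shell
# Pull the "state" value out of a get_batch JSON response.
# Minimal sed-based extraction -- assumes "state": "<value>" appears once.
extract_state() {
  printf '%s' "$1" | sed -n 's/.*"state"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# Poll get_batch until the batch reaches a terminal state (step 4).
# host, token, and batch_id are placeholders for your own values.
poll_batch() {
  local host="$1" token="$2" batch_id="$3" resp state
  while :; do
    resp=$(curl --silent -H "Authorization: Token $token" \
                -H "Content-Type: application/json" \
                "https://$host/api/v2/batch/$batch_id")
    state=$(extract_state "$resp")
    case "$state" in
      Done|Failed|Cancelled) echo "$state"; return ;;
    esac
    sleep 30
  done
}

# Demo of the parsing helper on a sample response:
extract_state '{ "batch_id": "b1", "state": "Processing" }'   # prints "Processing"
```

Once step 3 has completed, poll_batch would be invoked with the same host, token, and batch ID values used in the upload calls.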

See the Example below for more information.

Additional operations that can be performed are cancelling a batch before it is committed (cancel_batch) and querying the state and progress of a batch (get_batch).

Batch IDs and Scope of Batches

Each batch upload is identified by a unique string, the batch ID. This is defined when the begin_batch API is called, and is valid for the lifetime of the batch upload.

Skylar Automated RCA automatically returns a new batch ID from the begin_batch API by default. Alternatively, a user-defined batch ID may be supplied on the begin_batch call; note that a user-defined ID cannot be reused until the batch has expired and been removed. Batch IDs are formed from 1-36 alphanumeric characters, plus '_' (underscore) and '-' (dash).
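When supplying a user-defined batch ID, the documented format can be checked client-side before calling begin_batch. This is purely a convenience sketch of the stated rule (1-36 characters, alphanumeric plus underscore and dash), not part of the API:

```shell
# Validate a user-defined batch ID against the documented format:
# 1-36 characters, each alphanumeric, '_' or '-'.
valid_batch_id() {
  case "$1" in
    *[!A-Za-z0-9_-]*|"") return 1 ;;   # illegal character, or empty
  esac
  [ "${#1}" -le 36 ]                   # length check
}

valid_batch_id "upgrade-2024-06-01_run1" && echo "accepted"   # prints "accepted"
valid_batch_id "no spaces allowed!"      || echo "rejected"   # prints "rejected"
```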

Batch IDs are used as part of ZAPI uploads, along with a ZAPI token. A batch ID is associated with that ZAPI token at creation time and may only be used with the same token in later upload calls.

The lifetime of a batch, or retention period, is set in hours. By default this is 8 hours; it can be overridden in the begin_batch API if desired. The retention period is also used to extend the lifetime of a batch as it successfully proceeds through each state.
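As a sketch only, a begin_batch request that supplies a user-defined batch ID and overrides the retention period might carry a JSON body along these lines. The field names batch_id and retention_hours are illustrative assumptions, not confirmed fields of the API; consult the begin_batch reference for the actual body schema.

```json
{
  "batch_id": "upgrade-2024-06-01",
  "retention_hours": 24
}
```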

Batch States

Each batch upload exists in one of the following states:

State       Interpretation
Uploading   Files are being uploaded to the batch (steps 1 and 2 above)
Processing  All files have completed upload and are being processed (triggered by step 3 above)
Done        Ingest and bake have completed on all uploads
Failed      The batch could not be uploaded and/or processed
Cancelled   The batch was cancelled by the user prior to step 3

Opportunistic or Delayed Batch Processing

When starting a new batch, the begin_batch API (step 1) allows the user to specify how to stage and process the batch: either delayed or opportunistic. The default is delayed.

In both cases uploaded files for a batch are processed together in one or more bundles, with no other logs included in the bundles.

Type           Interpretation
Opportunistic  Skylar Automated RCA may start processing uploaded files before the final commit (step 3). This can reduce the amount of temporary space needed for a batch, and spreads work out over a longer time.
Delayed        Skylar Automated RCA will delay processing uploaded files until the final commit (step 3) occurs. This guarantees the batch is processed as a unit, although it may consume more temporary space and cause a burst of work when the batch ends.

If batches are typically small, using delayed is appropriate. If batches are very large, opportunistic may be more suitable.

Example

This example uses curl to get a batch ID, uses the ze CLI to upload several files with the same batch ID, then uses curl to advise Skylar Automated RCA that all data for the upload has been sent. Finally, the batch state is checked to see whether all the data in the upload has been processed.

Begin batch, get a batch ID:

curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" -X POST https://<ZapiHost>/api/v2/batch

BATCH_ID=<newBatchId>

Upload logs using the ze CLI:

ze up --url=https://mysite.example.com --auth=<authToken> --file=syslog.syslog.log --log=syslog --ids=ze_deployment_name=case1 --cfgs=ze_batch_id=$BATCH_ID

ze up --url=https://mysite.example.com --auth=<authToken> --file=jira.jira.log --log=jira --ids=ze_deployment_name=case1 --cfgs=ze_batch_id=$BATCH_ID

ze up --url=https://mysite.example.com --auth=<authToken> --file=conflnc.conflnc.log --log=conflnc --ids=ze_deployment_name=case1 --cfgs=ze_batch_id=$BATCH_ID

Indicate end of uploads:

curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" -X PUT --data '{ "uploads_complete" : true }' https://<ZapiHost>/api/v2/batch/$BATCH_ID

Check the status of uploads is complete via the state that is returned in the response payload:

curl --silent --insecure -H "Authorization: Token <authToken>" -H "Content-Type: application/json" https://<ZapiHost>/api/v2/batch/$BATCH_ID | grep state

When the state becomes Done, the batch has been successfully processed. While processing is underway, other information from the get_batch API can be used to monitor progress, for example the number of bundles created for the batch and the number completed so far:

...

"bundles": 8,

"bundles_completed": 3,

...
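The bundle counters above can be turned into a simple progress figure. This is a minimal sketch assuming the get_batch response carries numeric bundles and bundles_completed fields as shown; the sed extraction is a convenience, not a full JSON parser:

```shell
# Extract a numeric field from a get_batch JSON response.
json_number() {  # json_number <field> <json>
  printf '%s' "$2" | sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p"
}

# Report bundle progress as an integer percentage.
bundle_progress() {  # bundle_progress <json>
  local total completed
  total=$(json_number bundles "$1")
  completed=$(json_number bundles_completed "$1")
  [ "${total:-0}" -gt 0 ] && echo "$(( completed * 100 / total ))%"
}

bundle_progress '{ "bundles": 8, "bundles_completed": 3, "state": "Processing" }'  # prints "37%"
```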

Note on Cancelled and Failed Batches

A batch can be cancelled while uploads are still in progress using the cancel_batch API. This causes the batch to transition to the Cancelled state. Any uploaded files staged on Skylar Automated RCA will be removed.

If a batch fails processing it transitions to the Failed state. The reason for the failure, if known, is available in the reason attribute. For example:

"state": "Failed",

...

"reason": "write bundle files failed"

would indicate insufficient temporary storage to process the batch.