Backup and Restore

Elasticsearch Snapshot and Restore

Elasticsearch provides a mechanism to snapshot data to a network-attached storage device and to restore from it.

  1. Mount the Network File System (NFS) store on the Analytics Node. (A consolidated command sketch follows this procedure.)
    1. Create a directory on the remote Ubuntu server (the NFS store). This directory must be owned by the user remoteuser (UID 10000) and the group root (GID 0).
    2. Stop the Elasticsearch container: sudo docker stop elasticsearch
    3. Mount the remote store at /opt/bigswitch/snapshot on the Analytics server.
    4. Start the Elasticsearch container: sudo docker start elasticsearch
  2. Create a snapshot repository by running the following API call:
    curl \
    -k \
    -X PUT \
    -H 'Content-Type:application/json' \
    -d '{"type":"fs","settings":{"location":"/usr/share/elasticsearch/snapshot"}}' \
    -u admin:***** \
    https://169.254.16.2:9201/_snapshot/test_automation
  3. Take a snapshot by running the following API call:
    curl \
    -k \
    -X POST \
    -H 'Content-Type:application/json' \
    -d '{ "indices": ".ds-flow-sflow-stream-2023.08.21-000001", "include_global_state": true, "ignore_unavailable": true, "include_hidden": true}' \
    -u admin:***** \
    https://169.254.16.2:9201/_snapshot/test_automation/test_snap1
  4. To view a snapshot, run the following API call:
    curl \
    -s -k \
    -H 'Content-Type:application/json' \
    -u admin:***** \
    https://169.254.16.2:9201/_snapshot/test_automation/test_snap1?pretty
  5. To restore a snapshot, run the following API call:
    curl \
    -k \
    -X POST \
    -H 'Content-Type:application/json' \
    -d '{ "indices": ".ds-flow-sflow-stream-2023.08.21-000001", "ignore_unavailable": true, "include_global_state": true, "rename_pattern": "(.+)", "rename_replacement": "restored_$1" }' \
    -u admin:***** \
    https://169.254.16.2:9201/_snapshot/test_automation/test_snap1/_restore
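
For reference, Step 1 of this procedure can be expressed as shell commands. The following is a minimal sketch only: the NFS server address 10.0.0.5 and export path /mnt/es_snapshots are hypothetical placeholders, so substitute the values for your environment.
    # On the remote Ubuntu server (NFS store): create the snapshot
    # directory with the required ownership (UID 10000, GID 0).
    sudo mkdir -p /mnt/es_snapshots
    sudo chown 10000:0 /mnt/es_snapshots
    # The directory must also be exported to the Analytics Node via /etc/exports.

    # On the Analytics Node: stop Elasticsearch, mount the share,
    # and start the container again.
    sudo docker stop elasticsearch
    sudo mount -t nfs 10.0.0.5:/mnt/es_snapshots /opt/bigswitch/snapshot
    sudo docker start elasticsearch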

Import and Export of Saved Objects

The Saved Objects UI helps keep track of and manage saved objects. These objects store data for later use, including dashboards, visualizations, searches, and more. This section explains the procedures for backing up and restoring saved objects in Arista Analytics.

Exporting Saved Objects

  1. Navigate to Main Menu > Management > Saved Objects.
  2. Select the custom saved objects to export by clicking their checkboxes.
  3. Click the Export button to download. Arista Networks suggests renaming the downloaded file to a nomenclature that suits your environment (for example, clustername_date_saved_objects_<specific_name_or_group_name>.ndjson).
    Note: Arista Networks recommends switching ON Include related objects before clicking the Export button. If any dependency objects are missing, exporting with Include related objects may throw errors; in that case, switch it OFF.
  4. The system displays the following notification if the download is successful.
    Figure 1. Verifying a Saved/Downloaded Object
    Note: Recommended Best Practices
    • When creating saved objects, Arista Networks recommends using a naming convention that suits your environment. In the example above, object names are prefixed with “ARISTA” and filtered by Type: dashboard, which yields a manageable set of items to select individually or all at once. Exporting dashboards individually by type is the more appropriate option, because it makes tracking modifications to a dashboard easier. Dashboards should use only custom visualizations and searches (i.e., they should not depend on default objects that might change during a software upgrade).
    • Do not edit default objects. If a default object requires editing, Arista Networks suggests saving the new version under a different (custom) name.
    • Treat exported files as code and keep them in a source control system, so that diffs and rollbacks are possible under standard DevOps practices.
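
Exports can also be scripted through Kibana's Saved Objects export API, which supports the source-control practice described above. The following is a minimal sketch, not the documented Arista Analytics procedure: the Kibana endpoint https://169.254.16.2:5601 and the admin credentials are assumptions that may differ in your deployment.
    curl \
    -k \
    -X POST \
    -H 'kbn-xsrf: true' \
    -H 'Content-Type:application/json' \
    -d '{"type":"dashboard","includeReferencesDeep":true}' \
    -u admin:***** \
    -o clustername_date_saved_objects_dashboards.ndjson \
    https://169.254.16.2:5601/api/saved_objects/_export
This exports all dashboards and their related objects to an NDJSON file; the type filter and output filename are illustrative.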

Importing Saved Objects

  1. To import one or more custom-created objects, navigate to Main Menu > Management > Kibana > Saved Objects.
    Figure 2. Importing a Group of Saved Objects
  2. Click Import and navigate to the NDJSON file that represents the objects to import. By default, saved objects already in Kibana are overwritten by the imported objects. The system displays the following screen.
    Figure 3. NDJSON File Import Mechanism
  3. Verify the number of successfully imported objects. Also verify the list of objects by selecting Main Menu > Management > Kibana > Saved Objects > search for imported object.
    Figure 4. Import Successful Dialog Box
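
An exported NDJSON file can likewise be imported through Kibana's Saved Objects import API. The following is a minimal sketch, under the same endpoint and credential assumptions as the export example above; the overwrite=true query parameter mirrors the default UI behavior of overwriting existing objects.
    curl \
    -k \
    -X POST \
    -H 'kbn-xsrf: true' \
    -u admin:***** \
    --form file=@clustername_date_saved_objects_dashboards.ndjson \
    'https://169.254.16.2:5601/api/saved_objects/_import?overwrite=true'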

Import and Export of Watchers

Use the Watcher feature to create alerts and actions based on conditions, which are evaluated periodically using queries on the data. This section explains the procedures for backing up and restoring watchers in Arista Analytics.

Exporting Watchers

  1. The path parameter required to back up a watcher configuration is watcher_id. To obtain the watcher_id, go to Main Menu > Management > Watcher > watcher_id.
    Figure 5. Find Watcher_ID
  2. Open the main menu, then select Dev Tools > Console. Issue the GET request shown below with the watcher_id. The response appears in the output terminal.
    Run the following API call:
    GET _watcher/watch/<watcher_id>
    Replace watcher_id with the watcher ID copied in Step 1.
    Figure 6. GET API
  3. Copy the API response from Step 2 into a .json file named according to a convention that suits your environment, and keep track of it. For example, the following may be useful nomenclature: Arista_pod1_2022-02-03_isp_latency_watcher.json. A scripted alternative follows this procedure.
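
The export can also be scripted with curl instead of the Dev Tools console, writing the response directly to a file. The following is a minimal sketch, assuming the Elasticsearch endpoint and credentials shown earlier in this chapter; replace <watcher_id> with the watcher ID copied in Step 1.
    curl \
    -s -k \
    -u admin:***** \
    https://169.254.16.2:9201/_watcher/watch/<watcher_id> \
    > Arista_pod1_2022-02-03_isp_latency_watcher.json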

Importing Watchers

  1. Not all exported fields are needed when importing a watcher. To filter out the unwanted fields from the exported file, use the jq utility: run jq .watch <exported_watcher.json> and import the output (a combined one-liner follows this procedure).
    Figure 7. jq Command Output
  2. Click Dev Tools > Console, enter the API PUT _watcher/watch/<watcher_id>, and copy the Step 1 output into the screen shown below. Replace watcher_id with the desired watcher name. The output terminal confirms the creation of the watcher.
    Figure 8. PUT API in Dev Tools Console
  3. Locate the newly created watcher in Main Menu > Management > Elasticsearch > Watcher > search with watcher_id.
    Figure 9. Watcher
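
Steps 1 and 2 can also be combined into a single pipeline that filters the exported file with jq and sends the result to the watcher API with curl. The following is a minimal sketch, assuming the Elasticsearch endpoint and credentials shown earlier in this chapter; replace <watcher_id> with the desired watcher name.
    jq '.watch' Arista_pod1_2022-02-03_isp_latency_watcher.json | \
    curl \
    -k \
    -X PUT \
    -H 'Content-Type:application/json' \
    -u admin:***** \
    -d @- \
    https://169.254.16.2:9201/_watcher/watch/<watcher_id>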

Import and Export of Machine Learning Jobs

Machine Learning (ML) automates time series data analysis by creating accurate baselines of normal behavior and identifying anomalous patterns. This section explains how to back up and restore the machine learning jobs in Arista Analytics.

Exporting Machine Learning Jobs

  1. Open the main menu, then select Dev Tools > Console. Send a GET _ml/anomaly_detectors/<job_id> request to Elasticsearch and view the response for the machine learning anomaly detection job. Replace job_id with the ML job name; omit it to view all anomaly detection jobs. The system displays the following output when executing the GET request.
    Figure 10. Main Menu > Dev Tools > Console
  2. Copy the GET API response of the ML job into a .json file with nomenclature that suits your environment and keep track of it. An example of appropriate nomenclature might be Arista_pod1_2022-02-03_ML_Source_Latency_ML_job.json. A scripted alternative follows this procedure.
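
As with watchers, the export can be scripted with curl instead of the Dev Tools console. The following is a minimal sketch, assuming the Elasticsearch endpoint and credentials shown earlier in this chapter; replace <job_id> with the ML job name.
    curl \
    -s -k \
    -u admin:***** \
    https://169.254.16.2:9201/_ml/anomaly_detectors/<job_id>?pretty \
    > Arista_pod1_2022-02-03_ML_Source_Latency_ML_job.json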

Importing Machine Learning Jobs

  1. Not all exported fields must be imported; only the description, analysis_config, and data_description fields may be needed. Run jq '.jobs[] | {description, analysis_config, data_description}' <json-filename> and copy the output into the Dev Tools console (a combined one-liner follows this procedure). Replace json-filename with the filename of the JSON file previously exported.
    Run the following command:
    jq '.jobs[] | {description, analysis_config, data_description}' Arista_pod1_2022-02-03_ML_Source_Latency_ML_job.json
    Figure 11. jq Required Fields
  2. Select Dev Tools > Console and copy the Step 1 output into the screen shown below, along with the PUT request.
    Run the following API call:
    PUT _ml/anomaly_detectors/<ml_job_name>
    Replace ml_job_name with the specific string of the ML Job name.
    Figure 12. PUT ML Jobs API
  3. A successful response to the PUT request confirms the creation of the ML job. Verify imported ML jobs by selecting Main Menu > Machine Learning > Job Management > search with ML Job Name.
    Figure 13. ML Job Verification
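
Steps 1 and 2 can also be combined into a single pipeline. The following is a minimal sketch, assuming the Elasticsearch endpoint and credentials shown earlier in this chapter and a single job in the exported file; replace <ml_job_name> with the specific ML job name.
    jq '.jobs[] | {description, analysis_config, data_description}' \
    Arista_pod1_2022-02-03_ML_Source_Latency_ML_job.json | \
    curl \
    -k \
    -X PUT \
    -H 'Content-Type:application/json' \
    -u admin:***** \
    -d @- \
    https://169.254.16.2:9201/_ml/anomaly_detectors/<ml_job_name>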