Setting Up the Arista Analytics Node

Requirements

The Arista Analytics node can be deployed with or without the DANZ Monitoring Fabric (DMF). You need the following information before installing the Arista Analytics node:
  • IP address and netmask to assign to the Analytics server
  • Default IP gateway
  • DNS server IP address (optional)
  • DNS Search Domain (optional)
  • Admin password for the Analytics server
  • NTP server IPv4 address
  • Password for Analytics GUI admin user (optional)
  • TACACS+ Server IPv4 Address (optional)
  • TACACS+ secret (optional)
  • TACACS+ Server Service (optional)
When deploying the Arista Analytics node along with DMF, you also need the following information.
  • IP addresses for the DMF Controllers
    Note: If the Arista Analytics node is deployed along with DMF, ensure that the version running on the Arista Analytics node is the same as the version running on the DMF Controllers. Running different versions on the Arista Analytics node and the DMF Controllers is not supported.

The ports listed in the following table must be open on any security devices between the Controller or switches and the Arista Analytics server.

In addition, you need to open the ports for Redis and replicated Redis on the Analytics server after first boot configuration (see the Enabling Access Control to the Analytics Server section).
Table 1. Arista Analytics Open Port Requirements
Monitoring | Port Requirement | Explanation
NetFlow | UDP 2055 | Flow data exported to the Analytics node in NetFlow v5 format, either from the production network or the DANZ Monitoring Fabric.
IPFIX | UDP 4739 | Flow data exported to the Analytics node in IPFIX/NetFlow v10 format, either from the production network or the DANZ Monitoring Fabric.
sFlow | UDP 6343 between switches and Analytics server | Packets are sampled on filter interfaces; the SwitchLight OS sFlow agent constructs the sFlow header and forwards it to the Analytics server and other sFlow collectors for processing.
Host-tracker information | UDP 6380 between switches and Analytics server | ARP, DNS, and other control traffic is forwarded from each switch to the Analytics server, with a private header prepended that carries a timestamp. The Analytics server processes these packets and maintains the host-tracking database. The Controller queries the Analytics server for the latest host table.
DMF Statistics and Events | UDP 9379 (Redis) between Controller and Analytics server | Statistics gathered by the Controller from switches and Service Nodes are sent to the Analytics server via the Redis database.
DMF Statistics and Events (cluster) | UDP 6379 (replicated Redis) between Controller and Analytics server | Replicated Redis is used to gather information with a DMF Controller cluster.
Monitoring Active Directory or OpenVPN | UDP 5043 | Required only if you are using Analytics to monitor Active Directory or OpenVPN.

Arista Analytics Node First Boot Configuration

Note: Before attempting to reinstall the ISO image on an existing Analytics node, run sudo /opt/bigswitch/rma.sh.

To complete the initial configuration of Arista Analytics, complete the following steps.

  1. Respond to the system prompt to log in using the admin account.
    analytics login: admin
    Login: admin, on Wed 2018-10-31 18:22:24 UTC, from localhost
  2. When prompted, accept the End User License Agreement (EULA).
    This product is governed by an End User License Agreement (EULA).
    You must accept this EULA to continue using this product.
    You can view this EULA by typing 'View', or from our website at:
    https://www.arista.com/en/eula
    Do you accept the EULA for this product? (Yes/No/View) [Yes] > Yes
    Running system pre-check
    Finished system pre-check
    Starting first-time setup
  3. Enter the emergency recovery user password.
    Local Node Configuration
    ------------------------
    Emergency recovery user password >
    Emergency recovery user password (retype to confirm) >
  4. Assign a hostname to the Analytics Node.
    Hostname > analytics1
  5. Choose the management network option.
    Management network options:
    [1] IPv4 only
    [2] IPv6 only
    [3] IPv4 and IPv6
    >1
  6. Enter the IP address to assign to the Arista Analytics Server as in the following example.
    Configuration IPv4 Address: 10.9.18.220

    If you do not enter a CIDR, the system prompts you for the IPv4 subnet mask.

    IPv4 address [0.0.0.0/0] > 10.9.40.100/24
    IPv4 gateway (Optional) > 10.9.40.1
    DNS server 1 (Optional) > 10.3.0.4
    DNS server 2 (Optional) > 10.1.5.200
    DNS search domain (Optional) > qa.bigswitch.com
  7. Starting with the DMF 7.3.0 release, a three-node Analytics cluster is supported for added performance and reliability. Select the clustering option and enter the requested information.
    If you have a standalone Analytics node, or the node you are configuring is the first node of the Analytics cluster, select option:
    [1] Start a new cluster
    Note: Wait for the active node to load completely (ES and Kibana) before executing the first boot script on the other cluster nodes.
    Clustering
    ----------
    Analytics cluster options:
    [1] Start a new cluster
    [2] Join an existing cluster
    > 1
    Cluster name > analytics-test
    Cluster description (Optional) > testing
    Cluster administrator password >
    Cluster administrator password (retype to confirm) >

    If you have already set up the active node of the Analytics cluster and want additional nodes to join the cluster, select:

    [2] Join an existing cluster

    Clustering
    ----------
    Analytics cluster options:
    [1] Start a new cluster
    [2] Join an existing cluster
    > 2
    Existing Analytics Node address > <ip_of_active_analytics_node>
    Cluster administrator password >
    Cluster administrator password (retype to confirm) >
  8. Enter the IP addresses of up to four Network Time Protocol (NTP) servers, which the system will use to synchronize the system time.
    Default NTP servers:
    - 0.bigswitch.pool.ntp.org
    - 1.bigswitch.pool.ntp.org
    - 2.bigswitch.pool.ntp.org
    - 3.bigswitch.pool.ntp.org
    NTP server options:
    [1] Use default NTP servers
    [2] Use custom NTP servers
    [1] > 1

    After completing the required configuration, the system displays the following messages and a prompt to confirm the settings to be applied.

    Menu
    ----
    Please choose an option:
    [ 1] Apply settings
    [ 2] Reset and start over
    [ 3] Update Recovery Password (*****)
    [ 4] Update Hostname (analytics-1)
    [ 5] Update IP Option (IPv4 only)
    [ 6] Update IPv4 Address (10.9.40.100/24)
    [ 7] Update IPv4 Gateway (10.9.40.1)
    [ 8] Update DNS Server 1 (10.3.0.4)
    [ 9] Update DNS Server 2 (10.1.5.200)
    [10] Update DNS Search Domain (qa.bigswitch.com)
    [11] Update Cluster Option (Start a new cluster)
    [12] Update Cluster Name (analytics-cluster)
    [13] Update Cluster Description (testing)
    [14] Update Admin Password (*****)
    [15] Update NTP Option (Use default NTP servers)
    [1] > 1
    [Stage 1] Initializing system
    [Stage 2] Configuring local node
    Waiting for network configuration
    IP address on bond0 is 10.9.40.100
    Generating cryptographic keys
    [Stage 3] Configuring system time
    Initializing the system time by polling the NTP servers:
    0.bigswitch.pool.ntp.org
    1.bigswitch.pool.ntp.org
    2.bigswitch.pool.ntp.org
    3.bigswitch.pool.ntp.org
    [Stage 4] Configuring cluster
    Cluster configured successfully. Current node ID is 20445
    All cluster nodes:
    Node 20445: [fe80::d294:66ff:fe4f:5746]:6642
    First-time setup is complete!
  9. If you plan to install multiple Analytics nodes in a cluster configuration, return to Step 1 and repeat these steps for each of the other nodes in the cluster.
  10. After the system completes the configuration, you can establish an SSH session to the active Analytics node using the IP address configured during installation.
  11. If you have configured an Analytics cluster, SSH to the active Analytics node and configure a virtual IP address. Otherwise, skip to Step 14.
    analytics-1 > enable
    analytics-1 # configure
    analytics-1(config)# cluster
    analytics-1(config-cluster)# virtual ip <virtual_ip>
  12. Next, verify that the cluster has been set up successfully.
    analytics-1 > enable
    analytics-1 # show cluster
    Cluster Name : analytics-cluster
    Cluster Virtual IP : 10.106.4.19
    Redundancy Status : redundant
    Last Role Change Time : 2019-10-23 22:38:39.083000 UTC
    Failover Reason : Changed connection state: cluster configuration changed
    Cluster Uptime : 1 week, 5 days
    # IP@ State Uptime
    -|-----------|-|-------|--------------|
    1 10.106.4.21 * active 1 week, 5 days
    2 10.106.4.20 standby 1 week, 5 days
    3 10.106.4.22 standby 1 week, 5 days
    analytics-active-server #
  13. In config mode on the Active DMF Controller, configure the Analytics server IP address by entering the following command from the config-analytics submode.
    analytics-server address <ip>

    For example, the following commands configure the Analytics server with the IP address 10.9.18.220.

    dmf-controller1(config)# analytics
    dmf-controller1(config-analytics)# analytics-server address 10.9.18.220

    To use the Analytics GUI, click the System > Configuration tab at the top of the page, and click the DMF Controller link in the right panel.

  14. Configure sFlow on the DMF Controller or other sFlow agents.

    On the DMF Active Controller, configure the Analytics server IP address as an sFlow collector by entering the following commands.

    dmf-controller1(config)# sflow default
    dmf-controller1(config-sflow)# collector 10.106.4.19

    This example configures the Virtual IP of the Analytics cluster with the IP address 10.106.4.19 and the default UDP port 6343 as an sFlow collector.

    The CLI enters config-sflow mode, from where you can enter the commands available for configuring sFlow on the DANZ Monitoring Fabric.

    To use the DMF GUI, select the Maintenance > sFlow option.

Using the Arista Analytics Server CLI

Starting in the DMF 7.0 release, administrative access to Arista Analytics and other server-level operations, such as configuring sFlow and creating a support bundle, are completed on the DMF Active Controller. For details, refer to the latest version of the DANZ Monitoring Fabric Deployment Guide, available here: https://www.arista.com/en/support/software-download/dmf-ccf-mcd.

Operations that are specific to Arista Analytics are performed by using the Analytics server CLI after logging in to the Analytics server at the address assigned during the first boot configuration.

The Analytics CLI provides a subset of the commands available on the DMF Controller. For details about any command, enter help <command> or press Tab to see the available options. Refer to the DANZ Fabric Command Reference Guide for information about the DMF Controller commands, which are similar to the Analytics commands.

The following shows the commands available from Login mode.

analytics-1> <Tab>
debug exit logout ping6 show upload
echo help no reauth support watch
enable history ping set terminal whoami

The following shows the additional commands available from enable mode.

analytics-1> enable
analytics-1# <Tab>
boot compare copy diagnose sync upgrade
clear configure delete reset system

The following shows the additional commands available from Config mode.

analytics-1# config
analytics-1(config)# <Tab>
aaa crypto local radius snmp-server version
banner end logging secure tacacs
cluster group ntp service user

Enabling Access Control to the Analytics Server

DANZ Monitoring Fabric (DMF) statistics and events and DMF switch/interface details (which are required for certain visualizations on the Analytics Node (AN)) are advertised through Redis and replicated Redis. The following is mandatory for DMF-AN integration:
  1. Configure the AN IP address (the virtual IP in the case of a cluster) on the DMF Controller.
  2. Allow the DMF Controller physical IP addresses under the Redis and replicated-Redis ACLs on the AN.

To enable access to the Analytics server for Redis and replicated Redis, complete the following steps.

  1. Log in to the Analytics Server CLI.
  2. Change to config-cluster-access submode.
    analytics-1> enable
    analytics-1# config
    analytics-1(config)# cluster
    analytics-1(config-cluster)# access-control
    analytics-1(config-cluster-access)#
  3. Define an access-list for Redis.
    analytics-1(config-cluster-access)# access-list redis
    analytics-1(config-cluster-access-list)# 1 permit from ip-address/cidr

    Replace ip-address/cidr with the IP address, or the subnet ID and subnet mask, of the network where the Controller is running. A combined example for both access lists follows these steps.

  4. Define an access-list for replicated Redis.
    analytics-1(config-cluster-access)# access-list replicated-redis
    analytics-1(config-cluster-access-list)# 1 permit from ip-address/cidr
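
For example, assuming both DMF Controllers reside on the (hypothetical) 10.9.18.0/24 subnet, a single permit entry per access list can cover them. This is a sketch using the commands from the steps above, not output captured from a live system:

analytics-1(config-cluster-access)# access-list redis
analytics-1(config-cluster-access-list)# 1 permit from 10.9.18.0/24

analytics-1(config-cluster-access)# access-list replicated-redis
analytics-1(config-cluster-access-list)# 1 permit from 10.9.18.0/24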

Adding Access Control to GUI

This section describes the access control list (ACL) command added to the DANZ Monitoring Fabric (DMF) family of supported commands.

To enable access to the Analytics Node (AN) User Interface (UI) from specific IP addresses or ranges of IP addresses, apply the new CLI command in the following manner:
DMF-ANALYTICS-CLUSTER> enable
DMF-ANALYTICS-CLUSTER# configure
DMF-ANALYTICS-CLUSTER(config)# cluster
DMF-ANALYTICS-CLUSTER(config-cluster)# access-control
DMF-ANALYTICS-CLUSTER(config-cluster-access)# access-list
<Access list name>  Enter an access list name
active-directory    Configure access-list for active-directory
api                 Configure access-list for api
gui                 Configure access-list for gui
ipfix               Configure access-list for ipfix
netflow             Configure access-list for netflow
redis               Configure access-list for redis
replicated-redis    Configure access-list for replicated-redis
snmp                Configure access-list for snmp
ssh                 Configure access-list for ssh
DMF-ANALYTICS-CLUSTER(config-cluster-access)#
Refer to the DMF User Guide for more information on the Analytics ACL for the GUI.

Configuring sFlow

sFlow is an industry-standard technology, defined by RFC 3176, for monitoring high-speed switched networks. sFlow defines methods for sampling packets and counters in the data path and for forwarding the results to an sFlow collector for analysis and display. The DANZ Monitoring Fabric (DMF) supports sFlow to capture information about the production network and for troubleshooting the monitoring fabric.

For information about advanced search and analysis of historical sFlow messages using the Arista Analytics Graphical User Interface (GUI), refer to the latest edition of the Arista Analytics User Guide.

You can either configure the DANZ Monitoring Fabric Controller with global sFlow settings that apply uniformly to all DANZ Monitoring Fabric switches or configure different sFlow settings on a per-switch basis. These settings, in general, define the following:
  • IP address and port number of one or more sFlow collectors: identifies one or more sFlow collectors to which to send the sFlow packets. The default UDP port number is 6343.
  • Sample rate: specifies how many packets are observed for each sFlow sample that is sent. Sampling is enabled on all filter interfaces and disabled on core interfaces and delivery interfaces. The default sample rate is 1 packet per 10,000 packets (a CLI sketch follows the note below).
Note: Due to switch architecture rate limiting, the maximum effective number of sFlow packets per second is 100.
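
The following is a minimal sketch of how these settings map to CLI keywords, entered from the config-sflow submode covered later in this chapter. The keyword names are taken from the show running-config sflow output shown in the CLI section below; the values here are illustrative only:

dmf-controller1(config)# sflow default
dmf-controller1(config-sflow)# collector 10.106.1.57
dmf-controller1(config-sflow)# sample-rate 10000
dmf-controller1(config-sflow)# header-size 128
dmf-controller1(config-sflow)# counter-interval 10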

If the sFlow collector is on a device external to the DANZ Monitoring Fabric, a static route to the collector must be configured on the external tenant logical router.

Using the DMF Controller GUI to Configure sFlow

To enable sFlow, add the Analytics node or other collectors, or change the default parameters, complete the following steps.

  1. To enable sFlow, select Maintenance > sFlow from the main menu.
    Figure 1. sFlow Configuration

    To view information about existing sFlow collectors, click the Expansion control to the left of the entry on the Collectors table. The system displays details about the switch counters associated with the specific collector.

    To activate or deactivate sFlow on a fabric-wide basis, click the Settings control to the left of the Configuration section and move the slider to Active to activate or to Inactive to deactivate.

  2. You can add up to four sFlow collectors. To add an sFlow collector, first click the Provision control (+) in the upper left corner of the Collectors table.
    Figure 2. Create sFlow Collector
  3. Type the IP address of the sFlow collector.
  4. Use the spinner to select the UDP port number used by the sFlow collector.
  5. Select the tenant where the sFlow agent should collect sFlow messages.
  6. Select the segment where the sFlow agent should collect sFlow messages. The default port is 6343.
  7. Click Save.
  8. (Optional) To view or change the default sFlow settings, select Maintenance > sFlow
    Figure 3. Configure sFlow Settings Dialog
  9. To change the sFlow global settings, click the Settings control to the left of the Configuration section.
  10. Change the default settings for properties as required and click Submit.

Using the DMF Controller CLI to Configure sFlow

Configure the Analytics server IP address as an sFlow collector by entering the following commands.
dmf-Controller1(config)# sflow default
dmf-Controller1(config-sflow)# collector 10.106.1.57

This example configures the Analytics server with the IP address 10.106.1.57 and the default UDP port 6343 as an sFlow collector.

The CLI enters sFlow configuration mode, from which you can enter the commands available for configuring sFlow on the DANZ Monitoring Fabric. For example, the following command identifies an sFlow collector at 10.106.1.57 using UDP port 6343.
dmf-Controller-1(config-sflow)# collector 10.106.1.57 udp-port 6343

The default UDP port is 6343. Up to four collectors can be defined by entering the collector command for each collector.
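
For example, a sketch adding three collectors (the addresses are illustrative); the udp-port keyword is optional when the default port is used:

dmf-Controller-1(config-sflow)# collector 10.106.1.57
dmf-Controller-1(config-sflow)# collector 10.106.1.58 udp-port 6343
dmf-Controller-1(config-sflow)# collector 10.106.1.59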

The following running configuration shows an sFlow setup with three collectors, a counter interval of 10 seconds, a header size of 128 bytes, and a sample rate of 100:
dmf-Controller-1(config)# show running-config sflow
! sflow
sflow
collector 10.106.1.57
collector 10.106.1.58
collector 10.106.1.59
counter-interval 10
header-size 128
sample-rate 100
dmf-Controller-1(config)#

Managing the Arista Analytics Server Software

This section describes operations for managing the Arista Analytics server.

Verifying the Analytics Server Version

To view the version of the Analytics server, enter the following command.
analytics-1# show version
Controller Version : DMF Analytics Node 8.0.0 (bigswitch/analytics/dmf-8.0.0 #28)

Resetting to the Factory Default Configuration

To reset the Arista Analytics server to the factory default configuration, enter the following command.
analytics-1(config)# boot factory-default
boot factory-default: alternate partition will be overwritten
boot factory-default: proceed ("y" or "yes" to continue):

Password Reset

Resetting the Analytics Server Administrative Password

To reset the administrative password for the Analytics server, enter the following commands.
analytics-1# config
analytics-1(config)# reset user-password
Changing password for: admin
Current password:
New password:
Re-enter:
analytics-1(config)#

Resetting Password for Recovery User

To reset the password for the recovery user, follow one of the procedures below. The steps must be performed on both Controllers of the cluster, because resetting the recovery user's password on one Controller does not change it for the recovery user on the other Controller.

  1. Using the Controller's Bash shell:
    1. Go to the Controller Bash shell by executing the debug bash command.
    2. Execute the sudo passwd recovery command.
    admin@Controller-1:~$ sudo passwd recovery
    New password:
    Retype new password:
    passwd: password updated successfully
    admin@Controller-1:~$
  2. From a recovery account login:
    Note: For this to work, you must know the current password for the recovery user.
    recovery@Controller-1:~$ passwd recovery
    Changing password for recovery.
    Current password:
    New password:
    Retype new password:
    passwd: password updated successfully
    recovery@Controller-1:~$
  3. Using the API /api/v1/rpc/Controller/os/action/system-user/reset-password:
    The API call below resets the recovery user’s password to AdminAdmin. The example uses curl initiated from a Linux host, but any REST client can be used to call the API.
    curl -g -H "Cookie: session_cookie=<session_cookie>" 'https://<Controller IP>:8443/api/v1/rpc/Controller/os/action/system-user/reset-password' -d '{"user-name" : "recovery", "password" : "AdminAdmin"}' -X POST

Resetting Password for Admin and Other Local Users

To reset the password for admin and other local users, log in to the Controller using the recovery user credentials. Use floodlight-reset-password to reset a user’s password. The following example resets the admin user’s password.
recovery@Controller-1:~$ floodlight-reset-password --user admin
Enter new admin password:
Re-enter new admin password:
Password updated for user admin
recovery@Controller-1:~$
The following example resets the password for the guest user, which belongs to a read-only group.
recovery@Controller-1:~$ floodlight-reset-password --user guest
Enter new guest password:
Re-enter new guest password:
Password updated for user guest
recovery@Controller-1:~$

Restarting the Analytics Server

If the Analytics server needs to be restarted for any reason, complete the following steps.
  1. Reboot the node from the Analytics CLI using the following command.
    analytics-1# system reboot controller
  2. In the case of a three-node analytics cluster, repeat the above command on every member of the cluster.

Checking the State of an Analytics Cluster

To check the state of the Analytics Cluster, perform the following steps.
  1. Click on the heart-shaped Stack Monitoring icon in the menu bar on the left.
  2. Validate that the Elasticsearch and Kibana state is green. The Graphical User Interface (GUI) should display Health is green.
    Figure 4. Health Monitoring
  3. Next, navigate to the CLI of the Analytics Node and run the following command.
    analytics-2# show cluster
    Cluster Name: SCALE
    Cluster Description : Analytics in Rack 314
    Cluster Virtual IP: 10.106.1.60
    Redundancy Status : redundant
    Last Role Change Time : 2020-11-29 23:25:50.128000 PST
    Failover Reason : Changed connection state: cluster configuration changed
    Cluster Uptime: 2 weeks, 3 days
    # IP@ State Uptime
    -|-----------|-|-------|--------------------|
    1 10.106.1.57 standby 16 hours, 24 minutes
    2 10.106.1.58 * active 16 hours, 28 minutes
    3 10.106.1.59 standby 16 hours, 23 minutes
    analytics-2#

Accessing and Configuring Arista Analytics

To access the Analytics GUI, point the browser to the IP address assigned to the Analytics server during the first boot configuration, as follows:
http://<Analytics node IP address or domain name or Virtual IP in case of Analytics cluster>

Using the System Tab for Analytics Configuration

When you click the System > Configuration tab at the top of the Analytics window, the system displays the following page.
Figure 5. System > Configuration

This page lets you configure the settings for sending alerts to an SMTP server, set the alert thresholds, and edit the mapping files used in the different dashboards.

Linking to a DMF Controller

To identify a specific DMF Controller, which is used for the Controller link in the lower left corner of the Analytics page, click the Edit control on the Analytics Configuration > dmf_controller option.

The system displays the following dialog.
Figure 6. Link Analytics Node to a DMF Controller

Enter the IP address of the DMF Controller and click Save.

Configuring SMTP Settings

Click the Edit icon to configure the settings for sending alerts to an SMTP server. The system displays the following page.
Figure 7. SMTP Settings

Enter the details for the SMTP server and other required information and click Apply & Test.

Note: Once enabled, the Server Name field cannot be left blank, even if you later decide not to use SMTP. You can enter any text string in the field to remove the SMTP server connection information.

Configuring Alert Thresholds and Enabling Alerts

You can enable the following alerts.
  • Production Traffic Mix
  • Monitoring Port Utilization Report
  • New Host Report

When you click the Edit control for the Production Traffic Mix option, the system displays the following page.

Figure 8. Alert Configuration

To make changes to the threshold, edit the fields provided and click Save. To enable the alert, move the slider to the left.

When you click the Edit control for the Monitoring Port Utilization Report option, the system displays the following page.

To make changes to the threshold, edit the fields provided and click Save. To enable the alert, move the slider to the left. To enable the New Host Report option, move the slider to the left.

Sending Analytics SMTP Alerts to a Syslog Server

To set up the Analytics Node to receive NetFlow messages from the DMF Service Node or another NetFlow generator, complete the following steps.
  1. SSH to the Analytics Node to access the CLI prompt for Analytics Node configuration.
  2. Enter Config-Local Mode on the Analytics Node CLI.
    analytics-1> enable
    analytics-1# config
    analytics-1(config)# local-node
    analytics-1(config-local)#
  3. Assign an IP address to the collector interface, which should be reachable from the DMF Service Node or other NetFlow generator.
    analytics-1(config-local)# interface collector
    analytics-1(config-local-if)# ipv4
    analytics-1(config-local-if-ip)# ip <collector-ip-address>

Configuring Collector Interface

Note: This feature is currently supported only on the standalone Analytics Node and is NOT supported in the Analytics Cluster.
Configure the collector interface on the Analytics node to receive NetFlow from a Service Node or third-party devices by entering the following commands:
analytics-1(config)# local node
analytics-1(config-local)# interface collector
analytics-1(config-local-if)# ipv4
analytics-1(config-local-if-ipv4)# ip 219.1.1.10/24
analytics-1(config-local-if-ipv4)#

In the Arista Analytics Node, two 10G interfaces bonded together (bond3) act as the collector interface.

Note: DNS, DHCP, ARP, sFlow, and ICMP traffic arriving from the Analytics node management network should not have a source IP address in the same subnet as the collector interface. Any such traffic with a source IP address in the collector interface subnet is dropped.

Configuring Advanced Features

Machine Learning

Note: X-Pack machine learning uses pop-ups, so disable any pop-up blockers or create an exception for the Kibana URL.

X-Pack machine learning lets you specify activity that can be monitored over time so that changes from historical norms are flagged as discrepancies, which may indicate unauthorized network usage. For details about this feature, see the Kibana Guide: Machine learning.

To configure this feature, click the Machine Learning control in the left pane of the Kibana interface.

Figure 9. Machine Learning
The Machine Learning page provides the following tabs:
  • Job Management: Create and manage jobs and associated data feeds.
  • Anomaly Explorer: Display the results of machine learning jobs.
  • Single Metric Viewer: Display the results of machine learning jobs.
  • Settings: Add scheduled events to calendars and associate these calendars with your jobs.

Using Watch for Alerting

Elasticsearch alerting is a set of administrative features that enable you to watch for changes or anomalies in your data and perform the necessary actions in response. The Elasticsearch watch feature lets you generate an alert when specific network activity occurs. For details about configuring an advanced watch, refer to the Elasticsearch Reference: Alerting.

Elasticsearch provides an API for creating, managing and testing watches. A watch describes a single alert and can contain multiple notification actions.

A watch is constructed from four simple building blocks:
  • Schedule: A schedule for running a query and checking the condition.
  • Query: The query to run as input to the condition. Watches support the full Elasticsearch query language, including aggregations.
  • Condition: A condition that determines whether or not to execute the actions. You can use simple conditions (always true), or use scripting for more sophisticated scenarios.
  • Actions: One or more actions, such as sending email, pushing data to 3rd party systems through a webhook, or indexing the results of the query.

A full history of all watches is maintained in an Elasticsearch index. This history keeps track of each time a watch is triggered and records the results from the query, whether the condition was met, and what actions were taken.

To configure an Alert, click the Management control in the left pane of the Kibana interface, and click Watcher to define a new instance.
Figure 10. Using a Watcher for Alerting
The following figure defines a new watch:
Figure 11. Defining a New Watch
Click Create new watch and select Advanced Watch from the menu that appears. This option lets you define a custom alert.
Figure 12. Example of Advanced Watch

REST script in JSON format

Compose a REST script in JSON format that includes four sections:
  • Trigger: Schedules when the watch runs. This can be an interval, which causes the watch to run after the specified time elapses (for example, every 10 seconds).
  • Input: Identifies the information you want to evaluate. This can be search criteria that retrieves the required input.
  • Condition: Identifies the activity or other condition that determines whether the alert should be sent.
  • Action: Identifies the text of the alert and the webhook where the alert message will be sent.
The following example JSON script runs every 5 seconds and sends an alert when the query against the flow-icmp* index returns 10 or more hits (the gte condition), indicating an ICMP burst over the set limit.
{
  "trigger": {
    "schedule": {
      "interval": "5s"
    }
  },
  "input": {
    "search": {
      "request": {
        "search_type": "query_then_fetch",
        "indices": [
          "flow-icmp*"
        ],
        "types": [],
        "body": {
          "query": {
            "match_all": {}
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": {
        "gte": 10
      }
    }
  },
  "actions": {
    "my_webhook": {
      "webhook": {
        "scheme": "https",
        "host": "hooks.slack.com",
        "port": 443,
        "method": "post",
        "path": "/services/T029CQ2GE/B5NBNKMGR/uZjyLgVUqrQLvGl60yM9ANUP",
        "params": {},
        "headers": {
          "Content-Type": "application/json"
        },
        "body": "{\"channel\": \"#office_bmf_test\", \"username\": \"webhookbot\", \"text\": \"icmp burst detected over the set limit \", \"icon_emoji\": \":exclamation:\"}"
      }
    }
  }
}

For information about configuring the Slack webhook, refer to the Slack documentation.

Application Dependency Mapping

This feature helps you identify how items in an Elasticsearch index are related, a process known as Application Dependency Mapping (ADM). You can explore the connections between indexed terms and see which connections are the most meaningful. For example, this feature lets you map the relationships between the Destination IP (DIP) and Source IP (SIP) for a specific application. For details about this feature, refer to the Kibana documentation.

Arista Analytics provides a graph exploration API and an interactive graph visualization tool that work with existing Elasticsearch indices. To configure this feature, click the Graph control in the left pane of the Kibana interface.
Figure 13. Application Dependency Mapping
A graph is a network of related terms in the index. The terms you want to include in the graph are called vertices. The relationship between any two vertices is a connection. This feature lets you answer questions such as the following.
  • Can I build a map to show different client machines accessing services identified by a Layer 4 port?
  • Can I build a map to view which DNS servers are accessed by all the clients?
  • Can I build a map to show how different servers interact with each other?

Advanced options let you control how your data is sampled and summarized. You can also set timeouts to prevent graph queries from adversely affecting the cluster.

Analytics also provides a dashboard that has a table with all the IPs and port numbers that are communicating with each other. To view the table, click Dashboard on the left panel, search for bsnNetOps_ACLorDrop_Flows, and click on the link.
Figure 14. Active IPs and Port Numbers

Using RBAC with Arista Analytics

Arista Analytics supports full Role-Based Access Control (RBAC) for the web-based User Interface (UI) and CLI. Arista Analytics supports two types of users:
  • admin: Admin user accounts have full read and write access to the CLI as well as to the Kibana UI.
  • non-admin: Non-admin users typically have read-only access. They can be defined only by an admin user.

To create and enable new user accounts, complete the following steps.

  1. Create a group and a user in the Analytics CLI.
    analytics-1(config)# group new-non-admin-group
    analytics-1(config-group)#
    analytics-1(config)# user new-non-admin-user
    analytics-1(config-user)#
  2. Verify successful creation of user.
    analytics-1(config-group)# show user
    # User name          Full name     Groups
    -|------------------|-------------|-------------------|
    1 admin              Default admin admin
    2 new-non-admin-user               new-non-admin-group
  3. Verify successful creation of group.
    analytics-1(config-group)# show group
  4. Create a role and privileges in the Kibana UI that match the group created in Step 1. To set roles and privileges in the Kibana UI, refer to the Elastic documentation.
    1. Log in as admin into Kibana.
      Figure 15. Kibana UI Log In
    2. Go to Management > Roles.
      Figure 16. Role Management
    3. Click Create Role and populate the respective fields as shown for read-only access.
      Figure 17. Verifying New Group
    4. Add or remove indices as needed under Index Privileges > Indices.
    5. Click Save and verify that the created group appears in the list shown.
      Figure 18. Kibana Management > Roles
  5. Log in as usual to Kibana, using the newly created user account.

    Click the logout button as you normally do for users created in Kibana.

    Log in using an incognito tab and log off by closing all tabs in incognito mode.

Note: For configuring TACACS+ and RADIUS, refer to the DMF User Guide.

Time-based User Lockout

Starting in the DMF 8.0 release, DANZ Monitoring Fabric supports time-based user lockout. A user is locked out of login for a duration t2 after n incorrect password attempts within a time window t1.

Locked-out users must be cleared of the lockout, or wait for the lockout period to expire, before attempting login with the correct password. The feature is disabled by default.

To enable, use the following command:
Controller-1(config)# aaa authentication policy lockout failure <number of failed attempts> window <within t1 time> duration <lockout for t2 time>
  • Value range for failure can be from 1 to 255.
  • Value range for window and duration can be from 1 to 4294967295 seconds (2^32-1).
The following example locks any user out for 15 minutes when attempting three incorrect logins within 3 minutes.
Controller-1(config)# aaa authentication policy lockout failure 3 window 180 duration 900
Note: This feature affects only remote logins such as SSH/GUI/REST API using username and password. Console-based login, password-less authentications such as SSH keys, Single Sign-on, and access-token logins are not affected. Locked-out users can still access the Controller via console or password-less authentication.
Note: The feature is node-specific with respect to the functionality. For example, if user1 is locked out accessing the active Controller in the cluster, they would still be able to log in to a standby Controller with the correct password, and vice-versa. Lockout user information is also not persistent across Controller reboot or failover.
To view if a user is locked out, admin-group users can issue the following command: show aaa authentication lockout
Controller-1# show aaa authentication lockout
User name Host Failed Logins Lockout Date Lockout Expiration
---------|-------------|-------------|------------------------------|------------------------------|
admin 10.240.88.193 1 2020-09-08 16:07:36.283000 PDT 2156-10-15 22:35:51.283000 PDT

To clear the lockout for a user, admin-group users can issue the following command: clear aaa authentication lockout user <username>

To clear all the locked out users, admin-group users can issue the following command:

clear aaa authentication lockout

The following example shows how to clear the “admin” user who got locked out.
Controller-1# clear aaa authentication lockout user admin
Controller-1# show aaa authentication lockout
None.
The “recovery” user will also be locked out after repeated incorrect password attempts. To check whether the user is locked out, use the pam_tally2 tool:
admin@Controller-1:~$ sudo pam_tally2 -u recovery
Login Failures Latest failure From
recovery 9 09/08/20 16:16:04 10.95.66.44
To reset the lockout for the user, use the following command:
admin@Controller-1:~$ sudo pam_tally2 --reset --user recovery
Login Failures Latest failure From
recovery 9 09/08/20 16:16:04 10.95.66.44
admin@Controller-1:~$ sudo pam_tally2 -u recovery
Login Failures Latest failure From
recovery
Note: The window parameter does not apply to the “recovery” user login because the pam_tally2 tool does not support it.

Elasticsearch RBAC examples

Admin User and Group: The admin user is added by default to the admin group and the superuser role in Elasticsearch. No configuration is needed for it.

Read-only Access: By default, a BSN read-only role exists, which also maps to Floodlight.

Dashboard Access Only:

Create the role for dashboard access by selecting Stack Management > Roles > Create Role. Configure the indices to * and set the privileges under Kibana as shown in the image below.
Figure 19. Kibana privileges for Dashboard access only
The following is an example of different privileges for Elasticsearch.
Figure 20. Elasticsearch RBAC example

Integrating Analytics with Infoblox

Infoblox provides DNS and IPAM services, which can be integrated with Arista Analytics. To use the integration, associate a range of IP addresses in Infoblox with extensible attributes, then configure Analytics to map these attributes to the associated IP addresses. The attributes assigned in Infoblox then appear in place of the IP addresses in Analytics visualizations.

Configuring Infoblox for Integration

To configure Infoblox for integration with Arista Analytics, complete the following steps.
  1. Log in to Infoblox System Manager.
  2. To set the extensible attributes in Infoblox, click the Administration > Extensible Attributes tab.
    Figure 21. Extensible Attributes Tab
    This tab lets you define the attributes applied to a block of IP addresses. The extensible attributes you define for integrating Infoblox with Arista Analytics are as follows:
    • EVPC: Identifies the Enterprise Virtual Private Cloud (EVPC) assigned to a block of IP addresses in Infoblox.
    • segment: Identifies the specific subnet interface to which an IP address is assigned.
  3. To assign an IP address range to the VPC extensible attribute, click Data Management > IPAM.
  4. Save the configuration.

Configuring Arista Analytics

After completing the Infoblox configuration, configure Arista Analytics to recognize the IP address ranges assigned in Infoblox by completing the following steps.
  1. Log in to Arista Analytics.
  2. Click the System > Configuration tab.
    Figure 22. DMF Analytics Configuration

    Refer to Adding Flow Enhancement via Infoblox IPAM Integration for more information about the integration.

Adding Flow Enhancement via Infoblox IPAM Integration

This feature integrates subnets and corresponding extensible attributes from an Infoblox application into Arista Analytics’ collection of IP blocks and corresponding list of attributes.

Arista Analytics provides an enhanced display of incoming flow records using these extensible attributes from the Infoblox application.

Configuring the Flow enhancement

Configure the feature in Kibana by selecting the System > Configuration tab on the Fabric page and opening the Analytics Configuration integration panel.

Figure 23. Dashboard - Configuration

The list of IP blocks and associated extensible attributes appears in the Infoblox application under the Data Management > IPAM tab. The columns shaded in gray represent the extensible attributes and their values.

Figure 24. Data Management > IPAM

Editing IPAM Integration

Figure 25. Edit - Integration
Configure the integration on Arista Analytics using the following example:
  • Infoblox:
    • host: The IP address or DNS hostname of the Infoblox application.
    • user: Username for Infoblox application.
    • password: Password for Infoblox application.
    • keys_fetched:
      • The list of extensible attributes from the connected Infoblox application to be added to the Analytics Node ip_block tags. If an entered extensible attribute matches the name of an existing ip_block tag, it is not added.
    • keys_aliased:
      • Mappings from default Analytics Node ip_block tags to extensible attributes in the Infoblox application. Add additional mappings from ip_block tags to extensible attributes as required; empty field values are ignored. Each mapping from an ip_block tag to an extensible attribute indicates the following (a schematic sketch follows this list):
        • Add the extensible attribute to the Analytics Node’s ip_block tags. If an extensible attribute appears both in the integration configuration keys_fetched list and as a value in the keys_aliased mapping, it is added only once to the Analytics Node ip_block tags list. It is not added if it is already in the ip_block tags.
        • For IP addresses coming from the Infoblox application, the value of the extensible attribute replaces the value of the corresponding ip_block tag. The extensible attribute and the Analytics Node tag become aliases of each other.
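
The following is a schematic sketch only of how these fields relate; the actual settings are entered through the Edit Integration dialog shown in Figure 25. The credential placeholders and the left-hand tag names other than Desc are hypothetical, while VPC, Site, ASNUM, and segment come from the example discussed below:

Infoblox:
  host: <Infoblox IP address or hostname>
  user: <Infoblox username>
  password: <Infoblox password>
  keys_fetched:
    - VPC
  keys_aliased:
    Desc: Site
    <ip_block tag>: ASNUM
    <ip_block tag>: segment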

For example, in the above example integration configuration, VPC is in keys_fetched, and segment is in the values of keys_aliased, but both are already in the ip_block tags list, so they are not added again, as seen below. However, Site and ASNUM are not already in the tags list and are added to the end of the tags list.

Figure 26. Edit - ip_block

As a result of these configuration changes, you can view the following enhancements to the flow records: open the Production Network > sFlow tab and scroll to the Flows by Time chart.

Figure 27. Dashboard - sFlow

If the sFlow packet source and/or destination IP addresses fall within the IP subnets in the Infoblox IPAM dashboard, their flow records are augmented with the extensible attributes from Infoblox as specified in the integration configuration.

For example, the source and destination IP address of the 10.240.155.0/HQ:54149 > 10.240.155.10/HQ/HTTPS flow fall within the 10.240.155.0/24 subnet in the Infoblox IPAM dashboard.

When expanding this flow in the Flows by Time chart, since VPC is in the integration keys_fetched, the sVPC value is VPC155.

Site is in the integration keys_aliased values, and a sSite value of HQ appears. Since Desc is aliased to Site (an extensible attribute), sDesc takes on the value of Site. Segment is in the keys_aliased values; hence, sSegment with a value S155 appears.

Observe similar attributes for the destination IP address in the flow record. All these values come from the Infoblox IPAM dashboard shown above. ASNUM does not appear as a field in the flow record below despite being in the integration keys_aliased values because it is not configured or associated as an extensible attribute to the subnets in the Infoblox IPAM dashboard.

Figure 28. Flow by Time

Troubleshooting

If the flow records that you expect to be augmented with Infoblox extensible attributes are missing these attributes, verify that the Infoblox credentials you provided in the integration configuration are correct. If the relevant flow records are still missing the Infoblox extensible attributes after confirming the credentials, generate a support bundle and contact Arista Networks TAC.

Limitation

Known Issue:
  • When removing a tag in the middle of the ip_block tags list and saving the configuration, the relevant flow records may have incorrect values in their attributes during the minute following this change. After this brief period, the flow records will have the correct attributes and corresponding values.

Configuring SMTP Server to Send Email Alerts via Watcher

You can configure the email action in Watcher to send email notifications. To send an email, you must configure at least one email account in the Analytics Node. To do so, access the Analytics node via the CLI and complete the following steps.
Note: All of the following steps need to be executed on each node of the Analytics Node cluster.
  1. At the Analytics Node command prompt, enter:
    debug bash
  2. Access the config file.
    vi /opt/bigswitch/docker-compose.yml
  3. Access the environment section under the Elasticsearch component.
    version: '2'
    services:
      elasticsearch:
        image: elasticsearch
        logging:
          driver: none
        container_name: elasticsearch
        #cpu_shares: 55
        ports:
          - "0.0.0.0:9201:9201"
          - "0.0.0.0:9300:9300"
        volumes:
          - /var/lib/analytics/data:/usr/share/elasticsearch/data
          - /var/log/analytics/es:/usr/share/elasticsearch/logs
          - /etc/localtime:/etc/localtime:ro
          - /opt/bigswitch/conf/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties
          - /var/lib/analytics/data/private.key:/usr/share/elasticsearch/config/private.key
          - /var/lib/analytics/data/cert.pem:/usr/share/elasticsearch/config/cert.pem
          - /opt/bigswitch/snapshot:/usr/share/elasticsearch/snapshot
        environment:
          - cluster.name=${ES_CLUSTER_NAME}
          - http.port=9201
  4. Append the following lines to the environment section.
    - xpack.notification.email.default_account=<account name>
    - xpack.notification.email.account.<account name>.profile.from=<from email id>
    - xpack.notification.email.account.<account name>.smtp.auth=true
    - xpack.notification.email.account.<account name>.smtp.starttls.enable=true
    - xpack.notification.email.account.<account name>.smtp.host=<SMTP server host name>
    - xpack.notification.email.account.<account name>.smtp.port=587
    - xpack.notification.email.account.<account name>.smtp.user=<SMTP user email id>
  5. Use the keystore command to store the account SMTP password. Access the Elasticsearch container, run the following command, enter the password, then commit changes to the container.
    sudo docker exec -it elasticsearch bash
    bin/elasticsearch-keystore add xpack.notification.email.account.arista.smtp.secure_password
    exit
    sudo docker commit elasticsearch elasticsearch
  6. Configure the watcher action to send notifications by email.
    "actions": {
      "send_email": {
        "email": {
          "profile": "gmail",
          "from": "<from email id>",
          "to": [
            "<To email id>"
          ],
          "subject": "<subject>",
          "body": {
            "text": "<email body>"
          }
        }
      }
    }
Refer to https://www.elastic.co/guide/en/elasticsearch/reference/current/actions-email.html for more details on the Watcher email action.