Using the DMF Recorder Node

This chapter describes how to configure the DANZ Monitoring Fabric (DMF) Recorder Node to record packets from DMF filter interfaces.

Overview

The DANZ Monitoring Fabric (DMF) Recorder Node is integrated with the DANZ Monitoring Fabric for single-pane-of-glass monitoring. A single DMF Controller can manage multiple Recorder Nodes, delivering packets for recording through Out-of-Band policies. The DMF Controller also provides central APIs for packet queries across one or multiple recorder nodes and for viewing errors, warnings, statistics, and the status of connected recorder nodes.

A DMF out-of-band policy directs matching packets to be recorded to one or more recorder nodes. A recorder node interface identifies the switch and port used to attach the recorder node to the fabric. A DMF policy treats these as delivery interfaces and adds them to the policy so that flows matching the policy are delivered to the specified recorder node interfaces.

Configuration Summary

At a high level, follow these three steps to use the recorder node:

Step 1: Define a recorder node.

Step 2: Define a DANZ Monitoring Fabric (DMF) policy to select the traffic to forward to the recorder node.

Step 3: View and analyze the recorded traffic.

The recorder node configuration on the DMF Controller includes the following:
  • Name: Each recorder node requires a name that is unique among recorder nodes in the connected fabric. If the name is removed, all configuration for the given recorder node is removed.
  • Management MAC address: Each recorder node must have a management MAC address that is unique in the connected fabric.
  • Packet removal policy: This defines the behavior when the recorder node disks reach capacity. The default policy causes the earliest recorded packets to be overwritten by the most recent packets. The other option is to stop recording and wait until space is available.
  • Record enable or Record disable: Recording of packets is enabled by default, but it can be enabled or disabled for a specific recorder node.
  • Static auth tokens: Static auth tokens are pushed to each recorder node as an alternative form of authentication in headless mode, when the DMF Controller is unreachable, or by third-party applications that do not have or do not need DMF Controller credentials.
  • Controller auth token: The recorder node treats the controller as an ordinary client and requires it to present valid credentials in the form of an authentication token. The DMF Controller authentication token is automatically generated but can be reset upon request.
  • Pre-buffer: This buffer, which is defined in minutes, is used for proactive network monitoring without recording and retaining unnecessary packets. Once the buffer is full, the oldest packets are deleted.
  • Maximum disk utilization: This defines the maximum disk utilization in terms of a percentage between 5% and 95%. When the configured utilization is reached, the packet removal policy is enforced. The default maximum disk utilization is 95%.
  • Maximum packet age: This defines the maximum age in minutes of any packet in the recorder node. It can be used in combination with the packet removal policy to control when packets are deleted based on age rather than disk utilization alone. When not set, the maximum packet age is not enforced and packets are kept until the maximum disk utilization is reached.
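The interplay between maximum disk utilization and the packet removal policy can be sketched with a small model. This is purely illustrative Python; the class and method names are not part of any DMF API.

```python
from collections import deque

class PacketStore:
    """Illustrative model of the two packet removal policies."""

    def __init__(self, capacity_bytes, max_utilization=0.95, policy="rolling-fifo"):
        self.capacity = capacity_bytes
        self.max_utilization = max_utilization  # 5%..95% in DMF, default 95%
        self.policy = policy                    # "rolling-fifo" or "stop-and-wait"
        self.files = deque()                    # (name, size) in arrival order
        self.used = 0

    def record(self, name, size):
        if self.used + size > self.capacity * self.max_utilization:
            if self.policy == "stop-and-wait":
                return False  # stop recording until space is freed
            # rolling-fifo: drop the oldest files until the new file fits
            while self.files and self.used + size > self.capacity * self.max_utilization:
                _, old_size = self.files.popleft()
                self.used -= old_size
        self.files.append((name, size))
        self.used += size
        return True
```

With `rolling-fifo`, a store at its utilization limit silently ages out the oldest recordings; with `stop-and-wait`, the same write is refused until space is reclaimed.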

Indexing Configuration

The recorder node indexing configuration defines the fields that can be used to query packets on the recorder node. By default, all indexing fields are enabled in the indexing configuration. You can selectively disable indexing fields you do not wish to use in recorder node queries.

Disabling indexing fields has two advantages. First, it reduces the index space required for each packet recorded. Second, it improves query performance by reducing unnecessary overhead. It is recommended that unnecessary indexing fields be disabled.
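The space tradeoff can be illustrated with a toy index writer that stores metadata only for enabled fields. The field names here are illustrative, not the recorder node's internal schema.

```python
# Illustrative only: models how disabling index fields shrinks the
# per-packet metadata that must be stored and searched.
ALL_FIELDS = ("mac-src", "mac-dst", "vlan-1", "ipv4-src", "ipv4-dst",
              "ip-proto", "port-src", "port-dst")

def index_record(packet_meta, enabled=ALL_FIELDS):
    """Keep only the enabled fields of a packet's metadata."""
    return {f: packet_meta[f] for f in enabled if f in packet_meta}
```

Disabling all fields except the transport ports, as in Example 1 below, shrinks every per-packet record to two entries, at the cost of making other fields unqueryable.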

The recorder node supports the following indexing fields:
  • MAC Source
  • MAC Destination
  • VLAN 1: Outer VLAN ID
  • VLAN 2: Inner/Middle VLAN ID
  • VLAN 3: Innermost VLAN ID
  • IPv4 Source
  • IPv4 Destination
  • IPv6 Source
  • IPv6 Destination
  • IP protocol
  • Port Source
  • Port Destination
  • MPLS
  • Community ID
  • MetaWatch Device ID
  • MetaWatch Port ID
Note: The Outer VLAN ID indexing field must be enabled in order to query the recorder node using a DANZ Monitoring Fabric (DMF) policy name or a DMF filter interface name.

To understand how indexing configuration can be leveraged to your advantage, consider the following examples:

Example 1: To query packets based on applications defined by unique transport ports, disable all indexing fields except source and destination transport ports. This results in only transport ports being saved as meta data for each packet recorded. This greatly reduces per-packet index space consumption and also increases the speed of recorder-node queries.

However, you will not be able to effectively query on any other indexing field because that meta data was not saved when the packets were recorded.

Example 2: The recorder node supports community ID indexing, which is a hash of the IP addresses, IP protocol, and transport ports that can be used to identify a flow of interest. If the recorder node use case is to query based on community ID, it might be redundant to also index on IPv4 source and destination addresses, IPv6 source and destination addresses, IP protocol, and transport source and destination ports.
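Community ID is an open flow-hash scheme originating in Zeek (formerly Bro). A minimal IPv4-only sketch of version 1 of the hash, for illustration only:

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, daddr, proto, sport, dport, seed=0):
    """Sketch of the Community ID v1 flow hash (IPv4 only here;
    the recorder node also indexes IPv6 flows)."""
    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
    # Canonical endpoint ordering, so both directions of a flow
    # produce the same ID.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    data = (struct.pack("!H", seed) + src + dst +
            struct.pack("!BBHH", proto, 0, sport, dport))
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()
```

Because the endpoints are canonically ordered before hashing, both directions of a flow map to the same ID, which is what makes a single community ID index sufficient for flow lookups.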

Pre-buffer Configuration and Events

The recorder node pre-buffer is a circular buffer in which packets to be recorded are received. When enabled, the pre-buffer feature retains the packets received by the recorder node for a specified length of time prior to an event that triggers recording of buffered and future packets to disk. In the absence of an event, the recorder node records into this buffer, deleting the oldest packets when the buffer reaches capacity. When a recorder node event is triggered, the packets in the pre-buffer are saved to disk, and packets received between the event trigger and the event termination are saved directly to disk. Upon termination of the event, received packets are again retained in the pre-buffer until the next event. By default, the pre-buffer feature is disabled, indicated by a value of zero minutes.

For example, if you configure the pre-buffer to thirty minutes, up to thirty minutes of packets will be received by the buffer. When you trigger an event, the packets currently in the buffer are recorded to disk, and packets newly received by the recorder node bypass the buffer and are written directly to disk until the event is terminated. When you terminate the event, the pre-buffer resets, accumulating received packets for up to the defined thirty-minute pre-buffer size.

The packets associated with an event can be queried, replayed, or analyzed using any type of recorder node query. Each triggered event is identified by a unique, user-supplied name, which can be used in the query to reference packets recorded in the pre-buffer prior to and during the event itself.
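The pre-buffer life cycle described above can be sketched with a small model. For simplicity, capacity is modeled here as a packet count; the real pre-buffer is sized in minutes of traffic.

```python
from collections import deque

class PreBuffer:
    """Illustrative model of the recorder node pre-buffer life cycle."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # circular: oldest dropped when full
        self.event = None
        self.disk = []

    def receive(self, pkt):
        if self.event:
            self.disk.append((self.event, pkt))  # during an event: straight to disk
        else:
            self.buffer.append(pkt)

    def trigger(self, name):
        self.event = name
        # Flush the pre-buffer to disk, tagged with the event name.
        self.disk.extend((name, p) for p in self.buffer)
        self.buffer.clear()

    def terminate(self):
        self.event = None  # a fresh pre-buffer starts accumulating
```

Tagging flushed packets with the event name is what lets a later query reference packets recorded before and during a named event.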

Using an Authentication Token

When using a DANZ Monitoring Fabric (DMF) Controller authentication token, the recorder node treats the DMF Controller as an ordinary client and requires it to present valid credentials either in the form of an HTTP basic username and password or an authentication token.

Static authentication tokens are pushed to each recorder node as an alternative form of authentication in headless mode, when the DMF Controller is unreachable, or by third-party applications that do not have or do not need Controller credentials.

Using the GUI to Add a Recorder Device

To configure a recorder node or update the configuration of an existing recorder node, follow the steps below:
  1. Select Monitoring > Recorder Nodes from the main menu bar of the DANZ Monitoring Fabric (DMF) GUI.

    The system displays the page shown below.

    Figure 1. Recorder Nodes
  2. To add a new recorder node, click the provision control (+) in the Recorder Nodes Devices table.
    Figure 2. Provision Recorder Node
  3. Complete the following required fields:
    • Assign a name to the recorder node.
    • Set the MAC address of the recorder node. Obtain the MAC address from the chassis ID of the connected device, using the Fabric > Connected Devices option.
  4. Configure the following options as needed:
    • Recording: Recording is enabled by default. To disable recording on the recorder node, move the Recording slide ball to Off. When recording is enabled, the recorder node records the matching traffic directed from the filter interface defined in a DMF policy.
    • Disk Full Policy: Change the Disk Full Policy to Stop and Wait if required. The default packet removal policy is Rolling FIFO (First In First Out), which means the oldest packets will be deleted to make room for newer packets. This occurs only when the recorder node disks are full. The alternative removal policy is Stop and Wait, which causes the recorder node to stop recording when the disks are full and wait until disk space becomes available. Disk space can be made available by leveraging the recorder node delete operation to remove all or selected time ranges of recorded packets.
    • Backup Disk Policy: Specify the disk backup policy as desired. This field is mandatory when creating a new recorder node. Select one of the following three options:
      • No Backup: This is the default option and is also the recommended option when no extra disk is available. It is also a continuation of the behavior supported in previous releases.
      • Remote Extend: In this option, recording is performed on the local disks. When full, the recording continues on a remote Isilon cluster mounted over NFS. In this mode, the remote disks are called backup disks. With regard to the Disk Full Policy, if set to:
        • Stop and Wait: Recording stops when both local and remote disks become full.
        • Rolling FIFO: When the configured threshold is reached, the oldest files from both disks are removed until the disk usage returns below the threshold.
      • Local Fallback: In this option, recording is performed on a remote Isilon cluster mounted over NFS. If the connection between the Recorder Node and the remote cluster fails, the recording is performed on the local disks until the failure is resolved. In this mode, the local disks are called backup disks. With regard to the Disk Full Policy, if set to:
        • Stop and Wait: Recording stops when the remote disks become full.
        • Rolling FIFO: When the configured threshold is reached, the oldest files from both disks are removed until the disk usage returns below the threshold.
      Note: A misconfiguration of the NFS server settings on the DMF Controller is not treated as a connection failure. In such cases, recording stops until the Controller configuration is fixed.
    • Max Packet Age: Set the maximum age in minutes of any packet in the recorder node. Recorded packets are discarded after the specified number of minutes. Max Packet Age can be used in combination with the Disk Full Policy to control when packets are deleted based on age rather than disk utilization alone. When unset, Max Packet Age is not enforced.
    • Pre-Buffer: Assign the number of minutes of received packets that the recorder node pre-buffer retains. By default, the Pre-Buffer is set to zero minutes (disabled). With a nonzero Pre-Buffer setting, when you trigger a recorder event, any packets in the pre-buffer are saved to disk, and any packets received by the recorder after the trigger are saved directly to disk. When you terminate an ongoing recorder event, a new pre-buffer is established in preparation for the next event.
    • Max Disk Utilization: Specify the maximum utilization allowed on the index and packet disks. The Disk Full Policy is enforced at this limit. If left unset, the default maximum disk utilization of 95% applies.
    • Parse MetaWatch Trailer: Determine when the MetaWatch trailer should be parsed.
      • Off: When set to Off, the recorder node will not parse the MetaWatch trailer, even if it is present in incoming packets.
      • Auto: When set to Auto, the recorder node will look for a valid timestamp in the last 12 bytes of the packet. If it matches the system timestamp closely enough, the trailer will be parsed by the recorder node.
      • Force: When set to Force, the recorder node will assume the last 12 bytes of the packet are a MetaWatch trailer and parse it, even if a valid timestamp is not found.
  5. Click Save to save and close the configuration page, or click NEXT to continue with the configuration on the Indexing tab of the Provision Recorder Node page.
    Figure 3. Provision Recorder Node-Indexing
  6. All the indexing options are enabled by default. To disable any of the indexing behaviors, move the sliding ball of the respective item to the left. For more details, see the Indexing Configuration section.
  7. Click Save to save and close the configuration page, or click NEXT to continue with the configuration on the Network tab of the Provision Recorder Node page.
    Figure 4. Provision Recorder Node - Network
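The Auto trailer-detection heuristic described in step 4 can be sketched as follows. The 12-byte trailer layout assumed here (4-byte seconds, 4-byte nanoseconds, 2-byte device ID, 2-byte port ID, big-endian) is an illustration only; consult the MetaWatch documentation for the actual format.

```python
import struct
import time

def parse_trailer_auto(packet, tolerance_s=5.0, now=None):
    """Sketch of Auto mode: parse the trailing 12 bytes only if they
    contain a plausible timestamp close to the system clock.

    The field layout is an assumption for illustration, not the
    published MetaWatch trailer format.
    """
    if len(packet) < 12:
        return None
    sec, nsec, dev, port = struct.unpack("!IIHH", packet[-12:])
    if nsec >= 1_000_000_000:
        return None  # not a valid nanoseconds field
    ts = sec + nsec / 1e9
    now = time.time() if now is None else now
    if abs(now - ts) > tolerance_s:
        return None  # timestamp not plausible: leave the trailer unparsed
    return {"timestamp": ts, "device_id": dev, "port_id": port}
```

Force mode would skip the plausibility check and always unpack the trailer; Off mode would never call the parser at all.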

Configuring a Node to Use Local Storage

 

To configure a node to use local storage, use the following steps:

  1. Network: To use local storage, set the Auxiliary NIC Configuration to default (No) as shown in the figure below.
    Figure 5. Network Provisioning
  2. Storage: To use local storage, set the Index Disk Configuration and Packet Disk Configuration to default (No) as shown in the figure below.
    Figure 6. Configure to Use Local Storage
  3. Click Save to add the recorder node configuration to the Controller.

Configuring a Node to Use External Storage

To store packets on external storage with an NFS mount, the auxiliary interface of the recorder node has to be connected to the network and subnet where the NFS storage is located as displayed in the figure below.
Figure 7. Topology to Use External Storage
Note: Create the volumes (or paths) for the index and packet data on the NFS storage first. Refer to the vendor-specific NFS storage documentation for how to create the volume (or path).
To configure a recorder node for external NFS storage, update the configuration of an existing recorder node or add a new node with the following steps:
Note: For release 7.2, only Isilon NFS storage is supported.
  1. Network: For external NFS storage, such as Isilon, the auxiliary interface of the recorder node should be connected to a network and subnet which has reachability to Isilon NFS storage. Set the Auxiliary NIC Configuration slide to YES and assign an IP address to the auxiliary interface as shown in the figure below. Ensure the IP address for the auxiliary interface is not in the same subnet as the recorder node management IP address.
    Figure 8. Provision External Storage
  2. Storage: To specify the location of the external NFS storage, configure the following options:
    • Index Disk Configuration and Packet Disk Configuration are disabled by default (slide set to No). Set the slide for both Index Disk Configuration and Packet Disk Configuration to Yes.
    • NFS Server [Index Disk Configuration and Packet Disk Configuration]: assign the IP address or host name for the NFS Server (e.g., Isilon Smart Connect host name).
    • Transport Port of NFS Service [Index Disk Configuration and Packet Disk Configuration]: if no value is specified, default will be used (2049). Specify a value if the NFS storage has been configured to use something other than the default value.
    • Transport Port of Mounted Service [Index Disk Configuration and Packet Disk Configuration]: if no value is specified, the default will be used. Specify a value for this if the NFS storage mounted service has been configured to use something other than the default value.
    • Volume: [Index Disk Configuration and Packet Disk Configuration] - Specify the location or path on the NFS server where the index and packets should be stored.
    Figure 9. Provision External Storage
  3. Click Save to add the recorder node configuration to the Controller.
    Note: If the configuration of a previously added recorder node is changed from local storage to external storage, or vice versa, the recorder node must be rebooted.

Configuring a Recorder Node Interface

To record packets to a recorder node using a DANZ Monitoring Fabric (DMF) policy, configure a DMF Recorder Node interface that defines the switch and interface in the monitoring fabric where the recorder node is connected. The DMF Recorder Node interface is referenced by name in the DMF policy as the destination for traffic matched by the policy. To configure a DMF Recorder Node interface, complete the following steps:
  1. Click the provision control (+) at the top of the Recorder Node Interfaces table. The system displays the following page:
    Figure 10. Create DMF Recorder Node Interface
  2. Assign a name for the DMF Recorder Node interface in the Name field.
  3. Select the switch containing the interface that connects the recorder node to the monitoring fabric.
  4. Select the interface that connects the recorder node to the monitoring fabric.
  5. (Optional) Type information about the interface in the Description field.
  6. Click Save to add the configuration to the DMF Controller.

Using the GUI to Assign a Recorder Interface to a Policy

To forward traffic to a recorder node, include one or more recorder node interfaces as a delivery interface in a DANZ Monitoring Fabric (DMF) policy.

When creating a new policy or editing an existing policy, select the recorder node interfaces from the Monitoring > Policies dialog, as shown in the following screen.
Figure 11. DMF Policies
Note: To create a Recorder Node interface, proceed to the Monitoring > Recorder Nodes page and click the + in the Interface section.
To create a policy, select Destination Tools > Add Port(s) and use the RN Fabric Interface option to select a previously configured Recorder Node interface. Select or drag the interfaces or Recorder Nodes to add them as Destination Tools.
Figure 12. Recorder Node - Create Policy
Figure 13. Add Recorder Nodes
Note: The Recorder Node interface can only be selected, not created, in the create policy dialog.

Using the GUI to Define a Recorder Query

The recorder node records all the packets received on a filter interface that match the criteria defined in a DANZ Monitoring Fabric (DMF) policy. Recorded packets can be recalled from or analyzed on the recorder node using a variety of queries. Use the options in the recorder node Query section to create a query and submit it to the recorder node for processing. The following queries are supported:
  • Window: Retrieves the timestamps of the oldest and most recent packets recorded on the recorder.
  • Size: Provides the number of packets and their aggregate size in bytes that match the filter criteria specified.
  • Application: Performs deep packet inspection to identify the applications communicating in the recorded packets that match the filter criteria specified.
  • Packet-data: Retrieves all the packets that match the filter criteria specified.
  • Packet-object: Extracts unencrypted HTTP objects from packets matching the given Stenographer filter.
  • HTTP, HTTP Request, and HTTP Stat: Analyzes HTTP packets, extracting request URLs, response codes, and statistics.
  • DNS: Analyzes any DNS packets, extracting query and response meta data.
  • Replay: Replays selected packets and transmits them to the specified delivery interface.
  • IPv4: Identifies and dissects distinct IPv4 flows.
  • IPv6: Identifies and dissects distinct IPv6 flows.
  • TCP: Identifies and dissects distinct TCP flows.
  • TCP Flow Health: Analyzes TCP flows for information such as maximum RTT, retransmissions, throughput, etc.
  • UDP: Identifies and dissects distinct UDP flows.
  • Hosts: Identifies all the unique hosts that match the filter criteria specified.
  • RTP Stream: Characterizes the performance of Real-time Transport Protocol (RTP) streaming packets.
After making a selection from the Query Type list, the system displays additional fields that can be used to filter the retrieved results, as shown below:
Figure 14. Packet Recorder Node Query
Use the following options to specify the packets to include in the query:
  • Relative Time: A time range relative to the current time in which to look for packets.
  • Absolute Time: A specific time range in which to look for packets.
  • Any IP: Include packets with the specified IP address in the IP header (either source or destination).
  • Directional IP: Include packets with the specified source and/or destination IP address in the IP header.
  • Src Port: Include packets with the specified port number in the source port field of the transport (TCP/UDP) header.
  • Dst Port: Include packets with the specified port number in the destination port field of the transport (TCP/UDP) header.
  • IP Protocol: Select the IP protocol from the selection list or specify the numeric identifier of the protocol.
  • Community ID: Select packets with a specific Bro/Zeek community ID string.
  • Src Mac: Select packets with a specific source MAC address.
  • Dst Mac: Select packets with a specific destination MAC address.
  • VLAN: Select packets with a specific VLAN ID.
  • Filter Interfaces: Click the provision (+) control and, in the dialog that appears, enable the checkbox for one or more filter interfaces to which the query should be restricted. To add interfaces to the dialog, click the provision (+) control on the dialog and select the interfaces from the list that is displayed.
  • Policies: Click the provision (+) control and, in the dialog that appears, enable the checkbox for one or more policies to which the query should be restricted. To add policies to the dialog, click the provision (+) control on the dialog and select the policies from the list that is displayed.
  • Max Bytes: This option is only available for packet queries. Specify the maximum number of bytes returned by a packet query in a PCAP file.
  • Max Packets: This option is only available for packet queries. Specify the maximum number of packets returned by a packet query in a PCAP file.
  • MetaWatch Device ID: Filter packets with the specified MetaWatch device ID.
  • MetaWatch Port ID: Filter packets with the specified MetaWatch port ID.
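Several of these options correspond to predicates in the open-source Stenographer query language, which the recorder node's packet queries resemble (the packet-object query above takes a Stenographer filter). The mapping below is an assumption for illustration; the recorder node's actual syntax may differ in detail.

```python
def build_query(any_ip=None, port=None, relative_time=None):
    """Assemble a Stenographer-style query string from filter options.

    Note: in Stenographer, 'host' and 'port' match either direction
    (source or destination).
    """
    parts = []
    if relative_time:
        parts.append("after %s ago" % relative_time)   # e.g. "10m"
    if any_ip:
        parts.append("host %s" % any_ip)
    if port is not None:
        parts.append("port %d" % port)
    return " and ".join(parts)
```

For example, `build_query(any_ip="10.0.0.1", port=443, relative_time="10m")` yields the kind of filter string shown in the query history output later in this chapter ("after 10m ago ...").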
Alternatively, Global Query Configuration can be used to set the byte limit on packet query results.
Figure 15. Global Query Configuration

Viewing Query History

You can view the queries that have been submitted to the recorder node using the GUI or CLI.

To use the DANZ Monitoring Fabric (DMF) GUI to view the query history, select Monitoring > Recorder Nodes and scroll down to the Query History section.
Figure 16. Monitoring > Recorder Nodes > Query History

The Query History section displays the queries submitted to each recorder node and the status of the query.

To download the query results, select Download Results from the Menu control for a specific query. To export the query history, click the Export control at the top of the table (highlighted in the figure above, to the right of the Refresh control).

To display query history using the CLI, enter the following command:
controller-1> show recorder-node query-history
#  Packet Recorder  Query          Type            Start                           Duration
---|----------------|--------------|---------------|-------------------------------|--------
1  HW-PR-2          after 10m ago  analysis-hosts  2019-03-20 09:52:38.021000 PDT  3428
2  HW-PR-1          after 10m ago  analysis-hosts  2019-03-20 09:52:38.021000 PDT  3428
3  HW-PR-2          after 10m ago  abort           2019-03-20 09:52:40.439000 PDT  711
4  HW-PR-1          after 10m ago  abort           2019-03-20 09:52:40.439000 PDT  711
--------------------------------------- output truncated ---------------------------------------

Using the CLI to Manage the DMF Recorder Node

 

Basic Configuration

To perform basic recorder node configuration, complete the following steps:
  1. Assign a name to the recorder node device.
    controller-1(config)# recorder-node device rn-alias
  2. Set the MAC address of the recorder node.
    controller-1(config-recorder-node)# mac 18:66:da:fb:6d:b4
    If the management MAC is unknown, it can be determined from the chassis ID of connected devices.
  3. Define the recorder node interface name.
    controller-1(config)# recorder-fabric interface Intf-alias
    controller-1(config-pkt-rec-intf)#

    Any alphanumeric identifier can be assigned as the name of the recorder node interface. Entering this command changes the submode to config-pkt-rec-intf, where you can provide an optional description and specify the switch and interface where the recorder node is connected.

  4. Provide an optional description and identify the switch interface connected to the recorder node.
    controller-1(config-pkt-rec-intf)# description 'Delivery point for recorder-node'
    controller-1(config-pkt-rec-intf)# recorder-interface switch Switch-z9100 ethernet37
  5. (Optional) Recording: Recording is enabled by default. To disable recording, enter the following commands:
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# no record
  6. (Optional) Disk Full Policy: By default, the Disk Full Policy is set to rolling-fifo, which means the oldest packets are deleted to make room for newer packets when the recorder node disks are full. This can be changed to stop-and-wait, which causes the recorder node to stop recording until disk space becomes available. Enter the commands below to set the Disk Full Policy to stop-and-wait.
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# when-disk-full stop-and-wait
  7. Backup Disk Policy: Define the backup disk policy to indicate to the Recorder Node which disk should be used as a secondary volume and for what purpose. The following three options are available:
    controller-1(config-recorder-node)# backup-volume
    local-fallback   Set local disk as backup when remote disk is unreachable
    no-backup        Do not use any backup volume (default selection)
    remote-extend    Set remote volume to extend local main disk
    
    The no-backup mode is the default mode. The other two modes require that the Recorder Node have a set of disks for recording as well as be connected to an Isilon cluster mounted via NFS. This remote storage must be configured from the DMF Controller.
  8. (Optional) Max Packet Age: This defines the maximum age in minutes of any packet in the recorder node. By default, Max Packet Age is not set, which means no limit is enforced. When Max Packet Age is set, packets recorded on the recorder node will be discarded after the specified number of minutes. To set the maximum number of minutes that recorded packets will be kept on the recorder node, enter the following commands:
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# max-packet-age 30
    This sets the maximum time to keep recorded packets to 30 minutes.
    Note: Max Packet Age can be used in combination with the packet removal policy to control when packets are deleted based on age rather than disk utilization alone.
  9. (Optional) Max Disk Utilization: This defines the maximum disk utilization in terms of a percentage between 5% and 95%. When this utilization is reached, the Disk Full Policy (rolling-fifo or stop-and-wait) is enforced. If unset, the default maximum disk utilization is 95%; however, it can be configured using the following commands:
    controller-1(config)# recorder-node device rn-alias
    controller-1(config-recorder-node)# max-disk-utilization 80
  10. (Optional) Disable any indexing configuration fields that will not be used in subsequent recorder node queries. All indexing fields are enabled by default. To disable a specific indexing option, enter the following commands from the config-recorder-node-indexing submode. To re-enable a disabled option, enter the command without the no prefix.
    Use the following command to enter the recorder node indexing submode:
    controller-1(config-recorder-node)# indexing
    controller-1(config-recorder-node-indexing)#
    Then use the following commands to disable any fields that will not be used in subsequent queries:
    • Disable MAC Source indexing: no mac-src
    • Disable MAC Destination indexing: no mac-dst
    • Disable outer VLAN ID indexing: no vlan-1
    • Disable inner/middle VLAN ID indexing: no vlan-2
    • Disable innermost VLAN ID indexing: no vlan-3
    • Disable IPv4 Source indexing: no ipv4-src
    • Disable IPv4 Destination indexing: no ipv4-dst
    • Disable IPv6 Source indexing: no ipv6-src
    • Disable IPv6 Destination indexing: no ipv6-dst
    • Disable IP Protocol indexing: no ip-proto
    • Disable Port Source indexing: no port-src
    • Disable Port Destination indexing: no port-dst
    • Disable MPLS indexing: no mpls
    • Disable Community ID indexing: no community-id
    • Disable MetaWatch Device ID: no mw-device-id
    • Disable MetaWatch Port ID: no mw-port-id
    For example, the following command disables indexing for the source MAC address:
    controller-1(config-recorder-node-indexing)# no mac-src
  11. Identify the recorder node interface by name in an out-of-band policy.
    controller-1(config)# policy RecorderNodePolicy
    controller-1(config-policy)# use-recorder-fabric-interface intf-1
    controller-1(config-policy)#
  12. Configure the DANZ Monitoring Fabric (DMF) policy to identify the traffic to send to the recorder node.
    controller-1(config-policy)# 1 match any
    controller-1(config-policy)# filter-interface FilterInterface1
    controller-1(config-policy)# action forward
    This example forwards all traffic received in the monitoring fabric on filter interface FilterInterface1 to the recorder node interface. The following is the running-config for this example configuration:
    recorder-fabric interface intf-1
    description 'Delivery point for recorder-node'
    recorder-interface switch 00:00:70:72:cf:c7:cd:7d ethernet37
    policy RecorderNodePolicy
    action forward
    filter-interface FilterInterface1
    use-recorder-fabric intf-1
    1 match any

Authentication Token Configuration

Static authentication tokens are pushed to each recorder node as an alternative form of authentication in headless mode, when the DANZ Monitoring Fabric (DMF) Controller is unreachable, or by third-party applications that do not have or do not need DMF controller credentials in order to query the recorder node.

To configure the recorder node with a static authentication token, use the following commands:
controller-1(config)# recorder-node auth token mytoken
Auth : mytoken
Token : some_secret_string <--- secret plaintext token displayed once here
controller-1 (config)# show running-config recorder-node auth token
! recorder-node
recorder-node auth token mytoken $2a$12$cwt4PvsPySXrmMLYA.Mnyus9DpQ/bydGWD4LEhNL6xhPpkKNLzqWS <---hashed token shows in running config
The DMF Controller uses its own hidden authentication token to query the recorder node. To regenerate the Controller authentication token, use the following command:
controller-1(config)# recorder-node auth generate-controller-token

Configuring the Pre-buffer

To enable the pre-buffer or change the time allocated, enter the following commands:
controller-1(config)# recorder-node device <name>
controller-1(config-recorder-node)# pre-buffer <minutes>

Replace name with the name of the recorder node. Replace minutes with the number of minutes to allocate to the pre-buffer.
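Conceptually, a pre-buffer is a rolling time window: packets older than the configured number of minutes are discarded until an event is triggered, at which point the buffered packets are preserved. A minimal sketch of the idea (not the recorder node's actual implementation):

```python
from collections import deque

class PreBuffer:
    """Rolling pre-buffer: keeps only packets newer than `minutes`."""

    def __init__(self, minutes):
        self.window_s = minutes * 60
        self.packets = deque()  # (timestamp, packet) pairs, oldest first

    def record(self, timestamp, packet):
        self.packets.append((timestamp, packet))
        # Discard packets that have aged out of the pre-buffer window.
        while self.packets and timestamp - self.packets[0][0] > self.window_s:
            self.packets.popleft()

    def trigger_event(self):
        """On an event, the buffered packets are preserved for analysis."""
        return list(self.packets)

buf = PreBuffer(minutes=1)
buf.record(0, "pkt-a")
buf.record(30, "pkt-b")
buf.record(90, "pkt-c")  # pkt-a is now 90s old and falls outside the 60s window
print([p for _, p in buf.trigger_event()])  # ['pkt-b', 'pkt-c']
```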

Triggering a Recorder Node Event

To trigger an event for a specific recorder node, enter the following command from enable mode:

controller-1# trigger recorder-node <name> event <event-name>

Replace name with the name of the recorder node and replace event-name with the name to assign to the current event.

Terminating a Recorder Node Event

To terminate a recorder node event, use the following command:
controller-1# terminate recorder-node <name> event <event-name>

Replace name with the name of the recorder node and replace event-name with the name of the recorder node event to terminate.

Viewing Recorder Node Events

To view recorder node events, enter the following command from enable mode:
controller-1# show recorder-node events
# Packet Recorder  Time                           Event
-|----------------|------------------------------|-------------------------------------------------------------------|
1 pkt-rec-740      2018-02-06 16:21:37.289000 UTC Pre-buffer event my-event1 complete. Duration 3 minute(s)
2 pkt-rec-740      2018-02-06 20:23:59.758000 UTC Pre-buffer event event2 complete. Duration 73 minute(s)
3 pkt-rec-740      2018-02-07 22:39:15.036000 UTC Pre-buffer event event-02-7/event3 complete. Duration 183 minute(s)
4 pkt-rec-740      2018-02-07 22:40:15.856000 UTC Pre-buffer event event5 triggered
5 pkt-rec-740      2018-02-07 22:40:16.125000 UTC Pre-buffer event event4/event-02-7 complete. Duration 1 minute(s)
6 pkt-rec-740      2018-02-22 06:53:10.216000 UTC Pre-buffer event triggered

Using the CLI to Run Recorder Node Queries

Note: The DANZ Monitoring Fabric (DMF) Controller prompt is displayed immediately after entering a query or replay request, but the query continues in the background. If you try to enter another replay or query command before the previous command is completed, an error message is displayed.

Packet Replay

To replay the packets recorded by a recorder node, enter the replay recorder-node command from enable mode.
controller-1# replay recorder-node <name> to-delivery <interface> filter <stenographer-query>
[realtime | replay-rate <bps> ]
The following are the options available with this command.
  • name: The recorder node from which to replay recorded packets.
  • interface: The name of the DMF delivery interface to which the packets should be delivered.
  • stenographer-query: The filter used to look up the desired packets.
  • (Optional) realtime: Replay the packets at the rate at which the specified recorder node originally recorded them.
  • (Optional) replay-rate bps: Replay the packets at the specified number of bits per second. If neither realtime nor replay-rate is specified, packets are replayed at up to the line rate of the recorder node interface.
The following command shows an example of a replay command using the to-delivery option.
controller-1# replay recorder-node packet-rec-740 to-delivery eth26-del filter 'after 1m ago'
controller-1#
Replay policy details:
controller-1# show policy-flow | grep replay
1 __replay_131809296636625 packet-as5710-2 (00:00:70:72:cf:c7:cd:7d) 0 0 6400 1
in-port 47 apply: name=__replay_131809296636625 output: max-length=65535, port=26
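As a rough sizing aid before starting a rate-limited replay, the expected replay duration can be estimated from the matched data volume and the replay-rate value. A back-of-the-envelope sketch (actual replay time also depends on framing overhead):

```python
def replay_duration_seconds(total_bytes, replay_rate_bps):
    """Estimate replay time: bits to send divided by the replay rate in bits/s."""
    return total_bytes * 8 / replay_rate_bps

# Replaying 1 GB of recorded packets at a 100 Mbps replay-rate:
print(round(replay_duration_seconds(10**9, 100 * 10**6)))  # prints 80 (seconds)
```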

Packet Data Query

You can use a packet query to search the packets recorded by a specific recorder node. The operation uses a Stenographer query string to filter only the interesting traffic. The query returns a URL that can be used to download and analyze the packets using Wireshark or other packet-analysis tools.

From enable mode, enter the query recorder-node command.
switch# query recorder-node <name> packet-data filter <stenographer-query>
The following is the meaning of each parameter:
  • name: The name of the recorder node.
  • packet-data filter stenographer-query: Look up only the packets that match the specified Stenographer query.
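Stenographer filters compose time predicates (such as after 1m ago) with BPF-style host predicates using and. As a convenience, such filter strings can be built programmatically; the helper below is an illustrative sketch, not part of the product:

```python
def steno_filter(minutes_ago=None, src_host=None, dst_host=None):
    """Compose a Stenographer query filter string from common predicates."""
    parts = []
    if minutes_ago is not None:
        parts.append(f"after {minutes_ago}m ago")
    if src_host:
        parts.append(f"src host {src_host}")
    if dst_host:
        parts.append(f"dst host {dst_host}")
    return " and ".join(parts)

print(steno_filter(minutes_ago=10, src_host="10.0.0.145"))
# after 10m ago and src host 10.0.0.145
```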

Packet Object Query

The packet object query extracts unencrypted HTTP objects from packets matching the given Stenographer filter. To run a packet object query, enter the query recorder-node command. The following example shows the results returned:
switch# query recorder-node bmf-integrations-pr-1 packet-object filter 'after 1m ago'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Packet Object Query Results ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Coalesced URL : /pcap/__packet_recorder__/coalesced-bmf-2022-11-21-14-27-56-67a73ea9.tgz
Individual URL(s) : /pcap/__packet_recorder__/bmf-integrations-pr-1-2022-11-21-14-27-55-598f5ae7.tgz

Untar the archive to extract the HTTP objects.

Size Query

You can use a size query to analyze the number of packets and the total size of the packets recorded by a specific recorder node. The operation uses a Stenographer query string to filter only the interesting traffic.

To run a size query, enter the query recorder-node command from enable mode.
# query recorder-node <name> size filter <stenographer_query>
The following is the meaning of each parameter:
  • name: Identify the recorder node.
  • size filter stenographer-query: Analyze only the packets that match the specified Stenographer query.
The following example shows the results returned:
switch# query recorder-node hq-bmf-packet-recorder-1 size filter "after 1m ago and src host 8.8.8.8"
~ Summary Query Results ~
# Packets : 66
Size : 7.64KB
~ Error(s) ~
None.
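The Size field in the summary is a human-readable rendering of the total matched bytes. A small sketch of an equivalent conversion, assuming 1024-byte units (the recorder node's own rounding may differ):

```python
def human_size(num_bytes):
    """Format a byte count the way the size query summary does (approximately)."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024 or unit == "TB":
            return f"{num_bytes:.2f}{unit}"
        num_bytes /= 1024

print(human_size(7823))  # 7.64KB
```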

Window Query

You can use a window query to analyze the oldest available packet and most recent available packet recorded by a specific recorder node.

To run a window query, enter the query recorder-node command from enable mode.
switch# query recorder-node <name> window
Replace name with the name of the recorder node.
The following example shows the results returned:
switch# query recorder-node hq-bmf-packet-recorder-1 window
~~~~~~~~~~~~~ Window Query Results ~~~~~~~~~~~~~
Oldest Packet Available : 2020-07-30 05:01:08 PDT
Newest Packet Available : 2020-10-19 08:14:21 PDT
~ Error(s) ~
None.

Stopping a Query

You can use the abort recorder-node command to stop the current query running on the specified recorder. From enable mode, enter the following command:
controller-1# abort recorder-node <name> filter <string>
Replace name with the name of the recorder node, and use the filter keyword to identify the specific filter used to submit the query. If the specific query being run is unknown, an empty-string filter of “” can be used to terminate any running query.
controller-1# abort recorder-node hq-bmf-packet-recorder-1 filter ""
Abort any request with the specified filter? This cannot be undone. enter "yes" (or "y") to
continue:
yes
Result : Success
~ Error(s) ~
None.

Using RBAC to Manage Access to the DMF Recorder Node

You can use Role-Based Access Control (RBAC) to manage access to the DANZ Monitoring Fabric (DMF) Recorder Node by associating a recorder node with an RBAC group.

To restrict access for a specific recorder to a specific RBAC group, use the CLI or GUI as described below.

RBAC Configuration Using the CLI

  1. Identify the group to which you want to associate the recorder node.
    Enter the following command from config mode on the active DANZ Monitoring Fabric (DMF) controller:
    controller-1(config)# group test
    controller-1(config-group)#
  2. Associate one or more recorder nodes with the group.
    Enter the following CLI command from the config-group submode:
    controller-1(config-group)# associate recorder-node <device-name>
    Replace device-name with the name of the recorder node, as in the following example:
    controller-1(config-group)# associate recorder-node HW-PR-1

RBAC Configuration Using the GUI

  1. Select Security > Groups, and from the Actions menu click + Create Group.
    Figure 17. Create Security Group
  2. Enter a Group Name.
    Figure 18. Create Group
  3. Under the Role Based Access Control section, select Add Recorder Node.
  4. Select the Recorder Node from the selection list, and assign the permissions required.
    • Read: The user can view recorded packets.
    • Use: The user can define and run queries.
    • Configure: The user can configure packet recorder instances and interfaces.
    • Export: The user can export packets to a different device.
    Figure 19. Associate Recorder Node
  5. Click Create.

Using the CLI to View Information About a Recorder Node

This section describes how to monitor and troubleshoot recorder node status and operation. The recorder node stores packets on the main hard disk and the indices on the SSD volumes.

Viewing the Recorder Node Interface

To view information about the recorder node interface, use the following command:
controller-1(config)# show topology recorder-node
# DMF IF       Switch     IF Name    State Speed  Rate Limit
-|------------|----------|----------|-----|------|----------|
1 RecNode-Intf Arista7050 ethernet1  up    25Gbps -

Viewing Recorder Node Operation

To view recorder node interface statistics, use the following command:
controller-1# show recorder-node device packet-rec-740 interfaces stats
Packet Recorder Name Rx Pkts       Rx Bytes        Rx Drop  Rx Errors Tx Pkts  Tx Bytes   Tx Drop Tx Errors
---------------|----|-------------|---------------|--------|---------|--------|----------|-------|---------|
packet-rec-740  pri1 2640908588614 172081747460802 84204084 0         24630503 3053932660 0       0
Information about a recorder node interface used as a delivery port in a DANZ Monitoring Fabric (DMF) out-of-band policy is displayed in a list. Recorder node interfaces are listed as dynamically-added delivery interfaces.
Ctrl-2(config)# show policy PR-policy 
Policy Name                            : PR-policy
Config Status                          : active - forward
Runtime Status                         : installed
Detailed Status                        : installed - installed to forward
Priority                               : 100
Overlap Priority                       : 0
# of switches with filter interfaces   : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces  : 0
# of filter interfaces                 : 1
# of delivery interfaces               : 1
# of core interfaces                   : 0
# of services                          : 0
# of pre service interfaces            : 0
# of post service interfaces           : 0
Push VLAN                              : 1
Post Match Filter Traffic              : 1.51Gbps
Total Delivery Rate                    : 1.51Gbps
Total Pre Service Rate                 : -
Total Post Service Rate                : -
Overlapping Policies                   : none
Component Policies                     : none
Installed Time                         : 2023-09-22 12:16:55 UTC
Installed Duration                     : 3 days, 4 hours
~ Match Rules ~
# Rule        
-|-----------|
1 1 match any

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s)  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF      Switch              IF Name   State Dir Packets     Bytes          Pkt Rate Bit Rate Counter Reset Time             
-|-----------|-------------------|---------|-----|---|-----------|--------------|--------|--------|------------------------------|
1 Lab-traffic Arista-7050SX3-T3X5 ethernet7 up    rx  97831460642 51981008309480 382563   1.51Gbps 2023-09-22 12:16:55.738000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF          Switch              IF Name    State Dir Packets     Bytes          Pkt Rate Bit Rate Counter Reset Time             
-|---------------|-------------------|----------|-----|---|-----------|--------------|--------|--------|------------------------------|
1 PR-intf         Arista-7050SX3-T3X5 ethernet35 up    tx  97831460642 51981008309480 382563   1.51Gbps 2023-09-22 12:16:55.738000 UTC

~ Service Interface(s) ~
None.

~ Core Interface(s) ~
None.

~ Failed Path(s) ~
None.
Ctrl-2(config)# 

Viewing Errors and Warnings

The following table lists the errors and warnings that may be displayed by a recorder node. In the CLI, these errors and warnings can be displayed by entering the following commands:
  • show fabric errors
  • show fabric warnings
  • show recorder-node errors
  • show recorder-node warnings
Table 1. Errors and Warnings
  • Error: Recorder Node (RN) management link down. Cause: the RN has not received Controller LLDP. Resolution: wait 30 seconds if the recorder node is newly configured, and verify it is not connected to a switch port that is a DANZ Monitoring Fabric (DMF) interface.
  • Error: RN fabric link down. Cause: the Controller has not received RN LLDP. Resolution: wait 30 seconds if the recorder node is newly configured; otherwise, check that it is online.
  • Warning: Disk/RAID health degraded. Cause: possible hardware degradation. Resolution: investigate the specific warning reported; it could be a temperature issue, or the indicated disk may need to be replaced soon.
  • Warning: Low disk space. Cause: packet or index disk usage has risen above the threshold. Resolution: prepare for the disk to fill soon.
  • Warning: Disk full. Cause: the packet or index disk is full, and packets are being dropped or rotated depending on the removal policy. Resolution: do nothing if the removal policy is rolling-FIFO; otherwise, consider erasing packets to free up space.
  • Warning: Recorder misconfiguration on a DMF interface. Cause: a recorder node has been detected in the fabric on a switch interface that is configured as a filter or delivery interface. Resolution: remove the conflicting interface configuration, or re-cable the recorder node to a switch interface not defined as a filter or delivery interface.

Using the GUI to view Recorder Node Statistics

Recorder node statistics can be viewed by clicking on the recorder node alias from the Monitoring > Recorder Nodes page.

Figure 20. DANZ Monitoring Fabric (DMF) List of Connected Recorder Nodes
Click a Recorder Node to display the available recorder node statistics. All statistics are disabled (hidden) by default.
Figure 21. Available Recorder Node Statistics
Enable and view a statistic by clicking it. Selected statistics are highlighted in blue.
Figure 22. Selected Recorder Node Statistics

The recorder node shows health statistics for the following:

CPU: CPU health displays the compute resource utilization of the recorder node.

Figure 23. Recorder Node CPU Health Statistics

Memory: Memory-related statistics are displayed, such as total, used, free, and available memory.

Figure 24. Recorder Node Memory Statistics
Storage: Storage health displays the storage utilization percentage along with total and available capacity of Index and Packet virtual disks.
Figure 25. Recorder Node Storage Statistics
Time-based Disk Utilization Statistics: Time-based Disk Utilization Statistics provides an estimated time period until the Index and Packet virtual disks reach full storage capacity. This estimate is calculated based on data points (incoming data rate) collected periodically from recorder node for a certain time duration. Note that if the collected data points are insufficient to calculate the disk-full estimate, it will show inaccurate. However, once a sufficient number of data points are collected, the estimate will be calculated and displayed automatically.
Figure 26. Time-based Disk Utilization Statistics
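The disk-full estimate is essentially a linear extrapolation from periodically sampled utilization. A simplified sketch of such a calculation (the product's actual estimator and sampling interval are internal to the recorder node):

```python
def estimate_seconds_until_full(samples, capacity_bytes, min_samples=3):
    """Extrapolate time to disk-full from (timestamp_s, used_bytes) samples.

    Returns None when there are too few samples or usage is not growing,
    mirroring the insufficient-data-points case.
    """
    if len(samples) < min_samples:
        return None
    (t0, used0), (t1, used1) = samples[0], samples[-1]
    rate = (used1 - used0) / (t1 - t0)  # bytes per second of growth
    if rate <= 0:
        return None
    return (capacity_bytes - used1) / rate

samples = [(0, 100), (60, 160), (120, 220)]  # usage grows 1 byte/s
print(estimate_seconds_until_full(samples, capacity_bytes=1000))  # 780.0
```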

Virtual Disks: Displays the size, state, health, and RAID level configuration of the Index and Packet virtual disks.

Figure 27. Recorder Node Virtual Disk Details
Click the drop-down arrow next to the virtual disk name to view information about the participating physical disks, such as slot numbers, type, size, state, temperature, and Self-Monitoring, Analysis and Reporting Technology (SMART) statistics, such as errors and failures, if any.
Figure 28. Recorder Node Virtual Disk Statistics
File Descriptors: The File Descriptors section displays the following:
  • File Descriptors (current): The current number of open files on the entire system.
  • Max System File Descriptors: The maximum number of open files allowed on the entire system.
  • Max Stenographer File Descriptors: The maximum number of open files allowed for the Stenographer application.
Figure 29. Recorder Node File Descriptors Statistics
Mount: The Mount section displays the Index and Packet disk mount information, such as volume name, mount point, file system type, and mount health.
Figure 30. Recorder Node Mount Information

Stenographer: Stenographer Statistics are displayed as follows:

Figure 31. Recorder Node Stenographer Statistics
  • Initialized: Displays the running state of the Stenographer application. A green check mark indicates that the application initialized successfully. While the Stenographer application is starting up, a red x mark is expected; during this time, recording and querying are disallowed.
  • Tracked Files: Tracked files are the total number of files stored under each CPU instance thread.
  • Cached Files: Cached files are the number of files that are open and have a file descriptor.
  • Max Cached Files: Maximum cached files is the total number of files that are allowed to be open.
These numbers are further divided and displayed for each recording thread and can be viewed in the Recording Threads table:
Figure 32. Recorder Node Max Cached Files Statistics
Recording: Recording stats displays packet stats, such as dropped packets, total packets and collection time for each CPU core.
Figure 33. Recorder Node Statistics
The following displays packet size distribution stats.
Figure 34. Recorder Node Packet Size Distribution Statistics
The following displays interface errors, such as CRC errors, frame length errors and back pressure errors:
Figure 35. Recorder Node Interface Errors

Changing the Recorder Node Default Configuration

Configuration settings are automatically downloaded to the recorder node from the DANZ Monitoring Fabric (DMF) Controller, which eliminates the need for box-by-box configuration. However, you can override the default configuration for a recorder node from the config-recorder-node submode for any recorder node.
Note: In the current release, these options are available only from the CLI and are not included in the DMF Controller GUI.
To change the CLI mode to config-recorder-node, enter the following command from config mode on the active DMF controller:
controller-1(config)# recorder-node device <instance>

Replace instance with the alias you want to use for the recorder node. This alias is associated with the MAC hardware address, using the mac command.

Use any of the following commands from config-recorder-node submode to override the default configuration for the associated recorder node:
  • banner: Set the recorder node pre-login banner message
  • mac: Configure the MAC address for the recorder node
Additionally, the following configurations can be overridden with values specific to the recorder node, or used in merge mode along with the configuration inherited from the DMF Controller:
  • ntp: Configure recorder node to override default timezone and NTP parameters.
  • snmp-server: Configure recorder node SNMP parameters and traps.
  • logging: Enable recorder node logging to Controller.
  • tacacs: Set TACACS defaults, server IP address(es), timeouts and keys.
The following commands can be used, from the config-recorder-node submode, to change the default configuration on the recorder node:
  • ntp override-global: Override global time configuration with recorder node time configuration.
  • snmp-server override-global: Override global SNMP configuration with recorder node SNMP configuration.
  • snmp-server trap override-global: Override global SNMP trap configuration with recorder node SNMP trap configuration.
  • logging override-global: Override global logging configuration with recorder node logging configuration.
  • tacacs override-global: Override global TACACS configuration with recorder node TACACS configuration.
To configure the recorder node to work in a merge mode by merging its specific configuration with that of the DMF Controller, execute the following commands in the config-recorder-node submode:
  • ntp merge-global: Merge global time configuration with recorder node time configuration.
  • snmp-server merge-global: Merge global SNMP configuration with recorder node SNMP configuration.
  • snmp-server trap merge-global: Merge global SNMP trap configuration with recorder node SNMP trap configuration.
  • logging merge-global: Merge global logging configuration with recorder node logging configuration.

TACACS configuration does not have a merge option. It can either be inherited completely from the DMF Controller or overridden to use only the recorder node specific configuration.

Large PCAP Queries

To run large PCAP queries against the recorder node, access the recorder node via a web browser. This allows you to run packet queries directly against the recorder node without specifying the maximum byte or packet limit for the PCAP file (which is required when the query is executed from the DANZ Monitoring Fabric (DMF) Controller).

To access the recorder node directly, use the URL https://RecorderNodeIP in a web browser, as shown below:
Figure 36. URL to Recorder Node
The following page will be displayed:
Figure 37. Recorder Node Page
  • Recorder Node IP Address: Enter the IP address of the target recorder node.
  • DMF Controller Username: Provide the DMF Controller username.
  • DMF Controller Password: Provide the password for authentication.
  • Stenographer Query Filter: The query filter can be used to filter the query results to look for specific packets. For example, to search for packets with a source IP address of 10.0.0.145 in the last 10 minutes, use the following filter:
    after 10m ago and src host 10.0.0.145
  • Stenographer Query ID: Starting in DMF 8.0, a Universally Unique Identifier (UUID) is required to run queries. To generate a UUID, run the following command on any Linux machine and use the result as the Stenographer query ID:
    $ uuidgen
    b01308db-65f2-4d7c-b884-bb908d111400
  • Save pcap as: Provide the file name to be used for this PCAP query result.
  • Submit Request: Click on Submit Request. This will send a query to the specified recorder node, and it will save the PCAP file with the provided file name to the default download location for the browser.
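If uuidgen is not available, any standards-compliant UUID generator can be used for the Stenographer query ID; for example, Python's standard uuid module:

```python
import uuid

# Generate a random (version 4) UUID, equivalent to the uuidgen output above.
query_id = str(uuid.uuid4())
print(query_id)  # e.g. b01308db-65f2-4d7c-b884-bb908d111400
```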

Recorder Node Management Migration (L3ZTN)

After the first boot (initial configuration) is completed, the recorder node can be removed from the old Controller and pointed to a new Controller via the CLI when using Layer-3 topology mode.
Note: For appliances to connect to the DANZ Monitoring Fabric (DMF) Controller in Layer-3 Zero Touch Network (L3ZTN) mode, the DMF Controller deployment mode must be configured as pre-configure.

To migrate management to a new Controller, follow the steps below:

  1. Remove the recorder node and switch from the old Controller using the commands below:
    controller-1(config)# no recorder-node device RecNode
    controller-1(config)# no switch Arista7050
  2. Add the switch to the new Controller.
  3. SSH to the recorder node and configure the new Controller IP using the zerotouch l3ztn controller-ip command:
    RecNode(config)# zerotouch l3ztn controller-ip 10.2.0.151
  4. After pointing the recorder node to use the new Controller, reboot the recorder node.
  5. Once the recorder node is back online, the DMF Controller should receive the ZTN request.

  6. After the DMF Controller has received a ZTN request from the recorder node, it can be added to the DMF Controller running-configuration using the below command:
    controller-1(config)# recorder-node device RecNode
    controller-1(config-recorder-node)# mac 24:6e:96:78:58:b4
  7. The recorder node should now be added to the new DMF Controller.

Recorder Node CLI

The following commands are available from the recorder node:

The show version command can be used to view the version and image information that the recorder node is running.
RecNode(config)# show version
Controller Version : DMF Recorder Node 8.1.0 (bigswitch/enable/dmf-8.1.x #5)
RecNode(config)#
The show controllers command can be used to view the DANZ Monitoring Fabric (DMF) Controllers connected to the recorder node. If the recorder node is connected to a DMF Controller cluster, all the cluster nodes are listed in the command output:
RecNode(config)# show controllers
controller            Role   State     Aux
---------------------|------|---------|---|
tcp://10.106.8.2:6653 master connected 0
tcp://10.106.8.3:6653 slave  connected 0
tcp://10.106.8.3:6653 slave  connected 1
tcp://10.106.8.3:6653 slave  connected 2
tcp://10.106.8.2:6653 master connected 1
tcp://10.106.8.2:6653 master connected 2
RecNode(config)#

Multiple Queries

The GUI can be used to run multiple recorder node queries.

To run queries on recorded packets by the recorder node, navigate to the Monitoring > Recorder Nodes page.

Under the Query section, click on the Query Type drop-down to select the type of analysis that you would like to run on the recorded packets as shown below:
Figure 38. Query Type
After selecting the query type, you can use filters to limit or narrow the search to obtain specific results. Providing specific filters also helps to complete the query analysis faster. In the following example, the query result for the TCP query type will return the results for IP address 10.240.30.24 for the past 10 minutes.
Figure 39. Query - IP Address and Time
After entering the desired filters, click on the Submit button. The Progress dialog will be displayed, showing the Elapsed Time and Progress percentage of the running query:
Figure 40. Query Progress
While a query is in progress, another query can be initiated from a new DANZ Monitoring Fabric (DMF) Controller web session. The query progress can be viewed under the Active Queries section:
Figure 41. Active Queries

Ability to Deduplicate Packets - Query from Recorder Node

For Recorder Node queries, the recorded packets matching a specified query filter may contain duplicates when packet recording occurs at several different TAPs within the same network; i.e., as a packet moves through the network, it may be recorded multiple times. The dedup feature removes duplicate packets from the query results. By eliminating redundant information, packet deduplication improves query results' clarity, accuracy, and conciseness. Additionally, the dedup feature significantly reduces the size of query results obtained from packet query types.

Using the CLI to Deduplicate Packets

In the DANZ Monitoring Fabric (DMF) Controller CLI, packet deduplication is available for the packet data, packet object, size, and replay query types. Deduplication is turned off by default for these queries. To enable deduplication, “dedup” must be added to the end of the query command after all optional values have been selected (if any).

The following are command examples of enabling deduplication.

Enabling deduplication for a size query:

controller# query recorder-node rn size filter "before 5s ago" dedup

Enabling deduplication for a packet data query specifying a limit for the size of the PCAP file returned in bytes:

controller# query recorder-node rn packet-data filter "before 5s ago" limit-bytes 2000 dedup

Enabling deduplication for a replay query:

controller# replay recorder-node rn to-delivery dintf filter "before 5s ago" dedup

Enabling deduplication for a replay query specifying the replay rate:

controller# replay recorder-node rn to-delivery dintf filter "before 5s ago" replay-rate 100 dedup

A time window (in milliseconds) can also be specified for deduplication. The time window defines how far apart the timestamps of two identical packets must be before they are no longer considered duplicates. For example, with a 200 ms time window, two identical packets with timestamps 200 ms or less apart are duplicates of each other; if their timestamps are more than 200 ms apart, they are not.

The time window must be an integer between 0 and 999 (inclusive); it defaults to 200 ms when deduplication is enabled without a specified time window value.

To configure a time window value, dedup-window must be added after dedup and followed by an integer value for the time window.

controller# query recorder-node rn size filter "before 5s ago" dedup dedup-window 150
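The dedup-window semantics can be modeled as follows: a packet is dropped when an identical packet was already kept within the time window. This is a simplified sketch assuming whole-packet comparison, not the recorder node's actual algorithm:

```python
def dedup(packets, window_ms=200):
    """Drop packets identical to one already kept within window_ms.

    `packets` is a list of (timestamp_ms, payload) tuples in time order.
    """
    last_seen = {}  # payload -> timestamp of the last kept copy
    kept = []
    for ts, payload in packets:
        prev = last_seen.get(payload)
        if prev is not None and ts - prev <= window_ms:
            continue  # within the window: considered a duplicate, dropped
        last_seen[payload] = ts
        kept.append((ts, payload))
    return kept

pkts = [(0, b"A"), (150, b"A"), (400, b"A"), (450, b"B")]
# (150, b"A") is within 200 ms of the kept copy and is dropped;
# (400, b"A") is 400 ms after the kept copy, so it stays.
print(dedup(pkts, window_ms=200))
```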

Using the GUI to Deduplicate Packets

In the DANZ Monitoring Fabric (DMF) Controller GUI, packet deduplication is available for the packet data, packet object, size, replay, application, and analysis query types. Deduplication is not enabled by default for these queries. To enable deduplication perform the following steps:
  1. Set the toggle switch deduplication to Yes in the query submission window.
  2. Specify an optional time window (in milliseconds) as required by entering an integer between 0 and 999 (inclusive) into the Deduplication Time Window field. The time window will default to 200 ms if no time window value is set.
  3. Click Submit to continue.
Note: If a time window value is specified, but deduplication is not toggled, packet deduplication will not occur.
The following is an example of enabling deduplication for a size query specifying a time window value.
Figure 42. Query

Limitations

A query with packet deduplication enabled takes longer to complete than one with deduplication disabled, which is why packet deduplication is disabled by default.

The maximum time window value permitted is 999 ms to ensure that TCP retransmissions are not regarded as duplicates, assuming that the receive timeout value for TCP retransmissions (of any kind) is at least 1 second. If the receive timeout value is less than 1 second (particularly, exactly 999 ms or less), then it is possible for TCP retransmissions to be regarded as duplicates when the time window value used is larger than the receive timeout value.

Due to memory constraints, some duplicates may not be removed as expected. This is likely to occur when a substantial number of packets matching the query filter all have timestamps within the specified time window of each other; this scenario is referred to as exceeding the packet window capacity. To mitigate it, decrease the time window value or use a more specific query filter to reduce the number of packets matching the filter at a given time.

Enabling Egress sFlow on Recorder Node Interfaces

The egress sFlow feature is available from the DANZ Monitoring Fabric (DMF) 8.5 release onward. Enable egress sFlow to send sampled packets collected on the Recorder Node switch attachment point to a configured sFlow collector. Examining these sampled packets enables the identification of post-match-rule flows recorded by the DMF Recorder Nodes (RNs) without performing a query against the RNs. While not explicitly required, Arista Networks highly recommends using the DMF Analytics Node (AN) as the configured sFlow collector because it can automatically identify packets sampled utilizing this feature.

Use the CLI or GUI to enable or disable the feature.

Using the CLI to Enable Egress sFlow

The egress sFlow feature requires a configured sFlow collector. After configuring the sFlow collector, enter the following command from the config mode to enable the feature:

Controller-1(config)# recorder-node sflow

To disable the feature, enter the command:

Controller-1(config)# no recorder-node sflow

Using the GUI to Enable Egress sFlow

After configuring the fabric for sFlow and setting up the sFlow collector, navigate to the Monitoring > Recorder Node page. Under the Global Configuration section, click the Configure global settings button.

Figure 43. Configure Global Settings

In the Configure Global Settings pop-up window, enable the sFlow setting and click Submit.

Figure 44. Enable sFlow

Analytics Node

When a DMF Analytics Node is used as the sFlow collector, it provides a dashboard that displays the results of this feature. To access the results:

  1. Navigate to the sFlow dashboard from the Fabric dashboard.
  2. Select the disabled RN Flows filter.
  3. Select the option to Re-enable the filter, as shown below.
Figure 45. Re-enable sFlow

Troubleshooting Egress sFlow Configurations

The feature is not active on a switch that is not associated with an sFlow collector (either the global sFlow collector or a switch-specific collector), even if the feature is enabled. Ensure the fabric has been configured for sFlow and that an sFlow collector has been configured. To verify that a global sFlow collector is configured, use the command:

Controller-1# show sflow default

A configured collector appears as an entry in the table under the collector column. Alternatively, to verify that a collector is configured for a given switch, use the command:

Controller-1# show switch <switch-name> table sflow-collector

This command displays a table with one entry per configured collector.

A feature-unsupported-on-device warning appears when an EOS switch is connected to an RN; the feature does not sample packets passing to an RN from an EOS switch. View any such warnings in the GUI or with the following CLI command:

Controller-1# show fabric warnings feature-unsupported-on-device

To verify the feature is active on a given switch, use the command:

Controller-1# show switch <switch-name> table sflow-sample

If the feature is enabled, the entries for ports connected to an RN include an EgressSamplingRate value greater than 0. The following example shows Port(1) on <switch-name> connected to an RN.

Controller-1# show switch <switch-name> table sflow-sample
# Sflow-sample Device name   Entry key Entry value
--|-----------|-------------|---------|----------------------------------------------------------------------------
   5352        <switch-name> Port(1)   SamplingRate(0), EgressSamplingRate(10000), HeaderSize(128), Interval(10000)
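When the command output above is captured programmatically (for example, over an automation session), a short script can check whether egress sampling is active on each port. The helper below is hypothetical and simply assumes the `Port(...)` and `EgressSamplingRate(...)` tokens shown in the example output:

```python
import re

def egress_sampling_active(cli_output: str) -> dict:
    """Map each Port(...) entry in captured `show switch <switch-name>
    table sflow-sample` output to True when its EgressSamplingRate is
    greater than 0 (i.e. egress sampling is active on that port)."""
    result = {}
    for line in cli_output.splitlines():
        port = re.search(r"Port\(\d+\)", line)
        rate = re.search(r"EgressSamplingRate\((\d+)\)", line)
        if port and rate:
            result[port.group(0)] = int(rate.group(1)) > 0
    return result
```

For the example output above, the helper would report `Port(1)` as active, since its EgressSamplingRate of 10000 is greater than 0.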

Guidelines and Limitations for Enabling Egress sFlow

Consider the following guidelines and limitations when enabling egress sFlow:

  • The Egress sFlow support for the Recorder Nodes (RN) feature requires a configured sFlow collector in a fabric configured to allow sFlows.
  • If a packet enters a switch through a filter interface with sFlow enabled and exits through a port connected to an RN while the feature is enabled, only one sFlow packet (i.e. the ingress sFlow packet) is sent to the collector.
  • The egress sFlow feature does not identify which RN recorded a given packet in a fabric with multiple RNs. This is acceptable in the normal case, because queries are issued to the RNs in aggregate rather than to individual RNs, so knowing that some RN received a packet is sufficient. In some cases it may be possible to determine the RN from the outport of the sFlow packet, but that information is not always available. This is an inherent limitation of egress sFlow.
  • In DMF 8.5.0, the egress sFlow feature, when enabled, captures packets sent to the RN whether or not the RN is actively recording.