Using the DMF Service Node Appliance

This chapter describes how to configure the managed services provided by the DANZ Monitoring Fabric (DMF) Service Node Appliance.

Overview

The DANZ Monitoring Fabric (DMF) Service Node has multiple interfaces connected to traffic for processing and analysis. Each interface can be programmed independently to provide any supported managed-service actions.

To create a managed service, identify a switch interface connected to the service node, specify the service action, and configure the service action options.

Configure a DMF policy to use the managed service by name. This action causes the Controller to forward traffic the policy selects to the service node. The processed traffic is returned to the monitoring fabric using the same interface and sent to the tools (delivery interfaces) defined in the DMF policy.

If the traffic volume the policy selects is too much for a single service node interface, define a LAG on the switch connected to the service node, then use the LAG interface when defining the managed service. All service node interfaces connected to the LAG are configured to perform the same action. The traffic the policy selects is automatically load-balanced among the LAG member interfaces, and the return traffic is distributed the same way.

Changing the Service Node Default Configuration

Configuration settings are automatically downloaded to the service node from the DANZ Monitoring Fabric (DMF) Controller to eliminate the need for box-by-box configuration. However, you can override the default configuration for a service node from the config-service-node submode for any service node.
Note: These options are available only from the CLI and are not included in the DMF GUI.
To change the CLI mode to config-service-node, enter the following command from config mode on the Active DMF controller:
controller-1(config)# service-node <service_node_alias>
controller-1(config-service-node)#

Replace service_node_alias with the alias you want to use for the service node. This alias is associated with the hardware MAC address of the service node using the mac command. The hardware MAC address configuration is mandatory for the service node to interact with the DMF Controller.

Use any of the following commands from the config-service-node submode to override the default configuration for the associated service node:
  • admin password: set the password to log in to the service node as an admin user.
  • banner: set the service node pre-login banner message.
  • description: set a brief description.
  • logging: enable service node logging to the Controller.
  • mac: configure a MAC address for the service node.
  • ntp: configure the service node to override default parameters.
  • snmp-server: configure an SNMP trap host to receive SNMP traps from the service node.
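
The following is a minimal sketch of overriding two of these defaults from the config-service-node submode; the alias, MAC address, and description values are illustrative only, and the exact argument forms are assumptions based on the command list above.
controller-1(config)# service-node SN-EXAMPLE
controller-1(config-service-node)# mac 00:00:00:aa:bb:cc
controller-1(config-service-node)# description "Service node in rack 12"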

Using SNMP to Monitor DPDK Service Node Interfaces

Directly fetch the counters and status of the service node interfaces handling traffic (DPDK interfaces). The following are the supported OIDs.
interfaces MIB: .1.3.6.1.2.1.2
ifMIBObjects MIB: .1.3.6.1.2.1.31.1
Note: A three-digit number between 101 and 116 identifies SNI DPDK (traffic) interfaces.
In the following example, interface sni5 (105) handles data traffic. To fetch the ingress byte count (ifHCInOctets), use the following command:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.6.105
IF-MIB::ifHCInOctets.105 = Counter64: 10008
To fetch the byte count for traffic exiting the service node interface (ifHCOutOctets), enter the following command:
snmpget -v2c -c public 10.106.6.5 .1.3.6.1.2.1.31.1.1.1.10.105
IF-MIB::ifHCOutOctets.105 = Counter64: 42721
To fetch Link Up and Down status, enter the following command:
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.109
IF-MIB::ifOperStatus.109 = INTEGER: down(2)
[root@TestTool anet]# snmpwalk -v2c -c onlonl 10.106.6.6 .1.3.6.1.2.1.2.2.1.8.105
IF-MIB::ifOperStatus.105 = INTEGER: up(1)

Configuring Managed Services

To view, edit, or create DANZ Monitoring Fabric (DMF) managed services, select the Monitoring > Managed Services option.
Figure 1. Managed Services

This page displays the service node appliance devices connected to the DMF Controller and the services configured on the Controller.

Using the GUI to Define a Managed Service

To create a new managed service, complete the following steps:

  1. Click the Provision control (+) in the Managed Services table. The system displays the Create Managed Service dialog, shown in the following figure.
    Figure 2. Create Managed Service: Info
  2. Assign a name to the managed service.
  3. (Optional) Provide a text description of the managed service.
  4. Select the switch and interface providing the service.
    The Show Managed Device Switches Only checkbox, enabled by default, limits the switch selection list to service node appliances. Enable the Show Connected Switches Only checkbox to limit the display to connected switches.
  5. Select the action from the Action selection list, which provides the following options.
    • Application ID
    • Deduplication: Deduplicate selected traffic, including NATted traffic.
    • GTP Correlation
    • Header Strip: Remove bytes from the packet, starting at byte zero up to the selected anchor plus offset
    • Header Strip Cisco Fabric Path Header: Remove the Cisco Fabric Path encapsulation header
    • Header Strip ERSPAN Header: Remove Encapsulated Remote Switch Port Analyzer Encapsulation header
    • Header Strip Geneve Header: Remove Generic Network Virtualization Encapsulation header
    • Header Strip L3 MPLS Header: Remove Layer 3 MPLS encapsulation header
    • Header Strip LISP Header: Remove Locator/ID Separation Protocol (LISP) encapsulation header
    • Header Strip VXLAN Header: Remove Virtual Extensible LAN Encapsulation header
    • IPFIX: Generate IPFIX by selecting matching traffic and forwarding it to specified collectors.
    • Mask: Mask sensitive information as specified by the user in packet fields.
    • NetFlow: Generate a NetFlow by selecting matching traffic and forwarding it to specified collectors.
    • Pattern-Drop: Drop matching traffic.
    • Pattern Match: Forward matching traffic.
    • Session Slice: Slice TCP sessions.
    • Slice: Slice the given number of bytes based on the specified starting point in the packet.
    • TCP Analysis
    • Timestamp: Identify the time that the service node receives the packet.
    • UDP Replication: Copy UDP messages to multiple IP destinations, such as Syslog or NetFlow messages.
  6. (Optional) Identify the starting point for service actions.
    Identify the start point for the deduplication, mask, pattern-match, pattern-drop, or slice services using one of the keywords listed below.
    • packet-start: add the number of bytes specified by the integer value to the first byte in the packet.
    • l3-header-start: add the number of bytes specified by the integer value to the first byte in the Layer 3 header.
    • l4-header-start: add the number of bytes specified by the integer value to the first byte in the layer-4 header.
    • l4-payload-start: add the number of bytes specified by the integer value to the first byte in the layer-4 user data.
    • integer: specify the number of bytes to offset for determining the start location for the service action relative to the specified start keyword.
  7. To assign a managed service to a policy, enable the checkbox on the Managed Services page of the Create Policy or Edit Policy dialog.
  8. Select the backup service from the Backup Service selection list to create a backup service. The backup service is used when the primary service is not available.

Using the CLI to Define a Managed Service

Note: When connecting a LAG interface to the DANZ Monitoring Fabric (DMF) service node appliance, member links should be of the same speed and can span across multiple service nodes. The maximum number of supported member links per LAG interface is 32, which varies based on the switch platform. Please refer to the hardware guide for the exact details of the supported configuration.

To configure a service to direct traffic to a DMF service node, complete the following steps:

  1. Define an identifier for the managed service by entering the following command:
    controller-1(config)# managed-service DEDUPLICATE-1
    controller-1(config-managed-srv)#

    This step enters the config-managed-srv submode, where you can configure a DMF-managed service.

  2. (Optional) Configure a description for the current managed service by entering the following command:
    controller-1(config-managed-srv)# description “managed service for policy DEDUPLICATE-1”
    The following are the commands available from this submode:
    • description: provide a service description
    • post-service-match: select traffic after applying the header strip service
    • Action sequence number in the range [1 - 20000]: identifier of service action
    • service-interface: associate an interface with the service
  3. Use a number in the range [1 - 20000] to identify a service action for a managed service.
    The following summarizes the available service actions. See the subsequent sections for details and examples for specific service actions.
    • dedup {anchor-offset | full-packet | routed-packet}
    • header-strip {l4-header-start | l4-payload-start | packet-start }[offset]
    • decap-cisco-fp {drop}
    • decap-erspan {drop}
    • decap-geneve {drop}
    • decap-l3-mpls {drop}
    • decap-lisp {drop}
    • decap-vxlan {drop}
    • mask {mask/pattern} [{packet-start | l3-header-start | l4-header-start | l4-payload-start} mask/offset] [mask/mask-start mask/mask-end]
    • netflow Delivery_interface Name
    • ipfix Delivery_interface Name
    • udp-replicate Delivery_interface Name
    • tcp-analysis Delivery_interface Name
    Note: The IPFIX, NetFlow, and udp-replicate service actions enable a separate submode for defining one or more specific configurations. One of these services must be the last service applied to the traffic selected by the policy.
    • pattern-drop pattern [{l3-header-start | l4-header-start | packet-start }]
    • pattern-match pattern [{l3-header-start | l4-header-start | packet-start }]
    • slice {packet-start | l3-header-start | l4-header-start | l4-payload-start} integer
    • timestamp
    For example, the following command enables packet deduplication on the routed packet:
    controller-1(config-managed-srv)# 1 dedup routed-packet
  4. Optionally, identify the start point for the mask, pattern-match, pattern-drop, or slice services.
  5. Identify the service interface for the managed service by entering the following command:
    controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 ethernet40
    Use a port channel instead of an interface to increase the bandwidth available to the managed service. The following example enables lag-interface1 for the service interface:
    controller-1(config-managed-srv)# service-interface switch DMF-CORE-SWITCH-1 lag1
  6. Apply the managed service within a policy like any other service, as shown in the following examples for deduplication, NetFlow, pattern matching (forwarding), and packet slicing services.
Note: Multiple DMF policies can use the same managed service, for example, a packet slicing managed service.
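
As an illustration of step 6, the following sketch applies the DEDUPLICATE-1 managed service defined above within a simple policy; the policy name and interface names are assumptions for this example.
! policy
policy DEDUPLICATE-POLICY-1
action forward
filter-interface TAP-INTF-1
delivery-interface TOOL-PORT-1
use-managed-service DEDUPLICATE-1 sequence 1
1 match any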

Monitoring Managed Services

To identify managed services bound to a service node interface and the health status of the respective interface, use the following commands:
controller-1# show managed-service-device <SN-Name> interfaces
controller-1# show managed-service-device <SN-Name> stats

For example, the show managed-service-device <SN-Name> interfaces command lists the managed services handled by each Service Node Interface (SNI).

Note: The show managed-service-device <SN-Name> stats <Managed-service-name> command filters the statistics of a specific managed service.
The Load column reports the health of each service node interface as no, low, moderate, high, or critical load. These indicators appear as green, yellow, and red under DANZ Monitoring Fabric > Managed Services > Devices > Service Stats. They reflect the processor load on the service node interface at that instant and do not show the bandwidth of the respective data port (SNI) handling traffic, as shown in the following sample snapshot of the Service Stats output.
Figure 3. Service Node Interface Load Indicator

Deduplication Action

The DANZ Monitoring Fabric (DMF) Service Node enhances the efficiency of network monitoring tools by eliminating duplicate packets. Duplicate packets can be introduced into the out-of-band monitoring data stream by receiving the same flow from multiple TAP or SPAN ports spread across the production network. Deduplication eliminates these duplicate packets and allows more efficient use of passive monitoring tools.

The DMF Service Node provides three modes of deduplication for different types of duplicate packets.
  • Full packet deduplication: deduplicates incoming packets that are identical at the L2/L3/L4 layers.
  • Routed packet deduplication: as packets traverse an IP network, the MAC address changes from hop to hop. Routed packet deduplication allows the user to match packets from the start of the L3 header onwards.
  • NATted packet deduplication: to perform NATted deduplication, the service node compares packets within the configured window that are identical from the start of the L4 payload onwards. To use NATted packet deduplication, complete the following fields as required:
    • Anchor: Packet Start, L2 Header Start, L3 Header Start, or L3 Payload Start fields.
    • Offset: the number of bytes from the anchor where the deduplication check begins.

The time window in which the service looks for duplicate packets is configurable. Select between 2ms (the default), 4ms, 6ms, and 8ms.

GUI Configuration

Figure 4. Create Managed Service>Action: Deduplication Action

CLI Configuration

Controller-1(config)# show running-config managed-service MS-DEDUP-FULL-PACKET
! managed-service
managed-service MS-DEDUP-FULL-PACKET
description 'This is a service that does Full Packet Deduplication'
1 dedup full-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/1
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-ROUTED-PACKET
! managed-service
managed-service MS-DEDUP-ROUTED-PACKET
description 'This is a service that does Routed Packet Deduplication'
1 dedup routed-packet window 8
service-interface switch CORE-SWITCH-1 ethernet13/2
Controller-1(config)#
Controller-1(config)# show running-config managed-service MS-DEDUP-NATTED-PACKET
! managed-service
managed-service MS-DEDUP-NATTED-PACKET
description 'This is a service that does Natted Packet Deduplication'
1 dedup anchor-offset l4-payload-start 0 window 8
service-interface switch CORE-SWITCH-1 ethernet13/3
Controller-1(config)#
Note: The show managed-service-device <SN-name> stats <dedup-service-name> command also shows the deduplication percentage.

Header Strip Action

This action removes specific headers from the traffic selected by the associated DANZ Monitoring Fabric (DMF) policy. Alternatively, define custom header stripping based on the starting position of the Layer-3 header, the Layer-4 header, the Layer-4 payload, or the first byte in the packet.

The following decap actions are configured separately from the header-strip configuration stanza:
  • decap-erspan: remove the Encapsulated Remote Switch Port Analyzer (ERSPAN) header.
  • decap-cisco-fabric-path: remove the Cisco FabricPath protocol header.
  • decap-l3-mpls: remove the Layer-3 Multi-protocol Label Switching (MPLS) header.
  • decap-lisp: remove the LISP header.
  • decap-vxlan [udp-port vxlan port]: remove the Virtual Extensible LAN (VXLAN) header.
  • decap-geneve: remove the Geneve header.
Note: For the Header Strip and Decap actions, apply post-service rules to select traffic after the original headers are stripped.
To customize the header-strip action, use one of the following keywords to strip up to the specified location in each packet:
  • l3-header-start
  • l4-header-start
  • l4-payload-start
  • packet-start

Enter a positive integer representing the offset from the selected anchor. When the offset is omitted, header stripping starts from the first byte in the packet.
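
As a minimal sketch, the following managed service applies one of the decap actions listed above; it is modeled on the decap-l3-mpls example later in this chapter, and the service and interface names are illustrative.
! managed-service
managed-service MS-DECAP-VXLAN-1
1 decap-vxlan
service-interface switch CORE-SWITCH-1 ethernet13/4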

GUI Configuration

Figure 5. Create Managed Service: Header Strip Action

After assigning the required actions to the header stripping service, click Next or Post-Service Match.

The system displays the Post Service Match page, used in conjunction with the header strip service action.
Figure 6. Create Managed Service: Post Service Match for Header Strip Action

CLI Configuration

The header-strip service action strips the header and replaces it in one of the following ways:

  • Add the original L2 src-mac, and dst-mac.
  • Add the original L2 src-mac, dst-mac, and ether-type.
  • Specify and add a custom src-mac, dst-mac, and ether-type.

The following are examples of custom header stripping:

This example strips the header and replaces it with the original L2 src-mac and dst-mac.
! managed-service
managed-service MS-HEADER-STRIP-1
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac
service-interface switch CORE-SWITCH-1 ethernet13/1
This example adds the original L2 src-mac, dst-mac, and ether-type.
! managed-service
managed-service MS-HEADER-STRIP-2
1 header-strip packet-start 20 add-original-l2-dstmac-srcmac-ethertype
service-interface switch CORE-SWITCH-1 ethernet13/2
This example specifies the addition of a customized src-mac, dst-mac, and ether-type.
! managed-service
managed-service MS-HEADER-STRIP-3
1 header-strip packet-start 20 add-custom-l2-header 00:11:01:02:03:04 00:12:01:02:03:04
0x800
service-interface switch CORE-SWITCH-1 ethernet13/3

Configuring the Post-service Match

The post-service match configuration option enables matching on inner packet fields after the DANZ Monitoring Fabric (DMF) Service Node performs header stripping. This option is applied on the post-service interface after the service node completes the strip service action. Feature benefits include the following:
  • The fabric can remain in L3/L4 mode. It is not necessary to change to offset match mode.
  • Easier configuration.
  • All match conditions are available for the inner packet.
  • The policy requires only one managed service to perform the strip service action.
With this feature enabled, DMF knows exactly where to apply the post-service match. The following is an example of this configuration.
! managed-service
managed-service MS-HEADER-STRIP-4
service-interface switch CORE-SWITCH-1 interface ethernet1
1 decap-l3-mpls
!
post-service-match
1 match ip src-ip 1.1.1.1
2 match tcp dst-ip 2.2.2.0 255.255.255.0
! policy
policy POLICY-1
filter-interface TAP-1
delivery-interface TOOL-1
use-managed-service MS-HEADER-STRIP-4 sequence 1

IPFIX and Netflow Actions

IP Flow Information Export (IPFIX), also known as NetFlow v10, is an IETF standard defined in RFC 7011. The IPFIX generator (agent) gathers and transmits information about flows, which are sets of packets that contain all the keys specified by the IPFIX template. The generator observes the packets received in each flow and forwards the information to the IPFIX collector (server) in the form of a flowset.

Starting with the DANZ Monitoring Fabric (DMF) 7.1.0 release, NetFlow v9 (Cisco proprietary) and IPFIX/NetFlow v10 are both supported. Configuration of the IPFIX managed service is similar to the configuration for earlier versions of NetFlow, except for the UDP port definition. NetFlow v5 collectors typically listen on UDP port 2055, while IPFIX collectors listen on UDP port 4739.

NetFlow records are typically exported using User Datagram Protocol (UDP) and collected using a flow collector. For a NetFlow service, the service node takes incoming traffic and generates NetFlow records. The service node drops the original packets, and the generated flow records, containing metadata about each flow, are forwarded out of the service node interface.

IPFIX Template

The IPFIX template consists of the key element IDs that identify an IP flow, the field element IDs that define the values the exporter records for flows matching those keys, a template ID number for uniqueness, collector information, and eviction timers.

To define a template, configure keys of interest representing the IP flow and fields that identify the values measured by the exporter, the exporter information, and the eviction timers. To define the template, select the Monitoring > Managed Service > IPFIX Template option from the DANZ Monitoring Fabric (DMF) GUI or enter the ipfix-template <template-name> command in config mode, replacing template-name with a unique identifier for the template instance.

IPFIX Keys

Use an IPFIX key to specify the characteristics of the traffic to monitor, such as source and destination MAC or IP address, VLAN ID, Layer-4 port number, and QoS marking. The generator includes flows in a flowset having all the attributes specified by the keys in the applied template. The flowset is updated only for packets that have all the specified attributes. If a single key is missing, the packet is ignored. To see a listing of the keys supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, select the Monitoring > Managed Service > IPFIX Template option from the DMF GUI or type help key in config-ipfix-template submode. The following are the keys supported in the current release:
  • destination-ipv4-address
  • destination-ipv6-address
  • destination-mac-address
  • destination-transport-port
  • dot1q-priority
  • dot1q-vlan-id
  • ethernet-type
  • icmp-type-code-ipv4
  • icmp-type-code-ipv6
  • ip-class-of-service
  • ip-diff-serv-code-point
  • ip-protocol-identifier
  • ip-ttl
  • ip-version
  • policy-vlan-id
  • records-per-dmf-interface
  • source-ipv4-address
  • source-ipv6-address
  • source-mac-address
  • source-transport-port
  • vlan id
Note: The policy-vlan-id and records-per-dmf-interface keys are Arista Proprietary Flow elements. The policy-vlan-id key helps to query per-policy flow information at Arista Analytics-node (Collector) in push-per-policy deployment mode. The records-per-dmf-interface key helps to identify filter interfaces tapping the traffic. The following limitations apply at the time of IPFIX template creation:
  • The Controller will not allow the key combination of source-mac-address and records-per-dmf-interface in push-per-policy mode.
  • The Controller will not allow the key combinations of policy-vlan-id and records-per-dmf-interface in push-per-filter mode.
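
For illustration, the following sketch shows an IPFIX template that adds the policy-vlan-id proprietary key to standard keys and fields; the template name and ID are arbitrary, and the key combination assumes a push-per-policy deployment as described above.
! ipfix-template
ipfix-template TEMPLATE-PER-POLICY
template-id 1001
key source-ipv4-address
key destination-ipv4-address
key policy-vlan-id
field packet-delta-count
field octet-delta-count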

IPFIX Fields

A field defines each value updated for the packets the generator receives that match the specified keys. For example, include fields in the template to record the number of packets, the largest and smallest packet sizes, or the start and end times of the flows. To see a listing of the fields supported in the current release of the DANZ Monitoring Fabric (DMF) Service Node, select the Monitoring > Managed Service > IPFIX Template option from the DMF GUI, or type help in config-ipfix-template submode. The following are the fields supported:

  • flow-end-milliseconds
  • flow-end-reason
  • flow-end-seconds
  • flow-start-milliseconds
  • flow-start-seconds
  • maximum-ip-total-length
  • maximum-layer2-total-length
  • maximum-ttl
  • minimum-ip-total-length
  • minimum-layer2-total-length
  • minimum-ttl
  • octet-delta-count
  • packet-delta-count
  • tcp-control-bits

Active and Inactive Timers

After the number of minutes specified by the active timer, the flow set is closed and forwarded to the IPFIX collector. The default active timer is one minute. During the number of seconds set by the inactive timer, if no packets that match the flow definition are received, the flow set is closed and forwarded without waiting for the active timer to expire. The default value for the inactive time is 15 seconds.

Example Flowset

The following is a Wireshark view of an IPFIX flowset.
Figure 7. Example IPFIX Flowset in Wireshark

The following is a running-config that shows the IPFIX template used to generate this flowset.

Example IPFIX Template

! ipfix-template
ipfix-template Perf-temp
template-id 22222
key destination-ipv4-address
key destination-transport-port
key dot1q-vlan-id
key source-ipv4-address
key source-transport-port
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field maximum-ttl
field minimum-ttl
field packet-delta-count

Using the GUI to Define an IPFIX Template

To define an IPFIX template, complete the following steps:
  1. Select the Monitoring > Managed Services option.
  2. On the DMF Managed Services page, select IPFIX Templates.
    The system displays the IPFIX Templates section.
    Figure 8. IPFIX Templates
  3. To create a new template, click the provision (+) icon in the IPFIX Templates section.
    Figure 9. Create IPFIX Template
  4. To add an IPFIX key to the template, click the Settings control in the Keys section. The system displays the following dialog.
    Figure 10. Select IPFIX Keys
  5. Enable each checkbox for the keys you want to add to the template and click Select.
  6. To add an IPFIX field to the template, click the Settings control in the Fields section. The system displays the following dialog:
    Figure 11. Select IPFIX Fields
  7. Enable the checkbox for each field you want to add to the template and click Select.
  8. On the Create IPFIX Template page, click Save.
The new template is added to the IPFIX Templates table, with each key and field listed in the appropriate column. You can now use this customized template to apply when defining an IPFIX-managed service.

Using the CLI to Define an IPFIX Template

  1. Create an IPFIX template.
    controller-1(config)# ipfix-template IPFIX-IP
    controller-1(config-ipfix-template)#

    This changes the CLI prompt to the config-ipfix-template submode.

  2. Define the keys to use for the current template, using the following command:

    [ no ] key { ethernet-type | source-mac-address | destination-mac-address | dot1q-vlan-id | dot1q-priority | ip-version | ip-protocol-identifier | ip-class-of-service | ip-diff-serv-code-point | ip-ttl | source-ipv4-address | destination-ipv4-address | icmp-type-code-ipv4 | source-ipv6-address | destination-ipv6-address | icmp-type-code-ipv6 | source-transport-port | destination-transport-port }

    The keys specify the attributes of the flows to be included in the flowset measurements.

  3. Define the fields to use for the current template, using the following command:
    [ no ] field { packet-delta-count | octet-delta-count | minimum-ip-total-length | maximum-ip-total-length | flow-start-seconds | flow-end-seconds | flow-end-reason | flow-start-milliseconds | flow-end-milliseconds | minimum-layer2-total-length | maximum-layer2-total-length | minimum-ttl | maximum-ttl }

    The fields specify the measurements to be included in the flowset.

Use the template when defining the IPFIX action.
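
Putting these steps together, a configuration session might look like the following sketch; the template name, template ID, and the chosen keys and fields are illustrative only.
controller-1(config)# ipfix-template IPFIX-IP
controller-1(config-ipfix-template)# template-id 1974
controller-1(config-ipfix-template)# key source-ipv4-address
controller-1(config-ipfix-template)# key destination-ipv4-address
controller-1(config-ipfix-template)# field packet-delta-count
controller-1(config-ipfix-template)# field flow-end-reason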

Using the GUI to Define an IPFIX Service Action

Select IPFIX from the Action selection list on the Create Managed Service > Action page.

Figure 12. Selecting IPFIX Action in Create Managed Service
Complete the following required configuration:
  • Assign a delivery interface.
  • Configure the collector IP address.
  • Identify the IPFIX template.
The following configuration is optional:
  • Inactive timeout: the interval of inactivity after which a flow is marked inactive and its flowset is exported.
  • Active timeout: the maximum interval a flowset for an active flow remains open before it is exported.
  • Source IP: source address to use for the IPFIX flowsets.
  • UDP port: UDP port to use for sending IPFIX flowsets.
  • MTU: MTU to use for sending IPFIX flowsets.

After completing the configuration, click Next, and then click Save.

Using the CLI to Define an IPFIX Service Action

Define a managed service and define the IPFIX action.
controller(config)# managed-service MS-IPFIX-SERVICE
controller(config-managed-srv)# 1 ipfix TO-DELIVERY-INTERFACE
controller(config-managed-srv-ipfix)# collector 10.106.1.60
controller(config-managed-srv-ipfix)# template IPFIX-TEMPLATE

The active-timeout and inactive-timeout commands are optional.
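
For example, a sketch of overriding the timers from the same submode might look like the following; the argument forms are assumptions, with values given in the units described earlier (minutes for the active timer, seconds for the inactive timer).
controller(config-managed-srv-ipfix)# active-timeout 5
controller(config-managed-srv-ipfix)# inactive-timeout 30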

To view the running-config for a managed service using the IPFIX action, enter the following command:
controller1# show running-config managed-service MS-IPFIX-ACTIVE
! managed-service
managed-service MS-IPFIX-ACTIVE
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 ipfix TO-DELIVERY-INTERFACE
collector 10.106.1.60
template IPFIX-TEMPLATE
To view the IPFIX templates, enter the following command:
config# show running-config ipfix-template
! ipfix-template
ipfix-template IPFIX-IP
template-id 1974
key destination-ipv4-address
key destination-ipv6-address
key ethernet-type
key source-ipv4-address
key source-ipv6-address
field flow-end-milliseconds
field flow-end-reason
field flow-start-milliseconds
field minimum-ttl
field tcp-control-bits
------------------------output truncated------------------------

Packet-masking Action

The packet-masking action can hide specific characters in a packet, such as a password or credit card number, based on offsets from different anchors or by matching characters using regular expressions (regex).

The mask service action applies the specified mask to the matched packet region.

GUI Configuration

Figure 13. Create Managed Service: Packet Masking

CLI Configuration

Controller-1(config)# show running-config managed-service MS-PACKET-MASK
! managed-service
managed-service MS-PACKET-MASK
description "This service masks pattern matching an email address in payload with X"
1 mask ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1
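
In addition to regex-based masking, the mask action accepts an anchor and offset, as shown in the command synopsis earlier in this chapter. The following is a minimal sketch assuming that form; the service name, pattern, anchor, and offset are illustrative.
! managed-service
managed-service MS-PACKET-MASK-OFFSET
description "This service masks digit sequences found at the start of the L4 payload"
1 mask ([0-9]+) l4-payload-start 0
service-interface switch CORE-SWITCH-1 ethernet13/2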

Arista Analytics Node Capability

Arista Analytics Node capabilities are enhanced to handle NetFlow v5/v9 and IPFIX packets. All of this flow data is represented under the NetFlow index.

Note: NetFlow flow record generation is enhanced for selecting VXLAN traffic. For VXLAN traffic, flow processing is based on the inner headers, with the VNI included as part of the key for flow lookup because IP addresses can overlap between VNIs.
Figure 14. NetFlow Managed Service

NetFlow records are exported using User Datagram Protocol (UDP) to one or more specified NetFlow collectors. Use the DMF Service Node to configure the NetFlow collector IP address and the destination UDP port. The default UDP port is 2055.

Note: No other service action, except the UDP replication service, can be applied after a NetFlow service action because part of the NetFlow action is to drop the packets.

Configuring the Arista Analytics Node Using the GUI

From the Arista Analytics Node dashboard, apply filter rules to display specific flow information.

The following are the options available on this page:
  • Delivery interface: interface to use for delivering NetFlow records to collectors.
    Note: The next-hop address must be resolved for the service to be active.
  • Collector IP: identify the IP address of the NetFlow collector.
  • Inactive timeout: use the inactive-timeout command to configure the interval of inactivity before NetFlow times out. The default is 15 seconds.
  • Source IP: specify a source IP address to use as the source of the NetFlow packets.
  • Active timeout: use the active timeout to configure how long a NetFlow record can be generated continuously before it is automatically terminated. The default is one minute.
  • UDP port: change the UDP port number used for the NetFlow packets. The default is 2055.
  • Flows: specify the maximum number of NetFlow packets allowed. The allowed range is 32768 to 1048576. The default is 262144.
  • Per-interface records: identify the filter interface where the NetFlow packets were originally received. This information can be used to identify the hop-by-hop path from the filter interface to the NetFlow collector.
  • MTU: change the Maximum Transmission Unit (MTU) used for NetFlow packets.
Figure 15. Create Managed Service: NetFlow Action

Configuring the Arista Analytics Node Using the CLI

Use the show managed-services command to display the ARP resolution status.
Note: The DANZ Monitoring Fabric (DMF) Controller resolves ARP messages for each NetFlow collector IP address on the delivery interface that matches the defined subnet. The subnets defined on the delivery interfaces cannot overlap and must be unique for each delivery interface.

Enter the 1 netflow command and identify the configuration name. The submode changes to config-managed-srv-netflow, where you can view and configure a specific NetFlow configuration.

The DMF Service Node replicates NetFlow packets received without changing the source IP address. Packets that do not match the specified destination IP address and packets that are not IPv4 or UDP are passed through. To configure a NetFlow-managed service, complete the following steps:

  1. Configure the IP address on the delivery interface.
    This IP address should be the next-hop IP address from the DANZ Monitoring Fabric towards the NetFlow collector.
    CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
    CONTROLLER-1(config-switch)# interface ethernet1
    CONTROLLER-1(config-switch-if)# role delivery interface-name NETFLOW-DELIVERY-PORT ip-address 172.43.75.1 nexthop-ip 172.43.75.2 255.255.255.252
  2. Configure the rate-limit for the NetFlow delivery interface.
    CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
    CONTROLLER-1(config-switch)# interface ethernet1
    CONTROLLER-1(config-switch-if)# role delivery interface-name NETFLOW-DELIVERY-PORT ip-address 172.43.75.1 nexthop-ip 172.43.75.2 255.255.255.252
    CONTROLLER-1(config-switch-if)# rate-limit 256000
    Note: The rate limit must be configured when enabling NetFlow. When upgrading from a version of DMF before release 6.3.1, the NetFlow configuration is not applied until a rate limit is applied to the delivery interface.
  3. Configure the NetFlow managed service using the 1 netflow command followed by an identifier for the specific NetFlow configuration.
    
    CONTROLLER-1(config)# managed-service MS-NETFLOW-SERVICE
    CONTROLLER-1(config-managed-srv)# 1 netflow NETFLOW-DELIVERY-PORT
    CONTROLLER-1(config-managed-srv-netflow)#
    The following commands are available in this submode:
    • active-timeout: configure the maximum length of time the NetFlow is transmitted before it is ended (in minutes).
    • collector: configure the collector IP address, and change the UDP port number or the MTU.
    • inactive-timeout: configure the length of time that the NetFlow is inactive before it is ended (in seconds).
    • max-flows: configure the maximum number of flows managed.

    Optionally, limit the number of flows or change the timeouts using the max-flows, active-timeout, or inactive-timeout commands.

  4. Configure the IP address of the NetFlow collector using the following command:
    collector <ipv4-address> [udp-port <integer>] [mtu <integer>] [records-per-interface]
    

    The IP address, in IPV4 dotted-decimal notation, is required. The MTU and UDP port are required when changing these parameters from the defaults. Enable the records-per-interface option to allow identification of the filter interfaces from which the Netflow originated. Configure the Arista Analytics Node to display this information, as described in the DMF User Guide.

    The following is an example of changing the NetFlow UDP port to 9991.
    collector 10.181.19.31 udp-port 9991
    Note: The IP address must be in the same subnet as the configured next hop and unique. It cannot be the same as the Controller, service node, or any monitoring fabric switch IP address.
  5. Configure the DMF policy with the forward action and add the managed service to the policy.
    Note: A DMF policy does not require any configuration related to a delivery interface for NetFlow policies because the DMF Controller automatically assigns the delivery interface.
    The example below shows the configuration required to implement two NetFlow service instances (MS-NETFLOW-1 and MS-NETFLOW-2).
    ! switch
    switch DMF-DELIVERY-SWITCH-1
    !
    interface ethernet1
    role delivery interface-name NETFLOW-DELIVERY-PORT-1 ip-address 10.3.1.1
    nexthop-ip 10.3.1.2 255.255.255.0
    interface ethernet2
    role delivery interface-name NETFLOW-DELIVERY-PORT-2 ip-address 10.3.2.1
    nexthop-ip 10.3.2.2 255.255.255.0
    ! managed-service
    managed-service MS-NETFLOW-1
    service-interface switch DMF-CORE-SWITCH-1 interface ethernet11/1
    !
    1 netflow NETFLOW-DELIVERY-PORT-1
    collector-ip 10.106.1.60 udp-port 2055 mtu 1024
    managed-service MS-NETFLOW-2
    service-interface switch DMF-CORE-SWITCH-2 interface ethernet12/1
    !
    1 netflow NETFLOW-DELIVERY-PORT-2
    collector-ip 10.106.2.60 udp-port 2055 mtu 1024
    ! policy
    policy GENERATE-NETFLOW-1
    action forward
    filter-interface TAP-INTF-DC1-1
    filter-interface TAP-INTF-DC1-2
    use-managed-service MS-NETFLOW-1 sequence 1
    1 match any
    policy GENERATE-NETFLOW-2
    action forward
    filter-interface TAP-INTF-DC2-1
    filter-interface TAP-INTF-DC2-2
    use-managed-service MS-NETFLOW-2 sequence 1
    1 match any

Pattern-drop Action

The pattern-drop service action drops matching traffic.

Pattern matching allows content-based filtering beyond Layer-2, Layer-3, or Layer-4 Headers. This functionality allows filtering on the following packet fields and values:
  • URLs and user agents in the HTTP header
  • patterns in BitTorrent packets
  • encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
  • subscriber device IP (user-endpoint IP)

Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.

Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.

GUI Configuration

Figure 16. Create Managed Service: Pattern Drop Action

CLI Configuration

Controller-1(config)# show running-config managed-service MS-PACKET-MASK
! managed-service
managed-service MS-PACKET-MASK
description "This service drops traffic that has an email address in its payload"
1 pattern-drop ([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+.[a-zA-Z0-9_-]+)
service-interface switch CORE-SWITCH-1 ethernet13/1

Pattern-match Action

The pattern-match service action forwards matching traffic and is otherwise similar to the pattern-drop service action.

Pattern matching allows content-based filtering beyond Layer-2, Layer-3, or Layer-4 Headers. This functionality allows filtering on the following packet fields and values:
  • URLs and user agents in the HTTP header
  • patterns in BitTorrent packets
  • encapsulation headers for specific parameters, including GTP, VXLAN, and VN-Tag
  • subscriber device IP (user-endpoint IP)

Pattern matching allows Session-aware Adaptive Packet Filtering (SAPF) to identify HTTPS transactions on non-standard SSL ports. It can filter custom applications and separate control traffic from user data traffic.

Pattern matching is also helpful in enforcing IT policies, such as identifying hosts using unsupported operating systems or dropping unsupported traffic. For example, the Windows OS version can be identified and filtered based on the user-agent field in the HTTP header. The user-agent field may appear at variable offsets, so a regular expression search is used to identify the specified value wherever it occurs in the packet.

GUI Configuration

Figure 17. Create Managed Service: Pattern Match Action

CLI Configuration

Use the pattern-match pattern keyword to enable the pattern-matching service action. Specify the pattern to match; packets matching the pattern are forwarded.

The following example matches traffic with the string Windows NT 5.(0-1) anywhere in the packet and delivers the packets to the delivery interface TOOL-PORT-TO-WIRESHARK-1. This service is optional and is applied to TCP traffic to destination port 80.
! managed-service
managed-service MS-PATTERN-MATCH
description 'regular expression filtering'
1 pattern-match 'Windows\\sNT\\s5\\.[0-1]'
service-interface switch CORE-SWITCH-1 ethernet13/1
! policy
policy PATTERN-MATCH
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'match regular expression pattern'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-PATTERN-MATCH sequence 1 optional
1 match tcp dst-port 80

Slice Action

The slice service action truncates each packet to the given number of bytes, based on the specified starting point in the packet. Packet slicing reduces packet size to increase processing and monitoring throughput. Passive monitoring tools process fewer bits while maintaining each packet's vital, relevant portions. Packet slicing can significantly increase the capacity of forensic recording tools. Apply packet slicing by specifying the number of bytes to forward based on an offset from the following locations in the packet:
  • Packet start
  • L3 header start
  • L4 header start
  • L4 payload start

GUI Configuration

Figure 18. Create Managed Service: Slice Action

This page allows inserting an additional header containing the original packet length.

CLI Configuration

Use the slice keyword to enable the packet slicing service action and insert an additional header containing the original packet length, as shown in the following example:
! managed-service
managed-service my-service-name
1 slice l3-header-start 20 insert-original-packet-length
service-interface switch DMF-CORE-SWITCH-1 ethernet20/1
The following example truncates the packet from the first byte of the Layer-4 payload, removing the payload while preserving the original headers. The service is optional and is applied to all TCP traffic from port 80 with the destination IP address 10.2.19.119.
! managed-service
managed-service MS-SLICE-1
description 'slicing service'
1 slice l4-payload-start 1
service-interface switch DMF-CORE-SWITCH-1 ethernet40/1
! policy
policy slicing-policy
action forward
delivery-interface TOOL-PORT-TO-WIRESHARK-1
description 'remove payload'
filter-interface TAP-INTF-FROM-PRODUCTION
priority 100
use-managed-service MS-SLICE-1 sequence 1 optional
1 match tcp dst-ip 10.2.19.119 255.255.255.255 src-port 80

Packet Slicing on the 7280 Switch

This feature removes unwanted or unneeded bytes from a packet at a configurable byte position (offset). This approach is beneficial when the data of interest is situated within the headers or early in the packet payload. This action reduces the volume of the monitoring stream, particularly in cases where payload data is not necessary.

Another use case for packet slicing (slice action) is removing payload data so that the captured traffic meets compliance requirements.

Within the DANZ Monitoring Fabric (DMF), two types of slice-managed services (packet slicing services) now exist. These types are distinguished based on whether the service is installed on a service node or on an interface of a supported switch. The scope of this document is limited to the slice-managed service configured on a switch. The managed service interface is the switch interface used to configure this service.

All DMF 8.4 compatible 7280 switches support this feature. Use the show switch all property command to check which switch in DMF fabric supports this feature. The feature is supported if the Min Truncate Offset and Max Truncate Offset properties have a non-zero value.

# show switch all property
# Switch Min Truncate Offset ... Max Truncate Offset
-|------|--------------------| ... |-------------------|
1 7280   100                  ...   9236
2 core1                       ...
Note: The CLI output example above is truncated for illustrative purposes. The actual output will differ.

Using the CLI to Configure Packet Slicing - 7280 Switch

Configure a slice-managed service on a switch using the following steps.
  1. Create a managed service using the managed-service service name command.
  2. Add the slice action with the packet-start anchor and an offset value within the supported range reported by the show switch all property command.
  3. Configure the service interface under the config-managed-srv submode using the service-interface switch switch-name interface-name command as shown in the following example.
    > enable
    # config
    (config)# managed-service slice-action-7280-J2-J2C
    (config-managed-srv)# 1 slice packet-start 101
    (config-managed-srv)# service-interface switch 7280-J2-J2C Ethernet10/1

This feature requires the service interface to be in MAC loopback mode.

  1. To set the service interface in MAC loopback mode, navigate to the config-switch-if submode and configure using the loopback-mode mac command, as shown in the following example.
    (config)# switch 7280-J2-J2C
    (config-switch)# interface Ethernet10/1
    (config-switch-if)# loopback-mode mac

Once a managed service for slice action exists, any policy can use it.

  1. Enter the config-policy submode, and chain the managed service using the use-managed-service service-name sequence sequence-number command.
    (config)# policy timestamping-policy
    (config-policy)# use-managed-service slice-action-7280-J2-J2C sequence 1

Key points to consider while configuring the slice action on a supported switch:

  1. Only the packet-start anchor is supported.
  2. The offset should be within the Min/Max truncate size bounds reported by the show switch all property command. If the configured value is outside these bounds, the closest bound is used.

    For example, if a user configures the offset as 64, and the min truncate offset reported by switch properties is 100, then the offset used is 100. If the configured offset is 10,000 and the max truncate offset reported by the switch properties is 9236, then the offset used is 9236.

  3. A configured offset for slice-managed service includes FCS when programmed on a switch interface, which means an offset of 100 will result in a packet size of 96 bytes (accounting for 4-byte FCS).
  4. Configuring an offset below 17 is not allowed.
  5. The same service interface cannot chain multiple managed services.
  6. The insert-original-packet-length option is not applicable for switch-based slice-managed service.

CLI Show Commands

Use the show policy policy name command to see the runtime state of a policy using the slice-managed service. The command shows the service interface information and stats.

Controller# show policy packet-slicing-policy
Policy Name: packet-slicing-policy
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 0
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 1
Post Match Filter Traffic: -
Total Delivery Rate: -
Total Pre Service Rate : -
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: packet-slicing-7280
Installed Time : 2023-08-09 19:00:40 UTC
Installed Duration : 1 hour, 17 minutes
~ Match Rules ~
# Rule
-|-----------|
1 1 match any

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 f1     7280   Ethernet2/1 up    rx  0       0     0        -        2023-08-09 19:00:40.305000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch IF Name State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|------|------|-----------|-----|---|-------|-----|--------|--------|------------------------------|
1 d1     7280   Ethernet3/1 up    tx  0       0     0        -        2023-08-09 19:00:40.306000 UTC

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name        Role Switch IF Name      State Dir Packets Bytes Pkt Rate Bit Rate Counter Reset Time
-|-------------------|----|------|------------|-----|---|-------|-----|--------|--------|------------------------------|
1 packet-slicing-7280 pre  7280   Ethernet10/1 up    tx  0       0     0        -        2023-08-09 19:00:40.305000 UTC
2 packet-slicing-7280 post 7280   Ethernet10/1 up    rx  0       0     0        -        2023-08-09 19:00:40.306000 UTC

~ Core Interface(s) ~
None.

~ Failed Path(s) ~
None.

Use the show managed-services command to view the status of all the managed services, including the packet-slicing managed service on a switch.

Controller# show managed-services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed-services ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Switch Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW
-|-------------------|------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 packet-slicing-7280 7280   Ethernet10/1     True      400Gbps             400Gbps            80bps                 80bps

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Actions of Service Names ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service Name        Sequence Service Action Slice Anchor Insert original packet length Slice Offset
-|-------------------|--------|--------------|------------|-----------------------------|------------|
1 packet-slicing-7280 1        slice          packet-start False                         101

Using the GUI to Configure Packet Slicing - 7280 Switch

Perform the following steps to configure or edit a managed service.

Managed Service Configuration

  1. To configure or edit a managed service, navigate to the DMF Managed Services page from the Monitoring menu and click Managed Services.
    Figure 19. DANZ Monitoring Fabric (DMF) Managed Services
    Figure 20. DMF Managed Services Add Managed Service
  2. Configure a managed service interface on a switch that supports packet slicing. Make sure to deselect the Show Managed Device Switches Only checkbox.
    Figure 21. Create Managed Service
  3. Configure a new managed service action using Add Managed service action. The action chain supports only one action when configuring packet slicing on a switch.
    Figure 22. Add Managed service action
  4. Use Action > Slice with Anchor > Packet Start to configure the packet slicing managed service on a switch.
    Figure 23. Configure Managed Service Action
  5. Click Append to continue. The slice action appears on the Managed Services page.
    Figure 24. Slice Action Added

Interface Loopback Configuration

The managed service interface used for slice action must be in MAC loopback mode.

  1. Configure the loopback mode in the Fabric > Interfaces page by clicking on the configuration icon of the interface.
    Figure 25. Interfaces
    Note: The image above has been edited for documentation purposes. The actual output will differ.
  2. Enable the toggle for MAC Loopback Mode (set the toggle to Yes).
    Figure 26. Edit Interface
  3. After making all configuration changes, click Save.

Policy Configuration

  1. Create a new policy from the DMF Policies page.
    Figure 27. DMF Policies Page
  2. Add the previously configured packet slicing managed service.
    Figure 28. Create Policy
  3. Select Add Service under the + Add Service(s) option shown above.
    Figure 29. Add Service
    Figure 30. Service Type - Service - slice action
  4. Click Add 1 Service and the slice-managed service (packet-slicing-policy) appears in the Create Policy page.
    Figure 31. Manage Service Added
  5. Click Create Policy and the new policy appears in DMF Policies.
    Figure 32. DMF Policy Configured
    Note: The images above have been edited for documentation purposes. The actual outputs may differ.

Troubleshooting Packet Slicing

The show switch all property command provides the upper and lower bounds of the packet slicing action's offset. If bounds are present, the feature is supported; otherwise, the switch does not support the packet slicing feature.

The show fabric errors managed-service-error command provides information when DANZ Monitoring Fabric (DMF) fails to install a configured packet slicing managed service on a switch.

The following are some of the failure cases:
  1. The managed service interface is down.
  2. More than one action is configured on a managed service interface of the switch.
  3. The managed service interface on a switch is neither a physical interface nor a LAG port.
  4. A non-slice managed service is configured on a managed service interface of a switch.
  5. The switch does not support packet slicing managed service, and its interface is configured with slice action.
  6. Slice action configured on a switch interface is not using a packet-start anchor.
  7. The managed service interface is not in MAC loopback mode.

Use the following commands to troubleshoot packet-slicing issues.

Controller# show fabric errors managed-service-error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Managed Service related error~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Error Service Name
-|---------------------------------------------------------------------------------------------------------------------------------------------|-------------------|
1 Pre-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down  packet-slicing-7280
2 Post-service interface 7280-Ethernet10/1-to-managed-service on switch 7280 is inactive; Service interface Ethernet10/1 on switch 7280 is down packet-slicing-7280

The show switch switch name interface interface name dmf-stats command provides Rx and Tx rate information for the managed service interface.

Controller# show switch 7280 interface Ethernet10/1 dmf-stats
# Switch DPID Name State Rx Rate Pkt Rate Peak Rate Peak Pkt Rate TX Rate Pkt Rate Peak Rate Peak Pkt Rate Pkt Drop Rate
-|-----------|------------|-----|-------|--------|---------|-------------|-------|--------|---------|-------------|-------------|
1 7280        Ethernet10/1 down  -       0        128bps    0             -       0        128bps    0             0

The show switch switch name interface interface name stats command provides Rx and Tx counter information for the managed service interface.

Controller# show switch 7280 interface Ethernet10/1 stats
# Name Rx Pkts Rx Bytes Rx Drop Tx Pkts Tx Bytes Tx Drop
-|------------|-------|--------|-------|-------|--------|-------|
1 Ethernet10/1 22843477 0 5140845937 0

Considerations

  1. Managed service action chaining is not supported when using a switch interface as a managed service interface.
  2. When configured for a supported switch, the managed service interface for slice action can only be a physical interface or a LAG.
  3. When using packet slicing managed service, packets ingressing on the managed service interface are not counted in the ingress interface counters, affecting the output of the show switch switch name interface interface name stats and show switch switch name interface interface name dmf-stats commands. This issue does not impact byte counters; all byte counters will show the original packet size, not the truncated size.

Session-slice Action

The session-slice action tracks the state of a TCP session (distinguished by its source IP address, source port, destination IP address, and destination port) and counts the number of packets sent in both directions (client-to-server and server-to-client). After recognizing the session, the action transmits a user-configured number of packets to the tool node.

A session is usually identified by tracking the packets of the three-way TCP handshake that establishes the session. However, observing the three-way handshake is unnecessary since DANZ Monitoring Fabric (DMF) creates a new session for any TCP packet that does not match an existing session.

Once a TCP session has been recognized and the session-slice action is applied, the service node tracks packets in both directions and drops them after the counts in both directions meet a threshold configured by the user on the Controller.

Note: The count of packets in one direction may exceed the user-configured threshold if fewer packets have arrived in the other direction. Counts in both directions must be greater than or equal to the threshold before packets are dropped. For example, with a threshold of 10, if 15 client-to-server packets have arrived but only 8 server-to-client packets, the session continues to be forwarded until the server-to-client count also reaches 10.

A maximum of 512K IPv4 and 512K IPv6 sessions can be tracked and sliced simultaneously per service-node interface.

GUI Configuration

Figure 33. Create Managed Service: Slice Action

This page provides the option to configure the number of packets the service node forwards for each TCP session before dropping the remaining packets.

CLI Configuration

Use the session-slice keyword to enable the TCP session-slice service action.

The following example configures the service node to recognize a new TCP session and forward its first 10 packets in each direction; once both directions reach this count, additional packets from the session are dropped.
! managed-service
managed-service SESSION-SLICE
!
1 session-slice
slice-after 10
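For reference, a complete version of this managed service also includes a service interface. The following sketch reuses the slice-after value above; the switch and interface names are illustrative only:
! managed-service
managed-service SESSION-SLICE
service-interface switch CORE-SWITCH-1 ethernet15/3
!
1 session-slice
slice-after 10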

Timestamp Action

The timestamp service action timestamps every packet of matching traffic with the time at which the service node receives it.

GUI Configuration

Figure 34. Create Managed Service: Timestamp Action

CLI Configuration

! managed-service
managed-service MS-TIMESTAMP-1
1 timestamp
service-interface switch CORE-SWITCH-1 ethernet15/3

UDP-replication Action

The UDP-replication service action copies UDP messages, such as Syslog or NetFlow messages, and sends the copied packets to a new destination IP address.

Configure a rate limit when enabling UDP replication. When upgrading from a version of DANZ Monitoring Fabric (DMF) before release 6.3.1, the UDP-replication configuration is not applied until a rate limit is applied to the delivery interface.

The following is an example of applying a rate limit to a delivery interface used for UDP replication:
CONTROLLER-1(config)# switch DMF-DELIVERY-SWITCH-1
CONTROLLER-1(config-switch)# interface ethernet1
CONTROLLER-1(config-switch-if)# role delivery interface-name udp-delivery-1
CONTROLLER-1(config-switch-if)# rate-limit 256000
Note: No other service action can be applied after a UDP-replication service action.

GUI Configuration

Use the UDP-replication service to copy UDP traffic, such as Syslog messages or NetFlow packets, and send the copied packets to a new destination IP address. This function sends traffic to more destination syslog servers or NetFlow collectors than would otherwise be allowed.

Enable the checkbox for the destination for the copied output, or click the provision control (+) and add the IP address in the dialog that appears.
Figure 35. Configure Output Packet Destination IP

For the header-strip service action only, configure the policy rules for matching traffic after applying the header-strip service action. After completing pages 1-4, click Append and enable the checkbox to apply the policy.

Click Save to save the managed service.

CLI Configuration

Enter the 1 udp-replicate command followed by a configuration name. The submode changes to config-managed-srv-udp-replicate, where you can view and configure the specific UDP-replication configuration.
controller-1(config)# managed-service MS-UDP-REPLICATE-1
controller-1(config-managed-srv)# 1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
controller-1(config-managed-srv-udp-replicate)#
From this submode, define the destination address of the packets to copy and the destination address for sending the copied packets.
controller-1(config-managed-srv-udp-replicate)# in-dst-ip 10.1.1.1
controller-1(config-managed-srv-udp-replicate)# out-dst-ip 10.1.2.1
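Putting it together, the resulting running configuration looks similar to the following sketch; the switch name, interface, and IP addresses are illustrative only:
! managed-service
managed-service MS-UDP-REPLICATE-1
service-interface switch CORE-SWITCH-1 ethernet15/5
!
1 udp-replicate DELIVERY-INTF-TO-COLLECTOR
in-dst-ip 10.1.1.1
out-dst-ip 10.1.2.1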

Redundancy of Managed Services in Same DMF Policy

In this method, users can use a second managed service as a backup service in the same DANZ Monitoring Fabric (DMF) policy. The backup service is activated only when the primary service becomes unavailable. The backup service can be on the same service node or core switch or a different service node and core switch.
Note: Transitioning from the active to the backup managed service requires reprogramming switches and associated managed appliances. Although this reprogramming is performed automatically, it results in a slight traffic loss.

Using the GUI to Configure a Backup Managed Service

To assign a managed service as a backup service in a DANZ Monitoring Fabric (DMF) policy, complete the following steps:
  1. Select Monitoring > Policies and click the Provision control (+) to create a new policy.
  2. Configure the policy as required. From the Services section, click the Provision control (+) in the Managed Services table.
    Figure 36. Policy with Backup Managed Service
  3. Select the primary managed service from the Managed Service selection list.
  4. Select the backup service from the Backup Service selection list and click Append.

Using the CLI to Configure a Backup Managed Service

To implement backup-managed services, complete the following steps:
  1. Identify the first managed service.
    managed-service MS-SLICE-1
    1 slice l3-header-start 20
    service-interface switch CORE-SWITCH-1 lag1
  2. Identify the second managed service.
    managed-service MS-SLICE-2
    1 slice l3-header-start 20
    service-interface switch CORE-SWITCH-1 lag2
  3. Configure the policy referring to the backup managed service.
    policy SLICE-PACKETS
    action forward
    delivery-interface TOOL-PORT-1
    filter-interface TAP-PORT-1
    use-managed-service MS-SLICE-1 sequence 1 backup-managed-service MS-SLICE-2
    1 match ip

Application Identification

The DMF Application Identification feature identifies applications in the packets taken from filter interfaces and sent through the fabric, and reports them to a collector using IPFIX. The feature also provides a filtering function that forwards or drops packets from specific applications before sending the traffic to the analysis tools.
Note: Application identification is supported on R640 Service Nodes (DCA-DM-SC and DCA-DM-SC2) and R740 Service Nodes (DCA-DM-SDL and DCA-DM-SEL).

Using the CLI to Configure app-id

Perform the following steps to configure app-id.
  1. Create a managed service and enter the service interface.
  2. Choose the app-id managed service using the <seq num> app-id command.
    Note: The above command should enter the app-id submode, which supports two configuration parameters: collector and l3-delivery-interface. Both are required.
  3. To configure the IP address of the IPFIX collector, enter the following command: collector <ip-address>.
    The UDP port and MTU parameters are optional; the default values are 4739 and 1500, respectively.
  4. Enter the command: l3-delivery-interface <delivery interface name> to configure the delivery interface.
Below is an example of app-id configuration that sends IPFIX application records to the collector (analytics node) at IP address 192.168.1.1 over the configured delivery interface named app-to-analytics:
managed-service ms
service-interface switch core1 ethernet2
!
1 app-id
collector 192.168.1.1
l3-delivery-interface app-to-analytics
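To override the default UDP port and MTU mentioned in step 3, both values can be supplied on the collector line. This sketch assumes the app-id collector accepts the same udp-port and mtu keywords shown for the NetFlow action later in this chapter; port 2055 is an illustrative value:
1 app-id
collector 192.168.1.1 udp-port 2055 mtu 1500
l3-delivery-interface app-to-analytics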

After configuring the app-id, refer to the analytics node for application reports and visualizations. For instance, a flow is classified internally with the following tuple: ip, tcp, http, google, and google_maps. Consequently, the analytics node displays the most specific app ID for this flow as google_maps under appName.

On the Analytics Node, AppIDs 0-4 represent applications according to their numerical IDs, where 0 is the most specific application identified in that flow and 4 is the least specific. In the example above, ID 0 would be the numerical ID for google_maps, ID 1 for google, ID 2 for http, ID 3 for tcp, and ID 4 for ip. Use appName instead of these fields, since the numerical IDs require an ID-to-name mapping to interpret.

Using the CLI to Configure app-id-filter

Perform the following steps to configure app-id-filter:
  1. Create a managed service and enter the service interface.
  2. Choose the app-id-filter managed service using the <seq num> app-id-filter command.
    Note: The above command should enter the app-id-filter submode, which supports three configuration parameters: app, app-category, and filter-mode. The app parameter is required, while app-category and filter-mode are optional. The filter-mode option has a default value of forward.
  3. Enter the command: app <application name> to configure the application name.
    Tip: Press the Tab key after entering the app keyword to see all possible application names. Type in a partial name and press the Tab to see all possible choices to auto-complete the name. The application name provided must match a name in this list of app names. A service node must be connected to the Controller for this list to appear. Any number of apps can be entered one at a time using the app application-name command. The following is an example of a (partial) list of names:
    dmf-controller-1(config-managed-srv-app-id-filter)# app ibm
    ibm      ibm_as_central   ibm_as_dtaq   ibm_as_netprt   ibm_as_srvmap   ibm_iseries   ibm_tsm
    ibm_app  ibm_as_database  ibm_as_file   ibm_as_rmtcmd   ibm_db2         ibm_tealeaf
  4. Filter applications by category using the app-category <category name> command. Currently, the applications contained in these categories are not displayed.
    dmf-controller-1(config-managed-srv-app-id-filter)# app-category
    <Category>  <String> : <String>
    aaa              Category selection
    adult_content    Category selection
    advertising      Category selection
    aetls            Category selection
    analytics        Category selection
    anonymizer       Category selection
    audio_chat       Category selection
    basic            Category selection
    blog             Category selection
    cdn              Category selection
    certif_auth      Category selection
    chat             Category selection
    classified_ads   Category selection
    cloud_services   Category selection
    crowdfunding     Category selection
    cryptocurrency   Category selection
    db               Category selection
    dea_mail         Category selection
    ebook_reader     Category selection
    education        Category selection
    email            Category selection
    enterprise       Category selection
    file_mngt        Category selection
    file_transfer    Category selection
    forum            Category selection
    gaming           Category selection
    healthcare       Category selection
    im_mc            Category selection
    iot              Category selection
    map_service      Category selection
    mm_streaming     Category selection
    mobile           Category selection
    networking       Category selection
    news_portal      Category selection
    p2p              Category selection
    payment_service  Category selection
    remote_access    Category selection
    scada            Category selection
    social_network   Category selection
    speedtest        Category selection
    standardized     Category selection
    transportation   Category selection
    update           Category selection
    video_chat       Category selection
    voip             Category selection
    vpn_tun          Category selection
    web              Category selection
    web_ecom         Category selection
    web_search       Category selection
    web_sites        Category selection
    webmail          Category selection
  5. The filter-mode parameter supports two modes: forward and drop. Enter filter-mode forward to allow the packets to be forwarded based on the configured applications. Enter filter-mode drop to drop these packets.
    An example of an app-id-filter configuration that drops all Facebook and IBM Tealeaf packets:
    managed-service MS
    	service-interface switch CORE-SWITCH-1 ethernet2
    	!
    	1 app-id-filter
    		app facebook
    		app ibm_tealeaf
    filter-mode drop
CAUTION: The app-id-filter configuration filters based on flows. For example, if a session is internally identified with the tuple ip, tcp, http, google, and google_maps, adding any of these protocols or applications to the filter list forwards or drops all packets matching that flow once it is classified (e.g., adding tcp to the filter list forwards or blocks packets from the aforementioned 5-tuple flow as well as all other tcp flows). Use caution when filtering on lower-layer protocols and apps. Also, when forwarding an application, packets at the beginning of the session are dropped until the application is identified. When dropping an application, packets at the beginning of the session are passed until the application is identified.

Using the CLI to Configure app-id and app-id-filter Combined

Follow the configuration steps described in the services above to configure app-id-filter and app-id together. However, in this case, app-id should use a higher sequence number than app-id-filter so that the traffic is processed by the app-id-filter action first and then by app-id.

This behavior can be helpful to monitor certain types of traffic. The following is an example of a combined app-id-filter and app-id configuration.
! managed-service
managed-service MS1
service-interface switch CORE-SWITCH-1 ethernet2
!
!
1 app-id-filter
app facebook
filter-mode forward
!
2 app-id
collector 1.1.1.1
l3-delivery-interface L3-INTF-1
Note: This configuration has two drawbacks: app-id only reports Facebook traffic because the filter drops all other traffic before it, and this type of service chaining can cause a performance hit and high memory utilization.

Using the GUI to Configure app-id and app-id-filter

App ID and App ID Filter are in the Managed Service workflow. Perform the following steps to complete the configuration.
  1. Navigate to the Monitoring > Managed Services page. Click the table action + icon button to add a new managed service.
    Figure 37. DANZ Monitoring Fabric (DMF) Managed Services
  2. Configure the Name, Switch, and Interface inputs in the Info step.
    Figure 38. Info Step
  3. In the Actions step, click the + icon to add a new managed service action.
    Figure 39. Add App ID Action
  4. To Add the App ID Action, select App ID from the action selection input:
    Figure 40. Select App ID
  5. Fill in the Delivery Interface, Collector IP, UDP Port, and MTU inputs and click Append to include the action in the managed service:
    Figure 41. Delivery Interface
  6. To Add the App ID Filter Action, select App ID Filter from the action selection input:
    Figure 42. Select App ID Filter
  7. Select the Filter input as Forward or Drop action:
    Figure 43. Select Filter Input
  8. Use the App Names section to add app names.
    1. Click the + button to open a modal pane to add an app name.

    2. The table lists all app names. Use the text search to filter out app names. Select the checkbox for app names to include and click Append Selected.

    3. Repeat the above step to add more app names as necessary.

    Figure 44. Associate App Names
  9. The selected app names are now listed. Use the - icon button to remove any app names, if necessary:
    Figure 45. Application Names
  10. Click the Append button to add the action to the managed service and Save to save the managed service.
For existing managed services, add App ID or App ID Filter using the Edit workflow of a managed service.

Dynamic Signature Updates (Beta Version)

This beta feature allows the app-id and app-id-filter services to classify newly supported applications at runtime rather than waiting for an update in the next DANZ Monitoring Fabric (DMF) release. Perform such runtime service updates during a maintenance cycle. There can be issues with backward compatibility if attempting to revert to an older bundle. Adopt only supported versions. In the Controller’s CLI, perform the following recommended steps:
  1. Remove all policies containing app-id or app-id-filter. Remove the app-id and app-id-filter managed services from the policies using the command: no use-managed-service in policy config.
    Arista Networks recommends this step to avoid errors and service node reboots during the update process. A warning message is printed right before confirming a push. Proceeding without this step may work but is not recommended as there is a risk of service node reboots.
    Note: Arista Networks provides the specific update file in the command example below.
  2. To pull the signature file onto the Controller node, use the command:
    dmf-controller-1(config)# app-id pull-signature-file user@host:path to file.tar.gz
    Password:
    file.tar.gz							5.47MB 1.63MBps 00:03
  3. Fetch and validate the file using the command:
    dmf-controller-1(config)# app-id fetch-signature-file file://file.tar.gz
    Fetch successful.
    Checksum : abcdefgh12345
    Fetch time : 2023-08-02 22:20:49.422000 UTC
    Filename : file.tar.gz
  4. To view files currently saved on the Controller node after the fetch operation is successful, use the following command:
    dmf-controller-1(config)# app-id list-signature-files
    # Signature-file	Checksum 		Fetch time
    -|-----------------|-----------------|------------------------------|
    1 file.tar.gz	abcdefgh12345	2023-08-02 22:20:49.422000 UTC
    Note: Only the files listed by this command can be pushed to service nodes.
  5. Push the file from the Controller to the service nodes using the following command:
    dmf-controller-1(config)# app-id push-signature-file file.tar.gz
    App ID update: WARNING: This push will affect all service nodes
    App ID update: Remove policies configured with app-id or app-id-filter before continuing to avoid errors
    App ID update: Signature file: file.tar.gz
    App ID update: Push app ID signatures to all Service Nodes? Update ("y" or "yes" to continue): yes
    Push successful.
    
    Checksum : abcdefgh12345
    Fetch time : 2023-08-02 22:20:49.422000 UTC
    Filename : file.tar.gz
    Sn push time : 2023-08-02 22:21:49.422000 UTC
  6. Add the app-id and app-id-filter managed services back to the policies.
    As a result of adding app-id, service nodes can now identify and report new applications to the analytics node.
    After adding back app-id-filter, new application names should appear in the app-id-filter Controller app list. To test this, enter app-id-filter submode and press the Tab to see the full list of applications. New identified applications should appear in this list.
  7. To delete a signature file from the Controller, use the command below.
    Note: DMF only allows deleting a signature file that is not actively in use by any service node, since each service node must keep a working file in case of issues. Attempting to delete an active file causes the command to fail.
    dmf-controller-1(config)# app-id delete-signature-file file.tar.gz
    Delete successful for file: file.tar.gz
Useful Information
The fetch and delete operations are synced with standby controllers as follows:
  • fetch: after a successful fetch on the active Controller, it invokes the fetch RPC on the standby Controller by providing a signed HTTP URL as the source. This URL points to an internal REST API that provides the recently fetched signature file.
  • delete: the active Controller invokes the delete RPC call on the standby controllers.

The Controller stores the signature files in this location: /var/lib/capture/appidsignatureupdate.

On a service node, files are overwritten and always contain the complete set of applications.
Note: An analytics node cannot display these applications in the current version.
This step is only for informational purposes:
  • Verify the bundle version on the service node by entering the show service-node app-id-bundle-version command in the service node CLI, as shown below.
    Figure 46. Before Update
    Figure 47. After Update

CLI Show Commands

In the service node CLI, use the following show command:
show service-node app-id-bundle-version
This command shows the version of the bundle in use. An app-id or app-id-filter instance must be configured, or an error message is displayed.
dmf-servicenode-1# show app-id bundle-version
Name : bundle_version
Data : 1.680.0-22 (build date Sep 26 2023)
dmf-servicenode-1#

Syslog Messages

Syslog messages for configuring the app-id and app-id-filter services appear in a service node’s syslog through journalctl.

A Service Node syslog registers events for the app-id add, modify, and delete actions.

These events contain the keywords dpi and dpi-filter, which correspond to app-id and app-id-filter.

For example:

Adding dpi for port, 
Modifying dpi for port, 
Deleting dpi for port,
Adding dpi filter for port, 
Modifying dpi filter for port, 
Deleting dpi filter for port, 
App appname does not exist - An invalid app name was entered.

The addition, modification, or deletion of app names in an app-id-filter managed-service in the Controller node’s CLI influences the policy refresh activity, and these events register in floodlight.log.

Scale

  • The maximum number of concurrent sessions is currently set to permit fewer than 200,000 active flows per core. Performance may degrade as the number of concurrent flows increases. This value is a ceiling intended to prevent the service from overloading. Surpassing this threshold may cause some flows not to be processed, and new flows will not be identified or filtered. Entries for inactive flows time out after a few minutes for ongoing sessions and a few seconds after a session ends.
  • If there are many inactive sessions, DMF holds the flow contexts, reducing the number of available flows used for DPI. The timeouts are approximately 7 minutes for TCP sessions and 1 minute for UDP.
  • Heavy application traffic load degrades performance.

Troubleshooting and Considerations

Troubleshooting

  • If IPFIX reports do not appear on an analytics node or collector, ensure the UDP port is correctly configured and verify the analytics node is receiving traffic.
  • If the app-id-filter app list does not appear, ensure a service node is connected using the show service-node command on the Controller.
  • Be aware a flow may contain other IDs and protocols when using app-id-filter. For example, the specific application for a flow may be google_maps, but there may be protocols or broader applications under it, such as ssh, http, or google. Adding google_maps will filter this flow. However, adding ssh will also filter this flow. Therefore, adding any of these to the filter list will cause packets of this flow to be forwarded or dropped.
  • During a dynamic signature update, if a Service Node reboot occurs, it will likely boot up with the correct version. To avoid traffic loss, perform the update during a maintenance window. Also, during an update, the Service Node temporarily stops sending LLDP packets to the Controller and disconnects for a short while.
  • After a dynamic signature update, do not change configurations or push another signature file for several minutes. The update will take some time to process. If there are any VFT changes, it may lead to warning messages in floodlight, such as:
    Sync job 2853: still waiting after 50002 ms 
    Stuck switch update: R740-25G[00:00:e4:43:4b:bb:38:ca], duration=50002ms, stage=COMMIT

    These messages may also be seen when configuring DPI on a large number of ports.

Considerations

  • If using a drop filter, a small number of packets may slip through the filter before the application ID for a flow is determined. When using a forward filter, a few packets may not be forwarded, estimated to be between 1 and 6 packets at the beginning of a flow.
  • If using a drop filter, add the unknown app ID to the filter list to drop any unidentified traffic if these packets are unwanted.
  • The Controller must be connected to a service node for the app-id-filter app list to appear. If the list does not appear and the application names are unknown, use the app-id to send reports to the analytics node and use the listed application names to configure an app-id-filter. The name must match exactly.
  • Since app-category does not currently show the applications included in that category, do not use it when targeting specific apps. Categories like basic, which include all basic networking protocols like TCP and UDP, may affect all flows.
  • For app-id, a report is only generated for a flow once that flow has been fully classified. Therefore, the number of reported applications may not match the total number of flows. DANZ Monitoring Fabric (DMF) sends these reports after identifying enough applications on the service node. If DMF identifies numerous applications, it dispatches the reports quickly. However, DMF sends these reports every 10 seconds when identifying only a few applications.
  • DMF treats a bidirectional flow as part of the same n-tuple. As such, generated reports contain the client's source IP address and the server's destination IP address.
  • When configuring many ports with app-id, there may occasionally be a few RX drops on 16-port machines at a high traffic rate in the first few seconds.
  • The app-id and app-id-filter services are more resource-intensive than other services. Combining them in a service chain or configuring many instances of them may lead to degradation in performance.
  • At scale, such as configuring 16 ports on the R740 DCA-DM-SEL, app-id may take a few minutes to set up on all these ports. This is also true when doing a dynamic signature update.

Redundancy of Managed Services Using Two DMF Policies

In this method, users can employ a second policy with a second managed service to provide redundancy. The idea here is to duplicate the policies but assign a lower policy priority to the second DANZ Monitoring Fabric (DMF) policy. In this case, the backup policy (and, by extension, the backup service) will always be active but only receive relevant traffic once the primary policy goes down. This method provides true redundancy at the policy, service-node, and core switch levels but uses additional network and node resources.

Example
! managed-service
managed-service MS-SLICE-1
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag1
!
managed-service MS-SLICE-2
1 slice l3-header-start 20
service-interface switch CORE-SWITCH-1 lag2
! policy
policy ACTIVE-POLICY
priority 101
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-1 sequence 1
1 match ip
!
policy BACKUP-POLICY
priority 100
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service MS-SLICE-2 sequence 1
1 match ip

Cloud Services Filtering

The DANZ Monitoring Fabric (DMF) supports traffic filtering to specific services hosted in the public cloud and redirecting filtered traffic to customer tools. DMF achieves this functionality by reading the source and destination IP addresses of specific flows, identifying the Autonomous System number they belong to, tagging the flows with their respective AS numbers, and redirecting them to customer tools for consumption.

The following is the list of services supported:

  • amazon: traffic with src/dst IP belonging to Amazon
  • ebay: traffic with src/dst IP belonging to eBay
  • facebook: traffic with src/dst IP belonging to FaceBook
  • google: traffic with src/dst IP belonging to Google
  • microsoft: traffic with src/dst IP belonging to Microsoft
  • netflix: traffic with src/dst IP belonging to Netflix
  • office365: traffic for Microsoft Office365
  • sharepoint: traffic for Microsoft Sharepoint
  • skype: traffic for Microsoft Skype
  • twitter: traffic with src/dst IP belonging to Twitter
  • default: traffic not matching other rules in this service. Supported types are match or drop.

The option drop instructs the DMF Service Node to drop packets matching the configured application.

The option match instructs the DMF Service Node to deliver packets to the delivery interfaces connected to the customer tool.

A default drop action is auto-applied as the last rule, except when configuring the last rule as match default. It instructs the DMF Service Node to drop packets when either of the following conditions occurs:
  • The stream's source IP address or destination IP address doesn't belong to any AS number.
  • The stream's source IP address or destination IP address is associated with an AS number but has no specific action set.

Cloud Services Filtering Configuration

Managed Service Configuration
Controller(config)# managed-service <name>
Controller(config-managed-srv)#
Service Action Configuration
Controller(config-managed-srv)# 1 app-filter
Controller(config-managed-srv-appfilter)#
Filter Rules Configuration
Controller(config-managed-srv-appfilter)# 1 drop sharepoint
Controller(config-managed-srv-appfilter)# 2 match google
Controller(config-managed-srv-appfilter)# show this
! managed-service
managed-service sf3
service-interface switch CORE-SWITCH-1 ethernet13/1
!
1 app-filter
1 drop sharepoint
2 match google
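To forward, rather than drop, traffic that does not match any configured rule, the last rule can use the default keyword described above, which overrides the implicit drop otherwise applied at the end of the rule list. A minimal sketch:
Controller(config-managed-srv-appfilter)# 3 match default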
A policy that uses a managed service with the app-filter action but with no match or drop rules specified will fail to install. The example below shows the policy incomplete-policy failing due to the absence of a match/drop rule in the managed service incomplete-managed-service.
Controller(config)# show running-config managed-service incomplete-managed-service
! managed-service
managed-service incomplete-managed-service
1 app-filter
Controller(config)# show running-config policy incomplete-policy
! policy
policy incomplete-policy
action forward
delivery-interface TOOL-PORT-1
filter-interface TAP-PORT-1
use-managed-service incomplete-managed-service sequence 1
1 match any
Controller(config-managed-srv-appfilter)# show policy incomplete-policy
Policy Name : incomplete-policy
Config Status : active - forward
Runtime Status : one or more required service down
Detailed Status : one or more required service down - installed to
forward
Priority : 100
Overlap Priority : 0

Multiple Services Per Service Node Interface

The service-node capability is augmented to support more than one service action per service-node interface. Though this feature is economical regarding per-interface cost, it could cause packet drops in high-volume traffic environments. Arista Networks recommends using this feature judiciously.

Example
controller-1# show running-config managed-service Test
! managed-service
managed-service Test
service-interface switch CORE-SWITCH-1 ethernet13/1
1 dedup full-packet window 2
2 mask BIGSWITCH
3 slice l4-payload-start 0
!
4 netflow an-collector
collector 10.106.6.15 udp-port 2055 mtu 1500
This feature replaces the service-action command with sequence numbers. The allowed range of sequence numbers is 1 to 20000. In the above example, the sequence numbering determines the order in which the managed services act on the traffic.
Note: After upgrading to DANZ Monitoring Fabric (DMF) release 8.1.0 and later, the service-action CLI is automatically replaced with sequence number(s).
Specific managed service statistics can be viewed from the CLI.
When using the DMF GUI, view this information in Monitoring > Managed Services > Devices > Service Stats.
Note: The following limitations apply to this mode of configuration:
  • The NetFlow/IPFIX-action configuration should not be followed by the timestamp service action.
  • The UDP-replication action configuration should be the last service in the sequence.
  • The header-stripping service with post-service-match rule configured should not be followed by the NetFlow, IPFIX, udp-replication, timestamp and TCP-analysis services.

Sample Service

With the DANZ Monitoring Fabric (DMF) Sample Service feature, the Service Node forwards packets based on the max-tokens and tokens-per-refresh parameters. The sample service uses one token to forward one packet.

After consuming all the initial tokens from the max-tokens bucket, the system drops subsequent packets until the max-tokens bucket refills using the tokens-per-refresh counter at a recurring predefined time interval of 10ms. Packet sizes do not affect this service.

Arista Networks recommends keeping the tokens-per-refresh value at or below max-tokens. For example, max-tokens = 1000 and tokens-per-refresh = 500.

Setting the max-tokens value to 1000 means that the initial number of tokens is 1000, and the maximum number of tokens stored at any time is 1000.

If the Service Node forwards 1000 packets before the first 10 ms period ends, the max-tokens bucket drops to zero and the Service Node stops forwarding packets. At every 10 ms interval, the bucket is refilled with the tokens-per-refresh value (500 tokens in this case), which the service immediately uses to pass packets.

Suppose the traffic rate is higher than the refresh amount added. In that case, available tokens will eventually drop back to 0, and every 10ms, only 500 packets will be forwarded, with subsequent packets being dropped.

If the traffic rate is lower than the refresh amount added, a surplus of tokens will result in all packets passing. Since the system only consumes some of the tokens before the next refresh interval, available tokens will accumulate until they reach the max-tokens value of 1000. After 1000, the system does not store any surplus tokens above the max-tokens value.

To estimate the maximum possible packets passed per second (pps), use the calculation (1000ms/10ms) * tokens-per-refresh and assume the max-tokens value is larger than tokens-per-refresh. For example, if the tokens-per-refresh value is 5000, then 500000 pps are passed.
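For instance, with the earlier recommendation of max-tokens = 1000 and tokens-per-refresh = 500, the estimated ceiling is:
(1000 ms / 10 ms) * 500 tokens-per-refresh = 100 refreshes per second * 500 packets = 50000 pps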

The Sample Service feature can be used as a standalone Managed Service or chained with other Managed Services.

Use Cases and Compatibility

  • Applies to Service Nodes
  • Limit traffic to tools that cannot handle a large amount of traffic.
  • Use the Sample Service before another managed service to decrease the load on that service.
  • The Sample Service is applicable when needing only a portion of the total packets without specifically choosing which packets to forward.

Sample Service CLI Configuration

  1. Create a managed service and enter the service interface.
  2. Choose the sample managed service with the <seq num> sample command.
    1. There are two required configuration values: max-tokens and tokens-per-refresh. There are no default values, and the service requires both values.
    2. The max-tokens value is the maximum size of tokens in the token bucket. The service will start with the number of tokens specified when first configured. Each packet passed consumes one token. If no tokens remain, packet forwarding stops. Configure the max-tokens value from a range of 1 to the maximum uint64 (unsigned integer) value of 9,223,372,036,854,775,807.
    3. DMF refreshes the token bucket every 10 ms. The tokens-per-refresh value is the number of tokens added to the token bucket on each refresh. Each packet passed consumes one token, and when the number of tokens drops to zero, the system drops all subsequent packets until the next refresh. The number of tokens in the bucket cannot exceed the value of max-tokens. Configure the tokens-per-refresh value from a range of 1 to the maximum uint64 (unsigned integer) value of 9,223,372,036,854,775,807.
    The following example illustrates a typical Sample Service configuration
    dmf-controller-1(config-managed-srv-sample)# show this
    ! managed-service
    managed-service MS
    !
    3 sample
    max-tokens 50000
    tokens-per-refresh 20000
  3. Add the managed service to the policy.
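For example, a policy that applies the sample managed service MS configured above might look like the following sketch; the policy and interface names are illustrative only:
policy SAMPLE-POLICY
action forward
filter-interface TAP-PORT-1
delivery-interface TOOL-PORT-1
use-managed-service MS sequence 1
1 match any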

Show Commands

Use the show running-config managed-service sample_service_name command to view pertinent details. In this example, the sample_service_name is techpubs.

DMF-SCALE-R450> show running-config managed-service techpubs

! managed-service
managed-service techpubs
!
1 sample
max-tokens 1000
tokens-per-refresh 500
DMF-SCALE-R450>

Sample Service GUI Configuration

Use the following steps to add a Sample Service.
  1. Navigate to the Monitoring > Managed Services page.
    Figure 48. DMF Managed Services
  2. Under the Managed Services section, click the + icon to create a new managed service. Go to the Actions, and select the Sample option in the Action drop-down. Enter values for Max tokens and Tokens per refresh.
    Figure 49. Configure Managed Service Action
  3. Click Append and then Save.

Troubleshooting Sample Service

Troubleshooting

  • If the Service Node interfaces forward only a small number of packets, the max-tokens and tokens-per-refresh values likely need to be higher.

  • If fewer packets than the tokens-per-refresh value are forwarded, ensure the max-tokens value is larger than the tokens-per-refresh value. The system discards any surplus refresh tokens above the max-tokens value.

  • If all traffic is forwarded, the initial max-tokens value may be too large, or the tokens added by tokens-per-refresh exceed the packet rate.

  • Packet drops occurring after the first 10 ms from the start of traffic may be due to a low tokens-per-refresh value. The calculation example below shows how to derive the minimum max-tokens and tokens-per-refresh values that would forward all packets.

Calculation Example

Traffic Rate : 400 Mbps
Packet Size : 64 bytes
400 Mbps = 400000000 bps
400000000 bps = 50000000 Bps
50000000 Bps = 595238 pps (includes 20 bytes of inter-packet gap in addition to the 64 bytes)
1000 ms = 595238 packets
1 ms = 595.238 packets
10 ms = 5952 packets
max-tokens : 5952 (the minimum value)
tokens-per-refresh : 5952 (the minimum value)

Limitations

  • In the current implementation, the Service Sample action is bursty. The token consumption rate is not configured to withhold tokens over time, so a large burst of incoming packets can immediately consume all the tokens in the bucket. There is currently no way to select which traffic is forwarded or dropped; it depends only on when the packets arrive relative to the refresh interval.

  • Setting the max-tokens and tokens-per-refresh values too high will forward all packets. The maximum value is 9,223,372,036,854,775,807, but Arista Networks recommends staying within the maximum values stated under the description section.

Latency and Drop Analysis (Beta Version)

Latency and drop information helps determine whether there is a loss in a particular flow and where the loss occurred. A Service Node action configured as a DANZ Monitoring Fabric (DMF) managed service receives traffic from two separate taps or SPANs in the production network and can measure the latency of a flow traversing these two points. It can also detect packet drops between the two points if a packet appears on only one of them within a specified time frame, currently set to 100ms.

Latency and drop analysis require PTP time-stamped packets. The DMF PTP timestamping feature can do this as the packets enter the monitoring fabric, or the production network switches can also timestamp the packet.

The Service Node accumulates latency values by flow and sends IPFIX data records with each flow's 5-tuple and ingress and egress identifiers. It sends IPFIX data records to the Analytics Node after collecting a specified number of values for a flow or when a timeout occurs for the flow entry. The threshold count is 10,000, and the flow timeout is 4 seconds.

Note: Only basic statistics are available: min, max, and mean. Use the Analytics Node to build custom dashboards to view and check the data.

Configure Latency and Drop Analysis Using the CLI

Configure this feature through the Controller as a Service Node action in managed services. There is one new managed service action: latency.

The latency configuration defines two distinct tap points, identified by a filter interface, a policy name, or a user-configured VLAN tag. Based on the latency configuration, the Controller programs the Service Node with traffic metadata that tells the Service Node where to look for tap point information and timestamps, and which IPFIX collector to use.

Configure appropriate DMF Policies such that traffic tapped from two distinct tap points in the network is delivered to the configured Service Node interface for analysis.

  1. Create a managed service and enter the service interface.
  2. Choose the latency service action with the command: <seq num> latency
    Note: The above command should enter the latency submode, which supports four configuration parameters: collector, l3-delivery-interface, and the left and right tap points. All of these parameters are required.
  3. Configure the IP address of the IPFIX collector by entering the following command: collector <ip-address> (the UDP port and MTU parameters are optional; the default values are 4739 and 1500, respectively).
  4. Configure the delivery interface by entering the command l3-delivery-interface <delivery interface name>.
  5. Configure the points for latency and drop analysis using left-point and right-point parameters as specified in the following section.
    dmf-controller-1(config)# managed-service managed_service_1
    dmf-controller-1(config-managed-srv)# service-interface switch delivery1 ethernet1
    dmf-controller-1(config-managed-srv)# 1 latency
    dmf-controller-1(config-managed-srv-latency)# collector 192.168.1.1
    dmf-controller-1(config-managed-srv-latency)# l3-delivery-interface l3-iface-1
    dmf-controller-1(config-managed-srv-latency)# left filter-interface left-point-f1
    dmf-controller-1(config-managed-srv-latency)# right filter-interface right-point-f1

Configuring Tap Points

Configure tap points using the left and right point parameters in the latency submode, specifying one of three identifiers: filter interface name, policy name, or user-configured VLAN tag.
dmf-controller-1(config-managed-srv-latency)# left <tab>
filter-interface    policy-name    user-configured-vlan-id

Both the left and right points must use the same identifier type; for example, if the left point uses the filter-interface option to define a tap point, the right point must also use filter-interface.

Delete the previous configuration when changing the identifier type for the left and right point; otherwise, validation will fail.

The user-configured-vlan-id, as the name suggests, is the VLAN tag configured by a user. It doesn’t support the fabric-applied VLAN tags. The best way to know what values will be accepted is to use suggestions provided by DMF when using the Tab key after entering the command, as shown below:
dmf-controller-1(config-managed-srv-latency)# left user-configured-vlan-id <tab>
115    125
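Assuming the VLAN IDs shown by the tab completion are accepted, the tap points can then be configured as in the following sketch (the IDs 115 and 125 are illustrative):
dmf-controller-1(config-managed-srv-latency)# left user-configured-vlan-id 115
dmf-controller-1(config-managed-srv-latency)# right user-configured-vlan-id 125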

There are several caveats on what is accepted based on the VLAN mode:

Push per Filter

  • Use the filter-interface option to provide the filter interface name.
  • The user-configured-vlan-id refers to the VLAN tag of the filter interface configured by a user outside the auto-vlan-range.
  • The policy-name identifier is invalid in this mode.

Push per Policy

  • Accepts a policy-name identifier as a tap point identifier.
  • The filter-interface identifier is invalid in this mode.
  • The user-configured-vlan-id refers to the VLAN tag of the policy configured by a user outside the auto-vlan-range.
  • Policies configured as left and right tap points must not overlap.
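For example, in push-per-policy mode the tap points can reference the two policies directly, matching the policy names used in the show command examples later in this section:
dmf-controller-1(config-managed-srv-latency)# left policy-name LATENCY-1
dmf-controller-1(config-managed-srv-latency)# right policy-name LATENCY-2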

Configuring Policy

Irrespective of the VLAN mode, configure a policy or policies so that the same packet can be tapped from two independent points in the network and then sent to the Service Node.

After creating a policy, add the managed service with latency action as shown below:

dmf-controller-1 (config-policy)# use-managed-service <service name> sequence 1

There are several things to consider while configuring policies depending on the VLAN mode:

Push per Filter

  • Only one policy can contain the Latency service action.
  • A policy should have both filter interfaces configured as tap points in the latency configuration and must not have any other filter interface.
  • When using the user-configured-vlan-id option to configure tap points, ensure the filter interfaces have rewrite-vlan configured to match what’s in the latency configuration.

Push per Policy

  • Multiple policies can contain the Latency service action; however, adding more than two should not be required.
  • Add the latency service to two policies when using policy-name or user-configured-vlan-id as left and right identifiers. In this case, there are no restrictions on how many filter interfaces a policy can have.
  • When using the user-configured-vlan-id option to configure tap points, ensure the policies have push-vlan configured to match what’s in the latency configuration.
  • A policy configured as one of the tap points will fail if it overlaps with the other policy or if the other policy does not exist.
In both VLAN modes, policies must have PTP timestamping enabled. To do so, use the following command:
dmf-controller-1 (config-policy)# use-timestamping
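In push-per-filter mode, for example, a single policy carrying the latency service and timestamping might look like the following sketch; it reuses the filter interface names from the latency configuration above, while the policy, delivery interface, and managed service names are illustrative:
policy LATENCY-POLICY
action forward
filter-interface left-point-f1
filter-interface right-point-f1
delivery-interface TOOL-PORT-1
use-managed-service managed_service_1 sequence 1
use-timestamping
1 match any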

Configuring PTP Timestamping

This feature depends on configuring PTP timestamping for the packet stream going through the tap points. Refer to the Resources section for more information on how to set up PTP timestamping functionality.

Show Commands

The following show commands provide helpful information.

The show running-config managed-service <managed service> command helps check whether the latency configuration is complete.
dmf-controller-1(config)# show running-config managed-service Latency 
! managed-service
managed-service Latency
service-interface switch DCS-7050CX3-32S ethernet2/4
!
1 latency
collector 192.168.1.1
l3-delivery-interface AN-Data
left policy-name LATENCY-1
right policy-name LATENCY-2
The show managed-services <managed service> command provides status information about the service.
dmf-controller-1(config)# show managed-services Latency 
# Service Name Switch          Switch Interface Installed Max Post-Service BW Max Pre-Service BW Total Post-Service BW Total Pre-Service BW 
-|------------|---------------|----------------|---------|-------------------|------------------|---------------------|--------------------|
1 Latency      DCS-7050CX3-32S ethernet2/4      True      25Gbps              25Gbps             624Kbps               432Mbps

The show running-config policy <policy> command checks whether the policy includes the latency service, whether use-timestamping is enabled, and whether the correct filter interfaces are used.

The show policy <policy> command provides detailed information about a policy, including any errors related to the latency service. The Service Interfaces section of the output shows the packets transmitted to the Service Node and the IPFIX packets received from the Service Node.

Note: When two policies are used (one left and one right), only one policy shows statistics for packets received from the Service Node. This is by design: the IPFIX packets transmitted from the Service Node carry a single VLAN, so they can be attributed to only one policy.
dmf-controller-1 (config)# show policy LATENCY-1
Policy Name: LATENCY-1
Config Status: active - forward
Runtime Status : installed
Detailed Status: installed - installed to forward
Priority : 100
Overlap Priority : 0
# of switches with filter interfaces : 1
# of switches with delivery interfaces : 1
# of switches with service interfaces: 1
# of filter interfaces : 1
# of delivery interfaces : 1
# of core interfaces : 4
# of services: 1
# of pre service interfaces: 1
# of post service interfaces : 1
Push VLAN: 2
Post Match Filter Traffic: 215Mbps
Total Delivery Rate: -
Total Pre Service Rate : 217Mbps
Total Post Service Rate: -
Overlapping Policies : none
Component Policies : none
Runtime Service Names: Latency
Installed Time : 2023-11-16 18:15:27 PST
Installed Duration : 19 minutes, 45 secs
~ Match Rules ~
# Rule
-|-----------|
1 1 match any
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Filter Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF Switch   IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time 
-|------|--------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 BP1    7280SR3E Ethernet25 up    rx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Delivery Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# DMF IF  Switch    IF Name    State Dir Packets Bytes  Pkt Rate Bit Rate Counter Reset Time 
-|-------|---------|----------|-----|---|-------|------|--------|--------|------------------------------|
1 AN-Data 7050SX3-1 ethernet41 up    tx  81      117222 0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Service Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Service name Role         Switch          IF Name     State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time 
-|------------|------------|---------------|-----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 Latency      pre-service  DCS-7050CX3-32S ethernet2/4 up    tx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
2 Latency      post-service DCS-7050CX3-32S ethernet2/4 up    rx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Core Interface(s) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Switch          IF Name    State Dir Packets  Bytes       Pkt Rate Bit Rate Counter Reset Time 
-|---------------|----------|-----|---|--------|-----------|--------|--------|------------------------------|
1 7050SX3-1       ethernet7  up    rx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
2 7050SX3-1       ethernet56 up    rx  81       117222      0        -        2023-11-16 18:18:18.837000 PST
3 7050SX3-1       ethernet56 up    tx  23950773 27175675524 23415    217Mbps  2023-11-16 18:18:18.837000 PST
4 7280SR3E        Ethernet7  up    tx  24319476 27484991953 23313    215Mbps  2023-11-16 18:18:18.837000 PST
5 DCS-7050CX3-32S ethernet28 up    tx  81       117546      0        -        2023-11-16 18:18:18.837000 PST
6 DCS-7050CX3-32S ethernet28 up    rx  23950846 27175761734 23418    217Mbps  2023-11-16 18:18:18.837000 PST
~ Failed Path(s) ~

Troubleshooting

Controller

Policies dictate how and what packets are directed to the Service Node. Policies must be able to stream packets from two distinct tap points so that the same packet gets delivered to the Service Node for latency and drop analysis.

Possible reasons for latency and drop analysis not working are:

  • Latency action only exists in one policy in push-per-policy mode.

  • Latency action exists in a policy that does not have precisely two filter interfaces in push-per-filter mode.

A policy programmed to use latency service action can fail for multiple reasons:

  • The Latency configuration is incomplete, left or right tap points are missing, or using a policy-name identifier in push-per-filter or filter-interface in push-per-policy mode.

  • In push-per-filter mode, the policy has a filter interface that doesn’t match with the left or right tap point.

  • In the push-per-policy mode, the policy doesn’t match with the left or right tap point.

  • Policy names configured as tap points overlap or do not exist.

  • When using a user-configured-vlan-id, the specified VLAN ID was applied by the fabric rather than configured by the user.

  • The policy (push-vlan) or filter interface (rewrite-vlan) configuration changed without updating the Latency configuration to point to the correct user-configured-vlan-id.

Reasons for failure are available in the runtime state of the policy and can be viewed using the show policy <policy name> command.

After verifying the correct configuration of the policy and latency action, enable debug logging and review the floodlight logs (use the fl-log command in bash mode); they should show the ipfix-collector, traffic-metadata, and latency gentable entries sent to the Service Node.

Limitations

  • Only one left and one right tap point can be configured.
  • Hardware RSS firmware in the Service Node currently cannot parse L2 header timestamps, so all packets will be sent to the same lcore; however, RSS does distribute packets correctly to multiple lcores if src-mac timestamping is used.
  • PTP timestamping doesn’t allow rewrite-dst-mac, so filter interfaces cannot be used as tap points in push-per-policy mode.
  • The following parameters are not configurable in this release:
    • gentable size, the default is large.
    • packet entry timeout, the default is 100ms.
    • flow timeout, the default is 4s.
    • count threshold at which an IPFIX data record is generated; the default is 10000.
  • Each packet from the L3 header and onwards gets hashed to a 64-bit value; if two packets hash to the same value, we assume the underlying packets are the same.
  • Currently, the latency action in the Service Node assumes it will only receive two copies of the same packet, storing the timestamp and identifier of the first packet and computing latency upon receipt of the second packet; if packets are duplicated so that N copies of the same packet are received:
    • N-1 latencies will be computed.
    • The ingress identifier will be that of the first packet; the egress identifier will be that of the Kth packet.
  • Timestamps are reported as unsigned 32-bit values, with the maximum timestamp being 2^32-1, corresponding to ~4.29 seconds.
  • Only min, mean, max latencies are currently reported.